Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
Oh hey looks like another Chat-GPT assisted legal filing, this time in an expert declaration about the dangers of generative AI: https://www.sfgate.com/tech/article/stanford-professor-lying-and-technology-19937258.php
The two missing papers are titled, according to Hancock, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance” and “The Influence of Deepfake Videos on Political Attitudes and Behavior.” The expert declaration’s bibliography includes links to these papers, but they currently lead to an error screen.
Irony can be pretty ironic sometimes.
andrew tate’s “university” had a leak, exposing circa 800k usernames and 325k email addresses of people who failed to pay the $50 monthly fee
entire thing available at DDoSecrets, just gonna drop the tree of that torrent:
├── Private Channels
│   ├── AI Automation Agency.7z
│   ├── Business Mastery.7z
│   ├── Content Creation + AI Campus.7z
│   ├── Copywriting.7z
│   ├── Crypto DeFi.7z
│   ├── Crypto Trading.7z
│   ├── Cryptocurrency Investing.7z
│   ├── Ecommerce.7z
│   ├── Health & Fitness.7z
│   ├── Hustler's Campus.7z
│   ├── Social Media & Client Acquisition.7z
│   └── The Real World.7z
├── Public Channels
│   ├── AI Automation Agency.7z
│   ├── Business Mastery.7z
│   ├── Content Creation + AI Campus.7z
│   ├── Copywriting.7z
│   ├── Crypto DeFi.7z
│   ├── Crypto Trading.7z
│   ├── Cryptocurrency Investing.7z
│   ├── Ecommerce.7z
│   ├── Fitness.7z
│   ├── Hustler's Campus.7z
│   ├── Social Media & Client Acquisition.7z
│   └── The Real World.7z
└── users.json.7z
yeah i studied defi and dropshipping at andrew tate’s hustler university
statements dreamed up by the utterly deranged
Ugh. Tangentially related: a streamer I follow has been getting lots of people in her chat saying that one of the taters wants to hire her. I’ve started noticing comments like “I love white culture” and weird fantasies about the roman empire. Historically she’s also been asked multiple times what her ethnicity is (she is white), specifically if she is Scandinavian, which I am starting to view under some kind of white supremacist lens. I’ve told her to ignore anything mentioning the taters or “top g” as one of them is known.
Honestly, I’m worried that she could get brigaded by these creeps, even if she shows no response whatsoever.
“Yeah I thought about going into civil engineering but the department of hustling really spoke to me y’know?”
i have never felt imposter syndrome since
nikhil suresh, probably
I’m just curious how many hits you would get if you searched for ‘4 hour work week’, as iirc that is where all these people stole the idea from. (well, not totally, the idea they are stealing is selling others the idea of the 4 hour work week, but I hope you get what I mean, 4 hour work weeks all the way down).
not many. 2 hits for “4 hour workweek” and 35 for “4 hour work week”, another 4 for “4-hour workweek” and 5 for “4-hour work week”
alright, it’s 14gb of json files, let me figure out how to grep it in a reasonable way and i’ll get there. for now i’ll say that the biggest one (by size) in private channels is “crypto trading” (2 gb) then “crypto investing” (1.6gb), while in public channels it’s “the real world” (1.7 gb) and “ecommerce” (0.87 gb)
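fwiw, for a dump this size the lazy approach is to scan line by line without parsing any JSON at all. A minimal sketch of that kind of streaming keyword count (the file layout and keyword are assumptions on my part, since I haven’t seen the actual leak’s schema):

```python
def count_hits(path, needle):
    """Stream a huge text/JSON dump line by line and count lines
    containing `needle`, case-insensitively. Never loads the whole
    file into memory, so it works fine on multi-gigabyte dumps."""
    hits = 0
    needle = needle.lower()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if needle in line.lower():
                hits += 1
    return hits
```

For anything fancier (per-channel counts, extracting fields) you’d want a streaming JSON parser, but for “how many hits for ‘4 hour work week’” this is plenty.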
See, isn’t the 4-hour work week one of those “just make other people work 50+hours a week on your behalf and take the money they’ve earned for it” schemes? This looks much broader rather than being married to a specific sub-scam. Like, if crypto is down they can sell drop shipping. If drop shipping is cringe they can sell AI slop monetization. If Amazon tightens their standards and starts locking out AI stuff they can go back to crypto.
It’s in the same genre of trying to monetize being a conspicuous asshole, but it is one of the more complex evolutions, at least compared to the standard grift-luencer.
obligatory IBCK.
The word “deranged” is getting a workout lately, ain’t it?
fwiw i attribute this to the stop doing x meme (that I believe skillissuer is referencing):
been thinking about making one for (memory) safe c++, but unfortunately I don’t know the topic deeply enough to make the meme good
One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy
Interesting post and corresponding mastodon thread on the non-decentralised-ness of bluesky by cwebber.
https://dustycloud.org/blog/how-decentralized-is-bluesky/
https://social.coop/@cwebber/113527462572885698
The author is keen about this particular “vision statement”:
Preparing for the organization as a future adversary.
The assumption being, stuff gets enshittified and how might you guard your product against the future stupid and awful whims of management and investors?
Of course, they don’t consider that it cuts both ways. Take Jack Dorsey’s personal grumbles about Twitter: the risk from his point of view was the company he founded doing evil unthinkable things like, uh, banning nazis. He’s keen for that sort of thing to never happen again on his platforms.
note that cwebber wrote the ActivityPub spec, btw
how come every academic I have worked with has given me some variation of
they already have all of my data, I don’t really care about my privacy
i’m in computer science 🙃
When people start going on about having nothing to hide usually it helps to point out how there’s currently no legal way to have a movie or a series episode saved to your hard drive.
I suspect great overlap between the nothing-to-hide people and the people who watch the worst porn imaginable but think incognito mode is magic.
what’s wild is in the ideal case, a person who really doesn’t have anything to hide is both unimaginably dull and has effectively just confessed that they would sell you out to the authorities for any or no reason at all
people with nothing to hide are the worst people
the marketing fucks and executive ghouls who came up with this meme (that used to surface every time I talked about wanting to de-Google) are also the ones who make a fuckton of money off of having a real-time firehose of personal data straight from the source, cause that’s by far what’s most valuable to advertisers and surveillance firms (but I repeat myself)
The thing is, I’m pretty sure the overwhelming majority of the data is effectively worthless out of the online advertising grift. It’s thoughtlessly collected junk sold as data for its own sake.
It works because no one working in advertising knows what a human being is.
my strong impression is that surveillance advertising has been an unmitigated disaster for the ability to actually sell products in any kind of sensible way — see also the success of influencer marketing, under the (utterly false) pretense that it’s less targeted and more authentic than the rest of the shit we’re used to
but marketing is an industry run by utterly incompetent morally bankrupt fuckheads, so my impression is also that none of them particularly know or care that the majority of what they’re doing doesn’t work; there’s power in surveillance and they like that feeling, so the data remains extremely valuable on the market
Never thought I’d die fighting alongside a League of Legends fan.
Aye. That I could do.
You just know Netflix’s inbox is getting flooded with the absolute worst shit League of Legends players can come up with right now
And having played more LoL than I care to admit in high school, that’s some truly vile shit. If only it actually made it through the filters to whoever actually made the relevant choices.
Dude discovers that one LLM model is not entirely shit at chess, spends time and tokens proving that other models are actually also not shit at chess.
The irony? He’s comparing it against Stockfish, a computer chess engine. Computers playing chess at a superhuman level is a solved problem. LLMs have now slightly approached that level.
For one, gpt-3.5-turbo-instruct rarely suggests illegal moves,
Writeup https://dynomight.net/more-chess/
HN discussion https://news.ycombinator.com/item?id=42206817
Particularly hilarious at how thoroughly they’re missing the point. The fact that it suggests illegal moves at all means that no matter how good its openings are, the scaling laws and emergent behaviors haven’t magicked up an internal model of the game of Chess, or even of the state of the chess board it’s working with. I feel like playing games is a particularly powerful example of this because the game rules provide a very clear structure to model, and it’s very obvious when that model doesn’t exist.
I remember several months ago (a year ago?) when the news got out that gpt-3.5-turbo-papillion-grumpalumpgus could play chess around ~1600 elo. I was skeptical, suspecting the apparent skill was just a hacked-on patch to stop folks from clowning on their models on xitter. Like if an LLM had just read the instructions of chess and started playing like a competent player, that would be genuinely impressive. But if what happened is they generated 10^12 synthetic games of chess played by stonk fish and used that to train the model, then that ain’t an emergent ability, that’s just brute forcing chess. The fact that larger, open-source models that perform better on other benchmarks still flail at chess is just a glaring red flag that something funky was going on w/ gpt-3.5-turbo-instruct to drive home the “eMeRgEnCe” narrative. I’d bet decent odds if you played with modified rules (knights move a one space longer L shape, you cannot move a pawn 2 moves after it last moved, etc), gpt-3.5 would fuckin suck.
Edit: the author asks “why skill go down tho” on later models. Like isn’t it obvious? At that moment of time, chess skills weren’t a priority so the trillions of synthetic games weren’t included in the training? Like this isn’t that big of a mystery…? It’s not like other NN haven’t been trained to play chess…
LLMs sometimes struggle to give legal moves. In these experiments, I try 10 times and if there’s still no legal move, I just pick one at random.
uhh
Battlechess both could choose legal moves and also had cool animations. Battlechess wins again!
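The fallback quoted above (try the LLM’s suggestions up to 10 times, then pick a random legal move) amounts to something like this; the function and its signature are mine, not the author’s:

```python
import random

def choose_move(candidates, legal_moves, retries=10):
    """Mirror the described fallback: return the first legal move
    among up to `retries` LLM suggestions; if none are legal,
    fall back to a uniformly random legal move."""
    for move in candidates[:retries]:
        if move in legal_moves:
            return move
    # sorted() makes the random fallback deterministic given a seed
    return random.choice(sorted(legal_moves))
```

Which of course means the reported results blend the model’s play with pure noise whenever it can’t produce a legal move.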
Here are the results of these three models against Stockfish—a standard chess AI—on level 1, with a maximum of 0.01 seconds to make each move
I’m not a Chess person or familiar with Stockfish so take this with a grain of salt, but I found a few interesting things perusing the code / docs which I think makes useful context.
Skill Level
I assume “level” refers to Stockfish’s Skill Level option.
If I mathed right, Stockfish roughly estimates Skill Level 1 to be around 1445 ELO (source). However it says “This Elo rating has been calibrated at a time control of 60s+0.6s” so it may be significantly lower here.
Skill Level affects the search depth (appears to use depth of 1 at Skill Level 1). It also enables MultiPV 4 to compute the four best principal variations and randomly pick from them (more randomly at lower skill levels).
Move Time & Hardware
This is all independent of move time. The author used a move time of 10 milliseconds (for stockfish; no mention of how much time the LLMs got). … or at least they did if they accounted for the “Move Overhead” option defaulting to 10 milliseconds. If they left that at its default then 10ms - 10ms = 0ms so 🤷♀️.
There is also no information about the hardware or number of threads they ran this on, which I feel is important information.
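For context, the handicaps above map onto standard UCI options. A Stockfish session set up roughly the way the writeup describes (the option names are real UCI options; the exact values are my reading of the post, not confirmed by the author) would look like:

```
uci
setoption name Skill Level value 1
setoption name Move Overhead value 10
ucinewgame
position startpos
go movetime 10
```

At low Skill Level, Stockfish searches several principal variations and picks among them semi-randomly, which is what makes its play fallible at that setting.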
Evaluation Function
After the game was over, I calculated the score after each turn in “centipawns” where a pawn is worth 100 points, and ±1500 indicates a win or loss.
Stockfish’s FAQ mentions that they have gone beyond centipawns for evaluating positions, because it’s strong enough that material advantage is much less relevant than it used to be. I assume it doesn’t really matter at level 1 with ~0 seconds to produce moves though.
Still, since the author has Stockfish handy anyway, it’d be interesting to use it in its non-handicapped form to evaluate who won.
@gerikson @BlueMonday1984 the only analysis of computer chess anybody needs https://youtu.be/DpXy041BIlA
Stack Overflow, now with sponsored crypto blogspam: “Joining forces: How Web2 and Web3 developers can build together”
I really love the byline here. “Kindest view of one another”. Seething rage at the bullshittery these “web3” fuckheads keep producing certainly isn’t kind for sure.
a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the templates, pages, static, and less directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I’ll DM an invite link! Strap in and start blasting the Depeche Mode.
When the reporter entered the confessional, AI Jesus warned, “Do not disclose personal information under any circumstances. Use this service at your own risk.”
Do not worry my child, for everything you say in this hallowed chamber is between you, AI Jesus, and the army of contractors OpenAI hires to evaluate the quality of their LLM output.
The mask comes off at LWN, as two editors (jake and corbet) dive in to frantically defend the honour of Justine fucking Tunney against multiple people pointing out she’s a Nazi who fills her projects with racist dogwhistles
Not the only trans NRXer to pull this I’m afraid. I could say things but I really can’t I think.
Is Google lacing their free coffee??? How could a woman with at least one college degree believe that the government is even mechanically capable of dissolving into a throne for Eric Schmidt.
fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:
- dole out a ban for being rude to a fascist
- dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
- in the process of the above, I create a safe space for a fascist and her friends
but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here
See, you’re assuming the goal of moderation is to maintain a healthy social space online. By definition this excludes fascists. It’s that old story about how to make sure your punk bar doesn’t turn into a nazi punk bar. But what if instead my goal is to keep the peace in my nazi punk bar so that the normies and casuals keep filtering in and out and making me enough money that I can stay in business? Then this strategy makes more sense.
Centrists Don’t Fucking Be Like This challenge not achieved yet again
fwiw this link didn’t jump me to a specific reply (if you meant to highlight a particular one)
It didn’t scroll for me either but there’s a reply by this corbet person with a highlighted background which I assume is the one intended to be linked to
Post by Corbet the editor. “We get it: people wish that we had not highlighted work by this particular author. Had we known more about the person in question, we might have shied away from the topic. But the article is out now, it describes a bit of interesting technology, people have had their say, please let’s leave it at that.”
So you updated the article to reflect this right? padme.jpg
Seems like they’ve actually done this now. There’s a preface note now.
This topic was chosen based on the technical merit of the project before we were aware of its author’s political views and controversies. Our coverage of technical projects is never an endorsement of the developers’ political views. The moderation of comments here is not meant to defend, or defame, anybody, but is in keeping with our longstanding policy against personal attacks. We could certainly have handled both topic selection and moderation better, and will endeavor to do so going forward.
Which is better than nothing, I guess, but still feels like a cheap cop-out.
Side-note: I can actually believe that they didn’t know about Justine being a fucking nazi when publishing this, because I remember stumbling across some of her projects and actually being impressed by it, and then I found out what an absolute rabbit hole of weird shit this person is. So I kinda get seeing the portable executables project, thinking, wow, this is actually neat, and running with it.
Not that this is an excuse, because when you write articles for a website that should come with a bit of research about the people and topic you choose to cover and you have a bit more responsibility than someone who’s just browsing around, but what do I know.
Well, at least they put down something. More than I expected.
And doing research on people? In this economy?
so is corbet the same kind of fucker that’ll complain “everything is so political nowadays”? it seems like they are
@dgerard @BlueMonday1984 also, and I know this is way beside the point, update the design of your website, motherfuckers
I don’t run any websites, what are you coming at me for
most of the dedicated Niantic (Pokemon Go, Ingress) game players I know figured the company was using their positioning data and phone sensors to help make better navigational algorithms. well surprise, it’s worse than that: they’re doing a generative AI model that looks to me like it’s tuned specifically for surveillance and warfare (though Niantic is of course just saying this kind of model can be used for robots… seagull meme, “what are the robots for, fucker? why are you being so vague about who’s asking for this type of model?”)
Quick, find the guys who were taping their phones to a ceiling fan and have them get to it!
Jokes aside I’m actually curious to see what happens when this one screws up. My money is on one of the Boston Dynamics dogs running in circles about 30 feet from the intended target without even establishing line of sight. They’ll certainly have to test it somehow before it starts autonomously ordering drone strikes on innocent people’s homes, right? Right?
Pokemon Go To The War Crimes
Pokemon Go To The Hague
Peter Watts’s Blindsight is a potent vector for brain worms.
Watts has always been a bit of a weird vector. While he doesn’t seem to be a far-righter himself, he accidentally uses a lot of weird far right dogwhistles. (prob some cross contamination, as some of these things are just scientific concepts (esp the r/K selection thing stood out very much to me in the Rifters series; of course he has a phd in zoology, and the books predate the online hardcore racists discovering the idea by more than a decade, but still odd to me)).
To be very clear, I don’t blame Watts for this, he is just a science fiction writer, a particularly gloomy one. The guy himself seems to be pretty ok (not a fan of trump for example).
That’s a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.
not a fan of trump for example
In one of the Eriophora stories (I think it’s officially the Sunflower cycle) I think there’s a throwaway mention of the Kochs having been lynched along with other billionaires in the early days of a mass mobilization to save what’s savable in the face of environmental disaster (and also rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity so an escape route could become necessary in the next few millennia, and this scifi story needs a premise).
Huh. Say more?
Oh man where to begin. For starters:
- Sentience is overrated
- All communication is manipulative
- Assumes intelligence has a “value” and that it stacks like a Borderlands damage buff
- Superintelligence operates in the world like the chaos god Tzeentch from WH40K. Humans can’t win, because all events are “just as planned”
- Humanity is therefore gormless and helpless in the face of superintelligence
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Sentience is overrated
Not sentience, self-awareness, and not in a particularly prescriptive way.
Blindsight is pretty rough and probably Watts’s worst book that I’ve read, but it’s original, ambitious and mostly worth it as an introduction to thinking about selfhood in a certain way, even if this type of scifi isn’t one’s cup of tea.
It’s a book that makes more sense after the fact, i.e. after reading the appendix on phenomenal self-model hypothesis. Which is no excuse – cardboard characters that are that way because the author is struggling to make a point about how intelligence being at odds with self awareness would lead to individuals with nonexistent self-reflection that more or less coast as an extension of their (ultrafuturistic) functionality, are still cardboard characters that you have to spend a whole book with.
I remember he handwaves a lot of stuff regarding intelligence, like at some point straight up writing that what you are reading isn’t really what’s being said, it’s just the jargonaut pov character dumbing it way down for you, which is to say he doesn’t try that hard for hyperintelligence show-don’t-tell. Echopraxia is better in that regard.
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Not really, there are some common ideas, mostly because tescrealism already is scifi tropes awkwardly cobbled together, but usually what tescreals think is awesome is presented in a cautionary light or as straight-up dystopian.
Like, there’s some really bleak transhumanism in this book, and the view that human cognition is already starting to become alien in the one hour into the future setting is kind of anti-longtermist, at least in the sense that the utilitarian calculus turns way messed up.
And also I bet there’s nothing in The Sequences about Captain Space Dracula.
I got a really nice omnibus edition of Blindsight/Echopraxia that was printed in the UK, but ultimately, the necessarily(?) cardboard nature of the vampire character in Echopraxia was what left me cold. The first chapter or two are some of the most densely-packed creative sci-fi ideas I’ve ever read, but I came to the book looking for more elaboration on the vampires, and didn’t really get that. Valerie remains an inscrutable other. The most memorable interaction she has is when she’s breaking her arm and making the POV character guy reset it, seemed like she was hitting on him?
I hear you. I should clarify, because I didn’t do a good job of saying why those things bothered me and nerd-vented instead. I understand that an author doesn’t necessarily believe the things used as plot devices in their books. Blindsight is a horror/speculative fiction book that asks “what if these horrible things were true” and works out the consequences in an entertaining way. And, no doubt there’s absolutely a place for horror in spec fic, but Blindsight just feels off. I think @Soyweiser explained the vibes better than I did. Watts isn’t a bad guy. Maybe it’s just me. To me, it feels less Hellraiser and more Human Centipede i.e. here’s a lurid idea that would be tremendously awful in reality, now buckle up and let’s see how it goes to an uncomfortable extent. That’s probably just a matter of taste, though.
Unfortunately, the kind of people who read these books don’t get that, because media literacy is dead. Everyone I’ve heard from (online) seems to think that it is saying big deep things that should be taken seriously. It surfaces in discussions about whether or not ChatGPT is “alive” and how it might be alive in a way different from us. Eric Schmidt’s recent insane ramblings about LLMs being an “alien intelligence,” which don’t call Blindsight out directly, certainly resonate the same way.
Maybe I’m being unfair, but it all just goes right up my back.
It might be just the all but placeholder characters that give it a b-movie vibe. I’d say it’s a book that’s both dumber and smarter than people give it credit for, but even the half-baked stuff gets you thinking. Especially the self-model stuff, and how problematic it can be to even discuss the concept in depth in languages that have the concept of a subject so deeply baked in.
I thought that at worst one could bounce off to the actual relevant literature, like Thomas Metzinger’s pioneering, seminal and terribly written thesis, or Sacks’s The Man Who Mistook His Wife for a Hat.
Blindsight being referenced to justify LLM hype is news to me.
I, too, have done the “all communication is manipulative”, but in the same way as one would do a bar trick:
all communication is manipulative, for any words I say/write that you perceive instantly manipulate (as in the physical manner / modifying state) your thoughts, and this is done so without you requesting I do so
it’s a handy stunt with which to drive an argument about a few parts of communication, rhetoric, etc. because it gives a kinda good handle on some meta without getting too deep into things
(although there was one of my friends who really, really hated the framing)
Explaining in detail is kind of a huge end-of-book spoiler, but “All communication is manipulative” leaves out a lot of context and personally I wouldn’t consider how it’s handled a mark against Blindsight.
At work, I’ve been looking through Microsoft licenses. Not the funniest thing to do, but that’s why it’s called work.
The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.
The licenses with Office, Teams and other things my users actually use are not only confusing in how they are bundled, they have been increasing in price. So I have been looking through and testing which cheaper licenses we can switch to without any difference for the users.
Having put in quite some time with it, we today crunched the numbers and realised that compared to last year we will save… (drumroll)… Approximately nothing!
But if we hadn’t done all this, the costs would have increased by about 50%.
We are just a small corporation, maybe big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price gouging corporate customers on the traditional products.
There’s got to be some kind of licensing clarity that can be actually legislated. This is just straight-up price gouging through obscurantism.
now seeing EAs being deeply concerned about RFK running health during an H5N1 outbreak
dust specks vs leopards
The way many of the popular rat blogs started to endorse Harris in the last second before the US election felt a lot like an attempt at plausible deniability.
Sure, we’ve been laying the groundwork for this for a decade, but we wanted someone from our cult of personality to undermine democracy and replace it with explicit billionaire rule, not someone with his own cult of personality.
Anyone here read “World War Z”? There’s a section there about how the health authorities in basically all countries supress and deny the incipient zombie outbreak. I think about that a lot nowadays.
Anyway the COVID response, while ultimately better than the worst case scenario (Spanish Flu 2.0), has made me really unconvinced we will do anything about climate change. We had a clear danger of death for millions of people, and the news was dominated by skeptics. Maybe if it had targeted kids instead of the very old it would have been different.
It’s not just systemic media head-up-the-assery, there’s also the whole thing about oil companies and petrostates bankrolling climate denialism since the 70s.
When I run into “Climate change is a conspiracy” I do the wide-eyed look of recognition and go “Yeah I know! Have you heard about the Exxon files?” and lead them down that rabbit hole. If they want to think in terms of conspiracies, at least use an actual, factual conspiracy.
If H5N1 does turn into a full-blown outbreak, part of me expects it’ll rack up a heavier deathtoll than COVID.