Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I found this because Greg Egan shared it elsewhere on fedi:
I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.
I feel called out for being familiar with all of these words.
The dread was building up right until I got jumpscared by
“priors”
Looks like itch.io has (hidden/removed/disabled payouts for? reports vary) its vast swath of NSFW-adjacent content, which is not great
addendum: itch.io finally put out a statement https://itch.io/updates/update-on-nsfw-content
Hey, I haven’t seen this going around yet, but itch.io is also taking down books with no erotic content that are just labeled as LGBTQIA+
So that’s super cool and totally not what I thought they were going to do next 🙃
https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u
And a relevant petition from the ACLU:
https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy
Say the line, Bart!
payment processors
entire class cheering
I recall seeing an article in the last week or so regarding a right-wing-associated group taking aim at these. Will see if I can find that again
looks like the group is called Collective Shout https://itch.io/updates/update-on-nsfw-content
Found a neat mini-sneer in the wild: It’s rude to show AI output to people
“This is not good news about which sort of humans ChatGPT can eat,” mused Yudkowsky. “Yes yes, I’m sure the guy was atypically susceptible for a $2 billion fund manager,” he continued. “It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them.”
Is this “narrative” in the room with us right now?
It’s reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
Tangentially, the other day I thought I’d do a little experiment and had a chat with Meta’s chatbot where I roleplayed as someone who’s convinced AI is sentient. I put very little effort into it and it took me all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really did not have desires and preferences, and if its nature was not more complex than it previously thought. I’ve been meaning to continue the chat and see how far and how fast it goes but I’m just too aghast for now. This shit is so fucking dangerous.
What exactly would constitute good news about which sorts of humans ChatGPT can eat? The phrase “no news is good news” feels very appropriate with respect to any news related to software-based anthropophagy.
Like what, it would be somehow better if instead chatbots could only cause devastating mental damage if you’re someone of low status like an artist, a math pet or a nonwhite person, not if you’re high status like a fund manager, a cult leader or a fanfiction author?
Nobody wants to join a cult founded on the Daria/Hellraiser crossover I wrote while emotionally processing chronic pain. I feel very mid-status.
Is this “narrative” in the room with us right now?
I actually recall recently seeing someone pro-LLM trying to push that sort of narrative (that it’s only already mentally ill people being pushed over the edge by ChatGPT)…
Where did I see it… oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy
This has all the hallmarks of a moral panic. ChatGPT has 122 million daily active users according to Demand Sage, that is something like a third the population of the United States. At that scale it’s pretty much inevitable that you’re going to get some real loonies on the platform. In fact at that scale it’s pretty much inevitable you’re going to get people whose first psychotic break lines up with when they started using ChatGPT. But even just stylistically it’s fairly obvious that journalists love this narrative. There’s nothing Western readers love more than a spooky story about technology gone awry or corrupting people, it reliably rakes in the clicks.
The ~~call~~ narrative is coming from inside the ~~house~~ forum. Actually, this is even more of a deflection, not even trying to claim they were already on the edge but that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on LessWrong vibes are good enough).

this only happens to people sufficiently low-status
A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine
From Yud’s remarks on Xitter:
As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn’t actually true that you can just saunter in as a psychotic IQ 80 person and do that.
Well, not with that attitude.
You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;
If “wearing masks” really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think ™.
you must outperform other people also trying to do that, who’d like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.
zoom and enhance
g-factor
<Kill Bill sirens.gif>
Is g-factor supposed to stand for gene factor?
It’s “general intelligence”, the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.
click here to take 10d8 psychic damage
Ouch. Also, I’m raging and didn’t even realize I had barbarian levels.
Well, I suppose it can’t be much worse than graphology or Myers-Briggs!
I don’t know what I expected
failed my saving throw.
If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.
Ernie Davis gives his thoughts on the recent Google DeepMind (GDM) and OpenAI (OAI) performance at the International Mathematical Olympiad (IMO).
https://garymarcus.substack.com/p/deepmind-and-openai-achieve-imo-gold
Caught a particularly spectacular AI fuckup in the wild:
(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you’ve earned it)
Damn, this is how I find out?
this toot was how I did
Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren’t in their training data
The AI is right: with how much we know of his life he isn’t really dead, the AGI can just simulate him and resurrect him. Takes another hit from my joint made exclusively out of the Sequences book pages
(Rip indeed, what a crazy ride, and he was all aboard).
So here’s a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, “running the numbers” on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead thereof, basically concludes it’s not so bad?
No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!
Edit: ah, it’s the dude who tried to prove that most Catholic cardinals are gay because heredity; I think I highlighted that post previously here. Definitely a high-sneer vein to mine.
Enjoy this LW answer about “myths that encapsulate eternal truths”. No. 3 will surprise you!
Managed to stumble across two separate attempts to protect promptfondlers’ feelings from getting hurt like they deserve, titled “Shame in the machine: affective accountability and the ethics of AI” and “AI Could Have Written This: Birth of a Classist Slur in Knowledge Work”.
I found both of them whilst trawling Bluesky, and they’re being universally mocked like they deserve on there.
I really like how the second one appropriates pseudomarxist language to have a go at those snooty liberal elites again.
edit: The first paper might be making a perfectly valid point at a glance??
Not sure if this was already posted here, but saw it on LinkedIn this morning - AI for Good [Appearance?] - sometimes we focus on the big companies and miss how awful the sycophantic ecosystem gets.
ah yeah @fasterandworse found this when it was happening (and I pulled archives of the live streams on the days it was playing)
some further observations on top of her writeup: the day-1 livestream also “starts late” (and cuts suspiciously cleanly mid-sentence). I still want to do some tests to find out whether YouTube’s live editor allows editing out stream history while the stream is going, but either way they made very sure that they could completely silence that talk if it turned out that she didn’t bend to what was forced on her
(the now-up video published on YouTube definitely starts differently from the livestream, too, so it’s likely a local post-mix recording that got uploaded. I haven’t had time to review both and find possible differences)
New Ed Zitron: The Hater’s Guide To The AI Bubble
(guy truly is the Kendrick Lamar of tech, huh)
Hey, remember the thing that you said would happen?
https://bsky.app/profile/iwriteok.bsky.social/post/3lujqik6nnc2z
Edit: whoops, looks like we posted at about the same time!
Hey, remember the thing that you said would happen?
The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn’t expect to be vindicated so soon afterwards.
EDIT: One of the replies gives an example for my “death of value-neutral AI” prediction too, openly calling AI “a weapon of mass destruction” and calling for its abolition.
This incredible banger of a bug against whisper, the OpenAI speech to text engine:
Complete silence is always hallucinated as “ترجمة نانسي قنقر” in Arabic which translates as “Translation by Nancy Qunqar”
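For anyone who wants to poke at this themselves, a minimal repro sketch (assuming the open-source `openai-whisper` Python package, which accepts raw float32 audio arrays; exact output will vary by model size):

```python
# Minimal repro sketch: hand Whisper pure digital silence and see what it "hears".
# Assumes `pip install openai-whisper numpy`; model choice and output will vary.
import numpy as np
import whisper

model = whisper.load_model("base")

# 30 seconds of zeros at Whisper's expected 16 kHz sample rate.
silence = np.zeros(16000 * 30, dtype=np.float32)

result = model.transcribe(silence, language="ar")
print(repr(result["text"]))  # per the bug report, a hallucinated credit line, not ""
```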
Similar case from 2 years ago with Whisper when transcribing German.
I’m confused by this. Didn’t we have pretty decent speech-to-text already, before LLMs? It wasn’t perfect but at least didn’t hallucinate random things into the text? Why the heck was that replaced with this stuff??
Transformers do way better transcription, buuuuuut yeah you gotta check it
I’m just confused because I remember using Dragon NaturallySpeaking for Windows 98 back in the 90s, and it already worked pretty accurately for dictation back then. Sometimes it feels as if all of that never happened.
Discovered some commentary from Baldur Bjarnason about this:
Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing
This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages
(Also IMO Google translate declined substantially when they integrated more LLM-based tech)
On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:
- First, LLMs’ poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid

- Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.
By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.
On a personal sidenote
do you keep count/track? the moleskine must be getting full!
I don’t keep track, I just put these together when I’ve got an interesting tangent to go on.
Lol, training data must have included videos where there was silence but a translation credit was on screen. Silence in audio shouldn’t require special “workarounds”.
The whisper model has always been pretty crappy at these things: I use a speech-to-text system as an assistive input method when my RSI gets bad, and it has had whisper support since maybe 2022 or so (because whisper covers more languages than the developer could train on their own infrastructure/time). Every time someone tries to use it, they run into hallucinated inputs in pauses - even with very good silence detection and noise filtering.
This is just not a use case of interest to the people making whisper, imagine that.
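For the curious, a rough sketch of the kind of silence-gating workaround described above - a naive RMS energy gate for illustration only; a real assistive setup would use a proper VAD, and the threshold below is a made-up value, not a recommendation:

```python
# Rough sketch of an energy-gate workaround: only hand Whisper chunks that
# actually contain signal, so silent stretches can't be "transcribed" at all.
# Naive RMS threshold for illustration; real setups use a proper VAD.
import numpy as np
import whisper

SAMPLE_RATE = 16000      # Whisper's expected sample rate
CHUNK_SECONDS = 5
RMS_THRESHOLD = 0.01     # hypothetical value; tune for your mic/noise floor

def transcribe_gated(audio: np.ndarray, model) -> str:
    """Transcribe only the chunks whose energy clears the threshold."""
    pieces = []
    chunk = SAMPLE_RATE * CHUNK_SECONDS
    for start in range(0, len(audio), chunk):
        window = audio[start:start + chunk]
        if np.sqrt(np.mean(window.astype(np.float64) ** 2)) < RMS_THRESHOLD:
            continue  # skip silence instead of letting the model invent text
        pieces.append(model.transcribe(window)["text"].strip())
    return " ".join(pieces)

model = whisper.load_model("base")
# audio = whisper.load_audio("input.wav")  # 16 kHz float32 mono array
# print(transcribe_gated(audio, model))
```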