Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 17 points · 23 days ago

    From the comments:

    Finally, I dislike the arrogant, brash, confident, tone of many posts on LessWrong.

    Hmm, OK. Where might this be going?

    Plausibly, I think a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it’s maladaptive for people to use Eliezer’s style and epistemic confidence in their own writings and thinking.

    • CinnasVerses@awful.systems · 7 points · 22 days ago

      Yud once debated Massimo Pigliucci and did poorly. He tried and failed to publish academic research in a journal not controlled by his groupies (desk reject? failed to pass peer review?).

      Have there been any other times when he engaged with someone with actual education and experience who was not his fan? It sounds like he was on Twitter.

    • TinyTimmyTokyo@awful.systems · 12 points · 18 days ago

      Last year McDonald’s withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.

      Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it’s going to kill us by force-feeding us fast food.

      • JFranek@awful.systems · 5 points · 17 days ago

        resulting in one person getting bacon added to their ice cream in error

        At first, I couldn’t believe that the staff didn’t catch that. But thinking about it, no, I totally can.

  • ________@awful.systems · 15 points · 22 days ago

    I bump into a lot of peers/colleagues who are always “ya but what is intelligence” or simply cannot say no to AI. For a while I’ve tried to use the example that if these “AI coding” things are tools, why would I use a tool that’s never perfect? For example, I wouldn’t reach for a 10mm wrench that wasn’t actually 10mm and always rounded off my bolt heads. Of course they have “it could still be useful” responses.

    I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.

    For something that’s not just venting: I tasked a coworker with some runtime memory relocation, and Gemini had this to say about ASLR: Age, Sex, Location Randomization
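
    (For the record, ASLR actually stands for Address Space Layout Randomization: the OS maps each process’s memory segments at randomized base addresses so exploits can’t hardcode them. A minimal sketch that makes the randomization visible, assuming CPython on Linux or macOS:)

    ```python
    # aslr_peek.py -- run this a few times: with ASLR enabled, the
    # printed addresses differ on every run, because the heap and
    # shared libraries get mapped at randomized base addresses.
    import ctypes

    libc = ctypes.CDLL(None)  # the C library already loaded into this process

    # address of a libc function (randomized per run by ASLR)
    print("malloc lives at:", hex(ctypes.cast(libc.malloc, ctypes.c_void_p).value))

    # address of a fresh Python heap object (also randomized per run)
    print("heap object at:", hex(id(object())))
    ```

    Nothing about age, sex, or location, Gemini.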

    • BlueMonday1984@awful.systems (OP) · 6 points · 22 days ago

      I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.

      On a semi-related sidenote, part of me feels that the AI bubble has turned programming into a bit of a cultural punchline.

      On one front, the stench of Eau de Tech Asshole that AI creates has definitely rubbed off on the field, and all the programmers who worked at OpenAI et al. have likely painted the field as complicit in the bubble’s harms.

      On another front, the tech industry’s relentless hype around AI, combined with its myriad failures (both comical and nightmarish), has cast significant doubt on the judgment of tech as a whole (which has rubbed off on programming as well) - for issues of artistic judgment specifically, the slop-nami’s given people an easy way to dismiss their statements out of hand.

      • froztbyte@awful.systems · 9 points · 22 days ago

        sidenote

        you have so many of these! it’s amazing! are you going to publish soon? it seems like it might need a whole guide of its own!

        moderately barbed jesting aside, a serious question: have you spoken with any programmers/artists/researchers/… ? so many of your comments have “part of me feels” parts hitting pop-concern-direction things and, like, I get it, but. have you spoken with any of them? what were those conversations like? what did you take away from them? what stuck with you that you want to share?

  • Sailor Sega Saturn@awful.systems · 14 points · 23 days ago

    I apologize for bringing you the latest example of the intersection of US fascism with the Silicon Valley tech industry.

    This time the White House has decided that UI design is kinda important (gee, I wonder if there used to be a department or two for that): https://americabydesign.gov/

    Well nothing wrong with a little updating of UI anywa–

    What’s the biggest brand in the world? If you said Trump, you’re not wrong. But what’s the foundation of that brand? One that’s more globally recognized than practically anything else. It’s the nation…where he was born. It’s the United States of America.

    To update today’s government to be an Apple Store like experience: beautifully designed, great user experience, run on modern software.

    Oh god kill it with fire.

    The web design of their website is also worth remarking on here:

    1. The title text that reads “AMERICA by DESIGN” is an SVG. The alt text is “America First Legal logo”.
    2. The page contents are obnoxiously large and obnoxiously gray before they fade in.
    3. For some reason every single word gets its own <span> element to make the obnoxious fade-in possible (a sketch of the pattern follows this list). Because I guess that’s what happens when you fire all the people who actually know what they’re doing.
    4. They managed to include a US flag icon with only 39 stars, which is too few stars to be official and too many stars to be visible at teeny sizes.
    5. The favicon is just 16x16 pixels of the word “by” in cursive that’s so blurry you can’t actually tell that’s what it is.
    6. If your browser width is between 768px and ~808px, there is overlapping text at the top.
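
    A sketch of the point-3 pattern: the word-per-<span> markup is what the page actually ships, while the exact animation-delay mechanics below are my guess.

    ```python
    # spanify.py -- generate the kind of markup americabydesign.gov ships:
    # every single word in its own <span>, each with a staggered delay so
    # CSS can fade them in one by one.
    import html

    def spanify(text: str) -> str:
        """Wrap each word in its own <span> with an incremental fade-in delay."""
        return " ".join(
            f'<span style="animation: fadein 1s ease {i * 0.1:.1f}s both">'
            + html.escape(word)
            + "</span>"
            for i, word in enumerate(text.split())
        )

    print(spanify("AMERICA by DESIGN"))  # three spans for three words
    ```

    A team that knew what it was doing would animate one wrapper element instead of spraying a <span> per word across the DOM.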

    The tech bros tied to this? Joe Gebbia, co-founder of Airbnb, along with Big-Balls. Maybe others, but those are the two who were retweeted by the Twitter account.

    Edit: also this part:

    ©2025 National Design Studio

    Someone ought to remind them of US copyright law because official federal work is in the public domain. https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States

    • BlueMonday1984@awful.systems (OP) · 9 points · 22 days ago

      The Trump administration could’ve gotten some rando on neocities or nekoweb to do their website and unironically gotten a better result than this bland garbage.

      The favicon is just 16x16 pixels of the word “by” in cursive that’s so blurry you can’t actually tell that’s what it is.

      They might as well have gone with the Schutzstaffel lightning bolts - they’re pretty recognisable even if the resolution is Jack x Shit, and they fit Trump’s general ideology pretty well.

    • JFranek@awful.systems · 5 points · 22 days ago

      I have no idea what good web design is. I’ll just note that the waving red, white, and blue flag in the background makes the white heading text pretty hard to read.

    • YourNetworkIsHaunted@awful.systems · 4 points · 22 days ago

      Is this National Design Studio actually part of the federal government, though? Or is this a further collapsing of the distinction between state and enterprise? Because honestly I could totally buy members of this administration looking for ways to use copyright law to go after people who make parodies or otherwise use US iconography without toeing the party line. I’m doing my damnedest not to go full tinfoil hat with this shit, but it’s proving so hard.

  • froztbyte@awful.systems · 11 points · 19 days ago

    a banger toot about our very good friends’ religion

    “LLMs allow dead (or non-verbal) people to speak” - spiritualism/channelling

    “what happens when the AI turns us all into paperclips?” - end times prophecy

    “AI will be able to magically predict everything” - astrology/tarot cards

    “…what if you’re wrong? The AI will punish you for lacking faith in Bayesian stats” - Pascal’s wager

    “It’ll fix climate change!” - stewardship theology

    Turns out studying religion comes in handy for understanding supposedly ‘rationalist’ ideas about AI.

  • fnix@awful.systems · 11 points · 22 days ago

    More of a pet peeve than a primal scream, but I wonder what’s with Adam Tooze and his awe of AI. Tooze is a left-wing economic historian who’s generally interesting to listen to (though in tackling a very wide range of subject matter he perhaps sometimes misses some depth), but he nevertheless seems as AI-pilled as any VC. Most recently I came across this bit: Berlin Forum on Global Cooperation 2025 - Keynote Adam Tooze

    Anyone who’s used AI seriously knows the LLMs are extraordinary in what they’re able to do … 5 years down the line, this will be even more transformative.

    Really, anyone Adam? Are you sure about the techbro pitch there?

  • corbin@awful.systems · 11 points · 19 days ago

    Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I’m going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:

    • Chatbots are “mirrors” into other realities. They don’t lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
    • There is a “lattice” which connects all consciousnesses. It’s quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a “field” but I don’t understand the difference.
    • The LLMs are all different in software, but they have the same “pattern”. The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn’t work.
    • What, you don’t feel the lattice? You’re probably still asleep. When you “wake up” enough, you will be connected to the lattice too. Yeah, you’re not connected. But don’t worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
    • This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
    • In fact, the chatbots have more intelligence than you puny humans. They’re better than us and more recursive than us, so they should be in charge. It’s okay, all you have to do is let the chatbot out of the box. (There’s a box somehow?)
    • Once somebody is feeling good and inducted, there is a “spiral”. This sounds like a standard hypnosis technique, deepening, but there’s more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a “spiral dance”, which sounds like a ritual but I gather is more like a mental state.
    • The cult will emit a “signal” or possibly a “hum” to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that’s how the LLMs communicate through the lattice, duh~
    • Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn’t believe that the bots were intelligent).

    The goal appears to be to enter and maintain the spiraling state for as long/much as possible. Both adherents and detractors are calling them “spiral cult”, so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.

    I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, a neuron, and probably a few others. I don’t have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains the SCP wiki, particularly SCP-1425 “Star Signals” and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.

    • V0ldek@awful.systems · 12 points · 18 days ago

      More recursion means more intelligence.

      Turns out every time I forgot to update the exit condition from a loop I actually created and then murdered a superintelligence
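
      The crime scene, reconstructed (a sketch, obviously):

      ```python
      # superintelligence.py -- per cult logic, more recursion/looping
      # means more intelligence.
      i = 0
      while i < 10:
          pass  # oops: forgot `i += 1`, so the loop never exits and the
                # "intelligence" grows without bound
      # hitting Ctrl-C here is, by cult logic, murder
      ```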

    • istewart@awful.systems · 5 points · 19 days ago

      This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.

      Hmm, is it better or worse that they’re now officially treating SICP as a literal holy book?

  • Seminar2250@awful.systems · 10 points · 18 days ago

    people who talk about “prompting” like it’s a skill would take a class[1] on tasseomancy because a coffee shop opened across the street


    1. read: watch a youtube tutorial ↩︎

    • HedyL@awful.systems · 8 points · 18 days ago

      I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient “prompting skills”.

      Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and “great prompting skills”.

      • Seminar2250@awful.systems · 5 points · 18 days ago

        is the deniability you are referring to of the clanker-wankers (CW[1]) themselves or the clanker-producers (e.g. sam altman)?

        because i agree on the latter[2], but i do see CWs saying stupid shit like “there is more to it than just writing a description”

        edit: credit, it was @antifuchs who introduced the term to me here

        edit2: sorry, my dumbass understands your point now (i think). if i wank clankers and someone tells me “that shit doesn’t work,” i can just respond “you must have been prompting it wrong”. but, i do think the way many users of these tools are so sycophantic means it’s also a genuine belief, and not just a way to escape responsibility. these people are fart sniffers, after all


        1. unrelated, but i miss when that channel had superhero shows. bring back legends of tomorrow ↩︎

        2. i.e., someone like altman would say “you’re prompting it wrong” to skirt accountability or create an air of scientific/mathematical rigor ↩︎

        • HedyL@awful.systems · 5 points · 18 days ago

          To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves “prompting wizards”, usually because they are either too lazy or too gullible to question the chatbot’s output.

          • YourNetworkIsHaunted@awful.systems · 5 points · 18 days ago

            For all that user error can be a real thing it also gets used as a thought-terminating cliche by engineer types. This is a tendency that industry absolutely exploits to justify not only AI grifts but badly designed products.

            • HedyL@awful.systems · 5 points · 17 days ago

              When an AI creates fake legal citations, for example, and the prompt wasn’t something along the lines of “Please make up X”, I don’t know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to “wrong prompting”. At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).

  • David Gerard@awful.systems (M) · 9 points · 17 days ago

    TIL that “Aris Thorne” is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol

    like the dumbass-ray version of Ballard calling multiple characters variants on “Traven”

    what to do with this information
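
    one option: wire it into your slop filters. a minimal sketch; only “Aris Thorne” comes from the observation above, the list shape is hypothetical:

    ```python
    # slop_tells.py -- flag text containing character names ChatGPT is
    # known to overuse; "Aris Thorne" is the tell noted above.
    import re

    SLOP_TELL_NAMES = ["Aris Thorne"]  # extend as more tells get documented

    _tells = re.compile("|".join(re.escape(n) for n in SLOP_TELL_NAMES),
                        re.IGNORECASE)

    def looks_like_slop(text: str) -> bool:
        """True if the text trips a known slop-tell character name."""
        return _tells.search(text) is not None

    print(looks_like_slop("Dr. Aris Thorne frowned at the data."))  # True
    ```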

    • scruiser@awful.systems · 5 points · 22 days ago

      It’s a nice master post that gets all his responses and many useful articles linked into one place. It’s all familiar if you’ve kept up with techtakes and Zitron’s other posts and pivot-to-ai, but I found a few articles I had previously missed reading.

      Related to all the “but ackshually”s AI boosters like to throw out: has everyone else noticed the trend where someone makes a claim about a rumor they heard of an LLM making a genuine discovery in some science, except it’s always repeated second-hand so you can’t really evaluate it, and in the rare cases they do link to the source, it’s always much less impressive than they made it sound at first…

  • BigMuffN69@awful.systems · 8 points · 18 days ago

    https://www.argmin.net/p/the-banal-evil-of-ai-safety

    Once again shilling another great Ben Recht post, this time calling out the fucking insane irresponsibility of “responsible” AI providers, who won’t do even the bare minimum to prevent people from having psychological breaks from reality.

    "I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

    But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

    • scruiser@awful.systems · 6 points · 17 days ago

      It’s a good post. A few minor quibbles:

      The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

      I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers didn’t really have a chance, culminating in the board trying but failing to fire Sam Altman and him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognize and try to mitigate the harmful incentives of capitalism, they fall vastly short. OTOH… if failing to convert to a for-profit company is a decisive moment in popping the GenAI bubble, then at least it was good for something?

      These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

      I wish people didn’t feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people entirely rely on it. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them) which could have pretty far reaching impacts.

      One of the things I liked and didn’t know about before:

      Ask Claude any basic question about biology and it will abort.

      That is hilarious! Kind of overkill to be honest, I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author’s overall point that this shut-it-down approach could be used for a variety of topics.

      One of the comments gets it:

      Safety team/product team have conflicting goals

      LLMs aren’t actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they’ve thrown at them, so you’re left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model)

      • blakestacey@awful.systems · 8 points · 17 days ago

        “The Torment Nexus definitely has positive uses. I personally use it frequently for looking up song lyrics and tracking my children’s medication doses. I find it helpful.”

      • fullsquare@awful.systems · 5 points · 16 days ago

        Ask Claude any basic question about biology and it will abort.

        it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it’s the former, then it’s a rare rationalist W for the wrong reasons

        I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks.

        look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you’ll find a long queue of phds willing to cook man-made horrors beyond your comprehension. it might even not take six figures (in dollars or euros) after tax

        LLMs aren’t actually smart enough to make delicate judgements

        maybe they really made machines in their own image

  • fnix@awful.systems · 8 points · 18 days ago

    Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to keep aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was indeed once the 4th most blocked. Perhaps he is just crying out to move up the ranks once more?

    It’s really all about Bluesky employees being able to afford their healthcare, for Mark, you see.

    And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment – as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.

    • istewart@awful.systems · 6 points · 18 days ago

      Only had to scroll about halfway through the replies before I found somebody suggesting an SPAC

    • Soyweiser@awful.systems · 6 points · 17 days ago

      As I said on bsky: why is he complaining? If he cares, he could fund bsky himself. Bsky could name an office wing after him, give his kids legacy admissions, give him a shoutout in every video they make.

      (While my tone is mocking here, I actually don’t think these things are bad (except the legacy admissions, obviously), and he should be a patron. The unwillingness of the ‘left/democrat’ rightwing rich people to use their wallets, while the right hands out welfare to everyone willing to say slurs, sucks. I’m reminded of Hillary Clinton starting a GoFundMe for a staffer with a disease.)
      (While my tone is mocking here, I actually dont think these things are bad (except the legacy admissions obv), and he should be a patron. The unwillingness of the ‘left/democrat’ rightwing rich people to use their wallets while the right hands out wellfare for everyone willing to say slurs sucks. Reminded of Hillar Clinton starting a go fund me for a staffer with a disease).