Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

  • istewart@awful.systems · 5 days ago

    The whole culture of writing “system prompts” seems like an utter cargo cult to me. Like if the ST: Voyager episode “Tuvix” was instead about Lt. Barclay and Picard accidentally getting combined in the transporter, and the resulting sadboy Barcard spent the rest of his existence neurotically shouting his intricately detailed demands at the holodeck in an authoritative British tone.

    If inference is all about taking derivatives in a vector space, surely there should be some marginally more deterministic method for constraining those vectors that could be readily proceduralized, instead of apparent subject-matter experts being reduced to wheedling with an imaginary friend. But I have been repeatedly assured by sane, sober experts that it just simply is not so.

    • corbin@awful.systems · 2 days ago

      I don’t have any good lay literature, but get ready for “steering vectors” this year. It seems like two or three different research groups (depending on whether I count as a research group) independently discovered them over the past two years, and they are very effective at guardrailing because they can e.g. make slurs unutterable without compromising reasoning. If you’re willing to read whitepapers, try Dunefsky & Cohan (2024), which builds that example into a complete workflow, or Konen et al. (2024), which considers steering as an instance of style transfer.
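
      (Not from either of those papers, but to make the mechanics concrete: below is a minimal sketch of the contrastive flavour of activation steering, assuming a small HuggingFace GPT-2 model so it actually runs. The model name, layer index, prompt pairs and strength are placeholder choices for illustration, not anything the cited papers use.)

      ```python
      # Sketch of contrastive activation steering: build a direction from the
      # difference of mean activations on two prompt sets, then add it to the
      # residual stream at one layer during generation. All choices here
      # (model, layer, prompts, strength) are illustrative placeholders.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      LAYER = 6  # which transformer block to steer (arbitrary for this sketch)

      def mean_activation(prompts):
          """Average last-token residual-stream activation at LAYER over prompts."""
          acts = []
          def grab(_module, _inputs, output):
              acts.append(output[0][:, -1, :].detach())  # output[0] = hidden states
          handle = model.transformer.h[LAYER].register_forward_hook(grab)
          with torch.no_grad():
              for p in prompts:
                  model(**tok(p, return_tensors="pt"))
          handle.remove()
          return torch.cat(acts).mean(dim=0)

      # Contrastive prompt sets define the direction we push toward / away from.
      positive = ["Please respond kindly.", "Thank you so much for your help."]
      negative = ["Shut up.", "That was a stupid question."]
      steering_vector = mean_activation(positive) - mean_activation(negative)

      def generate_steered(prompt, strength=4.0):
          """Generate while adding the steering vector at LAYER on every forward pass."""
          def steer(_module, _inputs, output):
              return (output[0] + strength * steering_vector,) + output[1:]
          handle = model.transformer.h[LAYER].register_forward_hook(steer)
          try:
              ids = tok(prompt, return_tensors="pt")
              out = model.generate(**ids, max_new_tokens=40, do_sample=False)
          finally:
              handle.remove()
          return tok.decode(out[0], skip_special_tokens=True)

      print(generate_steered("The customer asked for a refund and I told them"))
      ```

      The point is just that the constraint lives in a vector added to the hidden states rather than in a prompt the model is asked to obey.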

      I do wonder, in the engineering-disaster-podcast sense, exactly what went wrong at OpenAI, because they aren’t part of this line of research. HuggingFace is up-to-date on the state of the art; they have a GH repo and a video tutorial on how to steer LLaMA. Meanwhile, if you’ll let me be Bayesian for a moment, my current estimate is that OpenAI will not add steering vectors to their products this year; they’re already doing something like it internally, but the customer-facing version will not be ready until 2027. They just aren’t keeping up with research!

    • mirrorwitch@awful.systems · 3 days ago (edited)

      When I first learned that you could program a chatbot merely by giving it instructions in English sentences, as if it were a human being, I admit I was impressed. I’m a linguist; natural language processing is really hard. There was a certain crossing of levels in the idea that you could tell it something at the chatbot level, e.g. “and you will never delete files outside this directory”, and this “system prompt” would actually shape the behaviour of the chatbot. I don’t have much interest in programming anymore, but I wondered how this crossing of levels was implemented.

      The answer, of course, is that it’s not. Programming a chatbot by talking to it doesn’t actually work.
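
      (To make the “not actually implemented” part concrete, here is a sketch of what a system prompt reduces to, assuming a HuggingFace chat model; the model name and messages are placeholders. The chat template just flattens the “rule” into the same token stream as everything else, with no separate layer that enforces it.)

      ```python
      # Sketch: a "system prompt" is just text prepended to the conversation.
      # Model name and messages are illustrative placeholders.
      from transformers import AutoTokenizer

      tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

      messages = [
          {"role": "system", "content": "You will never delete files outside this directory."},
          {"role": "user", "content": "Please clean up /tmp for me."},
      ]

      # apply_chat_template interleaves role markers and message text into one string;
      # the "rule" above becomes ordinary tokens with nothing behind them but statistics.
      print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
      ```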