Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • zogwarg@awful.systems · 23 days ago

      On this topic, I’ve been seeing more 503s lately. Are the servers running into issues, or am I getting caught in anti-scraper cross-fire?

      • self@awful.systems (OP) · 23 days ago

        nope, you’ve been getting caught in the fallout from us not having this yet. the scrapers have been so intense they’ve been crashing the instance repeatedly.

        • David Gerard@awful.systems (mod) · 23 days ago

          when you get this working i am totally copying this for rationalwiki

          i nearly installed caddy just to get iocaine

          • self@awful.systems (OP) · 23 days ago

            I saw that! fortunately once iocaine is configured it seems to just work, but it’s also very much software that kicks and screams the entire way there. in my case the problem wasn’t even nginx-related, I just typoed the config section for the request handler and it silently defaulted to the mode where it returns garbage for every incoming request.

            • bitofhope@awful.systems · 22 days ago

              Just a heads-up, I tried reading up on Iocaine and the project website is giving me the madlibs nonsense version on my phone’s browser, so I hope the version you’re planning to enable here isn’t quite as aggressive (the making.awful link is currently working for me).

              Between this and Cloudflare’s geolocation provider no longer saying my IPv6 address block is in Russia, I’m hopeful that my browsing experience might ever so slightly improve for a bit.

              • self@awful.systems (OP) · 21 days ago

                making is running the version of the configuration I intend to deploy, so if it works for you there it should (hopefully) work in prod too

  • Soyweiser@awful.systems · 23 days ago

    Not a scream, just a nice thing people might enjoy: somebody made a funny comic about what we’re all thinking about.

    Random screenshot which I found particularly funny (his rant checks out):

    Image description

    Two people talking to each other, one a bald, heavily bespectacled man in the distance, and the other a well-dressed, skull-faced man with a big mustache. The conversation goes as follows:

    “It could be the work of the French!”

    “Or the Dutch”

    “Could even be the British!”

    “Filthy pseudo-German apes, the Dutch!”

    “The Russ…”

    “Scum-of-the-earth, marsh-dwelling Dutch!”

        • saucerwizard@awful.systems · 23 days ago

          I was told repeatedly growing up that they like Canadians over there because of the whole liberation thing. Is this true?

          • Soyweiser@awful.systems · 22 days ago

            Yes, we do. A lot of Canadians gave their lives for our liberation. (And not just Canadians, which is why the Trump admin removing the sign about the Black Americans at the American WW2 burial ground here has not gone over well. The French also put up a heroic defense of Zeeland at the start of the war, as did the Brits and the Polish; the Poles got blamed for the failure of Market Garden for some stupid reasons, even though they jumped late, after being stalled by the weather, into an operation that was already going badly.)

    • istewart@awful.systems · 22 days ago

      you know, if those ASML folks in dutchland weren’t quite so busy what with their EUV lasers and all that, we might not be in quite this same pickle right now.

    • swlabr@awful.systems · 23 days ago (edited)

      Definitely been seeing the pattern of: “if you don’t like AI, you are being x-phobic” where “x” is a marginalised group that the person is using the name of as a cudgel. They probably never cared about this group before, but what’s important to this person is that they glaze AI over any sort of principle or ethics. Usually it’s ableist, as is basically any form of marginalisation/discrimination.

      E: read the link. Lmao that’s… not xenophobia. What a piece of shit

  • rook@awful.systems · 22 days ago

    Eurogamer has opinions about genai voices in games.

    Arc Raiders is set in a world where humanity has been driven underground by a race of hostile robots. The contradiction here between Arc Raiders’ themes and the manner of its creation is so glaring that it makes me want to scream. You made a game about the tragedy of humans being replaced by robots while replacing humans with robots, Embark!

    https://www.eurogamer.net/arc-raiders-review

    • self@awful.systems (OP) · 22 days ago

      thanks! I tried to link it in the usual way, but I think a bug might have blanked the url box before I hit post.

  • scruiser@awful.systems · 18 days ago

    A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can’t resist glazing him, even in the context of a blog post on not being too deferential:

    Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

    Another lesswronger pushes back on that and is highly upvoted (even among the doomers who think Eliezer is a genius, most still think he screwed up in inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

    The OP gets mad because this is off topic from what they wanted to talk about (they still don’t acknowledge the irony).

    A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

    And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

    No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least sneerclubbers are direct and come out and say what they mean on the rare occasions they have beef.)

    • ________@awful.systems · 19 days ago

      Gentoo is firmly against AI contributions as well. NetBSD calls AI code “tainted”, while FreeBSD hasn’t been as direct yet but isn’t accepting anything major.

      QEMU, while not an OS, has rejected AI slop too. Curl is also famously against AI-generated contributions. So we have some hope in the systems world with these few major pieces of software.

      • mirrorwitch@awful.systems · 18 days ago

        I’m actually tempted to move to NetBSD on those grounds alone, though I did notice their “AI” policy is

        Code generated by a large language model or similar technology, such as GitHub/Microsoft’s Copilot, OpenAI’s ChatGPT, or Facebook/Meta’s Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core. [emphasis mine]

        and I really don’t like the energy of that fine print clause, but still, better than what Debian is going with, and I always had a soft spot for NetBSD anyway…

        • rook@awful.systems · 18 days ago

          I generally read stuff like that netbsd policy as “please ask one of our ancient, grumpy, busy and impatient grognards, who hate people in general and you in particular, to say nice things about your code”.

          I guess you can only draw useful conclusions if anyone actually clears that particular obstacle.

    • flaviat@awful.systems · 18 days ago

      Linus: all those years of screaming at developers over subpar code quality, and yet he won’t turn that energy on literal slop.

  • BurgersMcSlopshot@awful.systems · 19 days ago

    One thing I’ve heard repeated about OpenAI is that “the engineers don’t even know how it works!” and I’m wondering what the rebuttal to that point is.

    While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I’ve heard this repeated at least twice (once on the Panic World pod, once on QAA).

    I would believe that it’s possible to build a system so complex and so poorly documented that it is, on its surface, incomprehensible, but the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.

    It seems like magical thinking to me, and a way of saying one or both of “we didn’t write shit down and therefore have no idea how the functionality works” and “we do not practically have a way to determine how a specific output was arrived at from any given prompt.” The first seems unlikely, in part or in whole, since the system has to be comprehensible enough for new features to get added, which means the engineers must grok things well enough to do that. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g. training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop).
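
    To illustrate that second point, here is a toy sketch (entirely hypothetical names and values, not any real API): what looks like a pure function of the prompt also depends on implicit inputs the caller never sees, so the same prompt need not reproduce the same output.

        import random

        # Hypothetical stand-ins, not any real API: the caller only ever supplies `prompt`,
        # but the result also depends on inputs that are never visible at call time.
        HIDDEN_SYSTEM_PROMPT = "You are a helpful assistant."  # implicit input: system context
        WEIGHTS_VERSION = "2024-11-snapshot"                   # implicit input: training data / weights

        def generate(prompt: str) -> str:
            seed = random.randrange(2**32)                     # implicit input: sampling randomness
            fingerprint = hash((HIDDEN_SYSTEM_PROMPT, WEIGHTS_VERSION, seed)) % 10_000
            # Toy "decoder": the output mixes every implicit input with the prompt.
            return f"[{fingerprint:04d}] reply to: {prompt}"

        print(generate("make me 2 seconds of Coke ad"))
        print(generate("make me 2 seconds of Coke ad"))  # same prompt, different output

    Explaining a specific output after the fact would mean recovering all of those hidden inputs, which is exactly the part that isn’t practical.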

    Anybody else have thoughts on countering the magic “the engineers don’t know how it works!”?

    • YourNetworkIsHaunted@awful.systems · 18 days ago

      Not gonna lie, I didn’t entirely get it either until someone pointed me at a relevant xkcd that I had missed.

      Also I was somewhat disappointed in the QAA team’s credulity towards the AI hype, but their latest episode was an interview with the writer of that “AGI as conspiracy theory” piece from last(?) week and seemed much more grounded.

      • BurgersMcSlopshot@awful.systems · 18 days ago

        The mention on QAA came during that episode, and I think there it was more illustrative of how a person can progress to conspiratorial thinking about AI. The mention on Panic World was from an interview with Ed Zitron’s biggest fan, Casey Newton, if I recall correctly.

    • sc_griffith@awful.systems · 19 days ago (edited)

      well, I can’t counter it because I don’t think they do know how it works. the theory is shallow and the outputs of, say, an LLM are of remarkably high quality and in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding is a huge problem for them because it means “improvements” can only come about by throwing money and conventional engineering at their systems. this is what I’ve heard from people for about ten years.

      to me that also means it isn’t something that needs to be countered. it’s something the context of which needs to be explained. it’s bad for the ai industry that they don’t know what they’re doing

      • jaschop@awful.systems · 19 days ago

        I think I heard a good analogy for this in Well There’s Your Problem #164.

        One topic of the episode was how people didn’t really understand how boilers worked from a thermal-mechanics point of view. Still, steam power was widely used (e.g. on riverboats), but much of the engineering was guesswork or based on patently false assumptions, with sometimes disastrous effects.

        • sc_griffith@awful.systems · 18 days ago

          another analogy might be an ancient builder who gets really good at building pyramids, and by pouring enormous amounts of money and resources into a project manages to build a stunningly large pyramid. “im now going to build something as tall as what will be called the empire state building,” he says.

          problem: he has no idea how to do this. clearly some new building concepts are needed. but maybe he can figure those out. in the meantime he’s going to continue with this pyramid design but make them even bigger and bigger, even as the amount of stone required and the cost scales quadratically, and just say he’s working up to the reallyyyyy big building…

    • V0ldek@awful.systems · 18 days ago

      I mean, if you’ve ever toyed around with neural networks or similar ML models, you know it’s basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.

      There’s a whole branch of ML about explainable or white-box models because it turns out you need to put extra care and design the system around being explainable in the first place to be able to reason about its internals. There’s no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.

      In other words, “engineers don’t know how it works” can have two meanings: that they’re hitting computers with wrenches, hoping for the best with no rhyme or reason; or that they don’t have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it’s not really possible to figure out what specific training data it comes from or how to stop the model from producing that output on a fundamental level.

      The former is demonstrably false and almost a strawman; I don’t know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn’t collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I’m aware, largely true, or at least I haven’t seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it’d be a major achievement everyone would be talking about.
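
      To make the “looking at the weights tells you nothing” point concrete, here’s a minimal toy sketch of my own (assuming scikit-learn is available, nothing to do with OpenAI’s actual stack): it trains a tiny MLP on XOR and dumps the raw weights, which describe the model completely and still say nothing legible about why it answers the way it does.

          # Toy illustration: even for a 2-input network the learned weights are
          # just opaque matrices of floats; nothing about them "reads" as XOR.
          from sklearn.neural_network import MLPClassifier

          X = [[0, 0], [0, 1], [1, 0], [1, 1]]
          y = [0, 1, 1, 0]  # XOR

          clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
          clf.fit(X, y)

          print(clf.predict(X))  # ideally [0 1 1 0]
          for layer, weights in enumerate(clf.coefs_):
              print(f"layer {layer} weights:\n{weights}")  # inscrutable floats, even at this scale

      Scale those matrices up by ten or eleven orders of magnitude and the second meaning, “no good model of why it produced this output”, stops sounding mystical at all.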

    • scruiser@awful.systems · 18 days ago

      Another ironic point… Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and then as a way of making their God-AI serve their whims, not for anything practical). A lack of interpretability is a major problem in ML (an irl problem, not just a sci-fi Skynet problem): you can have models with racism or other bias buried in them and not be able to tell, except by manually experimenting with your model on data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious it’s bordering on AGI.
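
      For a sense of what that manual experimenting can look like, here’s a toy counterfactual probe (my own sketch with made-up feature names, assuming numpy and scikit-learn): train on biased historical labels, then feed the model pairs of inputs from outside the training set that differ only in the sensitive attribute and watch the predictions move.

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 2000
          skill = rng.normal(0, 1, n)            # the feature that should matter
          sensitive = rng.integers(0, 2, n)      # a protected attribute that shouldn't
          # Biased historical labels: the outcome leaks the sensitive attribute.
          y = ((skill + 0.8 * sensitive + rng.normal(0, 0.5, n)) > 0.5).astype(int)

          model = LogisticRegression().fit(np.column_stack([skill, sensitive]), y)

          # Probe with data from outside the training set: identical skill, flipped attribute.
          probe = np.array([[0.2, 0], [0.2, 1]])
          print(model.predict_proba(probe)[:, 1])  # the gap between these two numbers is the buried bias

      In a two-feature linear model you could still read the bias straight off the coefficient; in a deep network you generally can’t, which is why the probing has to happen from the outside.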

  • mirrorwitch@awful.systems · 19 days ago

    " The ‘Big Short’ Guy Shuts Down Hedge Fund Amid AI Bubble Fears"

    https://gizmodo.com/the-big-short-guy-shuts-down-hedge-fund-amid-ai-bubble-fears-2000685539

    ‘Absolutely’ a market bubble: Wall Street sounds the alarm on AI-driven boom as investors go all in

    https://finance.yahoo.com/news/absolutely-a-market-bubble-wall-street-sounds-the-alarm-on-ai-driven-boom-as-investors-go-all-in-200449201.html?guccounter=1

  • froztbyte@awful.systems · 22 days ago (edited)

    (edit: advance warning that clicking these links might cause eyestrain and trigger rage)

    so for a while now the sheer outrageous, ludicrous nonsense of trumpist-era USA politics has been making a bit of an impact on the local ZA racists (and, weirdly, not only the white nationalists but also the black nationalists - some of it has shone through in EFF and BFLF propaganda strains), and I knew that with the orange godawful-king’s ascension to his hoped-for throne it was only a matter of time before shit here escalated

    anyway, it’s happened. the same organisation also put up some ads along the main highway ahead of the G20 summit

    (upside: some of those have already been pulled down. downside: the org put up some more. don’t know what’s happened with the latest yet)

    fuck these people