Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • scruiser@awful.systems · 2 hours ago

    A LessWronger notices that all of the rationalists’ attempts at making an “aligned” AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate

    Notably, the author doesn’t realize that capitalism is the root problem misaligning the incentives, and it takes a comment pointing it out directly for them to get as far as noticing a link to the cycle of enshittification.

    • scruiser@awful.systems · 2 hours ago

      Putting this into the current context of LLMs… Given how Eliezer still repeats the “diamondoid bacteria” line in his AI-doom scenarios, multiple decades after Drexler was thoroughly debunked (even as he slightly inspired some real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

    • froztbyte@awful.systems · 5 hours ago

      ah yes, that great mark of certainty and product security, when you have to unleash pitbulls to patrol the completely not dangerous park that everyone can totally feel at ease in

      (and of course I bet the damn play is a resource exhaustion attack on critics, isn’t it)

      • YourNetworkIsHaunted@awful.systems · 1 hour ago

        I don’t think it’s a resource exhaustion attack so much as a combination of legitimate paranoia (the consequence of a worldview where only billionaires are capable of actual agency) and an attempt to impose that worldview on reality by reverse-astroturfing any opposition, tying it to other billionaire AI bros.

      • froztbyte@awful.systems · 5 hours ago

        noted for advancements in cryptography, and “stayed impartial” (iirc not quite defending, but also neither acknowledging nor distancing himself) when the jacob appelbaum shit hit wider knowledge

        probably about all you need to know in a nutshell

        the most recent shit before this where I recall seeing his name pop up was him causing a slapfight around Kyber (ML-KEM) in the cryptography spaces, but I don’t have links at hand

    • corbin@awful.systems · 9 hours ago

      What is the Range Rover in this analogy? A common belief about the 2008 Iceland bubble, which may very well not be true but was widely reported, is that Iceland’s credit was used to buy luxuries like high-end imported cars; when the bubble burst, many folks supposedly committed insurance fraud by deliberately destroying their own cars which they could no longer afford to finance. (I might suggest that credit bubbles are fundamentally distinct from investment bubbles.)

      • BlueMonday1984@awful.systems (OP) · 9 hours ago

        By my guess, the servers and datacentres powering the LLMs will end up as the AI bubble’s Range Rover equivalent - they’re obscenely expensive for AI corps to build and operate, and are practically impossible to finance without VC billions. Once the bubble bursts and the billions stop rolling in, I expect the servers to be sold off for parts and the datacentres to be abandoned.

    • YourNetworkIsHaunted@awful.systems · 18 hours ago

      The blatant covering for the confabulated zip code is some peak boosterism. It knows what an address looks like and that some kind of postal code has to go there, and while it was pretty close, I would still expect that letter to get returned to sender. Pretty close isn’t good enough.

  • Don Piano@feddit.org · 1 day ago

    Kind of generic: I am a researcher and recently started a third-party-funded project where I won’t teach for a while. I kinda dread what garbage fire I’ll return to in a couple of years when I teach again, and how much AI slop will have become established on both the teachers’ and the students’ sides.

  • dnn25519@awful.systems · 1 day ago

    As a CS student, I wonder why we and artists are always the ones attacked the most whenever some new “insert tech stuff” comes out. And everyone’s like: HOLY SHIT, PROGRAMMERS AND ARTISTS ARE DEAD, without realizing that most of these things are way too crappy to actually be… good enough to replace us?

    • Alex@lemmy.vg · 1 day ago

      My guess would be because most people don’t understand what you all actually do so gen AI output looks to them like their impression of the work you do. Just look at the game studios replacing concept artists with Midjourney, not grasping what concept art even is for and screwing up everyone’s workflow as a result.

      Since I’m neither a programmer nor an artist, I can sorta understand how people get fooled. Show me a snippet of nonsense code or an image and I’ll nod along if you say it’s good. But as a writer (even if only a hobbyist) I am able to see how godawful gen AI writing is, whereas some non-writers won’t, and so I extrapolate: since it’s not good at the thing I have domain expertise in, it probably isn’t good at the things I don’t understand.

      • stormeuh@lemmy.world · 17 hours ago

        I feel like aggressive proponents of genAI are that way because they are intimidated by and/or jealous of the people they say it will replace. They lack the skills and critical thinking to become good at the task they want to automate, but are also unwilling or unable to put in the work.

        Instead of reckoning with this, they construct a phantasm where artists are “gatekeeping art”, and genAI is going to disrupt that gatekeeping. Meanwhile I think deep down they know that what genAI produces is derivative by definition, and not “real art” by any means.

  • V0ldek@awful.systems · 1 day ago

    I was thinking about ethics in game journalism, except in software engineering, and I think it might be easier to create a whitelist than a blacklist:

    What are some serious software/hardware companies that have NOT participated in the AI bubble? No AI nonsense in their marketing slides, no mentions of AI on their landing page, etc.

      • BlueMonday1984@awful.systems (OP) · 1 day ago

        I was planning to mention Procreate as well, but felt like that’d be spamming the replies a bit.

        On a wider note, I expect it’ll be primarily art-related software/hardware companies that will have avoided AI participation. Given how utterly artists have rejected the usage of AI and resisted its intrusion into their spaces, the companies working with them likely view rejecting AI as an easy way of earning good PR with their users, and embracing it as a business liability at best and a one-way trip past the trust thermocline at worst.

    • BlueMonday1984@awful.systems (OP) · 1 day ago

      Gonna cheat a little bit and put one-woman consultancy firm/personal blog deadSimpleTech up as an example. The sole member is Iris Meredith, whose involvement begins and ends at publicly lambasting AI’s continued shittiness.

    • BurgersMcSlopshot@awful.systems · 1 day ago

      Where the fuck has that guy been for 20 years? I’ve seen that happen many times with junior programmers during my 20 years of experience.

      • froztbyte@awful.systems · 1 day ago

        also from a number of devs who went borderline malicious compliance in “adopting tdd/testing” but didn’t really grok the assignment

        • BurgersMcSlopshot@awful.systems · 1 day ago

          At a recent job, I definitely saw malicious compliance/incompetence when it came to writing tests. My team and I would work hard to retrofit tests into older functionality, while adjacent teams, if they bothered to write tests at all, would avoid testing anything of consequence.