Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 22 days ago

    From Lila Byock:

    A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.

    The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

    • froztbyte@awful.systems · 21 days ago

      The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

      I try to avoid having to even see the outputs of these fucking systems, but you just made me realize that there’s going to be more than a few of them that will “leak” (read: preferentially deliver, by way of training focus) the kinks of their particular owners. I mean, it’s already happening for the textual replies on twitter, soothing the felon’s ever-so-bruised ego. The chance of it not shipping beyond that is pretty damn zero :|

      god I hate all of this

  • gerikson@awful.systems · 21 days ago

    Two links from my feeds with some crossover here:

    Lawyers, Guns and Money: The Data Center Backlash

    Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric

    Unfortunately, Techdirt’s Mike Masnick is a signatory of some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how their wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.

    • David Gerard@awful.systems (mod) · 21 days ago

      i am pretty sure i am shredding the Resonant Computing Manifesto for Monday

      and of course Anil Dash signed it

      • blakestacey@awful.systems · 21 days ago

        The people who build these products aren’t bad or evil.

        No, I’m pretty sure that a lot of them just are bad and evil.

        With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise.

        [citation needed]

        [to a source that’s not laundered slop, ya dingbats]

        • Soyweiser@awful.systems · 20 days ago

          to a source that’s not laundered slop, ya dingbats

          Ha, that’s easy. Read Singularity Sky by Charles Stross and see all the wonders the Festival brings.

    • nfultz@awful.systems · 23 days ago

      It’s the McMindfulness guy, nice to see that he is still kicking around.

      In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language—his soft-spoken, monkish image (gosh, little Sammy even practices mindfulness!)

      lol ofc he does

    • swlabr@awful.systems · 24 days ago

      I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and more shitposty. Still, good.

      • Soyweiser@awful.systems · 23 days ago

        “Nah, salary stuff is private”: starting to think this sort of thing is an idea introduced to protect capital and nobody else.

        • swlabr@awful.systems · 23 days ago

          I was teasing this out in my head to try to come up with a good sneer. First thought: for an organisation that tries to appeal to EAs, you’d think that they would do a good job of being transparent about why so much money is being spent on someone with such low output. But the immediate rebuttal: the whole point of the TESCREAL cult shit is that yud gets free tuocs because he’s the chosen one to solve alignment.

          • Soyweiser@awful.systems · 23 days ago

            Was thinking more about how the radical “don’t fall to biases, think for yourself, and come here to really learn to think (so we can stop the paperclip machine and resurrect the dead)” crowd defends a half-million-dollar salary with a ‘that’s private’.

            But that is the same conclusion. The prophet must be protected.

      • CinnasVerses@awful.systems · 23 days ago (edited)

        The Rolling Stone article is a bit odd (it appears to tell the story of the ex-employee who created Miricult twice, first without names and then naming the accuser), but I trust them that MIRI did pay the accuser. Rolling Stone is a serious news organization that can be sued.

        • GorillasAreForEating@awful.systems · 17 days ago (edited)

          Yeah, I think Rolling Stone was worried about getting sued and omitted Helm’s name in the first draft (or something like that).

          I know who the alleged victim was, and I think there probably was a crime and blackmail payments but the alleged victim didn’t want to come forward for a number of reasons (among other things, he’s still part of the rationalist community and has faced a lot of harassment from the public after an unrelated newspaper article outed him as being trans). I’d also point out that the only person that miricult directly accused of statutory rape was one of Yudkowsky’s employees rather than Yudkowsky himself. That being said, the journalist who wrote the Rolling Stone article claims she got a copy of the police report Helm filed and only Yudkowsky was named.

          Even if miricult was total bullshit I’m confident that the alleged victim was lying about not being exploited by other rationalists; a few years later he and a couple of other people posted accounts of being sexually abused by a rationalist (unrelated to miricult) and it led to the abuser being ostracized from the rationalist community.

          Anyways I know a lot more about this but I’d rather not discuss the details on a publicly viewable forum to protect the privacy of the people involved.

          • CinnasVerses@awful.systems · 16 days ago

            I agree that it’s gross to discuss a lot of this in public, and that underage sex is often an ethical grey area. I had no idea that the person who accused BD of pushing him into substance use and extreme BDSM scenarios is also the person who allegedly had sex underage with a MIRI staffer while living in a Rationalist group home.

            • GorillasAreForEating@awful.systems · 16 days ago (edited)

              Ziz’s blog had posts that revealed his identity and mentioned some of the BD stuff; once I found them it was just a matter of putting two and two together, so to speak.

        • swlabr@awful.systems · 17 days ago

          I believe he was trying to explain why it looked like MIRI had paid money out to an alleged sexual abuser. The analogy was constructed something like this:

          1. A and B work at a company C
          2. A has conflict with B.
          3. C decides to fire B.
          4. unrelated to 1, 2, or 3, B has a wife D, who dies in mysterious circumstances, leading A to strongly believe that B killed D.
          5. The police, E, perform an investigation and decide not to pursue a case against B
          6. C pays out B’s severance, unrelated to 2, 4, or 5.

          Don’t blame me or how I remembered this if this doesn’t make sense.

          • Architeuthis@awful.systems · 16 days ago

            Additionally, he said something to the effect of “I don’t blame you for not knowing this, it wasn’t effectively communicated to the media”, like it’s no big deal, which isn’t really helping to beat the allegations of don’t-ask-don’t-tell policies about SA in rat-related orgs.

            • swlabr@awful.systems · 16 days ago

              Can confirm. This was like if the pope walked into an r/atheism meetup and showed his texts saying “dw bro, I’ll just move you to a different diocese, btw this totally isn’t about the allegations wink wink”

    • Sailor Sega Saturn@awful.systems · 26 days ago
      The documentation for “Turbo mode” for Google Antigravity:

      Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

      No warning. No paragraph telling the user why it might be a good idea. No discussion on the long history of malformed scripts leading to data loss. No discussion on the risk for injection attacks. It’s not even named similarly to dangerous modes in other software (like “force” or “yolo” or “danger”)

      Just a cool marketing name that makes users want to turn it on. Heck if I’m using some software and I see any button called “turbo” I’m pressing that.

      It’s hard not to give the user a hard time when they write:

      Bro, I didn’t know I needed a seatbelt for AI.

      But really they’re up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user “well, in our small print somewhere we used the phrase ‘Gemini can make mistakes’, so why did you enable turbo mode??”
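      (Editor’s aside: a minimal sketch of why a deny list is thin protection against auto-executed commands. Every name here is invented for illustration; this is not Antigravity’s actual code or configuration.)

      ```python
      # Hypothetical deny-list gate like the one "Turbo mode" describes.
      # The point: a deny list only matches what it anticipates, so any
      # wrapped or indirect variant of a "denied" command runs anyway.

      DENY_LIST = {"rm", "mkfs", "dd"}

      def allowed(command: str) -> bool:
          """Auto-execute unless the command's first token is on the deny list."""
          tokens = command.split()
          return not tokens or tokens[0] not in DENY_LIST

      # The literal command is caught:
      assert not allowed("rm -rf /")
      # Equivalent commands sail straight through:
      assert allowed("sh -c 'rm -rf /'")   # shell wrapper
      assert allowed("find / -delete")     # different tool, same damage
      ```

      Which is exactly the injection-attack surface the documentation never mentions.
      
      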

    • froztbyte@awful.systems · 26 days ago

      yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good

      but it is very fucking funny to watch them FAFO

    • lagrangeinterpolator@awful.systems · 26 days ago

      After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: “Never let an LLM have any decision-making power.” At most, LLMs will serve as a heuristic function for an algorithm that actually works.
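      (Editor’s aside: a minimal sketch of that “heuristic, not decider” pattern. `llm_suggest` is a stand-in for a model call, here just a canned list with a wrong answer included; all names are invented for illustration.)

      ```python
      # The untrusted generator proposes candidates; the deterministic
      # checker holds all decision-making power, so a bad suggestion
      # costs nothing but a retry.

      def llm_suggest(n: int) -> list[int]:
          """Untrusted candidate factors for n (one deliberately wrong)."""
          return [9, 13, 7]

      def factor(n: int) -> int | None:
          """Accept a candidate only if it verifiably divides n."""
          for candidate in llm_suggest(n):
              if n % candidate == 0:  # cheap, exact check
                  return candidate
          return None  # fall back to a real algorithm here

      assert factor(91) == 13    # 9 is rejected, 13 is verified
      assert factor(17) is None  # no candidate survives the check
      ```

      The model never gets to decide anything; it only narrows the search for a checker that actually works.
      
      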

      Unlike the railroads of the First Gilded Age, I don’t think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it’s not worth spending lots of money on a task where you don’t need reliability.

      The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

      The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true “use cases” to be mainly spam, and perhaps students cheating on homework.

      • zogwarg@awful.systems · 26 days ago (edited)

        Pessimistically I think this scourge will be with us for as long as there are people willing to put code “that-mostly-works” in production. It won’t be making decisions, but we’ll get a new faucet of poor code sludge to enjoy and repair.

    • Soyweiser@awful.systems · 26 days ago (edited)

      I know it is a bit of elitism/privilege on my part. But if you don’t know about the existence of Google Translate(*), perhaps you shouldn’t be doing vibe coding like this.

      *: this, of course, could have been an LLM-based vibe-translation error.

      E: And I guess my theme this week is translations.


    • swlabr@awful.systems · 23 days ago

      bring back rich people rolling their own submarines and getting crushed to death in the bathyal zone

      • froztbyte@awful.systems · 23 days ago

        (note: out-of-order to linked post for comment cohesion)

        Terminals are an invisible technology to most

        what a fucking sentence

        …that are hyper present in the everyday life of many in the tech industry.

        hyper? like this?

        But the terminal itself is boring, the real impact of Ghostty is going to be in libghostty and making all of this completely available for many use cases. My hope is that through building a broadly adopted shared underlayer of terminals around the industry we can do some really interesting things.

        oh good so the rentier bridgetroll wants to do just a monopoly play? that’s fine I’m sure. note: I don’t think there’s a more charitable reading of this. those shared underlayers already exist, in the form of decades of protocol and other development. many of them suck and I agree about trying to do better, but I (rather strongly) suspect hashi and I have very different ideas of what that looks like

        I’ve already addressed the belittling of the project I really find useful and care about. So let’s just move on to the financial class.

        Regardless of my financial ability to support this project, any project that financially survives (for or non-profit) at the whims of a single donor is an unhealthy project

        “uwu, think of the poor projects. yes sure I could throw $20m at this in some kind of funny trust and have it live forever but that wouldn’t allow me to evade the point so much!”

        I paid a 9-figure tax bill and also donated over 5% of my other stuff to charity this year

        “I’m not as bad as the other billionaires, I promise.”

        • gerikson@awful.systems · 23 days ago

          I’m too fucking old to care about hipster terminals, so I had no idea ghostty was started by a (former) billionaire. If forced to choose a new terminal I will certainly take this fact into consideration.

    • flere-imsaho@awful.systems · 23 days ago (edited)

      all things aside, is current ghostty any good, or still an audiophile consolephile-ware?

      i’m generally reluctant to try something which reeks of intensive self-promotion, but a few months ago i decided to finally see what the hype was about, and, well, it’s a terminal emulator.

      wezterm does much more, and with a much cleaner ui, and it’s programmable, and the author doesn’t remind me that hashicorp is a thing that exists.

    • istewart@awful.systems · 23 days ago

      ghosTTy is the username of a schizoposter on Something Awful who only shows up to post bitcoin price charts and get mocked into oblivion. I wonder if there’s any connection?

    • froztbyte@awful.systems · 23 days ago

      I took psychic damage by scrolling up and seeing promptsimon posting a real doozie:

      I have been enjoying hitting refresh on https://fuckthisurl/froztbyte-scrubbed-it-intentionally throughout today and watching the number grow - it’s nice to see a clear example of people donating to a new non-profit open source project.

      “oooh! look at the vanity project go! weeeee, isn’t having a famous face attached to it fun?” with exactly no reflection on the fucking daunting state of open source funding in multiple other domains and projects

  • froztbyte@awful.systems · 23 days ago

    saw this elsewhere. the account itself appears to be a luckey stan account, but the next:

    There’s more crust than air or sea or land… so a vehicle that moves through the crust of the earth is going to be a huge deal

    I have built working prototypes of this

    so are we talking mining, or The Core (2003)? it feels like he’s trying to pitch it as though it’s a Tiberian Sun-style subterranean APC, but I can’t be sure whether I’m reading into it

  • Soyweiser@awful.systems · 27 days ago (edited)

    Edited this into a reply about Hanson now believing in aliens, but it seems like the SSC side of rationalism has a larger group of people who also believe in miracles: https://www.astralcodexten.com/p/the-fatima-sun-miracle-much-more (I have not read the article in depth; going by what others reported about this incident, there also seem to be related LW posts).

    Read it a bit now; noticed that Scott doesn’t know people who speak Portuguese and is relying on machine translation. (Also unclear what type of MT.)

    • BioMan@awful.systems · 26 days ago

      The long expected collapse of the rationalists out of their flagging cult into ordinary religion and conspiracy theory continues apace.

  • rook@awful.systems · 26 days ago

    Reposted from Sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to AI which grabbed my attention and made me read the rest, even though it isn’t about AI at all.

    Few IT projects are displays of rational decision-making from which AI can or should learn.

    Which, haha, is a great quote, but it highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your LLM could tell the difference between good and bad (which it can’t), you’re still going to get mediocrity out at best. Whole new vistas of inflexible managerial fashion are opening up ahead of us.

    The article continues to talk about how we can’t do IT, and wraps up with

    It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

    It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.

    https://spectrum.ieee.org/it-management-software-failures

    • BlueMonday1984@awful.systems (OP) · 26 days ago

      Considering the sorry state of the software industry, plus said industry’s adamant refusal to learn from its mistakes, I think society should actively avoid starting or implementing new software, if not actively cut back on software usage when possible, until the industry improves or collapses.

      That’s probably an extreme position to take, but IT as it stands is a serious liability - one that AI’s set to make so much worse.

      • rook@awful.systems · 25 days ago

        For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work, with three giant accountancy firms able to audit the books.

        Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their post office system, which resulted in people being driven to poverty and suicide.

        At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety critical firmware sort of things. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? we can shovel that shit out of the door and no-one cares.

        I’m not sure we will ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

        • BlueMonday1984@awful.systems (OP) · 25 days ago

          I’m not sure we will ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

          Considering how “vibe coding” has corroded IT infrastructure at all levels, the AI bubble is set to trigger a 2008-style financial crisis upon its burst, and AI itself has been deskilling students and workers at an alarming rate, I can easily see why.

          • o7___o7@awful.systems · 25 days ago

            In the land of the blind, the one-eyed man will make a killing as an independent contractor cleaning up after this blows up.

  • rook@awful.systems · 23 days ago

    A second post on software project management in a week, this one from deadsimpletech: failed software projects are strategic failures.

    A window into another IT disaster I wasn’t aware of, but clearly there is no shortage of those. An Australian one this time.

    And of course, without having at least some of that expertise in-house, they found themselves completely unable to identify that Accenture was either incompetent, actively gouging them or both.

    (spoiler alert, it was both)

    Interesting mention of Clausewitz in the context of management, which gives me pause a bit, because techbros famously love The Art of War, probably because Sun Tzu was patiently explaining obvious things to idiots and that works well on them. On War might be a better text, I guess.

    https://deadsimpletech.com/blog/failed_software_projects

    • nfultz@awful.systems · 23 days ago

      I associate Clausewitz (and especially John Boyd) references more with a Palantir / Stratfor / Booz / LE-MIC-consulting class compared to your typical bay area YC techbro in the US, and a very different crowd over in AU / NZ where grognards probably outnumber the actual military. LWers never bring up Clausewitz either but love Sun Tzu. But as far as software strategy posts go, I’d much rather read a Clausewitz tie-in than, say, Mythical Man Month or Agile anything.

      • rook@awful.systems · 23 days ago

        Much of the content of The Mythical Man-Month is still depressingly relevant, especially in conjunction with Brooks’ later stuff like “No Silver Bullet”. A lot of senior tech management either never read it, or read it so long ago that they forgot the relevant points beyond the title.

        It’s interesting that clausewitz doesn’t appear in lw discussions. That seems like a big point in favour of his writing.

        • nfultz@awful.systems · 23 days ago

          If you liked Brooks, you might give Gerald Weinberg a try. A bit more folksy / less corporate.

    • gerikson@awful.systems · 23 days ago

      the base use for LLMs is gonna be hypertargetted advertising, malware, political propaganda etc

      well the base case for LLMs is that, right now

      the privacy nerds won’t know what hit them