Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 8 hours ago

    KeePassXC (my password manager of choice) are “experimenting” with AI code assistants 🫩

    https://www.reddit.com/r/KeePass/comments/1lnvw6q/comment/n0jg8ae/

    I’m a KeePassXC maintainer. The Copilot PRs are a test drive to speed up the development process. For now, it’s just a playground and most of the PRs are simple fixes for existing issues with very limited reach. None of the PRs are merged without being reviewed, tested, and, if necessary, amended by a human developer. This is how it is now and how it will continue to be should we choose to go on with this. We prefer to be transparent about the use of AI, so we chose to go the PR route. We could have also done it locally and nobody would ever know. That’s probably how most projects work these days. We might publish a blog article soon with some more details.

    The trace of petulance in the response (“we could have done it secretly, that’s how most projects do it”) is not the kind of attitude I’m happy to see attached to a security-critical piece of software.

    • Architeuthis@awful.systems · 4 hours ago

      It definitely feels like the first draft said for the longest time we had to use AI in secret because of Woke.

  • zogwarg@awful.systems · 9 hours ago

    Some changes to Advent of Code this year: it will only have 12 days of puzzles, and no longer has a global leaderboard, according to the FAQ:

    Why did the number of days per event change?

    It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

    Scaling it down a bit rather than completely burning out is nice, I think.

    What happened to the global leaderboard?

    The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)

    While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc?

    If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

    Probably the most positive change here. It’s a bit of a shame we can’t have nice things, but there’s no real way to police stuff like people using AI for leaderboard times. Keeping only the private leaderboards, for smaller groups of people that can set their own expectations, is unfortunately the only pragmatic thing to do.

    Should I use AI to solve Advent of Code puzzles?

    No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

    It’s nice to know the creator (Eric Wastl) has a good head on his shoulders.

    • Architeuthis@awful.systems · 4 hours ago

      only have 12 days of puzzles

      Obligatory oh good I might actually get something job-related done this December comment.

    • YourNetworkIsHaunted@awful.systems · 7 hours ago

      I feel like the private leaderboards are also more in keeping with the spirit of the thing. You can’t really have a friendly competition with a legion of complete strangers that you have no interaction with outside of comparing final times. Even when there’s nothing on the line, the consequences for cheating or being a dick are nonexistent, whereas in a private group you have to deal with all your friends knowing you’re an asshole going forward.

  • Sailor Sega Saturn@awful.systems · 9 hours ago

    More bias-laundering through AI, phrenology edition! https://www.economist.com/business/2025/11/06/should-facial-analysis-help-determine-whom-companies-hire

    I couldn’t actually read the article because paywall, but here’s a paper that the article is probably about: AI Personality Extraction from Faces: Labor Market Implications

    Saying the quiet part out loud:

    First, an individual’s genetic profile significantly influences both their facial features and personality. Certain variations in DNA correlate with specific facial features, such as nose shape, jawline, and overall facial symmetry, defined broadly as craniofacial characteristics

    Second, a person’s pre- and post-natal environment, especially hormone exposure, has been shown to affect both facial characteristics and personality

    To their credit the paper does say that this is a terrible idea, though I don’t know how much benefit of the doubt to give them (I don’t have time to take a closer look):

    This research is not intended, and should not be viewed, as advocacy for the usage of Photo Big 5 or similar technologies in labor market screening.

  • o7___o7@awful.systems · 8 hours ago

    OpenAI’s financials are putrid, but they want everyone’s money. What would stop them from avoiding the scrutiny of an IPO by going public via the SPAC route? Sorry if this is a dumb question!

    • YourNetworkIsHaunted@awful.systems · 4 hours ago

      I would assume nothing stops them, but I would love to get an analysis of why they may not be looking into this from someone who actually knows what they’re talking about. The best I can come up with from a complete layman’s perspective is that they’re concerned about the valuation they’d end up with. I’m not sure the retail market has enough juice to actually pay for a company that is hypothetically one of the most valuable companies in the world, and puncturing that narrative might bring the whole bubble down (in a way that costs a lot of normal investors their shirts, of course).

  • o7___o7@awful.systems · 1 day ago

    Mozilla destroys the 20-year-old volunteer community that handled Japanese localization and replaces it with a chatbot. It compounds this by deleting years of work with zero warning. Adding insult to insult, Mozilla then rolls a critical failure on “reading the room.”

    Would you be interested to hop on a call with us to talk about this further?

    https://support.mozilla.org/en-US/forums/contributors/717446

  • gerikson@awful.systems · 1 day ago

    This looks like a rebranding of Urbit: Radiant Computer

    Has AI in its guts, but it’s not something they mention on the front page. Slop images throughout, tho.

    https://radiant.computer/system/os/ - “It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.”

    https://radiant.computer/system/network/ - “Radiant rejects the Web as a general purpose software platform, while embracing the Internet protocols as the powerful substrate on which sovereign technologies like Tor, BitTorrent, Gemini and Bitcoin are built.”

    • mlen@awful.systems · 1 day ago

      What do you mean by decline? Years ago I was involved in a local Ruby community in Poland, and even back then his takes were considered unhinged.

  • o7___o7@awful.systems · 2 days ago

    There’s a Charles Stross novel from 2018 where cultists take over the US government and begin a project to build enough computational capacity to summon horrors from beyond space-time (in space). It’s called The Labyrinth Index and it’s very good!

    So anyway, this happened:

    https://www.wsj.com/tech/ai/openai-isnt-yet-working-toward-an-ipo-cfo-says-58037472

    Also, this:

    https://bsky.app/profile/edzitron.com/post/3m4wrv2xak22x

    • Architeuthis@awful.systems · 2 days ago

      What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.

      I checked the rest of Zitron’s feed before posting, and it’s weirder in context:

      Interview:

      She also hinted at a role for the US government “to backstop the guarantee that allows the financing to happen”, but did not elaborate on how this would work.

      Later at the jobsite:

      I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word “backstop” and it muddled the point.

      She then proceeds to explain she just meant that the government ‘should play its part’.

      Zitron says she might have been testing the waters, or it’s just the cherry on top of an interview where she said plenty of bizarre shit.

      • YourNetworkIsHaunted@awful.systems · 5 hours ago

        Between this and the IPO talk it seems like we’re looking at some combination of trying to feel out exit strategies for the bubble they’ve created, trying to say whatever stuff keeps the “OpenAI is really big” narrative in the headlines, and good old fashioned business idiocy.

      • o7___o7@awful.systems · 2 days ago

        Every horrible person in my life “tests the waters” like that before going mask-off 100% asshole.

        It gives that feeling, doesn’t it?

      • Soyweiser@awful.systems · 2 days ago

        exuberance

        Truly a right-wing tech: after getting all the attention, money, and data, they’re now mad people don’t love it enough.

      • BlueMonday1984@awful.systems (OP) · 2 days ago

        What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.

        Zitron’s stated multiple times that a bailout isn’t coming, but I’m not ruling it out myself - AI has proven highly useful as a propaganda tool and an accountability sink, and the oligarchs in office have good reason to keep it alive.

  • BigMuffN69@awful.systems · 3 days ago

    So, today in AI hype, we are going back to chess engines!

    Ethan pumping AI-2027 author Daniel K here, so you know this has been “ThOrOuGHly ReSeARcHeD” ™

    Taking it at face value, I thought this was quite shocking! Beating a super GM with queen odds seems impossible for the best engines that I know of!! But the first * here is that the chart presented is not in classical format. Still, QRR odds beating 1600-rated players seems very strange, even if weird time-odds shenanigans are happening. So I tried this myself, and to my surprise I went 3-0 against Lc0 at different odds (QRR, QR, QN), which, according to this absolutely laughable chart, now means I am comparable to a 2200+ player!

    (Spoiler: I am very much NOT a 2200 player… or a 2000 player… or a 1600 player)

    And to my complete lack of surprise, this chart crime originated in a LW post, with the creator commenting there w/ “pls do not share this without context, I think the data might be flawed”, due to the small sample size for higher Elos and also the fact that people are probably playing until they get their first win and then stopping.

    Luckily absolute garbage methodologies will not stop Daniel K from sharing the latest in Chess engine news.

    But wait, why are LWers obsessed with the latest chess engine results? Ofc it’s because they want to make some point about AI escaping human control even if humans start with a material advantage. We are going back to Legacy Yud posting with this one, my friends. Applying RL to chess is a straight shot to applying RL to Skynet to checkmate humanity. You have been warned!

    LW link below if anyone wants to stare into the abyss.

    https://www.lesswrong.com/posts/eQvNBwaxyqQ5GAdyx/some-data-from-leelapieceodds

    • lagrangeinterpolator@awful.systems · 3 days ago

      One of the core beliefs of rationalism is that Intelligence™ is the sole determinant of outcomes, overriding resource imbalances, structural factors, or even just plain old luck. For example, since Elon Musk is so rich, that must be because he is very Intelligent™, despite all of the demonstrably idiotic things he has said over the years. So, even in an artificial scenario like chess, they cannot accept the fact that no amount of Intelligence™ can make up for a large material imbalance between the players.

      There was a sneer two years ago about this exact question. I can’t blame the rationalists though. The concept of using external sources outside of their bubble is quite unfamiliar to them.

      • swlabr@awful.systems · 3 days ago

        two years ago

        🪦👨🏼➡️👴🏼

        since Elon Musk is so rich, that must be because he is very Intelligent™

        Will never be able to understand why these mfs don’t see this as the unga bunga stupid ass caveman belief that it is.

    • scruiser@awful.systems · 3 days ago

      I was wondering why Eliezer picked chess of all things in his latest “parable”. Even among the lesswrong community, chess playing as a useful analogy for general intelligence has been picked apart. But seeing that this is recent half-assed lesswrong research, that would explain the renewed interest in it.

  • fullsquare@awful.systems · 3 days ago

    fyi, over the last couple of days firefox added perplexity as a search engine, must have been in an update

      • Soyweiser@awful.systems · 3 days ago

        Still think it is wild that they used the libgen dataset(s) and have basically gotten away with it, apart from some minor damages only for US publishers (who actually registered their copyright). Even more so as my provider blocks libgen etc.

    • BlueMonday1984@awful.systems (OP) · 3 days ago

      Part of me wants to see Google actually try this and get publicly humiliated by their nonexistent understanding of physics; part of me dreads the fact that it’ll dump even more fucking junk into space.

        • BlueMonday1984@awful.systems (OP) · 3 days ago

          Considering we’ve already got a burgeoning Luddite movement that’s been kicked into high gear by the AI bubble, I’d personally like to see an outgrowth of that movement be what ultimately kicks it off.

          There were already some signs of this back in August, when anti-AI protesters vandalised cars and left “Butlerian Jihad” leaflets outside a pro-AI business meetup in Portland.

          Alternatively, I can see the Jihad kicking off as part of an environmentalist movement - to directly quote Baldur Bjarnason:

          [AI has] turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.

          I wouldn’t rule out an artist-led movement being how the Jihad starts, either - between the AI industry “directly promising to destroy their industry, their work, and their communities” (to quote Baldur again), and the open and unrelenting contempt AI boosters have shown for art and artists, artists in general have plenty of reason to see AI as an existential threat to their craft and/or a show of hatred for who they are.

          • fullsquare@awful.systems · 3 days ago

            i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you’re going for

            but yeah nah i don’t think it’s gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won’t be as much of a problem. propaganda use and scams will remain i guess

            • BlueMonday1984@awful.systems (OP) · 3 days ago

              i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you’re going for

              I’ll admit to taking your previous comment too literally here - I tend to assume people are completely serious unless I can clearly tell otherwise.

              but yeah nah i don’t think it’s gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won’t be as much of a problem. propaganda use and scams will remain i guess

              Scams and propaganda will absolutely remain a problem going forward - LLMs are tailor-made to flood the zone with shit (good news for propagandists), and AI tools will provide scammers with plenty of useful tools for deception.