Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • jaschop@awful.systems · 24 hours ago

    House of Saud asks if the Iran war was caused by AI sycophancy.

    I’m reminded of the Kremlin reality distortion field that appears to have informed the decision to invade Ukraine.

    • cornflake@awful.systems · 12 hours ago

      Worthy read, but it smelled like it had been drafted by AI too. Lots of the tropes and a bit repetitive.

    • YourNetworkIsHaunted@awful.systems · 21 hours ago

      A) At this point I would be more surprised to learn that AI psychosis wasn’t infecting the upper tiers of the White House, tbh. Like, at this point we could get a leak that Hegseth had been developing a literal god complex alongside his LLM mistress and I wouldn’t bat an eye.

      B) It seems like a particularly bad sign that this is coming from the Saudis, given that they’ve been a consistent ally that the US has spent a lot of material resources and political capital to support.

      • YourNetworkIsHaunted@awful.systems · 21 hours ago

        I mean it looks kinda swastikesque imo, especially with the ambiguity over whether it’s supposed to be one or two “I”s behind it. (In some cases it’s FII with the second I split, and sometimes it’s FIIInstitute with the top of the second and bottom of the third “I” visible.)

        • aninjury2all@awful.systems · 17 hours ago

          Their Branding Guidelines have several logos - including ones where the 7 doesn’t accidentally form a Wolfsangel/Ger-Rune/Lightning Bolt type of shape.

          These AI orgs don’t get the benefit of the doubt (FYI, on further investigation this looks like a Saudi outfit. Not that they’re any better, mind you).

  • scruiser@awful.systems · 3 days ago

    A lesswronger asks: are we rationalfic protagonists the baddies? https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=uDuzmfMEvEqpyApLh

    tl;dr: rationalfic has a very common trend of the protagonist gaining and using overwhelming power to radically reform the world. This is almost always (with a few notable exceptions) portrayed as a clearly, unambiguously good thing.

    My take: Don’t get me wrong, the Wizarding World (for example), as canonically portrayed, needs some very strong reforms if not an entire revolution. But rationalfic almost never portrays the slow, hard work of building support networks and alliances and developing a materialist theoretical understanding of how to reform society; instead, a lone rationalist hero (or small friend group) finds some overwhelming magical or technological advantage they can use to single-handedly take control and unilaterally fix everything with their rationalist intellect. Part of it is the normal disconnect between fiction and the real world, where it is more narratively satisfying (and easier to write) to have a central protagonist who solves the major problems or is at least directly involved with them, and rationalfic gives that protagonist even more agency than they canonically have. The problem is that rationalists take this attitude back into real life, and so end up idolizing mythologized techbro billionaires or venture capitalists or the myth of the lone genius scientist/inventor.

    Also, quality sneer in the replies, “rational” teletubbies: https://tomasbjartur.bearblog.dev/rational-teletubbies/

    • sc_griffith@awful.systems · 2 days ago

      there’s a lot i want to pull out from this comment by ngo

      first, shorn of context, i don’t know that this sort of power fantasy reflects so poorly on the rationalists. or perhaps it does, and in that case also reflects poorly on me, since it’s my preferred power fantasy. the world sucks and it would be nice to magically make it better.

      second, we must remember that rat stories are implicitly either recipes for social change or warnings that society ought to stay away from particular demons. rationalism is in large part a political movement with what they believe to be practical aims

      third, if the marxist fiction from the 1800s that ngo mentions all ended with communist revolutions, the worrying thing for a member of the movement would not be a fantasy of triumph or a sense of certainty of triumph, but rather an inability to connect triumphant outcomes to action under the present conditions. as you highlighted, the fantastical element of these stories is in conflict with the practicality of their aims

      fourth, as far as i can tell, that is not ngo’s objection at all. what he seems to be concerned about is the possibility that rationalists will make serious progress on actually taking over the world and make terrible things happen once they do. i don’t take this possibility seriously at all. fundamentally, rationalists are lapdogs, forever licking the negligently outstretched hands of billionaires. they cause real harm as lackeys of the ultrawealthy and vectors for the diseases of racism, eugenics, etc, but to take ngo’s concerns seriously i would have to buy into the same fantasy of magical omnipotence he’s pointing to, because there seems to be no other path from here to rationalist dictatorship.

      • scruiser@awful.systems · 1 day ago

        Your first point is true… with the key words being “shorn of context”. When you look at how many ratfics go in that direction, your second and fourth points become problems.

        As to your fourth point… the techbro billionaires like Elon or Peter Thiel do like referencing fiction (often in hamfisted or ignorant ways that make me think a bit of fandom gatekeeping actually is good sometimes, e.g. naming your surveillance company Palantir, or giving one of your kids a nonsensical WH40K reference for a name). So I wouldn’t entirely discount the possibility of rationalists providing a bit of inspiration to the billionaires in between the bootlicking. And although there may not be a magical “coup the government” power in real life, the influence they are trying to focus on themselves and harness is still worrying.

    • CinnasVerses@awful.systems · 2 days ago (edited)

      They also don’t want to believe in chaos theory. This post tries to explain it to them, but check out Gwern in the comments being skewered by a book Freeman Dyson wrote around the time Gwern was born. They want the future to be perfectly predictable (even though Yud says that 1 and 0 are not probabilities), and they don’t like game theory, repeated games, or non-zero-sum games, because those punish people for building trust and then violating it.

      • fullsquare@awful.systems · 2 days ago (edited)

        - hey chaotic systems are a thing, you can’t predict every single last detail

        gwern: as a proof that you’re wrong, what if we placed everywhere very fast robots that avoid getting into chaotic systems in the first place

        • scruiser@awful.systems · 1 day ago

          The attitude that you can substitute a bunch of cheap tricks and hacks for fundamentally difficult problems reminds me of the techbro attitude that leads to pushing fundamentally non-viable technologies (like Theranos or the LLM boosters) and of DOGE trying to ask an LLM which DEI programs to cut.

        • CinnasVerses@awful.systems · 2 days ago (edited)

          I feel so sad because so many of his examples are ways to make people think you won. And we are about to see what happens when you take 20-30% of global fossil fuel and helium production offline for six months to a few years. Public relations and cooking the books can’t change that.

          It’s easy to make people believe you are wise and know the future. But there is no way to predict the weather one month out much better than we can now, and if you plant your crops and the sun scorches them, those crops are dead and you have to wait until next season to replant.

          • YourNetworkIsHaunted@awful.systems · 15 hours ago

            On a purely rhetorical point, it seems like the whole counterargument from Gwern is just an argument-by-disorganization or something to that effect. He doesn’t actually challenge the factual information presented, but does shift how those facts are framed and what the actual contention is in the background, and then avoids actually engaging with the new contention from the bottom up.

            In a lot of discussions with singularity cultists (both pros and antis) they assume that a true superintelligence would render the whole universe deterministically predictable to a sufficient degree to allow it to basically do magic. This is how the specifics of “how and why does the AI kill all humans again?” tend to be elided, for example. This same kind of thinking is also at the heart of their obsession with “superpredictors” who can, it is assumed, use some kind of trick to beat this kind of mathematical limit on certainty (this is the part where I say something about survivorship bias). In the context of that discussion, the fact that a relatively simple arrangement of components following relatively simple, deterministic rules is still not meaningfully predictable past a dozen or so sequential events, due to the magnification of the inevitable error in our understanding of the initial circumstances, is a logical knockout.
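            That error-magnification point is easy to make concrete; a minimal sketch using the logistic map (a standard chaotic toy system, chosen here for illustration rather than anything cited in the thread):

```typescript
// Logistic map x_{n+1} = r * x * (1 - x); at r = 4 it is chaotic: nearby
// starting points separate roughly exponentially, so any tiny error in the
// initial measurement swamps a deterministic forecast within a few dozen steps.
function logistic(x: number, r: number = 4): number {
  return r * x * (1 - x);
}

function trajectory(x0: number, steps: number): number[] {
  const xs: number[] = [x0];
  for (let i = 0; i < steps; i++) {
    xs.push(logistic(xs[xs.length - 1]));
  }
  return xs;
}

// Two trajectories whose starting points differ by one part in a billion.
const a = trajectory(0.3, 40);
const b = trajectory(0.3 + 1e-9, 40);

// Per-step distance between the two runs: tiny at first, then macroscopic.
const gap = a.map((x, i) => Math.abs(x - b[i]));
```

            Same rules, same arithmetic; the only thing that changed was the ninth decimal place of the starting point, and by step 40 the two runs are effectively unrelated.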

            Rather than engage with this, however, Gwern and his compatriots in the thread focus in on the tangent about how high-level pinball players are able to control for that uncertainty by avoiding the region of the board where those error-magnifying parts are. However, this is not the same argument, and it raises the question of whether those high-chaos areas are always avoidable as they are in a pinball machine. Rather than engage with that question, Gwern doubles down on the pinball analogy, shifting the question even further from “how well can we predict the deterministic motion of a ball given the inevitable uncertainty of our initial state” to “how many ways can we convince a third party we’ve gotten a high score on a pinball machine”. At this point we’re not just moving the goalposts, we’ve moved the entire stadium into low earth orbit and gotten real cute about whether we’re playing 🏈 or ⚽ football.

            And given the conversation surrounding the thread and these topics on LW I’m not even going to assume that such a wild shift is the result of bad faith instead of simple disorganization and sloppiness of rhetoric. This is what happens to a community that conflates “it makes me feel smart” with “it actually communicates the point effectively”.

            • CinnasVerses@awful.systems · 11 hours ago

              Gwern’s turn to “what if I just make people believe I won at pinball?” also comes back to their idea that the smartest being is the best manipulator, even though some excellent manipulators like Trump don’t have a lot of logical-analytical intelligence, and some brilliant thinkers like John Nash got into a fight with the inside of their own head and lost. It also reminds me of how they love markets in theory but are not interested in starting a business that would compete with other businesses.

              • YourNetworkIsHaunted@awful.systems · 10 hours ago

                Ironically I think it’s also been discussed most frequently within Rationalist circles that these types of intelligence aren’t often correlated. I’m not going to chase down links right now because doing an SSC archive exploration requires more mental fortitude than I currently possess, but I distinctly remember that a recurring theme was “if nerds are so smart why don’t they rule the world?” In my less cynical days I had assumed that his confusion on this point was largely rhetorical, intended to illustrate some part of whatever point was buried in the beigeness. Now it seems like I was falling victim to the ability to project whatever tangentially-related thesis you want onto the essay and find supporting arguments because of how badly it’s written.

    • YourNetworkIsHaunted@awful.systems · 3 days ago

      Part of what makes the RatFic version of this so weird imo is that despite being ostensibly rooted in relatively low-hanging fruit (e.g. what if we industrialized this pre-modern setting, what if we rationally looked at the rules of this magic system, etc.) nobody other than the protagonist has ever thought about these things, and even once the protagonist starts demonstrating some real world-conquering results (benevolently, of course) nobody ever really seems to want to copy their successes. Part of what made the actual industrial revolution unfold the way it did was the arms race it set off. In addition to making the lines on various economists’ charts go nearly vertical, this also basically culminated in the First World War, which seems like the kind of event they should be aware of. But of course in RatFic it seems like anyone who can’t be talked around to joining up with our protagonist is too weak or woke or stupid to actually pose a threat to the Glorious March of Rational Progress.

      • blakestacey@awful.systems · 2 days ago

        Once you commit to the idea that only your main characters have ever tried to study magic scientifically, you’re locked in to making all the rest of the magical world into dullards. (Really, no other eleven-year-olds were ever into computer programming, chemistry sets, exotic marine animals, outer space, or dinosaurs?) Or, to look at it another way, the only way you can find the premise plausible is if you’re already inclined to dismiss most of humanity as “NPCs”.

        • blakestacey@awful.systems · 2 days ago

          Being the kind of writer I am, whenever this comes up I am tempted to suggest ways it could have been done better. But, first, I am not glazing the work of Rowling, even indirectly, no way, no how. Fuck her for all the pain she has wrought, and fuck the whole LessWrong crew for tacitly accepting it. Second, HPMoR was cult shit all along, not meant to teach science but to sow distrust of scientists under the glossy sheen of being able to name the six quarks.

          • scruiser@awful.systems · 1 day ago

            Being the kind of writer I am, whenever this comes up I am tempted to suggest ways it could have been done better.

            The premise kind of does work in a setting like Harry Potter; the wizarding world is insular enough that a clever kid could bring in some new ideas. The problem is Eliezer wanted to throw in too many shortcuts. It’s not enough for creativity with transmutations to give the protagonist a small edge; transmutation is made into the ultimate all-purpose spell so the protagonist can exploit it more easily. The protagonist isn’t just moderately better at the Patronus thanks to some muggle psychology; his Patronus can kill dementors. And the Philosopher’s Stone is changed into some ancient Atlantean super-magic, because fuck wizards ever inventing anything, and instead of a moderate rate of its typical mythological powers it is super-transmutation.

            But, first, I am not glazing the work of Rowling, even indirectly, no way, no how.

            Rowling went mask-off transphobe in 2018; HPMOR finished in 2015. So I won’t blame Eliezer for not picking a different fandom at the time. Eliezer has actually made moderately supportive comments, including about using people’s preferred pronouns (we’ve mocked another lesswronger for writing long screeds complaining about this). In general, I think the average lesswrong attitude towards trans people is better than the average American’s attitude… but that is because the bar is in hell. But yeah, I’ve seen plenty of shitty takes towards trans people on lesswrong.

            Second, HPMoR was cult shit all along, not meant to teach science but to sow distrust of scientists under the glossy sheen of being able to name the six quarks.

            Yep. And it didn’t even stick to its premise of “try to do science to magic and compare muggle scientifically gained knowledge to magic”; instead it went into some Ender’s Game pastiche followed by Death Note style “I know you know I know” plotting, and then Harry gets all the magical power handed to him at the end of the story thanks to Dumbledore following some insane combination of prophecies.

          • blakestacey@awful.systems · 2 days ago

            I have also occasionally been tempted to try and get a Goncharov thing going, where everyone collectively recalls that Tommy Berry and the Forevernight Forest got them into reading.

            It was just after an ordinary afternoon tea, on an ordinary Sunday, the first cold day of autumn, when Tommy Berry discovered that Time was no longer adding up in the ordinary way.

            Tommy had only managed to drink one cup of very indifferently warm tea, and eat the last plain saltine from the bottom of the bag. Everything else had been gobbled up or drunk down by his uncle Myrvold, who was rotund as a boulder and about as kind, and his step-aunt Meredith, who was thin as a snake and considerably more mean. So, yes, it was altogether quite the ordinary teatime.

            Tommy had a secret, you see. In fact, he had two, a big one that he knew about and an even bigger one that was just about to fall on top of him.

            His first secret was that he had a library card. He had stolen an adult’s library card. Or that is how Uncle Myrvold and Step-Aunt Meredith would have described it, if they knew.

            Carruthers, who lived down the end of the lane and always yelled at Tommy to mind his hedges, and who let his dog chase Tommy and the other children, had made a big show of throwing his library card into the roadway because, he said, the library was full of immoral books. A car had then driven over it, and then a whole lorry, and then Tommy had snatched it up. Something told him that anything Carruthers hated, he should save, and anything that Myrvold and Meredith would be angry about, he should hold onto.

            Tommy had heard adults say that something was “burning a hole in my pocket”. He wondered if this was what that meant. It felt like he was carrying a hot coal in the pocket of his threadbare corduroy jacket, and no one could know.

            The library had a new machine. He had seen adults use it. You could go up to it, wave a book under a red laser light like at the grocery store, then show the machine your card, and it would check out the book for you. Tommy made a plan. He would slip out of the house just after tea. He would walk the five blocks to the library. He would find a book that Myrvold and Meredith and Carruthers and every other grownup would not want him to read. He would wait until the librarian was busy dealing with a whole queue of people. And then he would use the machine.

            Everything went perfectly until the very last step.

            There was a girl at the machine.

            He had a big fat book in his hands, a book he had picked because it had “Murder” in the title and would last a long time, and there was a girl in front of him at the library machine.

            “Murder at Wizard University?” she asked him, right to his face, like they had already been introduced, like they had known each other since nursery school. “That’s not a book for little kids.” His stomach dropped, right into his feet. He didn’t know that a stomach could do such a thing.

            And then she tilted the stack of books she was carrying toward him, showing him the titles on their spines. “Neither are these,” she said.

            And she pulled out her own library card. It was black, like a rectangle cut out of the midnight sky.

            That’s all I wrote in the thread that prompted me to take a stab. Oh, I think I had decided that the girl’s name is Elfriede? And the principal of magic school is nonbinary.

            “Why, of course there’s a potion for changing,” said Professor Shade. “That is what potions do. I don’t know where I’d be without it. It is ever so helpful to reach the top shelf, but on the other hand, men’s fashions haven’t been truly swank in a hundred fifty years.”

            • YourNetworkIsHaunted@awful.systems · 15 hours ago

              Fuck it, I’m good to Gonch this out.

              I forget, did we ever actually learn who the killer was in Murder at Wizard University? I remember it kept coming up through the first book as a kind of motif for how this new world wasn’t necessarily as safe and clean as Tommy expected, but I think that whole business with the Thoughtknot ended up overshadowing it before the actual killer was revealed. Like, I get it thematically or whatever but it just stuck in my head as a loose thread and has bugged me for years.

    • BurgersMcSlopshot@awful.systems · 3 days ago

      “I did the AI that people largely hate, but the next guy, HE’S the real AI CEO. Just remember him, not me, the guy who did the stuff.”

  • o7___o7@awful.systems · 3 days ago (edited)

    The blarney engines got to my patent lawyer.

    I’m sooo ready for this shit to be done.

    • Soyweiser@awful.systems · 2 days ago

      What if your art could literally move to a different platform am I right?

      We should have let them take elfwood.

  • EponymousBosh@awful.systems · 5 days ago

    A friend of mine is working on an internal AI chatbot at their company, so that the Least-Productive Team will have something to answer the same 5 questions they keep asking Friend’s (extremely productive) team, instead of wasting Friend et al.’s time.

    So I guess that’s the one use case of AI bots: to dangle keys in front of MBAs who are too stupid to do their own jobs. Which explains everything, really.

      • mirrorwitch@awful.systems · 4 days ago (edited)

        Hey, it’s not just that, that’s unfair to the chatbots. They’re also used out of contempt for one’s employees.

        • antifuchs@awful.systems · 3 days ago

          Ah, of course! I come from a corp culture where co-workers were also called customers (“users” actually but you know)

            • corbin@awful.systems · 3 days ago

              On one hand, no, it’s an inevitable consequence of a company becoming so large that it needs a department to manage its internal infrastructure. When I worked at Google, my customers were Googlers; that is, the services I owned were only queried by fellow employees. On the other hand, books like The Circle are popular precisely because they capture the quasi-cult vibe of working at places like Google.

    • Soyweiser@awful.systems · 4 days ago

      They are now also dropping their porn chatbots. (Remember, though, they were late to that; from what I heard, the market was already filled.)

        • Soyweiser@awful.systems · 3 days ago (edited)

          Not sure; from what I heard a lot of porn people use other models anyway. Not exactly sure which models (I heard there are also places with jailbroken models, but I’m not going to look for that for obvious reasons), but from what I heard a while back grok/gemini/deepseek were popular. Low N on that, however. Grok seems to be especially popular for revenge-porn sort of shit.

          For example, see the usage stats here for roleplay using janitor.ai (most roleplay is not erotic, though the site has no ‘erotic content’ tracking; still, OpenAI’s models are not in the top 20). This is, however, free vs. priced models etc. (and deepseek is a lot cheaper, which matters for roleplay, especially if you want a history, as that context needs to be resent every time; the power cost of all this must be insane).

    • fullsquare@awful.systems · 4 days ago

      spare a thought and a prayer for all these careers of ambitious propagandists and engagement farmers, dislocated without warning

    • Soyweiser@awful.systems · 5 days ago

      Really hope this is going to be the wave breaking, as it is getting so stupid. (Some Dutch library had all its new books be AI slop (some even quite racist, it seems), and this isn’t a single incident.)

  • blakestacey@awful.systems · 5 days ago

    Carl Bergstrom notes a publicity stunt by Anthropic:

    “The AI Grad Student”: A Harvard professor describes working with Claude.

    Early on, he describes misconduct that would cause any student to be terminated: “It faked results, hoping I wouldn’t notice.”

    But he ends the essay with “Now I’m doing 100% of my research with LLMs”.

    Am I losing my mind?

    Hang around for the “trust me bro, I saw it on YouTube” guy in the comments.

    • nfultz@awful.systems · 5 days ago

      When Bergstrom came to campus last year, he mentioned that he had wargamed a pandemic response with vaccinations for the Bush administration, and they were worried enough about people raiding vaccine trucks, Mad Max style, that they planned for all these armed escorts.

      … instead we get to live in Eddington.

      • froztbyte@awful.systems · 5 days ago

        I’ve been rewatching ReGenesis and you can sorta see a similar background to it (it being of the same vintage): bio attacks are the Big Bad, there’s actual mobilisation against viral spread, etc

        kinda whiplash just how fucking rapidly the antivaxxer movement (and general anti-science sentiment) managed to spread and become popular in the years since (and specifically circa the ~2016 mark)

        • gerikson@awful.systems · 4 days ago

          As a counterpoint, there’s a local outbreak of meningitis in Kent (UK) and panicked kids are lining up for vaccinations there.

          COVID mostly hit the invisible elderly. It’s hard to imagine what would have happened if kids 0-6 were the most vulnerable.

          • TrashGoblin@awful.systems · 3 days ago

            Over here, measles hits mostly kids, but if anything, the new measles epidemic seems to have made antivaxxers double down.

            • gerikson@awful.systems · 3 days ago

              Yeah, it’s tragic, but I think it can be “explained” by the incorrect idea that measles isn’t that bad, really, and that it was “natural” to get it. I don’t know the exact lethality differences between the diseases in question - just that for an otherwise healthy 20-year-old university student, being forced to stay at home because of a disease mostly killing octogenarians is easier to protest against than something that’s killing you and your fellow ravers.

          • Soyweiser@awful.systems · 4 days ago

            People are trying to get vaccinations for that here in .nl (as in one other country it had already spread to a university, iirc, so not totally unwise).

          • froztbyte@awful.systems · 4 days ago

            oh sure, I didn’t mean to imply that good isn’t happening, more observing on how far and popular the whackjobs have managed to come

    • YourNetworkIsHaunted@awful.systems · 4 days ago

      You know, when I think about securely holding onto things and protecting them without damaging or dropping them, I think of a fucking OPEN CLAW said nobody ever.

    • fiat_lux@lemmy.world · 5 days ago

      The Tumblr thread about covers it, but I had a quick look through the code and this caught my attention in the get2fa.ts file. It’s used in Authorization headers for https://api.resend.com/emails requests. I’ve never used AWS Secrets Manager, but I’m pretty sure this is a no.

      const RESEND_API_KEY_SECRET_ARN = 'arn:aws:secretsmanager:us-east-2:<REDACTED_HARDCODED_STRING>:secret:RESEND_API_KEY-<REDACTED_HARDCODED_STRING>';

      const result = execSync( `aws secretsmanager get-secret-value --secret-id "${RESEND_API_KEY_SECRET_ARN}" --region us-east-2 --profile <REDACTED_HARDCODED_STRING> --query SecretString --output text`, { encoding: 'utf-8' } ).trim();
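      For contrast, a minimal sketch of the more conventional pattern, i.e. reading the key from runtime configuration instead of baking an account-specific ARN and CLI profile into source (the helper and variable names here are illustrative, not from the actual repo):

```typescript
// Hypothetical helper: fail fast if a required secret was not injected into
// the environment by the deploy platform (which is where something like
// Secrets Manager would hand it off), instead of shelling out to the aws CLI
// with a hardcoded ARN at request time.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. const resendApiKey = requireEnv("RESEND_API_KEY");
```

      In a setup that does use Secrets Manager directly, the usual route would be the @aws-sdk/client-secrets-manager client (GetSecretValueCommand) at startup, with the secret id supplied as configuration; either way, nothing account-specific lands in the repo.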

    • Architeuthis@awful.systems · 5 days ago

      Oh jolly, can’t wait for this to go viral enough that my boss schedules time to ask me about it.

      The Tumblr thread is a must-read if you’ve ever been near HIPAA-regulated infrastructure.

    • BurgersMcSlopshot@awful.systems · 6 days ago

      Is it going to be a bunch of ArmV8 cores shoved into the same package? Probably going to be a bunch of ArmV8 cores shoved into the same package. You know, for AI.