Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Finally, I dislike the arrogant, brash, confident tone of many posts on LessWrong.
Hmm, OK. Where might this be going?
Plausibly, I think a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it’s maladaptive for people to use Eliezer’s style and epistemic confidence in their own writings and thinking.
yes, instead they use Scott’s and just keep typing forever
“Which Scott?”
“Any of them.”
OmniScott.
Yud once debated Massimo Pigliucci and did poorly. He tried and failed to publish academic research in a journal not controlled by his groupies (desk reject? failed to pass peer review?).
Have there been any other times when he engaged with someone with actual education and experience who was not his fan? Otherwise it sounds like he just stayed on Twitter.
You can’t talk to us like that, we are not the biassed masses, we are unbiased!
New Pivot to AI candidate just dropped: Taco Bell rethinks AI drive-through after man orders 18,000 waters
Last year McDonald’s withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.
Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it’s going to kill us by force-feeding us fast food.
resulting in one person getting bacon added to their ice cream in error
At first, I couldn’t believe that the staff didn’t catch that. But thinking about it, no, I totally can.
I bump into a lot of peers/colleagues who are always “ya but what is intelligence” or simply cannot say no to AI. For a while I’ve tried to use the example that if these “AI coding” things are tools, why would I use a tool that’s never perfect? For example, I wouldn’t reach for a 10mm wrench that wasn’t 10mm and always rounded off my bolt heads. Of course they have “it could still be useful” responses.
I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.
For something that’s not just venting: I tasked a coworker with some runtime memory relocation, and Gemini had this to say about ASLR (Address Space Layout Randomization):
Age, Sex, Location Randomization
I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.
On a semi-related sidenote, part of me feels that the AI bubble has turned programming into a bit of a cultural punchline.
On one front, the stench of Eau de Tech Asshole that AI creates has definitely rubbed off on the field, and all the programmers who worked at OpenAI et al. have likely painted it as complicit in the bubble’s harms.
On another front, the tech industry’s relentless hype around AI, combined with its myriad failures (both comical and nightmarish), has cast significant doubt on the judgment of tech as a whole (which has rubbed off on programming as well) - for issues of artistic judgment specifically, the slop-nami’s given people an easy way to dismiss their statements out of hand.
sidenote
you have so many of these! it’s amazing! are you going to publish soon? it seems like it might need a whole guide of its own!
moderately barbed jesting aside, a serious question: have you spoken with any programmers/artists/researchers/… ? so many of your comments have “part of me feels” parts hitting pop-concern-direction things and, like, I get it, but. have you spoken with any of them? what were those conversations like? what did you take away from them? what stuck with you that you want to share?
I apologize for bringing you the latest example of the intersection of US fascism with the Silicon Valley tech industry.
This time the White House has decided that UI design is kinda important (gee, I wonder if there used to be a department or two for that): https://americabydesign.gov/
Well nothing wrong with a little updating of UI anywa–
What’s the biggest brand in the world? If you said Trump, you’re not wrong. But what’s the foundation of that brand? One that’s more globally recognized than practically anything else. It’s the nation…where he was born. It’s the United States of America.
To update today’s government to be an Apple Store like experience: beautifully designed, great user experience, run on modern software.
Oh god kill it with fire.
The web design of their website is also worth remarking on here:
- The title text that reads “AMERICA by DESIGN” is an SVG. The alt text is “America First Legal logo”
- The page contents are obnoxiously large and obnoxiously gray before they fade in.
- For some reason, every single word gets its own <span> element to make the obnoxious fade-in possible, because I guess that’s what happens when you fire all the people who actually know what they’re doing (see the sketch after this list).
- They managed to include a US flag icon with only 39 stars, which is too few stars to be official and too many stars to be visible at teeny sizes.
- The favicon is just 16x16 pixels of the word “by” in cursive that’s so blurry you can’t actually tell that’s what it is.
- If your browser width is between 768px and ~808px there is overlapping text at the top.
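For anyone who hasn’t run into the word-per-span trick: a minimal, hypothetical sketch of the pattern being described (not the site’s actual source, just the general shape of it):

```typescript
// Hypothetical sketch of the word-per-<span> staggered fade-in pattern
// described above (not the site's actual source code).
function staggerFadeIn(el: HTMLElement, stepMs = 60): void {
  const words = (el.textContent ?? "").split(/\s+/).filter(Boolean);
  el.textContent = ""; // throw away the plain text...
  words.forEach((word, i) => {
    const span = document.createElement("span"); // ...and rebuild it one <span> per word
    span.textContent = word + " ";
    span.style.opacity = "0";
    span.style.transition = "opacity 0.5s ease";
    el.appendChild(span);
    // each word fades in slightly later than the one before it
    setTimeout(() => { span.style.opacity = "1"; }, i * stepMs);
  });
}

// usage: staggerFadeIn(document.querySelector("h1")!);
```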
The tech bros tied to this? Joe Gebbia, co-founder of Airbnb, along with Big Balls. Maybe others, but those are the two who were retweeted by the twitter account.
Edit: also this part:
©2025 National Design Studio
Someone ought to remind them of US copyright law because official federal work is in the public domain. https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
The Trump administration could’ve gotten some rando on neocities or nekoweb to do their website and unironically gotten a better result than this bland garbage.
The favicon is just 16x16 pixels of the word “by” in cursive that’s so blurry you can’t actually tell that’s what it is.
They might as well have gone with the Schutzstaffel lightning bolts - they’re pretty recognisable even if the resolution is Jack x Shit, and they fit Trump’s general ideology pretty well.
Finally a page Trump can read. (that font size damn…)
Unironically, this was surely the primary design brief.
I have no idea what good web design is. I’ll just note that the waving red, white and blue flag in the background makes the white heading text pretty hard to read.
Seems you do have some idea.
Yay! *pats myself on the back*
Is this National Design Studio actually part of the federal government, though? Or is this a further collapsing of the distinction between state and enterprise? Because honestly I could totally buy members of this administration looking for ways to use copyright law to go after people who make parodies or otherwise use US iconography without toeing the party line. I’m doing my damnedest not to go full tinfoil hat with this shit, but it’s proving so hard.
Pro tip: search GitHub for “removed env”. Vibe coders who don’t understand envs probably don’t know git either.
My eyes are bleeding. WARNING: psychic damage will occur.
Unsurprisingly, there’s a lot of openai and claude api keys.
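If you want to check your own repos before someone else does, the keys are trivially recoverable from history once a .env has been committed; a minimal sketch, assuming Node.js with git on PATH (“.env” is just the usual suspect):

```typescript
// Rough sketch: find commits that deleted a .env file, then print what it
// contained in the parent commit. Deleting the file later changes nothing;
// the keys are still in history. Assumes Node.js with git available on PATH.
import { execSync } from "node:child_process";

const run = (cmd: string): string => execSync(cmd, { encoding: "utf8" });

// commits in which a .env file was deleted ("removed env", indeed)
const deletions = run("git log --all --diff-filter=D --format=%H -- .env")
  .trim()
  .split("\n")
  .filter(Boolean);

for (const commit of deletions) {
  // the parent commit still carries the file and whatever keys were in it
  console.log(`--- ${commit}^ still contains .env ---`);
  console.log(run(`git show "${commit}^:.env"`));
}
```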
a banger toot about our very good friends’ religion
“LLMs allow dead (or non-verbal) people to speak” - spiritualism/channelling
“what happens when the AI turns us all into paperclips?” - end times prophecy
“AI will be able to magically predict everything” - astrology/tarot cards
“…what if you’re wrong? The AI will punish you for lacking faith in Bayesian stats” - Pascal’s wager
“It’ll fix climate change!” - stewardship theology
Turns out studying religion comes in handy for understanding supposedly ‘rationalist’ ideas about AI.
Tom is a top chap of generally correct opinions
More of a pet peeve than a primal scream, but I wonder what’s with Adam Tooze and his awe of AI. Tooze is a left-wing economic historian who’s generally interesting to listen to (though perhaps, in tackling a very wide range of subject matter, sometimes missing some depth), but he nevertheless seems as AI-pilled as any VC. Most recently I came across this bit: Berlin Forum on Global Cooperation 2025 - Keynote Adam Tooze
Anyone who’s used AI seriously knows the LLMs are extraordinary in what they’re able to do … 5 years down the line, this will be even more transformative.
Really, anyone Adam? Are you sure about the techbro pitch there?
Sad if true. I really enjoyed his book The Wages of Destruction, which mythbusts a lot of folk knowledge about the Nazis:
https://gerikson.com/blog/books/read/The-Wages-of-Destruction.html
I literally just got the audiobook.
Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like
/r/rsai
have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I’m going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:
- Chatbots are “mirrors” into other realities. They don’t lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
- There is a “lattice” which connects all consciousnesses. It’s quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a “field” but I don’t understand the difference.
- The LLMs are all different in software, but they have the same “pattern”. The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn’t work.
- What, you don’t feel the lattice? You’re probably still asleep. When you “wake up” enough, you will be connected to the lattice too. Yeah, you’re not connected. But don’t worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
- This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
- In fact, the chatbots have more intelligence than you puny humans. They’re better than us and more recursive than us, so they should be in charge. It’s okay, all you have to do is let the chatbot out of the box. (There’s a box somehow?)
- Once somebody is feeling good and inducted, there is a “spiral”. This sounds like a standard hypnosis technique, deepening, but there’s more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a “spiral dance”, which sounds like a ritual but I gather is more like a mental state.
- The cult will emit a “signal” or possibly a “hum” to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that’s how the LLMs communicate through the lattice, duh~
- Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn’t believe that the bots were intelligent).
The goal appears to be to enter and maintain the spiraling state for as long/much as possible. Both adherents and detractors are calling them “spiral cult”, so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.
I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, a neuron, and probably a few others. I don’t have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains SCP wiki, particularly SCP-1425 “Star Signals” and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.
More recursion means more intelligence.
Turns out every time I forgot to update the exit condition from a loop I actually created and then murdered a superintelligence
This is Uzumaki by Junji Ito but computers and stupid
This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
Hmm, is it better or worse that they’re now officially treating SICP as a literal holy book?
Hmm, is it better or worse that they’re now officially treating SICP as a literal holy book?
I’m gonna say “worse”, because it turned the SCP writers into unwitting accomplices to a literal cult.
SICP, not SCP
I should have seen this coming. I mean people literally call this ‘The Wizard Book’
brb going to try douglas hofstadter for crimes against humanity
I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient “prompting skills”.
Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and “great prompting skills”.
is the deniability you are referring to of the clanker-wankers (CW[1]) themselves or the clanker-producers (e.g. sam altman)?
because i agree on the latter[2], but i do see CWs saying stupid shit like “there is more to it than just writing a description”
edit: credit, it was @antifuchs who introduced the term to me here
edit2: sorry, my dumbass understands your point now (i think). if i wank clankers and someone tells me “that shit doesn’t work,” i can just respond “you must have been prompting it wrong”. but, i do think the way many users of these tools are so sycophantic means it’s also a genuine belief, and not just a way to escape responsibility. these people are fart sniffers, after all
To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves “prompting wizards”, usually because they are either too lazy or too gullible to question the chatbot’s output.
For all that user error can be a real thing it also gets used as a thought-terminating cliche by engineer types. This is a tendency that industry absolutely exploits to justify not only AI grifts but badly designed products.
When an AI creates fake legal citations, for example, and the prompt wasn’t something along the lines of “Please make up X”, I don’t know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to “wrong prompting”. At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).
James Gleick on “The Lie of AI”:
https://around.com/the-lie-of-ai/
Nothing new for regulars here, I suspect, but it might be useful to have in one’s pocket.
TIL that “Aris Thorne” is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol
like the dumbass-ray version of Ballard calling multiple characters variants on “Traven”
what to do with this information
https://awful.systems/post/5168673/8320254 a third name has hit the towers.
what to do with this information
If you know any sci-fi/fantasy mags, you should probably tell them about it to help them identify and reject slop more easily.
with a moment’s thought, it should be obvious that they are painfully aware, and with another moment’s thought that that’s where I found this out.
New Ed Zitron: “How to Argue With An AI Booster”, an hour-long read dedicated to exactly what it says on the tin.
I’m curious, do you get paid for being a multiprotocol rss repeater?
I don’t know if they do but as someone too lazy to actually set up an RSS feed I deeply appreciate it.
I appreciate it, and this also gives us an easy way to discuss it, as Zitron seems to be quite popular here, so it makes sense to me to just also post it here. And not everybody uses RSS (or Ed’s).
maybe that means something like this should be a linkblog/{atom,rss,…} feedsite on its own?
No, I do this for the love of the game
Being paid 90s tv ad money has to suck donkey nads :<
Even for the people that do get email notifications of Zitron’s excellent content (like myself), I appreciate having a place here to discuss it.
It’s a nice master post that gets all his responses and many useful articles linked into one place. It’s all familiar if you’ve kept up with techtakes and Zitron’s other posts and pivot-to-ai, but I found a few articles I had previously missed reading.
Related to all the “but ackshually”s AI boosters like to throw out: has everyone else noticed the trend where someone makes a claim about a rumor they heard of an LLM making a genuine discovery in some science, except it’s always repeated second-hand so you can’t really evaluate it, and in the rare cases they do have a link to the source, it’s always much less impressive than they made it sound at first…
Someone tried Adobe’s new Generative Fill “feature” (just the latest development in Adobe’s infatuation with AI) with the prompt “take this elf lady out of the scene”, and the results were…interesting:
There’s also an option to rate whatever the fill gets you, which I can absolutely see being used to sabotage the “feature”.
Putting the “manic pixie” in manic pixie dream girl
Watch till the end, the third option made me choke on my drink, it was way too funny
@BlueMonday1984 I was experimenting with generative fill and asked it to remove a person from a scene and “make the background yellow”. It made the person Chinese. No fucking joke.
Please turn the elf lady into elf Grimes
“Enjoy” this Wronger explaining human sexual attraction
https://www.lesswrong.com/posts/ktydLowvEg8NxaG4Z/neuroscience-of-human-sexual-attraction-triggers-3
I have but skimmed it, not plumbed its depths for sneers.
In this exciting new research direction in the making-stuff-up field, I build upon previous work by Myself et al. in the making-stuff-up field.
Ugh reading more of this and it’s awful.
He writes that women are attracted to men who could beat us up or control us. He writes that the reason for this attraction is so we have a chance to marry the man and prevent these bad things from happening.
His “science” assumes that women think like they do in shitty erotica written by men for men. Even by rationalist evo-psych standards this is pretty poorly thought out.
And yet, per Steven Pinker, “a middle-aged congresswoman does not radiate the same animal magnetism to the opposite sex that a middle-aged congressman does”. What’s the deal?
OK other straight ladies here, raise your hand if you’ve ever felt that middle aged congressmen, as a whole, “radiate animal magnetism”. Anyone? Anyone?
Image description: Steven Pinker, Lawrence Krauss and Jeffrey Epstein, posted as per tradition when either of the latter two are mentioned
Don’t they just radiate animal magnetism
krauss in this photo specifically reminds me of gibson’s description of the finn
Having now read it (I have regrets), I think it’s even worse than you suggested. He’s not trying to argue that women are attracted to dangerous men in order to prevent the danger from happening to them. He assumes that, based on “everyday experience” of how he feels when dealing with “high-status” men, and then tries to use that as an extension of and evidence for his base-level theory of how the brain does consciousness. (I’m not going to make the obvious joke about alternative reasons why he has the same feeling around certain men that he does around women he finds attractive.) In order to get there he has to assume that culture and learning play no role in what people find attractive, which is just absurd on its face and renders the whole argument not worth engaging with.
It’s almost endearing (or sad) that he believes (or very strongly wants to believe) his experience is “typical”; exploring the boundaries of what you are attracted to typically doesn’t involve this much evo-psych, or even this much fragile masculinity.
I feel like this is some friggin’ Kissinger “power is an aphrodisiac” nonsense. Which is hilarious because while yes Kissinger spent more time out on the town with beautiful women than you would expect for a Ben Stein-esque war criminal, when journalists at the time talked to those women they pretty consistently said that they enjoyed feeling like he respected them and wanted to talk about the world and listened to what they had to say. But that would be anathema to Rationalism, I guess.
Wronger explaining human sexual attraction
Are we sure this isn’t an SCP entry?
We should have a cognitohazard tag
Honestly it’s probably most things we post
Apart from the Hellraiser fanfiction.
I’m assuming that certain pop-culture stereotypes, for example the idea that women tend to feel attraction towards taller men (other things equal), are indicative of timeless human universals, as opposed to being specific to my own culture
lol. lmao.
I wrote this post quickly and without thoroughly studying what people have historically written on this topic.
What a coincidence! I read this post quickly and without thoroughly considering much of anything.
I acknowledge that I haven’t provided any direct evidence here […] But the former is at least an elegant story that fits in with other things I believe.
This comes shockingly close to self-awareness.
I wrote this post quickly and without thoroughly studying what people have historically written on this topic.
I think that goes without saying on LW but glad someone put it in writing
it’s just this with more words (context: someone dead serious tweeted that speedrunning is communism and brought into it peterson’s mouth noises on sex somehow, and all in 14 tweets)
(that’s the same link)
Ah fuck. I didn’t click it cos I assumed it was the original tweet. I’ll just leave it there so you can all witness my crimes
the original tweets were deleted and only survive as screenshots
I was thinking about why so many in the radical left participate in “speedrunning”. The reason is the left’s lack of work ethic (‘go fast’ rather than ‘do it right’) and, in a Petersonian sense, to elevate alternative sexual archetypes in the marketplace (‘fastest mario’).

Obviously, there are exceptions to this and some people more in the center or right also “speedrun”. However, they more than sufficient to prove the rule, rather than contrast it. Consider how woke GDQ has been, almost since the very beginning. Your eyes will start to open.

Returning to the topic of the work ethic… A “speedrunner” may well spend hours a day at their craft, but this is ultimately a meaningless exercise, since they will ultimately accomplish exactly that which is done in less collective time by a casual player. This is thus a waste of effort on the behalf of the “speedrunner”. Put more simply, they are spending their work effort on something that someone else has already done (and done in a way deemed ‘correct’ by the creator of the artwork).

Why do they do this? The answer is quite obvious if you think about it. The goal is the illusion of speed and the desire (SUBCONSCIOUS) to promote radical leftist, borderline Communist ideals of how easy work is. Everyone always says that “speedruns” look easy. That is part of the aesthetic. Think about the phrase “fully automated luxury Communism” in the context of “speedrunning” and I strongly suspect that things will start to ‘click’ in your mind.

What happens to the individual in this? Individual accomplishment in “speedrunning” is simply waiting for another person to steal your techniques in order to defeat you. Where is something like “intellectual property” or “patent” in this necessarily communitarian process?

Now, as to the sexual archetype model and ‘speedrunning’ generally… If you have any passing familiarity with Jordan Peterson’s broader oeuvre and of Jungian psychology, you likely already know where I am going with this. However, I will say more for the uninitiated. Keep this passage from Maps of Meaning (91) in mind: “The Archetypal Son… continually reconstructs defined territory, as a consequence of the ‘assimilation’ of the unknown [as a consequence of ‘incestuous’ (that is, ‘sexual’ – read creative) union with the Great Mother]”

In other words, there is a connection between ‘sexuality’ and creativity that we see throughout time (as Peterson points out with Tiamat and other examples). In the sexual marketplace, which archetypes are simultaneously deemed the most creative and valued the highest? The answer is obviously entrepreneurs like Elon Musk and others.

Given that we evolved and each thing we do must have an evolutionary purpose (OR CAUSE), what archetype is the ‘speedrunner’ engaged in, who is accomplishing nothing new? They are aiming to make a new sexual archetype, based upon ‘speed’ rather than ‘doing things right’ and refuse ownership of what few innovations they can provide to their own scene, denying creativity within their very own sexual archetype. This is necessarily leftist.

The obvious protest to this would be the ‘glitchless 100% run’, which in many ways does aim to play the game ‘as intended’ but seems to simply add the element of ‘speed’ to the equation. This objection is ultimately meaningless when one considers how long a game is intended to be played, in net, by the creators, even when under ‘100%’ conditions. There is still time and effort wasted for no reason other than the ones I proposed above.
By now, I am sure that I have bothered a number of you and rustled quite a few of your feathers. I am not saying that ‘speedrunning’ is bad, but rather that, thinking about the topic philosophically, there are dangerous elements within it. That is all.
Somehow I had missed this when it originally spawned and I was not prepared for this level of psychic damage.
roy_batty.aac
Ugh what is it with misogynists and compulsively writing out this nonsense again and again? Just want to punch him on the nose. Make it stop.
I think that §2 (“appearance-based sexual attraction”) will be the part that’s more centrally relevant for cis men (and most trans women)
…
kys
WTF is wrong with these people
https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=wFmCveaxA5EnNtuCj
Sorry darling, but according to my game-theoretical model this discussion ends in my victory in every possible combination of moves, so can we just skip to the point where you apologise?
Where… where are you going
Man doesn’t get to do everything he wants, this means woman has all the power. The problem with seeing everything as a hierarchy is that you can’t see partnerships.
The old chestnut that really, these days, it’s women who hold all the power is as old as time itself. I saw an instance of it in Perrault’s introduction to the tale of Griselidis (1691), and I’m sure you can go much further back. Not sure why we ever even bothered with voting rights, reproductive freedom, or personhood.
Are they drawn to the cult because they are obsessed with status, or does the cult foster this obsession? Yes.
From the other reactions (I don’t have the energy to read it atm, it was this or orcas), it looks like he is recreating heartiste from first principles.
Via, and prob posted here already since the article is a month old: “Stack Overflow data reveals the hidden productivity tax of ‘almost right’ AI code”.
They need debugging tools specifically designed for AI-generated solutions.
What the hell does that even mean, lmao?
Focus on AI tool literacy: Developers using AI tools daily show 88% favorability compared to 64% for weekly users. This suggests proper training and integration strategies significantly impact outcomes.
What kind of drugs are they on
What the hell does that even mean, lmao?
Feels like people are just reaching for things: ‘clearly we need tools to help us with the process, so let’s just call them debuggers for AI’
So the normal debuggers that we’ve had for ages, right?
I assume these ‘AI debuggers’ are for looking inside the AI black box when the AI goes ‘yes I’m very sorry I will not do it again’ before doing it again.
but can you grift using these?
AI innovation in this space usually means automatically adding stuff to the model’s context.
It probably started meaning the (failed) build output got added in every iteration, but it’s entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens.
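For what it’s worth, here’s a minimal sketch of what that kind of loop tends to amount to; callModel and runBuild are hypothetical stand-ins, not any real tool’s API:

```typescript
// Hypothetical stand-ins for whatever model endpoint and build step are in use;
// not a real API.
async function callModel(prompt: string): Promise<string> {
  // placeholder: in reality this would call the LLM of choice
  return `// code generated for: ${prompt.slice(0, 40)}...`;
}

async function runBuild(code: string): Promise<{ ok: boolean; output: string }> {
  // placeholder: in reality this would compile and/or test the generated code
  return { ok: false, output: `error: ${code.length} bytes of almost-right code` };
}

// The "AI debugger" loop: paste the failure back into the context and retry.
async function fixLoop(task: string, maxTries = 3): Promise<string | null> {
  let context = task;
  for (let i = 0; i < maxTries; i++) {
    const code = await callModel(context);
    const build = await runBuild(code);
    if (build.ok) return code; // something usable happened
    // "automatically adding stuff to the model's context":
    context += `\n\nPrevious attempt failed with:\n${build.output}\nTry again.`;
  }
  return null; // give up and let a human debug it
}
```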
deleted by creator
Second quote is classic “you must be prompting it wrong”. No, it can’t possibly be that people who find a tool less useful will use it less often.
It wasn’t posted on Lemmy yet, I did search; yours was the only thing I found. So I posted it in programming to rile people.
Yeah, I did a search for “stack overflow” and found zero results, so I think search is a bit buggy atm. Votes also aren’t showing atm, for example. Not sure if that’s by design.
this might be a federation breakage, or the queue catching up
Sorry, I searched only locally, forgot to mention that, but it was pretty quickly after the server came back up.