Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)
From Lila Byock:
A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.
The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.
I try to avoid having to even see the outputs of these fucking systems, but you just made me realize that there’s going to be more than a few of them that will “leak” (read: preferentially deliver, by way of training focus) the kinks of their particular owners. I mean it’s already happening for the textual replies on twitter, soothing felon’s ever so bruised ego. the chance of it not Shipping beyond that is pretty damn zero :|
god I hate all of this
2 links from my feeds with crossover here
Lawyers, Guns and Money: The Data Center Backlash
Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric
Unfortunately Techdirt’s Mike Masnick is a signatory to some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how their wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.
Anybody writing a manifesto is already a bit of a red flag.
i am pretty sure i am shredding the Resonant Computing Manifesto for Monday
and of course Anil Dash signed it
The people who build these products aren’t bad or evil.
No, I’m pretty sure that a lot of them just are bad and evil.
With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise.
[citation needed]
[to a source that’s not laundered slop, ya dingbats]
i don’t know if they are unusually evil, but they sure are greedy
to a source that’s not laundered slop, ya dingbats
Ha, that’s easy. Read Singularity Sky by Charles Stross and see all the wonders the Festival brings.
New and lengthy sneer from Current Affairs just dropped: AI is Destroying the University and Learning Itself
article is informing me that it isn’t X - it’s Y
It’s the McMindfulness guy, nice to see that he is still kicking around.
In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language—his soft-spoken, monkish image (gosh, little Sammy even practices mindfulness!)
lol ofc he does
Etymology Nerd has a really good point about accelerationists, connects them to religion
I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and was more shitposty. Still, good
/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.
Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz’s claims about SA payoffs and how he thinks Yud’s salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can’t see them.
Guy does a terrible job explaining literally anything. Why, when trying to explain all the SA based drama, does he choose to create an analogy where the former employee is heavily implied to have murdered his wife?
S/o to cinnaverses for mixing it up in there.
“Nah, salary stuff is private”, starting to think this sort of stuff is an idea introduced to protect capital and nobody else.
I was teasing this out in my head to try to come up with a good sneer. First thought: for an organisation that tries to appeal to EAs, you’d think that they would do a good job of being transparent about why so much money is being spent on someone with such low output. But immediate rebuttal: the whole point of the TESCREAL cult shit is that yud gets a free pass because he’s the chosen one to solve alignment.
Was thinking more about how the radical “don’t fall for biases, think for yourself, come here to really learn how to think (so we can stop the paperclip machine and resurrect the dead)” crowd defends a half-million-dollar salary with a “that’s private”.
But that is the same conclusion. The prophet must be protected.
Cinnas
The Rolling Stone article is a bit odd (it appears to tell the story of the ex-employee who created Miricult twice, the first time without names and the second time naming the accuser), but I trust them that MIRI did pay the accuser. Rolling Stone is a serious news organization which can be sued.
Yeah, I think Rolling Stone was worried about getting sued and omitted Helm’s name in the first draft (or something like that).
I know who the alleged victim was, and I think there probably was a crime and blackmail payments but the alleged victim didn’t want to come forward for a number of reasons (among other things, he’s still part of the rationalist community and has faced a lot of harassment from the public after an unrelated newspaper article outed him as being trans). I’d also point out that the only person that miricult directly accused of statutory rape was one of Yudkowsky’s employees rather than Yudkowsky himself. That being said, the journalist who wrote the Rolling Stone article claims she got a copy of the police report Helm filed and only Yudkowsky was named.
Even if miricult was total bullshit I’m confident that the alleged victim was lying about not being exploited by other rationalists; a few years later he and a couple of other people posted accounts of being sexually abused by a rationalist (unrelated to miricult) and it led to the abuser being ostracized from the rationalist community.
Anyways I know a lot more about this but I’d rather not discuss the details on a publicly viewable forum to protect the privacy of the people involved.
I agree that it’s gross to discuss a lot of this in public, and that underage sex is often an ethical grey area. I had no idea that the person who accused BD of pushing him into substance use and extreme BDSM scenarios is also the person who allegedly had sex underage with a MIRI staffer while living in a Rationalist group home.
Ziz’s blog had posts that revealed his identity and mentioned some of the BD stuff, once I found them it was just a matter of putting two and two together, so to speak
Damn, I missed this and now the comment is deleted. Do you happen to remember what he said?
I believe he was trying to explain why it looked like MIRI had paid money out to an alleged sexual abuser. The analogy was constructed something like this:
1. A and B work at a company, C.
2. A has a conflict with B.
3. C decides to fire B.
4. Unrelated to 1, 2, or 3, B has a wife, D, who dies in mysterious circumstances, leading A to strongly believe that B killed D.
5. The police, E, perform an investigation and decide not to pursue a case against B.
6. C pays out B’s severance, unrelated to 2, 4, or 5.
Don’t blame me or how I remembered this if this doesn’t make sense.
Additionally he said something to the effect of “I don’t blame you for not knowing this, it wasn’t effectively communicated to the media”, like it’s no big deal, which isn’t really helping to beat the allegations of don’t-ask-don’t-tell policies about SA in rat-related orgs.
Can confirm. This was like if the pope walked into an r/atheism meetup and showed his texts saying “dw bro, I’ll just move you to a different diocese, btw this totally isn’t about the allegations wink wink”
Hey Google, did I give you permission to delete my entire D drive?
It’s almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.
The documentation for “Turbo mode” for Google Antigravity:
Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)
No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It’s not even named similarly to dangerous modes in other software (like “force” or “yolo” or “danger”)
Just a cool marketing name that makes users want to turn it on. Heck if I’m using some software and I see any button called “turbo” I’m pressing that.
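The deny-list design deserves a concrete poke. Here’s a minimal sketch (hypothetical code, not Antigravity’s actual implementation) of why filtering auto-executed shell commands against a list of forbidden tokens is a fragile safety net: the filter matches literal strings, while the shell offers endless ways to encode the same destructive payload.

```python
# Hypothetical sketch of a token-based deny list over auto-executed
# commands -- NOT Antigravity's real code, just the general failure mode.

DENY_LIST = {"rm", "rmdir", "mkfs", "format"}

def is_allowed(command: str) -> bool:
    """Naive check: block only if a denied token appears verbatim."""
    return not any(token in DENY_LIST for token in command.split())

print(is_allowed("rm -rf /"))              # False: the obvious case is caught
print(is_allowed('sh -c "rm -rf /"'))      # True: '"rm' != 'rm', sails through
print(is_allowed("echo cm0gLXJmIC8= | base64 -d | sh"))  # True: payload is base64-encoded
```

Any mode that auto-executes model-generated commands inherits every one of these bypasses, plus whatever a prompt injection cares to add.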
It’s hard not to give the user a hard time when they write:
Bro, I didn’t know I needed a seatbelt for AI.
But really they’re up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them. While the other hand tells the user “well in our small print somewhere we used the phrase ‘Gemini can make mistakes’ so why did you enable turbo mode??”
yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good
but it is very fucking funny to watch them FAFO
After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: “Never let an LLM have any decision-making power.” At most, LLMs will serve as a heuristic function for an algorithm that actually works.
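To make the “heuristic, not decider” distinction concrete, a toy sketch (all names illustrative, no real API implied): the untrusted score is only allowed to order the search, while a cheap deterministic check makes the actual accept/reject call, so a wrong heuristic costs time but never correctness.

```python
# Toy sketch: an untrusted heuristic proposes an ordering, a
# deterministic validator disposes. Illustrative names only.

def untrusted_heuristic(candidate: str) -> float:
    """Stand-in for an LLM score: may be arbitrarily wrong."""
    return -abs(len(candidate) - 10)

def is_actually_valid(candidate: str) -> bool:
    """The real decision-maker: cheap, deterministic, verifiable."""
    return candidate.isidentifier()

def pick(candidates: list[str]) -> str | None:
    # The heuristic only orders the search; validity gates the decision.
    for c in sorted(candidates, key=untrusted_heuristic, reverse=True):
        if is_actually_valid(c):
            return c
    return None  # a bad heuristic wastes time, never corrupts the answer

print(pick(["foo_bar", "123abc", "xxxxxxxxxx"]))  # "xxxxxxxxxx"
```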
Unlike the railroads of the First Gilded Age, I don’t think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it’s not worth spending lots of money on a task where you don’t need reliability.
The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?
The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true “use cases” to be mainly spam, and perhaps students cheating on homework.
Pessimistically I think this scourge will be with us for as long as there are people willing to put code “that-mostly-works” in production. It won’t be making decisions, but we’ll get a new faucet of poor code sludge to enjoy and repair.
I know it is a bit of elitism/privilege on my part. But if you don’t know about the existence of google translate(*), perhaps you shouldn’t be doing vibe coding like this.
*: this of course, could have been a LLM based vibe translation error.
E: And I guess my theme this week is translations.
A lobster wonders why the news that a centi-millionaire amateur jet pilot has decided to offload the cost of developing his pet terminal software onto peons begging for contributions has almost 100 upvotes, and is absolutely savaged for being rude to their betters
https://lobste.rs/s/dxqyh4/ghostty_is_now_non_profit#c_b0yttk
bring back rich people rolling their own submarines and getting crushed to death in the bathyal zone
enter hashimoto. cringe intensifies
(note: quoted out of order relative to the linked post, for comment cohesion)
Terminals are an invisible technology to most
what a fucking sentence
…that are hyper present in the everyday life of many in the tech industry.
hyper? like this?
But the terminal itself is boring, the real impact of Ghostty is going to be in libghostty and making all of this completely available for many use cases. My hope is that through building a broadly adopted shared underlayer of terminals around the industry we can do some really interesting things.
oh good so the rentier bridgetroll wants to do just a monopoly play? that’s fine I’m sure. note: I don’t think there’s a more charitable reading of this. those shared underlayers already exist, in the form of decades of protocol and other development. many of them suck and I agree about trying to do better, but I (rather strongly) suspect hashi and I have very different ideas of what that looks like
I’ve already addressed the belittling of the project I really find useful and care about. So let’s just move on to the financial class.
Regardless of my financial ability to support this project, any project that financially survives (for or non-profit) at the whims of a single donor is an unhealthy project
“uwu, think of the poor projects. yes sure I could throw $20m at this in some kind of funny trust and have it live forever but that wouldn’t allow me to evade the point so much!”
I paid a 9-figure tax bill and also donated over 5% of my other stuff to charity this year
“I’m not as bad as the other billionaires I promise”
I’m too fucking old to care about hipster terminals, so I had no idea ghostty was started by a (former) billionaire. If forced to choose a new terminal I will certainly take this fact into consideration.
all things aside, is current ghostty any good, or still consolephile-ware (the terminal equivalent of audiophile-ware)?
i’m generally reluctant to try something which reeks of intensive self-promotion, but a few months ago i decided to finally see what’s the hype about, and, well, it’s a terminal emulator.
wezterm does much more, and with a much cleaner ui, and it’s programmable, and the author doesn’t remind me that hashicorp is a thing that exists.
second person today I saw mentioning wezterm, guess I should look sometime for familiarity
ghosTTy is the username of a schizoposter on Something Awful who only shows up to post bitcoin price charts and get mocked into oblivion. I wonder if there’s any connection?
I took psychic damage by scrolling up and seeing promptsimon posting a real doozie:
I have been enjoying hitting refresh on https://fuckthisurl/froztbyte-scrubbed-it-intentionally throughout today and watching the number grow - it’s nice to see a clear example of people donating to a new non-profit open source project.
“oooh! look at the vanity project go! weeeee, isn’t having a famous face attached to it fun?” with exactly no reflection on the fucking daunting state of open source funding in multiple other domains and projects
there’s some more cursor fun too. no sneers yet, I’ve barely started reading
saw it via jonny who did do some notes
oh I just saw this is almost a month old! still funny tho
(and I’ve been busy af in afkspace)
saw this elsewhere. the account itself appears to be a luckey stan account, but the next:
There’s more crust than air or sea or land… so a vehicle that moves through the crust of the earth is going to be a huge deal
I have built working prototypes of this
so are we talking mining, or The Core (2003)? it feels like he’s trying to pitch it as though it’s a Tiberian Sun style subterranean APC, but I can’t be sure whether I’m reading into it
hey crazy idea, what if we made a vehicle that moves across the habitable surface of the crust instead
it would need to be organized somehow; make it big and electric to be efficient. people go to the same places every day, so you can just put a durable track of some kind in the right place.
I’m thinking nydus worms from SC2 or the GLA tunnel system in C&C generals.
right? like, is felon finally getting competition for unhinged billionaire gamerposting?
announcing “leeroy jenkins” mode for grok where it just posts your tweet drafts and you can’t delete them
Don’t forget he made a prototype of a gaming helmet with shotgun shells rigged to kill the wearer.
Edited it into a reply to Hanson now believing in aliens, but it seems like the SSC side of rationalism has a larger group of people also believing in miracles: https://www.astralcodexten.com/p/the-fatima-sun-miracle-much-more (I have not read the article in depth; going by what others reported about this incident, there also seem to be related LW posts.)
Read it a bit now; noticed that scott doesn’t know people who speak Portuguese and is relying on mt (machine translation; also unclear what type).
The long expected collapse of the rationalists out of their flagging cult into ordinary religion and conspiracy theory continues apace.
This does mean there is a potential future where the pope joins sneerclub
Reposted from sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention, and made me read the rest, even though it isn’t about ai at all.
Few IT projects are displays of rational decision-making from which AI can or should learn.
Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.
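To put that in toy form (purely illustrative, nothing to do with any real model): a system that can only reproduce the distribution of its training data cannot emit a category the data never contains.

```python
# Toy illustration: "training" on outcomes that are at best mediocre
# leaves zero probability mass on "good", whatever the decoding strategy.

from collections import Counter

training_outcomes = ["mediocre", "bad", "mediocre", "terrible", "mediocre"]
model = Counter(training_outcomes)   # training = counting what was seen

def generate() -> str:
    # Greedy decoding: emit the most frequent outcome.
    return model.most_common(1)[0][0]

print(generate())     # "mediocre" -- the ceiling of the training set
print(model["good"])  # 0 -- never observed, so it can never be produced
```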
The article continues to talk about how we can’t do IT, and wraps up with
It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined
It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.
Now I’m even more skeptical of the programmers (and managers) who endorse LLMs.
Considering the sorry state of the software industry, plus said industry’s adamant refusal to learn from its mistakes, I think society should actively avoid starting or implementing new software, if not actively cut back on software usage when possible, until the industry improves or collapses.
That’s probably an extreme position to take, but IT as it stands is a serious liability - one that AI’s set to make so much worse.
For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work, with three giant accountancy firms able to audit the books.
Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their Post Office system that resulted in people being driven to poverty and suicide.
At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety critical firmware sort of things. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? we can shovel that shit out of the door and no-one cares.
I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.
I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.
Considering how “vibe coding” has corroded IT infrastructure at all levels, the AI bubble is set to trigger a 2008-style financial crisis when it bursts, and AI itself has been deskilling students and workers at an alarming rate, I can easily see why.
In the land of the blind, the one-eyed man will make a killing as an independent contractor cleaning up after this blows up.
HN discusses aliens https://news.ycombinator.com/item?id=46111119
“I am very interested.”
Bet you are, bud.
DoD tries to cover up development of the U-2 and F-117, and an entire religion grows up from this
Please keep these people away from my precious nerd-subjects for the love of god.
Chariots of the Gods was released in 1968. I think that ship may have sailed decades ago.
sob
How many aliens can dance on the head of a pin?
A second post on software project management in a week, this one from deadsimpletech: failed software projects are strategic failures.
A window into another IT disaster I wasn’t aware of, but clearly there is no shortage of those. An Australian one this time.
And of course, without having at least some of that expertise in-house, they found themselves completely unable to identify that Accenture was either incompetent, actively gouging them or both.
(spoiler alert, it was both)
Interesting mention of Clausewitz in the context of management, which gives me pause a bit, because techbros famously love “The Art of War”, probably because Sun Tzu was patiently explaining obvious things to idiots and that works well on them. “On War” might be a better text, I guess.
I associate Clausewitz (and especially John Boyd) references more with a Palantir / Stratfor / Booz / LE-MIC-consulting class compared to your typical bay area YC techbro in the US, and a very different crowd over in AU / NZ where grognards probably outnumber the actual military. LWers never bring up Clausewitz either but love Sun Tzu. But as far as software strategy posts go, I’d much rather read a Clausewitz tie-in than, say, Mythical Man Month or Agile anything.
Much of the content of Mythical Man Month is still depressingly relevant, especially in conjunction with Brooks’ later stuff like No Silver Bullet. A lot of senior tech management either never read it, or read it so long ago that they forgot the relevant points beyond the title.
It’s interesting that Clausewitz doesn’t appear in LW discussions. That seems like a big point in favour of his writing.
If you liked Brooks, you might give Gerald Weinberg a try. A bit more folksy / less corporate.
More grok shit: https://futurism.com/artificial-intelligence/grok-doxxing
It, in contrast to most other models, is very good at doxxing people.
Amazing how everything Musk makes is the worst in class (and somehow the Rationalists think he will be their saviour (that is because he is a eugenicist)).
the base use for LLMs is gonna be hypertargeted advertising, malware, political propaganda, etc.
well the base case for LLMs is that, right now
the privacy nerds won’t know what hit them
(thinks) groxxing