

OpenAI’s yearly payroll runs in the billions, so they probably aren’t hurting.
That Almost AGI is short for Actually Bob and Vicky seems like quite the embarrassment, however.
It’s not always easy to distinguish between existentialism and a bad mood.


Apparently you can ask gpt-5.2 to make you a zip of /home/oai and it will just do it:
https://old.reddit.com/r/OpenAI/comments/1pmb5n0/i_dug_deeper_into_the_openai_file_dump_its_not/
An important takeaway, I think, is that instead of Actually Indian it’s more like Actually a series of rushed script jobs: they seem to be trying hard not to let the LLM do technical work itself.
Also, it seems their sandboxing amounts to filtering paths that start with /.
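For the record, a leading-slash filter is about the weakest sandbox there is. A minimal sketch of why (the filter function is hypothetical, not OpenAI’s actual code):

```python
import os

def naive_filter(path: str) -> bool:
    """Hypothetical sandbox check: allow a path only if it
    doesn't start with '/'."""
    return not path.startswith("/")

# The absolute path is blocked as intended...
assert naive_filter("/etc/passwd") is False

# ...but the exact same file is reachable as a relative path
# from wherever the process happens to be running.
sneaky = os.path.relpath("/etc/passwd")
assert naive_filter(sneaky) is True
assert os.path.abspath(sneaky) == "/etc/passwd"
```

`../` traversal, symlinks, and already-open file descriptors all route around string-prefix checks, which is why real sandboxes confine the process rather than pattern-match paths.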


Very good read actually.
Except, from the epilogue:
People are working to resolve [intelligence heritability issue] with new techniques and meta-arguments. As far as I understand, the frontline seems to be stabilizing around the 30-50% range. Sasha Gusev argues for the lower end of that band, but not everyone agrees.
The not-everyone-agrees link is to acx and siskind’s take on the matter, who unfortunately seems to continue to fly under the radar as a disingenuous eugenicist shitweasel with a long-term project of using his platform to sane-wash gutter racists who pretend at doing science.


Hyperstition is such a bad neologism: apparently doubleplus superstition equals self-fulfilling prophecy (transitive)? They don’t even bother to verb it properly… Nick Land got a nonsense word stuck in his head and now there’s a whole subculture of midwit thought-leader wannabes parroting that shit.


Additionally, he said something to the effect of “I don’t blame you for not knowing this, it wasn’t effectively communicated to the media” like it’s no big deal, which isn’t really helping to beat the allegations of don’t-ask-don’t-tell policies about SA in rat-related orgs.


OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
I just wanted to point out this tidbit:
Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse.
Apparently a fortunate side effect of Google supposedly closing the gap is that it’s a great opportunity to give up on agents without looking like complete clowns. And also to make Pulse even more vapory.


The kids were using Adobe for Education. This calls itself “the creative resource for K–12 and Higher Education” and it includes the Adobe Express AI image generator.
I feel the extent to which schooling in the USA is of the “this arts and crafts class brought to you by Carl’s Jr™” variety is probably understated.


/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.
Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz’s claims about SA payoffs and how he thinks Yud’s salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can’t see them.



Graham Linehan is a normal and well man.
A few hours later, he sends me an example of how he’s been using AI. It’s a “hidden role deduction” game he’s working on. At the top is the prompt he put into ChatGPT: “You are five blind lesbian adventurers out for a good night out. Slaying dragons and whatnot. But one of your number is a hulking great troll pretending to be a woman. Find the troll lesbian and then devise an amusing punishment without giving him an erection.”


No idea if it was intentional, given how long a series’ production cycle can run before it ends up on TV/streaming, but it’s hard not to see Vince Gilligan’s Pluribus as a weird extended impact-of-chatbots metaphor.
It’s also somewhat tedious and seems to be working under the assumption that cool cinematography is a sufficient substitute for character development.


most BNPL loans aren’t reported to credit bureaus, creating what regulators call “phantom debt.” That means other lenders can’t see when someone has taken out five different BNPL loans across multiple platforms. The credit system is flying blind.
Only good things can come of this.


I always thought it was cool that (there is a case to be made that) HPL created Azathoth, the monstrous nuclear chaos beyond angled space, as a mythological reimagining of a black hole. Stuff like The Dreams in the Witch House shows he was up to date on a bunch of cutting-edge-for-the-time physics, at least as far as terminology is concerned, massive nerd that he was.


‘Genetic engineering to merge with machines’ is both a stream of words with negative meaning and something I don’t think he could come up with on his own, like the solar-system-sized Dyson sphere or the lab-leak stuff. He just strikes me as too incurious to have come across the concepts he mashes together on his own.
Simplest explanation I guess is he’s just deliberately joeroganing the CEO thing and that’s about as deep as it goes.


Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”
Fun article.
Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would happen either through genetic engineering or by plugging “an electrode into the brain”.
Occasionally I feel that Altman may be plugged into something that’s even dumber and more under the radar than vanilla rationalism.


users trade off decision quality against effort reduction
They should put that on the species’ gravestone.


What if quantum, but magically more achievable at nearly current technology levels? Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute in the same data center.
Also, they like to use the word thermodynamic a lot to describe the (proposed) hardware.
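For what it’s worth, the “p-bit” in the probabilistic-computing literature is roughly a bit that fluctuates at random, with the probability of reading 1 steered by an input bias. A minimal sketch of the idea (function name and numbers are mine, not from any vendor’s material):

```python
import math
import random

def pbit(bias: float, rng: random.Random) -> int:
    """A toy p-bit: outputs 1 with probability sigmoid(bias).

    bias = 0 gives a fair coin; a large positive/negative bias pins
    the output to 1/0, which is how p-bit circuits encode constraints.
    """
    p_one = 1.0 / (1.0 + math.exp(-bias))
    return 1 if rng.random() < p_one else 0

rng = random.Random(0)
mean = sum(pbit(2.0, rng) for _ in range(10_000)) / 10_000
# sigmoid(2.0) ≈ 0.88, so the sample mean should land near that.
print(mean)
```

The hardware pitch is that a noisy analog device does the sampling for free instead of burning cycles on a PRNG; whether that actually beats a GPU at anything is the part that remains to be seen.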


I feel the devs should just ask the chatbot themselves before submitting, if they feel it helps; automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly and executives’ careers are made on 'I was the one who led AI adoption in company x (but left before any long-term issues became apparent)’.
Plus, the fact that it’s always weirdos like the “hating AI is xenophobia” person who are willing to go to bat for AI doesn’t inspire much confidence.


Everything about this screams vaporware.


As far as I can tell there’s absolutely no ideology in the original transformers paper, what a baffling way to describe it.
James Watson was also a cunt, but calling “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid” one of the founding texts of eugenicist ideology or whatever would be just dumb.
If the great AI swindle has taught us anything, it’s that what’s good for normal people isn’t really important when all the macro-economic incentives point the other way, towards the pockets of the ultra-rich.
robert anton wilson intensifies