

Every time I see a rationalist bring up the term “Moloch” I get a little angrier at Scott Alexander.
I use the term “inspiring” loosely.
Putting this into the current context of LLMs… given how Eliezer still repeats the “diamondoid bacteria” line in his AI-doom scenarios, multiple decades after Drexler’s nanotech predictions were thoroughly debunked (even as they slightly contributed to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.
A lesswronger notices that all of the rationalists’ attempts at making an “aligned” AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate
Notably, the author doesn’t realize Capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.
It’s a good post. A few minor quibbles:
The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.
I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present from the start meant the true believers never really had a chance, culminating in the board trying and failing to fire Sam Altman, who successfully leveraged the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH… if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?
These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.
I wish people didn’t feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web search (web sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.
One of the things I liked and didn’t know about before:
Ask Claude any basic question about biology and it will abort.
That is hilarious! Kind of overkill to be honest; I think they’ve really overrated how much it could help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author’s overall point that this shut-it-down approach could be used for a variety of topics.
One of the comments gets it:
Safety team/product team have conflicting goals
LLMs aren’t actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they’ve thrown at them, so you’re left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model).
Lots of woo and mysticism already has a veneer of stolen quantum terminology. It’s too far from respectable to get the quasi-expert endorsement or easy VC money that LLM hype has gotten, but quantum hucksters fusing quantum-computing nonsense with quantum mysticism can probably still con lots of people out of their money.
system memory
System memory is just the marketing label for “having an LLM summarize a bunch of old conversations and shoving the result into a hidden prompt”. I agree that using that term is sneer-worthy.
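For anyone curious what that actually amounts to, here’s a minimal sketch of the mechanism as I understand it; the `call_llm` helper, the prompt wording, and the function names are hypothetical stand-ins, not any vendor’s actual API:

```python
# Minimal sketch of "system memory": summarize old chats, hide the summary in the prompt.
# call_llm is a hypothetical placeholder for whatever completion API the vendor uses.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your LLM provider's completion call")

def build_memory(old_conversations: list[str]) -> str:
    # Ask the model to compress prior conversations into a short "memory" blob.
    joined = "\n---\n".join(old_conversations)
    return call_llm(f"Summarize the key facts about this user:\n{joined}")

def answer(user_message: str, old_conversations: list[str]) -> str:
    # The summary gets prepended as a hidden system prompt the user never sees.
    memory = build_memory(old_conversations)
    hidden_prompt = (
        f"System: you remember the following about the user:\n{memory}\n\n"
        f"User: {user_message}"
    )
    return call_llm(hidden_prompt)
```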
I have three more examples of sapient marine mammals!
I was thinking this also; it’s the perfect parody of several lesswrong and EA memes: overly concerned with animal suffering/sapience, overly concerned with IQ stats, openly admitting to no expertise or even relevant domain knowledge but driven to pontificate anyway, and inspired by existing science fiction… I think the last one explains it and it isn’t a parody. As cinnasverses points out, cetacean intelligence shows up occasionally in sci-fi. To add to the examples… sapient whales warning the team of an impending solar flare via echolocation-induced hallucinations in Stargate Atlantis, the dolphins in The Hitchhiker’s Guide to the Galaxy, and the whales showing up to help in one book of Animorphs.
I was trying to figure out why he hadn’t turned this into an opportunity to lecture (or write a mini-fanfic) about giving the AGI more attack surface to manipulate you… I was stumped until I saw your comment. I think that’s it: expressing his childhood distrust of authority trumps lecturing us on the AI-God’s manipulations.
I have context that makes this even more cringe! “Lawfulness concerns” refers to like, Dungeons and Dragons lawfulness. Specifically the concept of lawfulness developed in the Pathfinder fanfiction we’ve previously discussed (the one with deliberately bad BDSM and eugenics). Like a proper Lawful Good Paladin of Iomedae wouldn’t put you in a position where you had to trust they hadn’t rigged the background prompt if you went to them for spiritual counseling. (Although a Lawful Evil cleric of Asmodeus totally would rig the prompt… Lawfulness as a measuring stick of ethics/morality is a terrible idea even accepting the premise of using Pathfinder fanfic to develop your sense of ethics.)
you can’t have an early version that you’ll lie about being a “big step towards General Quantum Computing” or whatever
So you might think that… but I recall that some years ago an analog computer was labeled as doing “quantum annealing” or something like that… oh wait, found the Wikipedia articles: https://en.wikipedia.org/wiki/Quantum_annealing and https://en.wikipedia.org/wiki/D-Wave_Systems. To a naive listener it sounds like the same sort of thing as the quantum computers that are supposed to break cryptography and do even less plausible things, but actually it can only run one very specific kind of algorithm.
I bet you could squeeze the “quantum” label onto a variety of analog computers well short of general quantum computing and have it technically not be fraud and still fool lots of idiot VCs!
It’s a nice master post that gets all his responses and many useful articles linked into one place. It’s all familiar if you’ve kept up with techtakes and Zitron’s other posts and pivot-to-ai, but I found a few articles I had previously missed reading.
A related trend to all the “but ackshually”s AI boosters like to throw out: has everyone else noticed the pattern where someone claims they heard a rumor about an LLM making a genuine discovery in some science, except it’s always repeated secondhand so you can’t really evaluate it, and in the rare cases they do have a link to the source, it’s always much less impressive than they made it sound at first…
Even for the people that do get email notifications of Zitron’s excellent content (like myself), I appreciate having a place here to discuss it.
Apparently Eliezer is actually against throwing around P(doom) numbers? https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom
The objections to using P(doom) are relatively reasonable by lesswrong standards… but this is in fact once again all Eliezer’s fault. He started a community centered around 1) putting overconfident probability “estimates” on subjective, uncertain things and 2) the need to make a friendly AI-God, so he really shouldn’t be surprised that people combine the two. Also, he has regularly expressed his certainty that we are all going to die to Skynet in terms of ridiculously overconfident probabilities, so he shouldn’t be surprised that other people followed suit.
Guns don’t kill people, people kill people.
I missed that it’s also explicitly meant as rationalist esoterica.
It turns in that direction about 20ish pages in… and spends hundreds of pages on it, greatly inflating what could have been a much more readable length. It then gets back to actual plot events after that.
I hadn’t heard of MAPLE before; is it tied to lesswrong? From the focus on AI it’s at least adjacent, so I’ll add it to the list of cults lesswrong is responsible for. All in all, we’ve got the Zizians, Leverage Research, and now MAPLE for proper cults, plus stuff like Dragon Army and Michael Vassar’s groupies for “high demand” groups. It really is a cult incubator.
I actually think “Project Lawful” started as Eliezer having fun with glowfic (he has a few other attempts at glowfic that aren’t nearly as wordy… one of them actually almost kind of pokes fun at himself and lesswrong), and then, as it took off and the plot took the direction of “his author insert gives lectures to an audience of adoring slaves”, he realized he could use it as an opportunity to squeeze out all the Sequences content he hadn’t bothered writing up in the past decade.^ And that’s why his next attempt at an HPMOR-level masterpiece is an awkward-to-read RP featuring tons of adult content in a DnD spinoff, and not more fanfiction suited for optimal reception by the masses.
^(I think Eliezer’s writing output dropped a lot in the 2010s compared to when he was writing the Sequences, and the stuff he has written over the past decade is a lot worse. The Sequences come in bite-size chunks, readable in order, often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. His recent writings, by contrast, are tiny little hot takes on twitter and long, winding rants on lesswrong about why we are all doomed.)
Chiming in to agree that your prediction write-ups aren’t particularly good. Sure, they spark discussion, but the whole forecasting/prediction game is one we’ve seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes.
In general… I think your predictions are too specific and too optimistic…