https://tubitv.com/movies/100010483/the-wicker-man
Warning, very NSFW.
I wasn’t limiting it to LLMs specifically. I don’t think it is up for debate that as years go by, new “AI” stuff periodically starts existing that didn’t exist before. That’s still true even though people tend to overhype the capabilities of LLMs specifically and conflate LLMs with “AI” just because they are good at appearing more capable than they are.
If you wanted to limit it to LLMs and get some specifics about which capabilities start to emerge as model size grows, and how, here’s a good intro: https://arxiv.org/abs/2206.04615
Ah yes, if there’s one lesson to be gained from the last few years, it is that AI technology never changes, and people never connect it to anything in the real world. If only I’d used a Pokémon metaphor, I would have realized that earlier.
Here’s a video of an expert in the field saying it more coherently and at more length than I did:
You’re free to decide that you are right and we are wrong, but I feel like that’s more likely to be from the Dunning-Kruger effect than from your having achieved a deeper understanding of the issues than he has.
AI developers need to generate criti-hype — “criticism” that says the AI is way too cool and powerful and will take over the world, so you should give them more funding to control it.
This isn’t quite accurate. The criticism is that if new AI abilities run ahead of our ability to make the AI behave sensibly, we will reach an inflection point where the AI is in charge of the humans, not vice versa, before we have made sure it won’t do horrifying things.
AI chatbots that do bizarre and pointless things, yet are clearly capable of some kind of sophistication, are exactly the warning sign that as AI gains new capabilities this is a danger we need to be aware of. Of course, that’s separate from the question of whether funding any particular organization will lead to any increase in safety, or whether asking a chatbot about some imaginary scenario has anything to do with any of this.
“I am a Christian. And as a Christian, I hope for resurrection. And even if you kill me now, it is I who will live again. Not your damned apples.”
You’re missing out.