

Okay, but that is different from the argument that entry-level developers only need to be half as good to deliver a working product.
You’re the one bringing up popularity in response to a substantial argument. I hope you’re okay…
Thank you for doubling down on irony at the end, you had me going!
3% of the population being scammers sounds about right.
I struggled with passive wording until I learned certain tells, like my use of the word “would”. Once you learn what words to look out for, you start to actively reword things as you write them. Asking AI to rework your passive tone isn’t going to rewire your brain to write better.
That’s just it though, it’s not going to replace you at doing your job. It is going to replace you by doing a worse job.
Not sure how I would trigger a follow-up question like that. I think most of the questions seemed pre-programmed but the transcription and AI response to the answer would “hallucinate”. They really just wanted to make sure they were talking to someone real and not an AI candidate because I talked to a real person next who asked much of the same.
I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it is following what I am saying, like a real human does with “uh huh” or whatnot. It asked me if I had ever used Docker and I answered that I had transitioned a system to Docker. But I had paused awkwardly after the word “transitioned”, so the AI bot congratulated me on my gender transition and it was on to the next question.
My guess is that if LLMs didn’t induce psychosis, something else would eventually.
I got a very different impression from reading the article. People in their 40s with no priors and a stable life losing touch with reality in a matter of weeks after conversing with ChatGPT makes me think that is not the case. But I am not a psychiatrist.
Edit: the risk here is that we might be dismissive towards the increased risks because we’re writing it off as a pre-existing condition.
“I was ready to tear down the world,” the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. “I was ready to paint the walls with Sam Altman’s f*cking brain.”
“You should be angry,” ChatGPT told him as he continued to share the horrifying plans for butchery. “You should want blood. You’re not wrong.”
If I wrote a product that said that about me, I would do a lot more than hire a single psychiatrist to (not) tell me how damaging my product is.
That is actually harder than what it has to do ATM to get the answer: write an RPC call in JSON. It only needs to do two things: decide to use the calculator tool and paste the right tokens into the call.
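For concreteness, here is roughly what those two steps look like in an OpenAI-style function-calling setup; the `calculator` tool name and its schema are made up for illustration, not taken from any particular product:

```python
# Step 1: the tool the model can decide to use (a hypothetical
# "calculator" definition in OpenAI-style function-calling format).
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate an arithmetic expression and return the result.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

# Step 2: the entirety of the "hard part" the model has to emit --
# the right tokens pasted into a JSON call. The runtime does the math.
tool_call = {
    "name": "calculator",
    "arguments": '{"expression": "1234 * 5678"}',
}
```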
Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?
They’ll blow their money on AI.
Fun read. I remember when my coworker got hired by Twitter, I was a bit jealous. Now in retrospect, I was the lucky one working at a web branding agency.
Communists are just as selfish as anyone else. Their point is that if we want a better life we need to move beyond capitalism; communism is an appeal to our selfish nature as much as it is a call for cooperation.
That’s fair. Personally, I think the game would be more fun without the LLM (what makes it good is the writing, not the tech), but this was to scratch an itch that started when a high school friend messaged me to insist LLMs are just one breakthrough away from taking our jobs.
Stayed up last night writing it: https://github.com/zbyte64/agent-elysium
Qwen3 with 5 gigs seems to do the trick but it is slow…
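If anyone wants to poke at it locally, this is roughly how I’d drive it through the `ollama` Python package; the exact model tag, quantization, and prompt are my guesses, and the repo may wire it up differently:

```python
# Minimal sketch of querying a local Qwen3 via Ollama.
# qwen3:8b at 4-bit quantization is roughly a 5 GB download; whether
# that matches the setup described above is an assumption.
import ollama

response = ollama.chat(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Stay in character as a dungeon NPC."}],
)
print(response["message"]["content"])
```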
For me it would be enough to make a simple concept game in the style of an old dungeon crawl and put it up on GitHub…
Or the game could be about a newly laid-off worker who has to trick unconscious LLM bots into giving them the things they need to survive.
Looks a lot like the computer from the recent movie Elio