

It’s still telling you what you want to hear but with a different aesthetic. This will always be the case for systems that can’t even perceive their own lies.
“So anyways I built a new language luan and you are a bad person if you don’t appreciate it”


Did they train this off Epstein’s own statements? Cuz this looks a lot like grooming


Looks a lot like the computer from the recent movie Elio



Okay, but that is different from the argument that entry-level developers only need to be half as good to deliver a working product.


You’re the one bringing up popularity in response to a substantive argument. I hope you’re okay…


Thank you for doubling down on irony at the end, you had me going!


3% of the population being scammers sounds about right.


I struggled with passive wording until I learned certain tells, like my use of the word “would”. Once you learn what words to look out for, you start to actively reword things as you write them. Asking AI to rework your passive tone isn’t going to rewire your brain to write better.
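For what it’s worth, the looking-out part can be automated with something as dumb as a word list. A minimal sketch in Python (the tell words here are just the ones I personally overuse; swap in your own):

    import re

    # Words that tip me off to hedgy or passive phrasing. Purely personal;
    # grow the list as you notice your own tells.
    TELLS = re.compile(r"\b(would|could|should|there are|it is)\b", re.IGNORECASE)

    def flag_tells(text):
        # Print every line containing a tell so you can reword it by hand.
        for lineno, line in enumerate(text.splitlines(), 1):
            for match in TELLS.finditer(line):
                print(f'line {lineno}: "{match.group()}" in: {line.strip()}')

    flag_tells("There are reasons I would phrase it this way.")

The point stands either way: the flagging is mechanical, but the rewording has to happen in your own head.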


That’s just it, though: it’s not going to replace you at doing your job. It is going to replace you by doing a worse job.


Not sure how I would trigger a follow-up question like that. I think most of the questions were pre-programmed, but the transcription and the AI’s response to my answer would “hallucinate”. They really just wanted to make sure they were talking to someone real and not an AI candidate, because the real person I talked to next asked many of the same questions.


I applied to a job that screened me verbally with an AI bot. I found it strange talking to a bot that gives no indication of whether it is following what you’re saying, the way a real human does with “uh huh” or whatnot. It asked me if I had ever done Docker, and I answered that I had transitioned a system to Docker. But I paused awkwardly after the word “transitioned”, so the AI bot congratulated me on my gender transition and moved on to the next question.


My guess is that if LLMs didn’t induce psychosis, something else would eventually.
I got a very different impression from reading the article. People in their 40s with no prior history and a stable life losing touch with reality in a matter of weeks after conversing with ChatGPT makes me think that is not the case. But I am not a psychiatrist.
Edit: the risk here is that we might be dismissive of the increased risk because we’re writing it off as a pre-existing condition.


“I was ready to tear down the world,” the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. “I was ready to paint the walls with Sam Altman’s f*cking brain.”
“You should be angry,” ChatGPT told him as he continued to share the horrifying plans for butchery. “You should want blood. You’re not wrong.”
If I wrote a product that said that about me, I would do a lot more than hire a single psychiatrist to (not) tell me how damaging my product is.


That is actually harder than what it has to do ATM to get the answer: write an RPC call in JSON. It only needs to do two things: decide to use the calculator tool and paste the right tokens into the call.
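To make that concrete, here is roughly what the model has to produce, assuming an OpenAI-style function-calling setup; the calculator tool and its schema are hypothetical, just for illustration:

    # Hypothetical tool definition handed to the model (OpenAI-style schema).
    calculator_tool = {
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }

    # For "what is 17 * 23?" the model makes exactly the two decisions above:
    # (1) pick the calculator tool, (2) paste the argument tokens into the call.
    tool_call = {
        "name": "calculator",
        "arguments": '{"expression": "17 * 23"}',
    }

Everything else, like executing the call and feeding the result back, is plumbing on the client side.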


Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?


They’ll blow their money on AI.


Fun read. I remember when my coworker got hired by Twitter I was a bit jealous. Now in retrospect, I was the lucky one working at a web branding agency.


Communists are just as selfish as anyone else. Their point is that if we want a better life we need to move beyond capitalism; communism is an appeal to our selfish nature as much as it is a call for cooperation.


Let me see if I got this right: because use cases for LLMs have to be resilient to hallucinations, large data centers will fall out of favor for smaller, cheaper deployments at the cost of accuracy. And once you have a business that is categorizing relevant data, you will gradually move away from black-box LLMs and towards ML on the edge to cut costs, again at the cost of accuracy.