

Eating shit and not getting sick might be considered a skill, but beyond selling a yoga class, what use is it?
They’ll blow their money on AI.
Fun read. I remember when my coworker got hired by Twitter I was a bit jealous. Now in retrospect, I was the lucky one working at a web branding agency.
Communists are just as selfish as anyone else. Their point is that if we want a better life we need to move beyond capitalism; communism is an appeal to our selfish nature as much as it is a call for cooperation.
That’s fair. Personally, I think the game would be more fun without the LLM (what makes it good is the writing, not the tech), but this was to scratch an itch that started when a high school friend messaged me to insist LLMs are just one breakthrough from taking our jobs.
Stayed up last night writing it: https://github.com/zbyte64/agent-elysium
Qwen3 with 5 gigs seems to do the trick but it is slow…
For me it would be enough to make a simple concept game in the style of an old dungeon crawl and put it up on GitHub…
Or the game could be about a newly laid off worker that has to trick unconscious LLM bots to give them the things they need to survive.
In a world that chases status, be prestigious
I’ll keep that in mind…
Talking about Alpha Evolve https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/ ?
First, Microsoft isn’t using this yet, but even if they were, it doesn’t work in this context. What Google did was write a fitness function to tune the generative process. Why not use some rubric that scores the code as our fitness function? Because the function needs to be continuous for this to work well: no sudden cliffs. They also didn’t address how this would work in a multi-objective space; the technique doesn’t let the LLM make reasonable trade-offs between complexity and speed.
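A toy sketch of the cliff problem, assuming a simple hill-climbing loop over a single numeric "speedup" knob (the names and thresholds are mine, not AlphaEvolve's): a continuous score rewards every near-miss, while a pass/fail rubric returns zero everywhere below its threshold and starves the search of signal.

```python
import random

def continuous_fitness(speedup: float) -> float:
    # Smooth signal: every small improvement moves the score.
    return speedup

def rubric_fitness(speedup: float) -> float:
    # Cliff: no credit until the candidate clears a threshold,
    # so the search can't tell a near-miss from a disaster.
    return 1.0 if speedup >= 2.0 else 0.0

def hill_climb(fitness, steps=200, seed=0):
    rng = random.Random(seed)
    best_x, best_score = 1.0, fitness(1.0)
    for _ in range(steps):
        candidate = best_x + rng.uniform(-0.1, 0.2)
        if fitness(candidate) > best_score:  # keep strict improvements only
            best_x, best_score = candidate, fitness(candidate)
    return best_x

# The continuous score lets the climber ratchet upward; the rubric
# scores 0 everywhere below 2.0, so no step ever counts as progress.
print(hill_climb(continuous_fitness))  # climbs well above 1.0
print(hill_climb(rubric_fitness))      # stuck at the starting point, 1.0
```

Same search, same mutations; the only difference is whether the fitness function exposes a gradient to climb.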
The point is to get open source maintainers to further train their program because they already scraped all our code. I wonder if this will become a larger trend among corporate owned open source projects.
I’ve deployed LangChain to production, *shudders*. My use case involved sending image results back to the “agent”, and that use case is an afterthought for many of these services. I ended up extending the Gemini Vertex client to fake it. The artifacts system is basically: pass around a dictionary and pray both ends agree on the shape.
This is not an endorsement of LLMs in general. I’m working to replace it with a decision tree.
…and this opposition means that our disagreements can only be perceived through the lens of personal faults.
If LangChain was written via VibeCoding then that would explain a lot.
Newsom is pitching Generative AI to make government more “efficient”: https://abc7.com/post/gavin-newsom-announces-ai-driven-efforts-help-california-reduce-traffic-jams-improve-road-safety/16279785/
I am sensing a new standup comedy routine where a tablet pretending to be AI has a script to deliver some of the worst takes while claiming to be different people at different points, and the host dissects the responses with observational humor.
Stop confusing young autistic vulnerable people.
— Date Unknown
I’m old and autistic and not confused by the fact trans women are women. Hope that helps.
It was Biden in a fat suit this whole time.
Value, for some, is purely an emotional state.
That is actually harder than what it has to do at the moment to get the answer: write an RPC call in JSON. It only needs to do two things: decide to use the calculator tool and paste the right tokens into the call.
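A minimal sketch of that flow, with a hypothetical tool-call payload (the exact JSON schema varies by provider): the model's whole job is to emit the call; the runtime parses it and does the arithmetic.

```python
import json

# Hypothetical payload the model emits; schema names are illustrative.
model_output = '{"tool": "calculator", "arguments": {"expression": "1234 * 5678"}}'

def dispatch(raw: str):
    call = json.loads(raw)
    if call["tool"] == "calculator":
        # The runtime, not the model, evaluates the expression.
        return eval(call["arguments"]["expression"], {"__builtins__": {}})
    raise ValueError(f"unknown tool: {call['tool']}")

print(dispatch(model_output))  # 7006652
```

Decide on the tool, paste the right tokens into `expression`, done; everything after that is deterministic plumbing.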