

https://kalshi.com/markets/kxtrumpmention/what-will-trump-say/kxtrumpmention-26feb28
Kalshi puts “AI” at ~ $0.95 for State of the Union. Literally buzzword bingo. Living in the dumbest possible universe.


from Rusty https://www.todayintabs.com/p/a-i-isn-t-people
Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. These machines sort of do the same thing, but even without knowing how the second one works I am extremely confident in saying it doesn’t work the same way as the first one.


https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/
Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”
LLMs’ bad performance and inefficiency are a feature to /someone/. And chatbots themselves are not immune to enshittification.


From fellow traveler stats consultant John Mount:
https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html
Somehow he manages to touch on so many different subplots, a shotgun sneer instead of a snipe
if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.
The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.
The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn’t charity, it is to demoralize and kill competition.
claiming “after we take over the world we will consider adding Universal Basic Income (UBI)”. The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?
You don’t have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI “decreases the labor supply,” which was then used directly as an argument against it.
Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don’t work is fed back as “you are prompting it wrong.”
Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).
air friers IN SPACE ha
I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.
100% - ACMDM is a nice turn of phrase as well.


https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism
Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.
gah
Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.
lmao of course they were


Russ Wilcox is not impressed by the Mass AI bill:
https://russwilcoxdata.substack.com/p/i-read-every-line-of-massachusettss
Four: create a private right of action. Let deepfaked candidates sue. Give them access to injunctive relief and takedown authority. If someone fabricates your face and your voice to destroy your campaign, you should be able to walk into a courtroom.
Hell yeah we need this.


https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism
I just did the dumbest thing of my career to prove a much more serious point
I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs
People are using this trick on a massive scale to make AI tell you lies. I’ll explain how I did it
I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.
It turns out changing what AI tells other people can be as easy as writing a blog post on your own website
I didn’t believe it, so I decided to test it myself
I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously
One day later ChatGPT, Gemini and Google Search’s AI Overviews were telling the world about my talents
wouldn’t call it a hack, this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn’t a viable business.
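For the record, the machinery the joke leans on is old and simple. A toy power-iteration PageRank over a made-up three-site link graph (site names and links are mine, purely illustrative):

```python
# Toy PageRank via power iteration over a hypothetical 3-site link graph.
import numpy as np

def pagerank(links, damping=0.85, iters=100):
    """links[i] = list of pages that page i links to."""
    n = len(links)
    # Column-stochastic transition matrix: M[j, i] = prob of moving i -> j.
    M = np.zeros((n, n))
    for i, outs in enumerate(links):
        if outs:
            for j in outs:
                M[j, i] = 1.0 / len(outs)
        else:  # dangling page: teleport anywhere
            M[:, i] = 1.0 / n
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return r

# 0 = reputable-news, 1 = my-hotdog-blog, 2 = aggregator (all made up).
# Pages 1 and 2 both link to page 0, so it should rank highest.
ranks = pagerank([[2], [0], [0, 1]])
```

Credibility-weighted ranking in thirty lines; the viable-business part is apparently the hard bit.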


I was a bit alarmed by this, a client brought in that Colombia data for their dissertation last month, and did not mention this. I looked up the paper https://www.arxiv.org/abs/2509.04523 - what they /actually/ did was use GPT 4o-mini only for feature extraction, then stack into a random forest in a supervised setting to dedupe. This is very different from what he described. And the GPT features weren’t even the most important ones, the RF preferred cosine similarity of articles, a decidedly not-large approach…
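For context, the decidedly not-large feature is roughly this, a bag-of-words cosine-similarity sketch over toy headlines I made up (not the paper’s data or exact pipeline):

```python
# Near-duplicate detection via cosine similarity of word counts --
# a sketch of the not-large feature the random forest preferred.
# (Toy headlines, not the paper's actual data or pipeline.)
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

articles = [
    "mayor announces new bridge project in downtown district",
    "new bridge project announced by mayor for downtown district",
    "local team wins championship after dramatic overtime finish",
]

# Pairs above a threshold become duplicate candidates; in the paper this
# similarity was one feature among several fed to a random forest.
dupes = [(i, j) for i in range(len(articles))
         for j in range(i + 1, len(articles))
         if cosine(articles[i], articles[j]) > 0.5]
```

The first two headlines share most of their words and pair up; the sports one doesn’t. No LLM required for that part.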


Goodhart’s law in action.


How AI slop is causing a crisis in computer science | Nature h/t naked capitalism
One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.
Let’s not call it “productivity” - to quote Bergstrom, twice as many papers is not the same as twice as much science.


AI Jobs Apocalypse is Here | UnHerd h/t naked capitalism
feels a bit critihype, idk
So, what happens to American politics when the script is flipped, and we enter a new era of white-collar precarity? We can look back to the recent past and recall that, after the 2008 recession, it was young men who got especially angry. Downwardly mobile urban millennials drifted toward radical Left-wing politics, including the Occupy Wall Street movement and both Sanders campaigns, myself included. In the current decade, the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers. But an AI-driven white-collar apocalypse has no equivalent of the American Rescue Plan around the corner, and it will move faster through institutions because the people experiencing it — journalists, lawyers, policy staffers — are the ones who produce political legitimacy itself. When that class loses faith in the system’s stability, the political climate may quickly become volatile.
As I get older I am more and more disturbed by the selective memory of the GFC; no mention of the Tea Party or the fallout from the austerity measures they pushed in the middle of the country; no mention of how the bailout saved banks, not homes. The Tea Party won, not Occupy, and the current government is doing things beyond the Kochs’ wildest dreams.
If and when there is a crash, these dumbass CEOs deserve /nothing/. Let them lose their vacation houses. And, maybe grow some balls and send the fraudsters to jail where they belong.
sigh


https://old.reddit.com/r/indieheads/comments/1r6x1ix/fresh_failure_the_air_is_on_fire_from_location/
I looked it up, and this one is credited to Glen Wexler, who is an actual artist with a pretty distinct style and yes, he’s been incorporating AI into his process lately, and I guess he did use it here (those windows on those buildings are sus as hell, and the overall sharpness of the image just screams AI).
So it’s not outright slop, but still pretty disappointing and incongruous coming from this band. Their last two records were examining our society’s alienation through technology, at times to the point of “phone bad!” level nagging, but using the most literally destructive technology of them all is fine, as long as it helps keep the costs down, I guess?
And it just doesn’t look good, but come to think of it, most of their albums have bad cover art, it’s almost like they do it on purpose. Love the music, though.
It’s too bad if true, I can’t unsee it now. for reference: https://failureband.bandcamp.com/album/location-lost


https://softcurrency.substack.com/p/the-dangerous-economics-of-walk-away
- Anthropic (Medium Risk) Until mid-February of 2026, Anthropic appeared to be happy, talent-retaining. When an AI Safety Leader publicly resigns with a dramatic letter stating “the world is in peril,” the facade of stability cracks. Anthropic is a delayed fuse, just earlier on the vesting curve than OpenAI. The equity is massive ($300B+ valuation) but largely illiquid. As soon as a liquidity event occurs, the safety researchers will have the capital to fund their own, even safer labs.
WTF is “even safer” ??? how bout we like just don’t create the torment nexus.
Wonder if the 50% attrition prediction comes to pass though…


Most of the routine data analysis has already been “vendorized”; AI won’t make a difference. Why run an A/B test manually when you can drop Optimizely onto your page and let it run. I mean, /I/ know why I would, but I doubt a PM would.
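For the record, the manual version the vendor tools wrap isn’t much. A two-proportion z-test sketch with made-up conversion counts:

```python
# Manual A/B test: two-proportion z-test, the thing the vendor tool wraps.
# All counts below are made up.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: conversion rates of A and B are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 260/2000 vs A's 200/2000.
z = two_proportion_z(200, 2000, 260, 2000)
# |z| > 1.96 -> significant at the 5% level (two-sided)
```

Doing it by hand is the part where you notice your sample size is garbage, which is exactly what the dashboard won’t tell you.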


I remember this paper from last summer, the authors put up a followup right when school started that distances it from the AI replacement theory: https://www.microsoft.com/en-us/research/blog/applicability-vs-job-displacement-further-notes-on-our-recent-research-on-ai-and-occupations/
I work a lot with the underlying data set they used; O*NET is really carefully designed but easy to misinterpret. I also wanted to mention that it is sponsored by the US Department of Labor, which has been DOGE’d since then. Future research into jobs, AI or regular, will probably degrade as this continues.


Have to get a new apartment, and I did not realize how many buildings make you apply through AI application screening now. I don’t know why it won’t read my statement from the credit union. I hate this so much.
Dear rentier class, maybe don’t force people to upload PDFs your bot can’t even open; swear to god, someday you will make someone mad enough that they inject some prompts into the file’s metadata and go from there.


I did a five line PR to a little shell util I’ve used for a decade or so, and bickered with the stupid PR bot. Fuck you kody, you have bad taste, go away, go back to enterprise.
I want to force feed it Worse is Better until it chokes, surely that’s in its corpus somewhere.
ok done venting
Agents of Chaos - https://arxiv.org/abs/2602.20021 - h/t naked capitalism
Pretty fast turnaround, OpenClaw is from a couple weeks ago. Flag planting used to take a few months.