science shows as true what you thought was only 99% true
https://www.youtube.com/watch?v=uVf7VUX_iUk&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251015-ai-is-not-popular-and-ai-users-are-unpleasant-asshats - podcast
time: 5 min 57 sec



I’m just spitballing here, but I suspect it’s for the same reason people with “dark triad” traits (narcissism, Machiavellianism, and psychopathy) are more successful in business and politics than the average person.
Dark triad types give quick, confident, and persuasive answers, and aggressively challenge anyone who disagrees with them. But they don’t actually care if the answers are true as long as they can win the debate or argument they’re having. This lets them be totally confident and persuasive in any situation - whether they know the answer or not - and so demonstrate more “leadership skills” than people who are less willing to bullshit.
Same with policies - a dark triad type is going to confidently and aggressively support policies that make him look good or benefit him personally in other ways. He doesn’t actually care whether they are good policies or bad, or whether they’ll be good for the organization or the people - the dark triad type will lie, cheat, or steal to make sure his policies look successful, get himself promoted upwards, and blame his successor for the policy’s long-term failure.
(If you were a dark triad type, you might, for example, enact policies that crash the economy and drive inflation through the roof while making yourself and your cronies incredibly rich, then cancel all the reports that track inflation, hunger, unemployment, etc, to conceal the impact of your policies, and go on a social media blitz claiming the economy is better than ever and any problems are someone else’s fault. Just as a hypothetical example.)
I’m kind of not surprised that people who care more about persuasiveness than honesty, and more about results than processes, would find AI tools appealing.
As a certified bullshitter myself, I often find myself really annoyed with LLMs because their bullshitting is just so obvious.
I would think it’s because AI is basically just a yes-man they can get instant gratification from. Easier to manipulate than a real human; when they’re wrong, you can berate them without years of pushback.
For example: https://youtu.be/qhwbUL2mJMs