• 0 Posts
  • 482 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it’s not that the rabbit hole doesn’t go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

    I’m not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can’t do what they do so that you don’t ask the incredibly obvious questions about why it’s so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don’t know what kinda excuse the business idiots and political bullshitters are going to come up with.


  • One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this “tool”. It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or something specific you want to create, the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point.

    This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they’re the only fields where the output being interpretable by a computer is at least as important as its actual contents, if not more so.


  • It’s also a case where I think the lack of intentionality hurts. I’m reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video that you would most likely click on next, but in so doing it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually impacted which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn’t “secretly fascist”, and in the same way it would, over time, push people deeper into other rabbit holes, whether that meant obscure horror games, increasingly unhinged rage video collections, or anything else that was once called “the weird part of YouTube.”

    ChatGPT and other bots don’t have failed academics and comedians trying to turn people into Nazis, but they do have a similar lack of underlying anything, and that means that, unlike a cult with a specific ideology, they’re always trying to create the next part of the story you most want to hear. We’ve seen versions of this that go down a conspiracy thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it’s pretty well known that there are ‘cult hoppers’ who will join a variety of different fringe groups because there’s something about being in a fringe group that they’re attracted to. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, but might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people - even otherwise mentally healthy people - that includes a lot of very dark and dangerous places.
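    To make the “predict the next series of tokens” point concrete, here’s a minimal sketch of a greedy next-token loop - assuming the Hugging Face transformers library and the small public gpt2 checkpoint, neither of which anyone above actually names. At every step the model just scores what’s most likely to come next given everything already in the context; there’s no underlying plan or belief, only whichever continuation the context makes most probable.

    ```python
    # Hypothetical illustration: greedy next-token prediction with a small public model.
    # Nothing here reflects how any particular chatbot product is actually deployed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "Tell me more about the hidden pattern you noticed:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(40):                           # emit 40 tokens, one at a time
            logits = model(input_ids).logits          # scores for every possible next token
            next_id = logits[0, -1].argmax()          # pick the single most likely one
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))
    ```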



  • Adding onto this chain of thought, does anyone else think the talk page’s second top-level comment from non-existent user “habryka” is a bit odd? Especially since after Eigenbra gives it a standard Wikipedian (i.e. unbearably jargon-ridden and a bit pedantic but entirely accurate and reasonable in its substance) reply, new user HandofLixue comes in with:

    ABOUT ME You seem to have me confused with Habryka - I did not make any Twitter post about this. Nonetheless, you have reverted MY edits…

    Kinda reads like they’re the same person? I mean, Habryka is also active further down the thread, so this is almost certainly just my tinfoil hat being too tight and cutting off circulation, and/or me reading this unfold in bits and pieces rather than putting it all together.




  • User was created earlier today as well. Two earlier edits from someone without an account may be from the same individual. Did a brief dig through the edit logs, but I’m not very practiced in Wikipedia auditing like this so I likely missed things. Their first couple of changes were supposedly justified as maintaining a neutral POV. By far the larger one was a “culling of excessive references”, which included removing basically all quotes from Cade Metz’s work on Scott S and trimming various others to exclude the bits that say “the AI thing is a bit weird” or “now they mostly tell billionaires it’s okay to be rich”.



  • That hatchet job from Trace is continuing to have some legs, I see. Also, a reread of it turns up some unintentional comedy:

    This is the sort of coordination that requires no conspiracy, no backroom dealing—though, as in any group, I’m sure some discussions go on…

    And here it is being referenced in a thread on a different site where they’re discussing edits to an article about themselves, explicitly to make it sound more respectable and decent to be a member of their technofascist singularity cult diaspora. I’m sorry that your blogs aren’t considered reliable sources in their own right, and that the “heterodox” thinkers and researchers you extend so much grace to are, in fact, cranks.



  • Finally circling back around to this.

    Feels like I am not just doing my job but also the work the operator of the service or product I am having to use through chat should have paid professionals to do. And I’m not getting paid for it.

    Speaking as someone who has worked extensively in IT support, I think that’s the sales pitch for these chatbots. They don’t want to give users the tools and knowledge to solve their own problems - or rather they do, but the chatbots aren’t part of that. The chatbots are supposed to replace the people who would interact with the relevant systems on your behalf.

    And honestly, working with a support person is already a deeply unsatisfying interaction in the vast majority of cases. Even in the best-case scenario it involves acknowledging that some part of your job has exceeded your ability and you need specialized help, and handling that well is a very rare personality trait. But the massive variety of interconnected systems that we rely on is too complex for this not to be a common occurrence. Even if you radically opened up everything from internal bug trackers to licensing systems to communications, there wouldn’t be enough time in the day for everyone to learn those systems well enough to perfectly self-solve all their problems, and that lack of systems knowledge would be a massive drain on your operations.

    Trying to fit in an LLM chatbot is the worst of both worlds: your users are locked away from the tools and knowledge that would let them solve their own issues, yet they still need to learn how to wrangle your intermediary system, and that system doesn’t have the human ability to connect, build a working relationship, and get through those issues in a positive way.