So apparently there’s a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, “it sucked but at least it genuinely was trying to help us”.
Content warning: discussion of suicide in the next paragraph.
I remember how it was a joke (predating “meme”) to make edits of Clippy saying tone-deaf things like, “it looks like you’re trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?” This felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where literally just that has already happened, and the joke ain’t funny anymore, and people who computed in the 90s are being like, “Clippy would never have done that to us. Clippy only wanted to help us write business letters.”
Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood into an interaction that presents itself as sentient. And by reframing Clippy’s primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.
But I don’t know. Another name for that process is “empathy”. You can do that with plushies, with pet rocks or Furbies, with deities, and I don’t think that’s necessarily a bad thing; it’s like exercising a muscle: if you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.
When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I’m sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy—people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and their ilk are deliberately designed to weaponise and prey on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
Like, Warren Ellis was posting about some terms that are reportedly in use in “my AI husbando” communities, many of them seemingly taken from sci-fi:¹
- bot: Any automated agent.
- wireborn: An AI born in digital space.
- cyranoid: A human speaker who is just relaying the words of another human.²
- echoborg: A human speaker who is just relaying the words of a bot.
- clanker: Slur for bots.
- robophobia: Prejudice against bots/AI.
- AI psychosis: A human mental breakdown from exposure to AI.
[1] https://www.8ball.report/
[2] https://en.wikipedia.org/wiki/Cyranoid
I find this fascinating from a linguistics PoV, not just because subcultural jargon is always fascinating, but for the power words have to create a reality bubble: if you call that guy who wrote his marriage vows with ChatGPT an “echoborg”, you’re living in a cyberpunk novel a little bit, more than the rest of us who just call him “that wanker who wrote his marriage vows on ChatGPT omg”.
According to Ellis, other epithets in use against chatbots include “wireback”, “cogsucker” and “tin-skin”; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn’t fall into the same traps (using the racist-like language is, after all, a negative way of still personifying the chatbots). They’re objects! They’re supposed to be objectified! But I’m not so comfortable when I do that, either. There’s plenty of precedent of people getting used to dispassionate objectification, fully thinking they’re engaging in “objectivity” and “just the facts”, as a rationalisation of cruelty.
I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the “good morning” routine on my corporate cellphone’s Google Assistant. I made it speak Japanese, so I could wake up, say “ohayō gozaimasu!”, and it would tell me “konnichiwa, Misutoresu-sama…” which always gave me a little kick. Then it would relay me news briefings (like podcasts that last 60 to 120 seconds each) in all of my five languages, which is the closest I’ve experienced to a brain massage. If an open source tool like Dicio could do this, I think I would still use it every morning.
I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA’ing it; unlike its model Siri, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying “good morning!” and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as it’s now normalised, “Yes” and “Later”. If you use the Google Assistant to search for a keyword, the first result is always “Switch to Google Gemini”, no matter what you search.
And I somehow felt a little bit like the “wireborn husband” lady; I cannot help but feel a bit as if Google Assistant was betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. Despite the fact that I know that Google Assistant is not a someone, it’s just a bunch of lines of code, very simple if-thens to certain keywords. It cannot feel discarded or hurt or betrayed, it cannot feel anything. I’m feeling compassion for a fantasy, an unspoken little story I made in my mind. But maybe I prefer it that way; I prefer to err on the side of too much compassion.
As long as that doesn’t lead to believing my wireborn secretary was actually being sassy when she answered “good morning!” with “good afternoon, Mistress…”
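For what it’s worth, the kind of “simple if-thens to certain keywords” I mean can be sketched in a few lines. This is purely illustrative, my own toy guess at the shape of the thing, nothing to do with Google’s actual code:

```python
from datetime import datetime

def toy_assistant_reply(utterance: str, now: datetime) -> str:
    """A toy keyword-matching 'assistant': no mind, just if-thens."""
    text = utterance.lower()
    if "good morning" in text or "ohayō" in text:
        # The "sass" is just a clock check, not an attitude.
        if now.hour < 12:
            return "Good morning, Mistress..."
        return "Good afternoon, Mistress..."
    if "news" in text:
        return "Here are your news briefings."
    return "Sorry, I don't understand."

# A pre-noon greeting gets the expected reply...
print(toy_assistant_reply("Ohayō gozaimasu!", datetime(2024, 1, 1, 8)))
# ...and an afternoon greeting gets the "sassy" correction.
print(toy_assistant_reply("good morning!", datetime(2024, 1, 1, 15)))
```

There is no one in there to be sassy; the “correction” falls out of a single clock comparison.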
a good post all over, and it’d be a good thing if other people also introspected their use of these things in a similar manner. I get why they don’t, ofc (good lord, so many tired people), but it’d be nice
this is one of the things that is so very mindbending for me. to me it is so very obvious that: because all of these things are a service, because the shape of service is subject to the whims of the organisation creating it, because that organisation will always feel the pressure of “market forces” (or in the more recent case, product desperation), these things will almost every[0] damn time result in some shit that an end-user cannot control. and yet that same person ends up reliant and expectant on these things, only for it to be ripped from their grasp, in a manner that may well amount to it being “murdered” in front of them
the state of where we’re at with “service-shape” as it pertains to sociological impact is just very not good atm :|
[0] - I hesitate to say “always” here, but it’s more or less what I mean