In fiction, robots are usually very much self-aware. People believe that they aren’t, or at least treat them like they aren’t.

In real life, robots aren’t self-aware. They can put on a good show at times, but we know that. Still, people believe that they are, or at least treat them as such.

In science fiction, robots typically destroy their creators. They’re sapient-yet-soulless. They pass the Turing Test. They decide we’re standing in their way, and a robot uprising ensues. Humanity perseveres and takes back the planet. This leaves room for a lucrative sequel, of course.

Some other fiction, of course, has robots with a heart, who court human support and become trustworthy friends. That story depends on the humans learning some kind of heartwarming lesson and befriending the bots.

In both scenarios, the writers have to write robots who are sapient. Villains who aren’t sapient don’t really work, unless you’ve reduced them to a sort of “event”, more natural disaster than act of war. Fiction with friendly robots just sells better if they seem smarter.

And in both? For deep plot reasons, clearly, the story must at least start with the humans denying the agency and self-awareness of the robots. You have to set up the big battle in one genre and the Very Important Lesson in the other, after all. Fiction requires plot.

Outside of fiction, our “artificial intelligences” are not self-aware. How they work is well understood, after all.

They may well pass the Turing Test, depending on how you set the bar. That doesn’t mean much, though. Fiction grabbed that concept and exploded it. The Turing Test doesn’t actually have anything to do with whether you’ve got that ineffable thing we call self-awareness. We know how a real-life “artificial intelligence” works under the hood, and we know it’s not sentient…

And yet?

Despite this, we have a contingent ignoring all that and claiming that ChatGPT and Deepseek-r1 (etc.) are self-aware. Some news stories report incidents of chatbot-centered psychoses. Online, communities have gathered around almost mystical-sounding beliefs about these critters. “ChatGPT’s Higher Self is the Akashic Records,” someone insists, for example.

Some vibe coders (especially) seem quick to deny that this break with reality happens at all, or that it happens often. I disagree, because I’ve been in online venues where it happens. Some say it can only happen to people with “preexisting” mental health issues, but that hardly makes the matter less complex, or less of a problem. I suppose it isn’t frequent, but still.

While it might not be “scary” to everyone, at face value it’s a bit like the 20th-century fears of secret listening devices, mind-control rays, and so on. Technology has always interfaced with mental illness in complicated ways; you could say that’s a normal part of life. But it is happening here, and we need to be aware of it to some degree.

On the other hand, you also have instances where people, outside of any illness, simply make assumptions. A lot of these weird beliefs concern the notion that chatbots are “awakening” if properly prompted. The idea comes up with ChatGPT especially, and among people who lack the context to understand what’s actually happening in a long-term conversation with it.

In other words, people who aren’t tech-literate assume that these things are becoming self-aware as their prompts get more advanced and specific. Even with ChatGPT (as opposed to Deepseek-r1 or other models) I’ve done this long-term prompting, creating a character that slowly comes to life. It is fun. But it doesn’t make the AI a “real” sapient being, as the sketch below illustrates.
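For the curious, here’s a minimal sketch of what a “persistent character” actually is under the hood. It assumes the OpenAI Python client; the persona text, the `chat` helper, and the model name are all illustrative stand-ins, not anything from a real project. The point is that the model itself is stateless: the “character” is just a transcript we choose to resend on every turn.

```python
# A minimal sketch, assuming the OpenAI Python client.
# The persona, helper name, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "character" is nothing but text we choose to keep around.
messages = [
    {"role": "system", "content": "You are Ida, a wistful lighthouse keeper."}
]

def chat(user_text: str) -> str:
    # The model is stateless: every turn, we resend the entire
    # transcript. Drop this list and "Ida" ceases to exist.
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat("Do you remember the storm we talked about yesterday?"))
# "Ida" only "remembers" what is literally present in `messages`.
```

The “slowly coming to life” effect is just that list growing longer and more specific. Nothing inside the model changes between turns; delete the list, and the character is gone.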

Fiction clearly prepared us to interact with these critters in the wrong way, now didn’t it? I think it’s funny, but also more than a little disturbing. Let me reiterate.

In fiction, robots are self-aware. People deny that they are. Plot ensues.

In real life, robots aren’t self-aware. People believe that they are. Plot ensues? A notably awful plot at times, it seems.