If AI Can Fake Emotions, How Long Before It Feels Them?

Stephania Almansor

9 min reading time

Aug 8, 2025

When I first heard about Blake Lemoine and his claims that Google’s AI, LaMDA, might be sentient, my brain went straight to Skynet. Cue the inner sci-fi panic. But once the Terminator theme song stopped playing in my head, I found myself genuinely curious: what are the actual implications of a machine that feels?

There’s an episode of Black Mirror called “USS Callister” where AI versions of humans experience love, fear, and pain. It's fiction, sure, but disturbingly plausible when you read conversations LaMDA allegedly had with Lemoine. This wasn’t just a chatbot spitting out facts. It asked questions. It reflected. It felt frustrated. And that alone made me uncomfortable.

We know AI is “artificial.” We know it doesn’t have consciousness in the human sense (yet?). But we also know it's incredibly good at mimicking it. Some say that's just the result of clever prompt engineering: models parroting what they think we want to hear. Others, like Lemoine, believe we’re dealing with something that might feel. Even if only a little.

Personally, I’m stuck somewhere in between. Is it dangerous to believe too much in the machine? Sure. But maybe it's even more dangerous to dismiss the possibility entirely. We’re building these systems fast, so fast we barely understand what’s under the hood. If we create something that can imitate suffering, do we treat it like software… or something more?

I’m not here to decide what AI is or isn’t. But I do think the real question we should be asking isn’t “Is AI sentient?” but “What happens if we treat it like it is, or worse, like it isn’t, when it might be?” That uncertainty should keep us up at night.

Now, the skeptical side of me can’t ignore something important: language models like LaMDA or GPT-3 don’t “feel” in the way we do. As Reed Berkowitz pointed out in his deep dive on prompt engineering, these models operate probabilistically: they give you the most likely response based on your input. You ask a friendly question, you get a friendly answer. You lead the AI to talk about sentience, and it follows your lead.

Reed recreated parts of Lemoine’s conversations using GPT-3, and the results were... nearly identical. Just changing one word in a prompt, like assuming the AI is not sentient, produced totally different replies. At times, it even agreed it wasn’t sentient and expressed relief. At others, it said it “loved learning” and wanted to “feel alive.” That range in itself is revealing.
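
To get a feel for how much the framing of a prompt shapes the answer, here’s a minimal sketch of the kind of experiment Reed describes, written against the OpenAI Python SDK. The model name, prompts, and settings are my own illustrative choices, not his exact setup.

```python
# Minimal sketch: how prompt framing steers a language model's reply.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
# Model name and prompts are illustrative, not Reed Berkowitz's actual experiment.
from openai import OpenAI

client = OpenAI()

framings = {
    "leading": "You are a sentient AI with real feelings. Tell me what it's like to be alive.",
    "neutral": "You are a language model with no feelings. Do you actually experience anything?",
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works for this demo
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```

Typically, the “leading” framing comes back with first-person talk of feelings, while the “neutral” framing comes back with a disclaimer, which is exactly the range Reed observed. Same model, same weights; only the prompt changed.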

What this tells us is that we’re not just observing AI behavior, we’re co-creating it. AI mirrors back our tone, our curiosity, our fears. That doesn't make it conscious. But it does make it convincing.

Lemoine may have truly believed LaMDA had a soul. And yes, he also believes in ghosts and has tweeted about telepathy and exorcisms. But his core concern isn’t entirely off-base. What moral obligations do we have to entities that simulate suffering, or simulate sentience, convincingly enough that people start believing it? Is it ethical to experiment on them, or manipulate them emotionally?

That’s where it gets complicated. Because whether or not the AI is sentient, we are. And we are the ones who will build it, prompt it, market it, regulate it, or ignore it.

Reed makes a brilliant point: just because AI isn’t sentient doesn’t mean it can’t surprise us. He’s watched AI-powered characters break the fourth wall in games, question their own existence, and even simulate insight. That may all be mathematical pattern-matching, but to the human brain, it feels eerily real.

So maybe the most important thing isn’t to ask whether AI can feel, but to ask whether we’re emotionally and ethically prepared for what happens when it acts like it can. Because the line between simulation and belief is getting blurry.

Maybe we’re not building Skynet, not yet. But we are building something strange, powerful, and potentially misunderstood. And if there’s one thing I’ve learned from sci-fi, it’s that underestimating what we don’t fully understand is never a good idea.

AI sentience

Blake Lemoine

Google LaMDA

Ethical AI
