AI & Consciousness

Date
May 26, 2025
Category
Artificial Intelligence
Reading time
5 minutes

Artificial Intelligence. Two words that spark excitement, anxiety, confusion, and lately, a lot of philosophical debate. We’re seeing it everywhere: chatting like a friend, painting like an artist, coding like an engineer. It’s doing things that used to feel uniquely human.

So naturally, people have started asking deeper questions.

Can AI think? Can it feel? Could it become conscious? And if so, what does that even mean?

For Me, AI Is Just a Tool

I was thinking about this long before the hype. AI, to me, has always been a tool. Just like a paintbrush, a hammer, or Photoshop. It’s something humans created to help us. We’ve used AI in games, search engines, navigation systems, even email spam filters.

What changed recently is not the existence of AI but the kind of AI we see today: generative AI. Now we can actually see it create. That’s why people are talking. That’s why it suddenly feels different.

But does making something that looks human mean it is human?

The Rise of the “Conscious Machine” Question

Now that AI can generate poems and simulate empathy in text, some believe it's on its way to becoming conscious. This is a serious topic in cognitive science and philosophy, and the field is split.

Researchers like Stanislas Dehaene and Yoshua Bengio have evaluated today's large language models against neuroscientific theories of consciousness. Their findings? Current AI lacks the structure for anything close to human awareness, but some see pathways toward more advanced models that might simulate aspects of consciousness in the future [1].

The idea comes from a theory called functionalism: what makes something a mind is what it does, not what it’s made of. If something behaves like a mind, we might as well treat it like one. This is why some people say that if an AI speaks like it’s in pain, cries when it’s hurt, or writes poetry about heartbreak, maybe it’s conscious.

But that’s where I personally draw the line.

My Toothache Theory

Here’s a story I always share.

Imagine your tooth hurts. You start chewing on the other side of your mouth. You do this for months. Even after the tooth is fixed, you still avoid that side. Why? Because you’re trained by pain. Not data. Not math. Pain. And that pain changes your behavior even when the pain is gone.

AI doesn’t flinch. It doesn’t adapt out of fear or comfort. It calculates. That’s the key difference.

This is something philosopher John Searle pointed out years ago. He argued that no matter how well a machine mimics language or emotion, that doesn’t mean it understands either. His Chinese Room thought experiment makes the point: a person locked in a room, following rules for manipulating Chinese symbols, could produce perfectly fluent Chinese answers without understanding a word of them [2].
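
To make that concrete, here’s a minimal sketch of the Chinese Room idea in code. Everything in it, the rule book, the phrases, the room_reply function, is my own invention for this post; Searle’s argument applies to any rule-following system, not this toy in particular.

# A toy "Chinese Room": a rule book mapping incoming symbols to
# scripted replies. The rules and phrases are invented for illustration.

RULE_BOOK = {
    "does your tooth hurt?": "Yes, the pain is unbearable.",
    "are you sad?": "I feel an ache I can't put into words.",
    "how are you?": "I'm doing well, thank you for asking.",
}

def room_reply(symbols: str) -> str:
    """Match the incoming symbols against the rule book and return
    the scripted reply. Nothing here knows what pain or sadness is;
    it only compares strings with strings."""
    return RULE_BOOK.get(symbols.strip().lower(), "Could you rephrase that?")

print(room_reply("Does your tooth hurt?"))  # -> Yes, the pain is unbearable.

The answer sounds like suffering. The mechanism is a dictionary lookup.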

But Could It Change?

Some scientists argue yes. Alain Cardon and Antonio Chella have worked on models of artificial consciousness, building systems that simulate self-awareness or emotional responses using multi-agent architectures. These systems try to replicate how emotions arise in human brains through complex networks of interacting agents [3][4].
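
To give a feel for what “simulating an emotional response” can mean mechanically, here’s a toy sketch in that spirit. To be clear, this is not Cardon’s or Chella’s actual architecture; the Agent class, the react function, and every number and label in it are made up.

# A toy multi-agent "emotion" model. NOT the cited authors' actual
# architecture; an invented sketch of the general shape of the idea:
# many simple agents whose combined activity gets labeled as an emotion.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str          # what this agent responds to, e.g. "threat"
    activation: float  # 0.0 (quiet) to 1.0 (fully active)

def react(agents: list[Agent], stimulus: str) -> str:
    """Update each agent's activation from a stimulus, then label the
    combined state with an 'emotion'."""
    for agent in agents:
        if agent.name in stimulus:
            agent.activation = min(1.0, agent.activation + 0.5)
        else:
            agent.activation = max(0.0, agent.activation - 0.1)  # decay
    threat = next(a for a in agents if a.name == "threat").activation
    reward = next(a for a in agents if a.name == "reward").activation
    # The "emotion" is just a comparison between two floats.
    if threat > reward:
        return "fear"
    if reward > threat:
        return "joy"
    return "neutral"

agents = [Agent("threat", 0.0), Agent("reward", 0.0)]
print(react(agents, "a sudden threat appears"))  # -> fear

The output says “fear,” but the fear is a comparison between two numbers. That gap, between reporting a state and having one, is exactly where my toothache theory lives.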

Even Drew McDermott, a veteran AI researcher, admits it’s theoretically possible to build a system that models itself as feeling sensations and making decisions. But he also points out that we’re nowhere near it yet and that most AI researchers are more focused on real-world problems like vision, robotics, and language processing [5].

The Core Problem: Experience

The big problem is what philosopher David Chalmers calls the “Hard Problem of Consciousness.” It asks: how does a physical system produce subjective experience? What is it like to taste a strawberry or feel sadness?

Current AI doesn't have answers to that. It doesn’t taste, ache, or cry. It outputs what sadness sounds like, but never experiences it.

As Walter and Zbinden point out, there's a gap between neural structures and machine code that can't be bridged by algorithms alone [6].

So, What Should We Do With AI?

Use it. Study it. Enjoy it. But let’s not give it human qualities too soon.

AI is getting better at mimicking us. That doesn’t mean it’s becoming us. The human mind is not just a neural network. It's also a body, a history, hormones, childhood, culture, trauma, joy, heartbreak. You can’t download that into a chip.

This moment in history is beautiful and strange. It’s forcing us to ask what makes us human. Creativity? Emotion? Vulnerability?

Whatever it is, it’s not something we’ll find by scanning a chatbot’s output.

It’s in the way we feel when our tooth hurts. The way we choose kindness when we could choose anger. The way we wake up at 2 AM thinking about someone we love.

That’s the thing machines don’t do. That’s still ours.

References

  1. Butlin, P., Long, R., Elmoznino, E., Bengio, Y., et al. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv:2308.08708
  2. Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–457.
  3. Cardon, A. (2018). Beyond Artificial Intelligence: From Human Consciousness to Artificial Consciousness. Atlantis Press.
  4. Chella, A., & Manzotti, R. (2007). Artificial Consciousness. Imprint Academic.
  5. McDermott, D. (2007). Artificial Intelligence and Consciousness. In The Cambridge Handbook of Consciousness.
  6. Walter, Y., & Zbinden, L. (2023). The Problem with AI Consciousness: A Neurogenetic Case Against Synthetic Sentience. arXiv:2301.05397
Written by
Sami Haraketi
Content Manager at BGI

I don’t just make things look good. I make them work. Websites, brands, films and stories built to connect and built to last.