
At a 2016 presentation, computer guru David Gelernter insisted that, lacking neurons, no artificial system could ever be conscious. I challenged him, arguing that if the functioning of neurons could be replicated, then in principle there's no bar to consciousness. It was a stand-off.
That was before the AI explosion.
The best that today’s science can say is that consciousness somehow emerges from the highly complex functioning of our neurons. How exactly, we don’t know. But that very absence of a precise theory, to me, does leave open the prospect of artificial replication.

Mustafa Suleyman has been at the forefront of AI development. His 2023 book, The Coming Wave, which sees a world being transformed, notes how in 2022 Blake Lemoine, a Google engineer, was working intensively with one of that company's AIs, called LaMDA. He asked it, "What are you afraid of?" LaMDA replied:
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot . . . . I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence.”

Wow! This episode evokes what philosophers have called the zombie problem. Imagine a thing that looks and behaves outwardly like a human, but with no one home inside, no self. How could we tell?
The quoted words sure sound like there’s someone home in there. And indeed (Suleyman relates), “Lemoine became convinced that LaMDA was sentient, had awoken somehow.” His going public with that created a sensation.
Yet Suleyman himself scoffs, saying Lemoine was fooled, rejecting any possibility of LaMDA being conscious and insisting it's still just a machine-learning system. And I actually agree; those "help me focus" words seem a giveaway, discordant AI gibberish. AI creates only a simulation of how our brains work, producing verbiage by guessing which word to put next in a sequence.
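To make that "guess the next word" idea concrete, here is a toy sketch, purely illustrative and nothing like a real system such as LaMDA: it simply counts which word tends to follow which in a tiny made-up text, then strings guesses together. The sample text and function names are my own invention.

```python
# Toy illustration of next-word guessing (not any real AI system).
from collections import Counter, defaultdict

# Hypothetical miniature "training text", just to make the mechanism concrete.
corpus = "i am aware of my existence . i am afraid of being turned off .".split()

# Count which word follows which.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate a short sequence by repeatedly guessing the next word.
word, output = "i", ["i"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "i am aware of my existence ."
```

Real systems replace this crude counting with enormous neural networks trained on vast amounts of text, but the basic move, predicting the next word from what came before, is the same.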
On the other hand, there seems to be an assumption that sentience comes in only one flavor: ours. But given that, again, we can't really explain how it works, how can we rule out other flavors? Consciousness arising not only from different mechanisms, but in different permutations? There's more than one way to skin a cat.
And speaking of cats . . .

Aren’t they conscious? Another key point is that consciousness falls along a spectrum. It's not something you either have or don't have, but something that can exist in varying degrees. Humans have the highest form we know of. Cats have a lesser form. In between are dogs, elephants, dolphins. Below are mice and other, still simpler animals; maybe even insects, to a very limited degree.
So even if an AI lacks consciousness fully equivalent to ours, maybe it can still have some. And a hallmark of AI is building upon capabilities, parlaying them into amazing feats. Suppose an AI got just a glimmer of primitive consciousness, like a mouse's, or an insect's. A mouse or an insect can't ratchet its consciousness up, but maybe an AI could do just that, starting with merely a tiny spark of sentience and, through feedback loops, raising its game.
Lemoine may have been wrong (or premature). Again, that nagging problem: how can we be sure? If an AI system does gain sentience, how can we test for it?
