The Hard Problem of AI

By Fsrcoin

What does the Artificial Intelligence (AI) explosion mean for our future? That’s the Great Topic of the day.

Back in 2013, my seminal Humanist magazine article, “The Human Future: Upgrade or Replacement?”* foresaw a convergence of biological humankind with the artificial — we’d incorporate ever more technological improvements into ourselves, until the distinction between human and robot ultimately vanished and Humanity 2.0 arose.

The latest Humanist magazine’s cover story is headed “The Dangers of Artificial Intelligence.” Seeing that, I mused, wouldn’t it be fun to ask an AI to address this? Well, guess what. The editors did exactly that. The article was authored by an AI.**

Typical of AI, it’s quite well written: glib, articulate, full of relevant metaphors, seemingly thoughtful. But as I read it, the feeling grew that it wasn’t really saying anything. Certainly nothing interesting. It just recaps the kinds of problems and issues people have been buzzing about, repetitively in fact, without providing any real insight or wisdom.

My own most recent look at AI had said that all that ChatGPT and the like are actually doing is predicting what word comes next in a sequence, having been trained on billions of words of pre-existing text. Without even understanding the words. So what it’s doing is not remotely thinking.
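To make that concrete, here’s a toy sketch of my own (nothing from the article, and real systems are vastly more elaborate): a next-word predictor in a few lines of Python. It tallies which word follows which in a tiny made-up training text, then “writes” by always emitting the most frequent next word.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus" -- purely illustrative.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Tally which words follow which: the whole "model" is just these counts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Write" by repeatedly predicting the next word -- no meaning involved.
word, text = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    text.append(word)

print(" ".join(text))  # e.g. "the cat sat on the cat"
```

The program manifestly understands nothing; it’s all statistics. The open question is whether something more can emerge when the same basic trick is run at a scale of billions of words and parameters.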

Subsequently a David Brooks column cited Douglas Hofstadter (of Gödel, Escher, Bach fame) saying pretty much the same thing. This gratified Brooks (and me). But then Brooks went on to report that he’d checked with Hofstadter for an update in light of the most recent AI advancements — and Hofstadter now says we can no longer be sure AI is not thinking!

Then my wife gave me a book, AI Ethics, by Mark Coeckelbergh. As I started reading, the flavor of this human-written work seemed quite different from that of the AI-authored Humanist article. But soon, not so much. I began noticing how many sentences ended with question marks. Coeckelbergh does catalog issues raised by AI, as well as answers other commentators have suggested, but never endorses any. The book is devoid of a point of view, which I found maddening. It calls for wisdom, but offers none.

Here’s a quote near the end: “it is one thing to name a number of ethical principles and quite another to figure out how to implement them in practice.” Well, yeah. And then (his emphasis), “it remains unclear what exactly we should do.”

A key problem is moral agency. The author seems to dance around this without ever sinking his teeth into the real issue — the concept of agency requires sentience. A stone can’t have moral agency even if it squashes you. However, I noted the book was published in 2020 — paleolithic times in terms of AI development. AI sentience didn’t seem an issue then. Now (as Hofstadter suggests) it may well be.

If an AI can be conscious, we’re in a whole different world. Now the question becomes: how can we know? An AI today often acts as though it’s conscious. May even claim to be. But is it just an act? This is philosophy’s old “zombie” problem. Suppose a creature looks and behaves exactly like a human, but with no consciousness inside. How can we tell?

Pioneering computer theorist Alan Turing’s test was whether an entity could convince a human interlocutor that it, too, is human. Today’s AI can do that easily. Yet we have no way to determine whether it truly has a consciousness, a self.

Some refuse to consider that possibility. I argued this in 2016 with computer thinker David Gelernter, who believed there could be no sentience without neurons. I, however, couldn’t see ruling it out if an artificial system replicates the functioning of neurons.

It isn’t magic, even if we don’t yet really understand how consciousness and selfhood arise and work in humans. Philosophers call this “the hard problem.” How they might work in an artificial system could be quite different. An even harder problem?

* https://rationaloptimist.wordpress.com/2013/07/07/the-human-future-upgrade-or-replacement/

** https://thehumanist.com/magazine/fall-2023/features/the-dangers-of-artificial-intelligence