
Artificial Intelligence and Our Ethical Responsibility

By Fsrcoin

(A virus-free and Trump-free post. (At least until I added this.))

Artificial Intelligence (AI) was originally conceived as replicating human intelligence. That turns out to be harder than once thought. What is rapidly progressing is deep machine learning, which yields artificial systems able to perform specific tasks (like medical diagnosis) better than humans. That’s far from the integrated general intelligence we have. Nevertheless, an artificial general intelligence may yet prove inevitable. Some foresee a coming “singularity,” when AI surpasses human intelligence and then takes over its own further evolution. That would change everything.

Much AI fearmongering warns that this could be a mortal threat to us, that superior AI beings could enslave or even eliminate us. I’m extremely skeptical of such doomsaying, mainly because AI would still be imprisoned under human control. (“HAL” in 2001 did get unplugged.) Nevertheless, AI’s vast implications raise many ethical issues, which have also been much written about.

One such article, with a unique slant, was by Paul Conrad Samuelsson in Philosophy Now magazine. He addresses our ethical obligations toward AI.

Start from the question of whether any artificial system could ever possess a humanlike conscious self. I’ve had that debate with David Gelernter, who answered no. Samuelsson echoes my position, saying “those who argue against even the theoretical possibility of digital consciousness [disregard] that human consciousness somehow arises from configurations of unconscious atoms.” While Gelernter held that our neurons can’t be replicated artificially, I countered that their functional equivalent surely can be. Samuelsson says that while such “artificial networks are still comparatively primitive,” eventually “they will surpass our own neural nets in capacity, creativity, scope and efficiency.”

And thus attain consciousness with selves like ours. Having the ability to feel — including to suffer.

I was reminded of Jeremy Bentham’s argument against animal cruelty: regardless of whatever else might be said of animal mentation, the dispositive fact is their capacity for suffering.

Samuelsson considers the potential for AI suffering a very serious concern. Indeed, with AI capabilities outstripping the human, the pain could likewise be more intense. He hypothesizes a program that puts an AI being into a concentration camp, run on a loop with a thousand reiterations per second. Why, one might ask, would anyone do that? But Samuelsson then says, “Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world.”


That may still be far-fetched. Yet the next passage really caught my attention. “If this description does not stir you,” Samuelsson writes, “it may be because the concept of a trillion subjects suffering limitlessly inside a computer is so abstract to us that it does not entice our empathy. But this itself shows us” the problem. We do indeed have a hard time conceptualizing an AI’s pain as remotely resembling human pain. However, says Samuelsson, this is a failure of imagination.

Art can help here. Remember the movie “Her”? (See my recap: https://rationaloptimist.wordpress.com/2014/08/07/her-a-love-story/)

Samantha, in the film, is a person, with all the feelings people have (maybe more). The fact that her substrate is a network of circuits inside a computer rather than a network of neurons inside a skull is immaterial. If anything, her aliveness did finally outstrip that of her human lover. And surely any suffering she’s made to experience would carry at least equal moral concern.

I suspect our failure of imagination regarding Samuelsson’s hypotheticals is because none of us has ever actually met a Samantha. That will change, and with it, our moral intuitions.

AI rights are human rights.

