(A virus-free and Trump-free post. (At least until I added this.))
Much AI fearmongering warns that AI could be a mortal threat to us; that superior AI beings could enslave or even eliminate us. I’m extremely skeptical of such doomsaying, mainly because AI would still be imprisoned under human control. (“HAL” in 2001 did get unplugged.) Nevertheless, AI’s vast implications raise many ethical issues, which have also been much written about.
One such article, with a unique slant, was by Paul Conrad Samuelsson in Philosophy Now magazine. He addresses our ethical obligations toward AI.
Start from the question of whether any artificial system could ever possess a humanlike conscious self. I’ve had that debate with David Gelernter, who answered no. Samuelsson echoes my position, saying “those who argue against even the theoretical possibility of digital consciousness [disregard] that human consciousness somehow arises from configurations of unconscious atoms.” While Gelernter held that our neurons can’t be replicated artificially, I countered that their functional equivalent surely can be. Samuelsson says that while such “artificial networks are still comparatively primitive,” eventually “they will surpass our own neural nets in capacity, creativity, scope and efficiency.”
I was reminded of Jeremy Bentham’s argument against animal cruelty: regardless of whatever else might be said about animal mentation, the dispositive fact is animals’ capacity for suffering.
Samuelsson considers the potential for AI suffering a very serious concern. Because, indeed, with AI capabilities outstripping the human, the pain could likewise be more intense. He hypothesizes a program that puts an AI being through a concentration camp, run on a loop at a thousand iterations per second. Why, one might ask, would anyone do that? But Samuelsson then says, “Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world.”
That may still be far-fetched. Yet the next passage really caught my attention. “If this description does not stir you,” Samuelsson writes, “it may be because the concept of a trillion subjects suffering limitlessly inside a computer is so abstract to us that it does not entice our empathy. But this itself shows us” the problem. We do indeed have a hard time conceptualizing an AI’s pain as remotely resembling human pain. However, says Samuelsson, this is a failure of imagination.
Art can help here. Remember the movie “Her,” whose lonely hero falls in love with Samantha, his AI operating system? (See my recap: https://rationaloptimist.wordpress.com/2014/08/07/her-a-love-story/)
I suspect our imaginations fail at Samuelsson’s hypotheticals because none of us has ever actually met a Samantha. That will change, and with it, our moral intuitions.
AI rights are human rights.