
Could a Machine Ever Feel Emotion? – David Gelernter

By Fsrcoin

I recently heard a talk by Yale Professor David Gelernter, notable guru of computer science and artificial intelligence.* His new book is The Tides of Mind. That's his metaphor for human consciousness cycling between varying states: early in the day we're full of energy, seeing the world differently from later, when attention shifts from the external to the internal realm and the insistence of memory crowds out the use of reason. After reaching a mid-afternoon low point, one cycles back upward somewhat before descending again toward sleep. (I've always felt sharpest, doing my best work, in the morning; I'm drafting this at 5 AM in an airport; by mid-afternoon I'm soporific.)

Gelernter spoke of his project to emulate these workings of the mind in a computer program. He said the spectrum’s “top edge,” where rationality predominates, is easiest to model; it gets harder lower down, where we become less like calculating machines and more emotive. And Gelernter said – categorically – that no artificial system would ever be able to feel like a human feels.

This I challenged in the question period, suggesting that everything a human mind does must emerge out of neurons’ information processing – admittedly a massively complex system – but if such a system could be mimicked artificially, couldn’t all its effects, including consciousness and emotion, arise therein? I referenced the movie Her.

Gelernter replied at great length. He said that some man-made systems already approach that degree of complexity (actually, I doubt this), yet nobody imagines they're conscious. He quoted Paul Ziff's point that a computer can do nothing that's not a performance: a simulation of a mind functioning, not the real thing.

Making notes, I wrote the words “Chinese Room” before Gelernter spoke them. This refers to John Searle’s famous thought experiment: a person in a room, using a set of rules, can respond to incoming messages in Chinese, thus appearing to understand Chinese, without actually understanding Chinese. Likewise a computer, using programmed rules, could appear to converse and understand, without actually understanding.
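To make the thought experiment concrete, here's a toy sketch of the Chinese Room as a program (my own illustration, not anything from Gelernter's talk; the rulebook entries and phrases are invented for the example):

```python
# Toy Chinese Room: a lookup table of rules produces plausible replies,
# yet nothing in the system understands a single symbol.
# The rulebook entries here are invented for illustration.

RULEBOOK = {
    "你好": "你好！",              # "Hello" -> "Hello!"
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会的。",     # "Do you speak Chinese?" -> "Yes."
}

def room_reply(message: str) -> str:
    """Match the incoming message against the rulebook; the 'person in
    the room' never knows what any of the symbols mean."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for msg in ["你好", "你好吗？", "天气怎么样？"]:
        print(msg, "->", room_reply(msg))
```

The point, for Searle and for Gelernter, is that adding more rules changes nothing: the appearance of understanding scales up, but understanding itself never appears.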

Gelernter contrasted his view with that of “computationalists” like Daniel Dennett, who – consistent with my question – regard the mind as basically akin to a computer: the brain is the hardware, the mind is the software. Gelernter acknowledged this is a majority view. It holds that while a single neuron can do nothing, nor can a thousand, when a brain has trillions of interconnections, mind emerges. But this Gelernter dismissed, analogizing that a single grain of sand can do nothing, and a trillion can't either.

Gelernter asserted that computationalists actually have no evidence for their stance, and that it boils down to an axiom – an assumption, like Euclid's axiom that parallel lines never meet (though in fact never meeting is the definition of parallel lines, which is a different thing from an axiom).

I found none of this persuasive. Someone later asked me what’s the antithesis of “computationalism.” I said “magicalism.” Because Gelernter seemed to posit something magical that creates mind, above and beyond mechanistic neural processing.

This argument has been going on for centuries. But it's really the Gelernterists who rest on an axiom – that is, assuming something must be true, albeit unprovable. I call the opposing view materialism: all phenomena must be explicable rationally, and the mind must arise from what neurons physically do, because there is no other possibility. I do not believe in magic.

When I talked with Gelernter afterward, he offered a somewhat better argument: that to get a mind from neurons, you need, well, neurons. Their specific characteristics, with all their chemistry, are indispensable, and their effects could not be reproduced in a system made, say, of plastic. He analogized neurons to the steel girders holding up a building – which work thanks to steel's particular characteristics – whereas girders made of something else, like potato chips, wouldn't do.


But I still wasn’t persuaded. Gelernter had said, again, that computer programs can only simulate human mind phenomena; for example, a program that “learns” is simulating learning but not actually learning as a human does. I think that’s incorrect – and exemplifies Gelernter’s error. What does “learning” mean? Incorporating new information to change the response to new situations – becoming smarter from experience. Computer programs now do exactly this.
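As a concrete illustration of that definition of learning, here is a minimal sketch of a classic perceptron – one of the simplest programs that changes its responses based on experience. (This is my own example, not anything from the talk; the data and parameters are made up.)

```python
# A minimal sketch of a program that "learns" in the sense above: it
# incorporates experience (labeled examples) and thereby changes its
# responses to new situations. Data and parameters are invented.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; labels are +1 or -1."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # a mistake is the "experience" that updates behavior
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

if __name__ == "__main__":
    # Made-up 2-D examples: label +1 when the second feature dominates.
    data = [([2.0, 1.0], -1), ([1.0, 3.0], 1),
            ([3.0, 0.5], -1), ([0.5, 2.5], 1)]
    w, b = train_perceptron(data)
    print(predict(w, b, [0.2, 2.0]))  # answers differently after training: +1
```

Before training, the program answers every query the same way; after seeing a handful of examples, it responds correctly to new, unseen inputs. Whether that counts as “real” learning is exactly the point in dispute.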

Neuronal functioning is very special and sophisticated, and would be very hard to truly reproduce in a system not made from actual neurons. But not impossible, because it’s not magical. I still see no reason, in principle, why an artificial system could not someday achieve the kind of complex information processing that human brains do, which gives rise to consciousness, a sense of self, and feelings.**

Those who've said something is impossible have almost always been proven wrong. And Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic.

* In 1993 he survived an attack by the Unabomber, whose brother, David Kaczynski, has been to my house (we had an interesting discussion about spirituality) – my three degrees of separation from Gelernter.

** See my famous article in The Humanist magazine: The Human Future: Upgrade or Replacement.

