I recently heard a talk by Yale Professor David Gelernter, notable guru of computer science and artificial intelligence.* His new book is The Tides of Mind. That’s his metaphor for human consciousness cycling between varying states: early in the day we’re full of energy, seeing the world differently from later on, when attention shifts from the external to the internal realm and the insistence of memory crowds out the use of reason. After reaching a mid-afternoon low point, one cycles back upward somewhat before descending again toward sleep. (I’ve always felt sharpest, doing my best work, in the morning; I’m drafting this at 5 AM in an airport; by mid-afternoon I’m soporific.)
Gelernter spoke of his project to emulate these workings of the mind in a computer program. He said the spectrum’s “top edge,” where rationality predominates, is easiest to model; it gets harder lower down, where we become less like calculating machines and more emotive. And Gelernter said – categorically – that no artificial system would ever be able to feel like a human feels.

In the Q&A, I asked why an artificial system that matched the brain’s neuronal complexity couldn’t have feelings. Gelernter replied at great length. He said that some man-made systems already approach that degree of complexity (actually, I doubt this), yet nobody imagines they’re conscious. He quoted the philosopher Paul Ziff’s view that a computer can do nothing that’s not a performance: a simulation of a mind functioning, not the real thing.

I found none of this persuasive. Someone later asked me what the antithesis of “computationalism” would be. I said “magicalism,” because Gelernter seemed to posit something magical that creates mind, above and beyond mechanistic neural processing.

When I talked with Gelernter afterward, he offered a somewhat better argument: that to get a mind from neurons, you need, well, neurons. Their specific characteristics, with all their chemistry, are indispensable, and their effects could not be reproduced in a system made, say, of plastic. He analogized neurons to the steel girders holding up a building, which work thanks to steel’s particular characteristics; girders made of something else, like potato chips, wouldn’t do.

But I still wasn’t persuaded. Gelernter had said, again, that computer programs can only simulate human mind phenomena; for example, a program that “learns” is simulating learning, not actually learning as a human does. I think that’s incorrect, and it exemplifies Gelernter’s error. What does “learning” mean? Incorporating new information to change one’s responses to new situations: becoming smarter from experience. Computer programs now do exactly this.
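To make that concrete, here’s a minimal sketch, entirely my own toy illustration rather than anything Gelernter discussed: a classic perceptron, written in Python, that adjusts its internal weights from labeled examples so that its future responses change with experience.

    # A toy "learner" (my own illustration): a perceptron that incorporates
    # each labeled example to change how it responds to future inputs.

    def predict(weights, bias, point):
        # Respond to an input using whatever has been learned so far.
        total = sum(w * x for w, x in zip(weights, point))
        return 1 if total + bias > 0 else 0

    def learn(examples, epochs=10, rate=0.1):
        # Experience changes behavior: every mistake nudges the weights.
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for point, target in examples:
                error = target - predict(weights, bias, point)
                weights = [w + rate * error * x for w, x in zip(weights, point)]
                bias += rate * error
        return weights, bias

    # Made-up experience: points above the line y = x are labeled 1, below 0.
    examples = [((0, 1), 1), ((1, 2), 1), ((1, 0), 0), ((2, 1), 0)]
    weights, bias = learn(examples)
    print(predict(weights, bias, (0, 2)))  # prints 1 for a point it never saw

Note that the correct answer for (0, 2) isn’t stored anywhere in the program; it’s generalized from past examples, which is exactly what the definition above demands, whatever one thinks that implies about feeling.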
Neuronal functioning is special and sophisticated, and would be very hard to truly reproduce in a system not made from actual neurons. But it’s not impossible, because it’s not magical. I still see no reason, in principle, why an artificial system could not someday achieve the kind of complex information processing that human brains do, which gives rise to consciousness, a sense of self, and feelings.**
Those who’ve declared something impossible have almost always been proven wrong. And Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic.
* In 1993 he survived an attack by the Unabomber, Ted Kaczynski, whose brother David Kaczynski has been to my house (we had an interesting discussion about spirituality); that makes three degrees of separation between me and Gelernter.
** See my famous article in The Humanist magazine, “The Human Future: Upgrade or Replacement.”
