How to Create a Mind

By Fsrcoin

Humans try to understand our reality, including how our minds do that.

“Futurist” Ray Kurzweil has posited a coming “singularity” when artificial intelligence outstrips ours, and everything changes. His book How to Create a Mind seeks to reverse-engineer our minds, to apply that knowledge to AI’s development.

Our thinking about something, perceiving something, remembering something, etc., may seem simple. We just do it. Just as tapping an app icon on your phone simply brings it up. But hidden behind that icon is a tremendous web of complexity. Our minds are like that. We normally don't need to peek under the hood. Unless we want to truly understand ourselves.

Consider hitting a baseball. It comes at you, giving you maybe a second to calculate its path, and the precise body motions needed to connect bat with ball. Imagine trying to work it all out consciously. But we don't have to. The brain does it for us.

Steven Pinker's book How the Mind Works went through an exercise of identifying all the logic steps for answering a fairly simple question: how an uncle and nephew are related. That answer might seem obvious. Yet the necessary logic consumed quite a few pages, reminding me of Russell and Whitehead in Principia Mathematica laying out 362 pages of logic to reach 1+1=2.

But Pinker’s example assumes you understand the question in the first place. And that’s a whole ‘nother thing — which Kurzweil explores. What does “understanding” really mean?

The mind can be seen as arising (or emerging) from the workings of billions of neurons. Kurzweil probes how that happens, on a deep level. Pattern recognition is central. We are bombarded with incoming sensory data; its information content, in bits, is astronomical. If we couldn't detect patterns to make it intelligible, we couldn't function.

You see a mass of pixels, detect the pattern of a lion, and run. (Indeed, for extra safety, evolution actually gave us overdeveloped pattern recognition, often seeing things that aren’t there. Making us suckers for supposed paranormal and supernatural stuff, including religion.) 

Kurzweil casts the brain as consisting largely of a massive number of parallel processing modules (each comprising around a hundred neurons) for pattern recognition. And this too, like the uncle-nephew logic mentioned, is deep with complexity. You don't just see a pattern. Much has to happen for that perception to arise.

Take reading. You seemingly glide across the page effortlessly. But obviously, before you can understand a sentence, you have to understand each word; and before you can even see a word, you have to see each letter. But it doesn’t stop there. An “A” has two slanted upright lines, and a horizontal line. The brain has to register not only each of those, but also their orientations and positioning. Then it has to refer back to, and compare against, its stored database of letter memory, to come up with the brilliant synthesis: “That’s an A!”
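That comparison against stored letter memory can be pictured as simple template matching. Here is a toy sketch of my own, not Kurzweil's model; the stroke-feature names and the tiny "database" are invented for illustration (real recognition is probabilistic and tolerates sloppy handwriting, which this does not):

```python
# Toy template matching: a letter is a set of stroke features,
# and recognition means finding the stored letter those features match.
# All feature names here are invented for illustration.

LETTER_TEMPLATES = {
    "A": {"slant-left", "slant-right", "crossbar"},
    "H": {"vertical-left", "vertical-right", "crossbar"},
    "V": {"slant-left", "slant-right"},
}

def recognize_letter(strokes):
    """Return the stored letter whose feature set matches the input, if any."""
    for letter, template in LETTER_TEMPLATES.items():
        if strokes == template:
            return letter
    return None  # no stored pattern fits

print(recognize_letter({"slant-right", "crossbar", "slant-left"}))  # -> A
```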

Kurzweil describes our brain’s pattern recognition modules as working hierarchically; passing information up and down the line. You start with the A’s three components. That information goes to the next level(s) where the lines’ positions and orientations are registered. Once you’ve got the A, it goes up to a yet higher level bringing it together with other letters. More upward steps are needed to “get” a whole sentence.
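To make that bottom-up flow concrete, here is a deliberately crude sketch of the idea (again mine, not the book's): each level consumes the outputs of the level below and emits a higher-level pattern. Every mapping here is invented, and real cortical modules are vastly richer:

```python
# Toy bottom-up hierarchy: strokes -> letters -> word -> phrase.

STROKES_TO_LETTER = {
    frozenset({"slant-left", "slant-right", "crossbar"}): "A",
    frozenset({"circle", "tail"}): "P",
    frozenset({"vertical", "foot"}): "L",
    frozenset({"vertical", "three-arms"}): "E",
}

def level_1_letters(stroke_groups):
    """Lowest level: turn each bundle of strokes into a letter."""
    return [STROKES_TO_LETTER[frozenset(g)] for g in stroke_groups]

def level_2_word(letters):
    """Next level up: assemble letters into a word."""
    return "".join(letters)

def level_3_phrase(words):
    """Higher still: assemble words into a phrase."""
    return " ".join(words)

strokes = [
    {"slant-left", "slant-right", "crossbar"},  # A
    {"circle", "tail"},                         # P
    {"circle", "tail"},                         # P
    {"vertical", "foot"},                       # L
    {"vertical", "three-arms"},                 # E
]
word = level_2_word(level_1_letters(strokes))
print(level_3_phrase([word]))  # -> APPLE
```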

But meantime, information is also being passed down the hierarchy, which Kurzweil deems at least equally important. Because at each level, the system generates tentative conclusions and predictions of what’s likely coming next. This greatly speeds the whole process. 

If you've got an A, and then a P, P, and L, you may expect an E next. Context can eliminate the other possibilities: I, A, or Y (as in "applied," "applaud," or "apply"). This analysis occurs at a yet higher level and is passed back down the system.
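That downward flow can be pictured as prefix-based prediction. A minimal sketch of my own, assuming a tiny made-up lexicon and simple counting; Kurzweil's modules would work on learned probabilities, not a word list:

```python
# Toy top-down prediction: given letters seen so far, rank likely next letters.
# The mini-lexicon is invented; a real system would use learned statistics.

LEXICON = ["APPLE", "APPLES", "APPLY", "APPLIED", "APPLAUD"]

def predict_next(prefix):
    """Count which letters follow this prefix across known words."""
    counts = {}
    for word in LEXICON:
        if word.startswith(prefix) and len(word) > len(prefix):
            nxt = word[len(prefix)]
            counts[nxt] = counts.get(nxt, 0) + 1
    # Most-expected letters first.
    return dict(sorted(counts.items(), key=lambda kv: -kv[1]))

print(predict_next("APPL"))  # -> {'E': 2, 'Y': 1, 'I': 1, 'A': 1}
```

Sentence-level context would then prune further: after "I ate an," "apple" beats "apply."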

This at least is Kurzweil's model. I'm not sure I entirely buy it. While the logic is unarguable, I think we learn shortcuts. I don't think the brain has to go through all those steps to grasp the word "apple"; we recognize it as a unit, in one go. That's what learning to read really is.

Nevertheless, the Kurzweil model helps to understand some aspects of our mental processing. At the highest levels of the hierarchy, we are collating inputs even from different sensory systems, and developing abstract concepts. This is the level at which the self emerges.

Kurzweil discusses IBM’s “Watson” program that won at Jeopardy! Watson understood the questions sufficiently to answer them, but some say that’s different from what is meant when we say a human “understands” something. Kurzweil counters, however, that the hierarchical processing in both cases is really the same. What’s different is having a sense of self. 

Consciousness and the self are deep conundrums. Philosophers posit the zombie problem: if a seeming human exhibits all the behavior we expect, but without inner conscious experience, how could anyone tell the difference?

At some point this will become a big issue with respect to artificial intelligence. Claims will be made for AI consciousness. Kurzweil believes we’ll accept it as a matter of course, citing how we empathize with characters like R2D2 in popular entertainment. I think that’s way too optimistic and the real thing will provoke ferocious resistance. Some people still can’t accept other ethnicities as fully human. Robot protest marches will demand their human rights.

And while Kurzweil thinks we will accept artificial consciousness that emulates the human sort, what about completely different, alien forms of consciousness? That may be hard to conceptualize, but we certainly cannot assume ours is the only possible kind. What might the differences be? Here's one: they may not have emotions (love or fear, for example) that mirror ours.

And if we do encounter some non-human consciousness, machine or otherwise, how, as with zombies, will we know it? Pioneer computer theorist Alan Turing proposed the Turing Test: whether a machine, interrogated by a human in conversation, can convince them it is human. As a test of consciousness, this never made sense to me. A human's mere subjective judgment here cannot be conclusive. Surely a computer can be programmed (like Watson) sufficiently to give answers that seem to pass the Turing Test.

Am I conscious? I perform, to myself, all the indicia of consciousness, as a zombie would. Am I fooling myself, in the way a zombie would? But who or what is "myself" in that question? This is actually a puzzle I think about a lot. My brain has thoughts I know about. And I know I know about them. And know that I do. This can go on forever with no final knower. I can never seem to put my finger on the "me-ness" at the bottom of it all. This is what makes consciousness and the self such maddeningly hard problems. And if we don't truly understand the nature of our own consciousness, how could we determine whether some other entity is conscious?

Kurzweil then tackles the free will conundrum. A key aspect concerns the distinction between conscious and unconscious decision making. The famous Libet experiment seemed to show that a conscious decision to act is preceded by unconscious readying in the brain (a "readiness potential"). Kurzweil discusses this and then poses the question: does it matter? If our actions and decisions arise from both unconscious and conscious brain activity, don't both aspects represent one's mind? Aren't both really just parts of one unified system?

Kurzweil hypothesizes a procedure to create an artificial duplicate of you. Down to every cell and neuron. Maybe with some improved roboticized features. It would, of course, behave as you do. If you are conscious, so must it be. But would you be okay with having your old incarnation dispensed with, replaced by the new one? "You" would still exist, no? Well, I don't think so. (That's a problem regarding teleportation. "Beam me up, Scotty" may have seemed fine in Star Trek, but I would refuse it.)

But Kurzweil goes on: imagine a more limited procedure, replacing one brain module with an improved artificial one. No problem there. We already do such things — e.g., cochlear implants. Of course you’re still you. But suppose we keep going and in steps replace every part of your brain.

This is the ancient story of the Ship of Theseus, a vessel so famous it was preserved. Its wooden planks would periodically rot and be replaced. In time, none of the original wood remained. Was it still "the Ship of Theseus"? Our bodies actually do this too, replacing our cells constantly (though brain cells are the longest lived). You still feel you are you.

Kurzweil does envision progressively more extensive replacement of our biological parts and systems with superior artificial ones. In my own landmark 2013 Humanist magazine article, "The Human Future: Upgrade or Replacement?", I foresaw an eventual convergence between our biological selves and the artificial systems we devise to enhance our capabilities. Human intelligence has enabled us to make advances, solve problems, and improve our quality of life at an accelerating pace. That will go into overdrive once conscious artificial intelligence kicks in. Kurzweil says an "ultraintelligent" machine will be the last invention humanity will ever have to make.