Debate Magazine

Cognitive Maps & Brain Territories

By Cris

Two recent articles take up the current status of cognitive science, an inquiry that is also an assessment of an emerging art, from different disciplinary perspectives. The first, by psychologist Gary Marcus, biophysicist Adam Marblestone, and neuroscientist Jeremy Freeman, discusses the problems surrounding the big-money, big-data brain-mapping projects being touted as the next big thing in science. While the authors laud these projects, they are cautious about results:

But once we have all the data we can envision, there is still a major problem: How do we interpret it? A mere catalog of data is not the same as an understanding of how and why a system works.

When we do know that some set of neurons is typically involved in some task, we can’t safely conclude that those neurons are either necessary or sufficient; the brain often has many routes to solving any one problem. The fairy tales about brain localization (in which individual chunks of brain tissue correspond directly to abstract functions like language and vision) that are taught in freshman psychology fail to capture how dynamic the actual brain is in action.

One lesson is that neural data can’t be analyzed in a vacuum. Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.

Across all of these challenges, the important missing ingredient is theory. Science is about formulating and testing hypotheses, but nobody yet has a plausible, fully articulated hypothesis about how most brain functions occur, or how the interplay of those functions yields our minds and personalities.

Theory can, of course, take many forms. To a theoretical physicist, theory might look like elegant mathematical equations that quantitatively predict the behavior of a system. To a computer scientist, theory might mean the construction of classes of algorithms that behave in ways similar to how the brain processes information. Cognitive scientists have theories of the brain that are formulated in other ways, such as the ACT-R framework invented by the cognitive scientist John Anderson, in which cognition is modeled as a series of “production rules” that use our memories to generate our physical and mental actions.

The challenge for neuroscience is to try to square high-level theories of behavior and cognition with the detailed biology and biophysics of the brain.
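To make the production-rule idea mentioned above concrete, here is a minimal sketch, not ACT-R itself but a toy system in its spirit: hypothetical rules match the current contents of a memory store and, when one fires, update memory or emit an action. All names and the counting task are illustrative assumptions, not part of Anderson's framework.

```python
def counting_rule(memory):
    """If the goal is counting and we know the successor of the
    current number, advance to it. Returns an action, or None."""
    successor = memory.get("successors", {}).get(memory.get("count"))
    if memory.get("goal") == "count" and successor is not None:
        memory["count"] = successor
        return f"say {successor}"
    return None

def stop_rule(memory):
    """If we have reached the target number, the goal is satisfied."""
    if memory.get("goal") == "count" and memory.get("count") == memory.get("target"):
        memory["goal"] = "done"
        return "stop"
    return None

def run(memory, rules, max_cycles=10):
    """Each cycle, fire the first rule whose conditions match memory;
    halt when no rule matches or the goal is marked done."""
    trace = []
    for _ in range(max_cycles):
        for rule in rules:
            action = rule(memory)
            if action is not None:
                trace.append(action)
                break
        else:
            break  # no rule matched: production system halts
        if memory.get("goal") == "done":
            break
    return trace

memory = {
    "goal": "count",
    "count": 1,
    "target": 3,
    "successors": {1: 2, 2: 3},  # declarative facts: 2 follows 1, 3 follows 2
}
print(run(memory, [stop_rule, counting_rule]))  # → ['say 2', 'say 3', 'stop']
```

Even this toy version shows why "square the theory with the biology" is hard: nothing in the rule-matching loop says which neurons, if any, implement matching, firing, or the memory store.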

This challenge is so significant and so difficult that many cognitive scientists have bracketed it, setting it aside as too complex. For tractability, they construct and test cognitive models without any reference to the actual brain. While this may be acceptable for a science in its infancy, the bridging problem is fundamental: it cannot long be ignored, or simplistically dismissed as insoluble.

In the second, computer scientist Jaron Lanier discusses the myth of artificial intelligence and the "religion" built around the speculative hypothesis, or fear, of the singularity. Ironically, the tech futurists who get mystical about these issues are, in other aspects of their lives, devoted to applied technology that works and, of course, makes money. Lanier is skeptical, mindful that AI and cognitive science are cognate disciplines which, for all their impressive achievements, are nowhere close to creating sentient machines or explaining human minds:

There’s a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don’t know how most kinds of thoughts are represented in the brain. We’re starting to understand a little bit about some narrow things. That doesn’t mean we never will, but we have to be honest about what we understand in the present.

This is something I’ve called, in the past, “premature mystery reduction,” and it’s a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you’re a lesser scientist. I don’t see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that’s very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.

This is, or should be, good news. It is good not because Elon Musk is probably wrong about the existential threats posed by AI, but because acknowledging our ignorance lets us ask the right kinds of questions. Answers will come in due course, but we should measure in decades, if not centuries. In the meantime, we should keep reminding ourselves that maps are not territories.
