Brains Don't Have to Be Computers (A Purple Peril)

By Andrew D Wilson @PsychScientists
A common response to the claim that we are not information processors is that this simply cannot be true, because it is self-evidently the case that brains are transforming and processing information - they are performing computations. Greg Hickok throws this ball a lot, and his idea is clear in this quote from his book 'The Myth of Mirror Neurons':
Once you start looking inside the brain you can’t escape the fact that it processes information. You don’t even have to look beyond a single neuron. A neuron receives input signals from thousands of other neurons, some excitatory, some inhibitory, some more vigorous than others. The output of the neuron is not a copy of its inputs. Instead its output reflects a weighted integration of its inputs. It is performing a transformation of the neural signals it receives. Neurons compute. This is information processing and it is happening in every single neuron and in every neural process whether sensory, motor, or “cognitive.”
Hickok, p. 256.
There are two claims here. First, neurons are processing information because their input is not the same as their output; they are transforming the former into the latter. Second, this process is computational; 'neurons compute'.

This is a widely held view; psychologist Gary Marcus even wrote about it in the NYT, saying 'Face it, your brain is a computer'. In response, Vaughan Bell at Mind Hacks posted about the op-ed and the wider issue in a nicely balanced piece called 'Computation is a lens'. He sums up the question by asking 'Is the brain a computer or is computation just a convenient way of describing its function?'. The answer, I propose here, is that computation is a fantastically powerful description of the activity of the brain that may or may not be (and probably isn't) the actual mechanism by which the brain does whatever it does. This is ok because, contra Hickok, not every process that sits in between an input and a different output has to be a computational, information-processing one.

The Polar Planimeter
I'd like to illustrate with the example of the polar planimeter. My favorite perceptual psychophysicist, Sverker Runeson, wrote a paper in 1977 called 'On the possibility of "smart" perceptual mechanisms'. In it, he described the idea of a smart device and used the planimeter as an example. I blogged about "smartness" and this device in one of my first posts on this blog because it's one of the standard examples in the field of how to get behavior without computation; the other, of course, is Watt's centrifugal governor.

The polar planimeter is a device for the direct measurement of the area of irregularly shaped surfaces. It consists of two hinged arms and a measuring wheel. You anchor the 'pole' arm somewhere outside the shape and trace around the shape's outline with the 'tracer' arm. The wheel rolls as the tracer arm moves, and when you finish tracing you simply read off the current state of the wheel; that's the area.

Figure 1. The polar planimeter; pictures with annotations borrowed from this description of their operation

Planimeters have been in use for a long time; the first definite record of one dates to 1836, but the idea had been around for at least 20 years by then (see this great online book for more). The algorithm that describes why their activity produces area is based on Green's theorem (although the device predates the 1851 proof of the theorem).
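For reference, the standard textbook statement of that connection (nothing specific to the linked book) is that the area enclosed by a closed curve C can be rewritten as a line integral around the boundary, which is exactly the kind of quantity a single tracing motion can accumulate:

\[
A \;=\; \iint_R dx\,dy \;=\; \oint_C x\,dy \;=\; \tfrac{1}{2}\oint_C \left( x\,dy - y\,dx \right)
\]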

Planimeters work because they are built in a particular way. They are dynamical systems with a particular composition, organisation and calibration, and it's the time-extended activity of this system in the context of an appropriate task that produces functional behavior (the measurement of area) in a smart fashion. Runeson liked this example because area is the kind of thing people assume you have to get at by measuring something simple (like a couple of lengths) and then transforming those measurements via a computation (multiplication) into an area. To a planimeter, area is the simple unit, and Runeson used this idea to show that the direct measurement of Gibsonian higher-order invariants is possible, if the measurement device is right.
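If it helps to see the contrast, here is a minimal sketch in Python (entirely my own illustration; the function names and the discretised boundary are assumptions, not anything from Runeson's paper). The first function is the route people usually assume; the second is only a description of the quantity the planimeter's wheel ends up registering as you trace, not a claim that the device performs any of these arithmetic steps:

```python
# A sketch, not the planimeter's mechanism: two routes to an "area".

def area_by_computation(width, length):
    """The route people usually assume: measure simple quantities
    (two lengths) and transform them (multiply) into an area."""
    return width * length

def area_by_tracing(boundary_points):
    """A descriptive model of what the wheel accumulates while the
    tracer arm goes around the boundary: a discretised line integral
    (the shoelace formula). The device implements none of these
    arithmetic steps; its calibrated parts simply end up in a state
    that registers the same quantity."""
    total = 0.0
    n = len(boundary_points)
    for i in range(n):
        x0, y0 = boundary_points[i]
        x1, y1 = boundary_points[(i + 1) % n]  # wrap around to close the shape
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0

# Tracing a 3 x 2 rectangle gives the same answer either way:
rectangle = [(0, 0), (3, 0), (3, 2), (0, 2)]
print(area_by_computation(3, 2))   # 6
print(area_by_tracing(rectangle))  # 6.0
```

Both routes end on the same number, which is precisely why the computational story is such a tempting description of the smart device.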


Here's the other interesting thing, more relevant to the current discussion. Polar planimeters take an input (the activity of the tracing arm) and turn it into a different output (a measurement of area). According to Hickok, that's computation and information processing (and Greg has told me he thinks the planimeter is computing). Except that it's not. Nothing in the planimeter is implementing any of the steps required to solve this problem computationally. Worse, if you describe its activity computationally, you will not have accurately described how it produces area from the act of tracing. You will not have the right mechanism, and you will therefore ask the wrong questions as you do science on the planimeter (e.g. you'll go hunting for the 'length detectors' and the 'multiplication module', or whatever equivalents Green's theorem demands).


The consequences for science

Neuroscientists point to neural activity observed during a task and say that it must be implementing some computational step in the process, because that step is required. But that step is only required if the system is solving the problem computationally, so this reasoning is circular (I say the brain is computing and point to this activity as computation, but I only interpret that activity that way because I say the brain is computing...). Computation is not the only option, and behavioural data (on prospective control, on collective behaviours like swarms and herds, and more) shows that computation is not what these systems seem to be doing. If perception is smart like the planimeter but we begin with a computational description instead, we'll go looking for the wrong things and misinterpret, say, the neural activity observed during the task ('that activity must be implementing the multiplication step').
Summary

Just because your output doesn't match your input doesn't mean you were computing in between. The activity of dynamical systems can achieve this basic goal without implementing any computations; weather systems do not compute how to respond to a changing climate, polar planimeters do not compute area and, to answer Vaughan Bell's question, stones do not compute their projectile motion in order to fall appropriately. This is, in fact, the reason why ecological psychologists wholeheartedly embraced dynamical systems theory and led the charge to bring it into psychology as a better toolbox than computation. This is also why we don't have to be information processors.

Further reading

Smart perceptual mechanisms 
What Else Could It Be? The Case of the Centrifugal Governor 

Gary Marcus:  'Face it, your brain is a computer'

Vaughan Bell: 'Computation is a lens'
Runeson, S. (1977). On the possibility of "smart" perceptual mechanisms. Scandinavian Journal of Psychology, 18(1), 172-179. Download