Thinking About Representations in Relation to Mechanisms

By Andrew D Wilson @PsychScientists
As Chemero (2011) observed, there are two ways to think about debates in science: we can debate the actual facts of the matter in the world, or we can debate the best way to explain the world. Some debates aren't amenable to the first approach because no evidence can definitively rule out one of the options. The only debate we can really have about representations is in terms of their role in explanations of behavior. This is different from how I thought about representations a few years back, when I wanted to argue that invoking representations was inherently a bad idea. Now I think we need to consider the utility of representations as part of explanations for psychological phenomena. In this post, I will argue that the concept of representations is not helpful in developing a particular class of explanation - ontic mechanistic explanations (described below). This is the first of two posts on this idea. In the next post I will explicitly compare cognitive and ecological approaches to behavior in terms of how well they set us up to identify real parts and operations.
Types of explanations

We're all familiar with the notion of functional explanations (Cummins, 1975). Increasingly, mechanistic explanations are being invoked in psychology (Bechtel & Abrahamsen, 2010; Bechtel, 2008; Craver, 2007). But, given differences in the vocabulary philosophers of science use to describe these types of explanation, it can be helpful to define them side by side using some common terms.


Functional and mechanistic explanations both have as their target a capacity of a system. In the behavioral sciences, the capacity of interest is likely to be the expression of a particular behavior. Functional explanations break the target capacity into smaller capacities that also belong to the system (Cummins, 1975). Mechanistic explanations break the target capacity into constituent parts and operations that, when organized appropriately, create the capacity (Bechtel & Abrahamsen, 2005, 2010). Functional and mechanistic models are both explanatory models – that is, they can be used to answer “w-questions” about a system (Craver, 2007). Explanatory models permit manipulation and control of the system, as well as answering counterfactuals regarding the system (Weiskopf, 2011).
An important difference between types of explanation is whether they refer to material transformations. Explanations that refer to material transformations are ontic (Salmon, 1984; Illari, 2013). In the language of functional models, this means that a capacity would be identified with a particular material activity. In the language of mechanistic models, this means that the parts and operations have particular physical counterparts. Mechanistic models based on real parts and operations - ontic mechanistic models (Salmon, 1984; Craver, 2007) - provide more robust explanations than functional or non-ontic mechanistic models (Bechtel, 2008; Bechtel & Abrahamsen, 2010; Weiskopf, 2011) because they characterize the causal structures responsible for the phenomenon of interest (Craver, 2007). Functional models and non-ontic mechanistic models can explain some aspects of a phenomenon (e.g., how it might conceivably occur), and, if the material basis of the parts and operations is unknown, these may be the best types of explanation available. However, if they are attainable, mechanistic models based on real parts and operations offer a number of advantages over other explanations. Bechtel and Abrahamsen (2010) identify six benefits of mechanistic models, including the ability to:
  1. demonstrate that a given mechanism is sufficient to produce the target phenomenon
  2. explore the functioning of the mechanism in a larger parameter space than is accessible in experiments
  3. identify whether candidate parts are essential to the mechanism’s functioning
  4. explore how particular types of damage might affect the system by perturbing the model in particular ways
  5. explain how coordinated behavior can emerge from the coupling of simpler mechanisms
  6. explore the consequences of altering the relations between multiple mechanisms
Some of these benefits are not out of reach of non-ontic mechanistic or functional models. However, the full range of benefits can only be realized in models based on real parts and operations.
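To make these benefits concrete, here is a minimal, purely illustrative sketch. The "mechanism" below is entirely invented (it is not a model of any real psychological system), but it makes the parts and operations explicit enough that we can sweep a wider input range than an experiment might allow (benefit 2) and "lesion" a candidate part to test whether it is essential (benefits 3 and 4).

```python
# A toy illustration of some of Bechtel and Abrahamsen's benefits. Every
# part, operation, and number here is invented for illustration; this is
# not a model of any real psychological mechanism.

def run_mechanism(x, parts):
    """Apply each part's operation in its organized order."""
    for _name, operation in parts:
        x = operation(x)
    return x

# The mechanism's organized parts and operations (all made up).
parts = [
    ("amplifier", lambda x: 3 * x),       # scales the input signal
    ("rectifier", lambda x: max(x, 0)),   # discards negative values
    ("threshold", lambda x: int(x > 5)),  # fires only above a cutoff
]

# Benefit 2: sweep a wider input range than experiments might allow.
outputs = [run_mechanism(x, parts) for x in range(-10, 11)]

# Benefits 3 and 4: "lesion" a candidate part and check whether the
# target behavior survives, to test whether that part is essential.
lesioned = [part for part in parts if part[0] != "amplifier"]
print(run_mechanism(3, parts))     # intact mechanism fires: 1
print(run_mechanism(3, lesioned))  # without the amplifier it does not: 0
```

The point of the sketch is only that an explicitly specified mechanism supports these manipulations at all; whether the parts are *real* is exactly the ontic question at issue.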

Explanations and representations

Mental representations can be used as constituents of functional explanations and, some argue, non-ontic mechanistic explanations. Explanations using mental representations in this way can get us some of the benefits listed above (principally 1, 2, and a bit of 4). But, let's say that we'd like to get the other benefits as well. Can we identify mental representations and processes with material transformations and thus develop representational ontic mechanistic models?

The mechanism literature sets the bar for this identification quite high (see Craver, 2007, for a discussion of what counts as real parts and activities). It's a higher bar than the idea that information about physical transformations can "inform cognitive theories", as Greg Hickok argues and provides evidence for here. Finding the neural correlates of something isn't sufficient. Even knowing that specific types of brain damage have particular functional consequences isn't necessarily sufficient (it would depend on how formally one could define the function and its own identification with material transformations). For example, it is uncontroversial to say that the hippocampus is important for memory. But the hippocampus isn't identical to memory. It is part of a memory system. Screwing with the hippocampus screws with memory, but it isn't the part that does memory. Memory requires multiple areas of the brain working together, and subtractive methods in neuroimaging or insights from lesion studies are mostly going to tell us that certain bits are necessary for memory. This leaves unanswered what the physical counterparts are to the complete memory system. This is true whether you want to construe memory as a part (the memory for x), as an operation (remembering x), or as a complex capacity composed of its own parts and operations.

There has been a lot of hard-core neuroscience on memory as well (e.g., Bailey's work on the molecular basis of memory). Though this work is very cool, it doesn't (yet) connect up to ontic mechanistic models of behavior. Developing ontic mechanistic models requires identifying all the material transformations involved in the components. Representations (and associated mental processes) were developed as components of functional models. They are well-suited to these explanations but, I think, less well-suited to ontic mechanistic explanations.

Perhaps, though, if researchers pulled together and tried specifically to solve this problem, we could find the physical counterpart to memory? This pursuit will always be complicated by the twin problems of neural reuse (Anderson, 2007, 2010) and neural degeneracy (Prinz, Bucher & Marder, 2004; Sporns, 2011). In a systematic review of the evidence to date, Anderson (2007) found that a given neural circuit was typically involved in a variety of tasks, which he characterized as evidence for neural reuse. Neural reuse is the idea that neural circuits originally dedicated to one capacity are frequently borrowed, without losing their original function, to achieve newer capacities. Anderson showed that a typical cortical region is involved in an average of nine different task domains. That is, regions didn’t specialize solely in particular types of tasks (e.g., memory tasks); they were involved across domains that didn’t share any obvious cognitive commonality. The principle of neural reuse, which Anderson argues is the rule rather than the exception in neural organization, makes it very challenging to identify cognitive capacities with neural activity, because many capacities can map onto similar activity.
Neural degeneracy presents the opposite problem – a single capacity may be expressed through many different neural implementations (Sporns, 2011). An example from research on the lobster gut drives home just how much of a challenge degeneracy poses for researchers who use neuroscience to ground our understanding of cognitive capacities. Prinz, Bucher, and Marder (2004) modelled all possible combinations of neuron types and synapse strengths in the lobster gut to see how many of these combinations expressed a particular function – pyloric muscle contractions. Of 20,500,000 possible model circuits, 4,047,375 (about 20%) generated pyloric-like activity. Prinz et al. then reduced this set to only those circuits that could occur in actual lobster guts. They found that in actual lobsters, pyloric muscle activity could arise from 450,000 different neural circuits (read Andrew's take on this here).
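The logic of that brute-force search can be sketched in a few lines. The "simulation" below is a deliberately crude stand-in (a made-up function in which the output depends only on a balance of invented parameters), not Prinz et al.'s conductance-based neuron models, but it shows the same many-to-one structure: an exhaustive parameter sweep in which many physically distinct circuits land in the same behavioral band.

```python
import itertools

# A crude stand-in for Prinz et al.'s brute-force search. The function
# and parameter names are invented for illustration: the output depends
# only on an excitation/inhibition balance, so many physically distinct
# parameter sets produce the same behavior (degeneracy). Their actual
# work used detailed conductance-based neuron models.

def burst_period(g_exc, g_inh, g_leak):
    """Invented placeholder: period set by the excitation/inhibition balance."""
    return (g_inh + g_leak) / g_exc

grid = range(1, 9)       # a small grid of parameter values
target, tol = 1.0, 0.1   # the "pyloric-like" behavioral band

# Exhaustively sweep every parameter combination, as Prinz et al. did
# at vastly larger scale (~20 million circuits).
matching = [
    combo for combo in itertools.product(grid, repeat=3)
    if abs(burst_period(*combo) - target) <= tol
]

print(f"{len(matching)} of {len(grid)**3} parameter sets hit the target band")
```

Even in this toy version, dozens of distinct parameter sets fall in the same band, which is the degeneracy problem in miniature: observing the behavior tells you very little about which circuit produced it.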
The unwelcome implication of this is that advances in neuroscience cannot, in and of themselves, identify relevant real parts and operations underpinning behavior. This is because the task, as laid out by the cognitive research programme, is to identify neural counterparts to cognitive representations and processes (in the next post I will show that the problem of finding neural counterparts does not arise in the ecological approach). As Weiskopf puts it: “Thus in psychology we have the obvious, if depressing, truth that the mind cannot simply be read off of the brain” (2011).

Bechtel uses an interesting analogy to illustrate the challenge for explanations of psychological phenomena:

“The situation might be compared to the state of fermentation research in the late nineteenth century. By describing the potential intermediates formed in the process of fermentation as themselves undergoing fermentations, physiologists looked too high. They provided little explanatory gain, since researchers were appealing to the phenomenon to be explained to describe the operations that were to provide the explanation. In contrast, by focusing on the elemental composition of sugar and alcohol and appealing to operations of adding or deleting atoms to explain organic processes such as fermentation, chemists focused too low. The chemists clearly appealed to operations on components in a mechanism to explain the phenomenon, but this approach was underconstrained. Researchers lacked principles for determining which operations were really possible” (Bechtel, 2008, p. 989).

Researchers eventually identified another level of analysis - biochemistry - at which the operations relevant to fermentation occur. With this identification, the field was at last able to move forward in developing ontic mechanistic models. Psychology has also tended to look too high (representations) and too low (neurons), making it difficult to identify real parts and operations that are actually relevant to psychological phenomena. If this analogy is apt, then a key issue for psychology and the other behavioural sciences is what level of analysis is most appropriate for identifying real parts and operations. The cognitive research programme evolved to enable functional explanations of cognitive capacities, expressed in terms of relationships between representations and cognitive processes. It has proved difficult to use this level of analysis to identify real parts and operations involved in explanations of behavior.
Our thesis is that ecological information (Gibson, 1979) picks out the effective level of analysis for behavioural mechanisms, and that the identification of real component parts and operations throughout the system follows from this choice (I'll argue this explicitly in the following post). This is not to say that this is the only level of analysis in behavioural mechanisms. Complete mechanisms of behavior will be multilevel (e.g., Craver, 2007) and will involve contributions from multiple neural and bodily components, as well as ecological-informational ones. However, beginning with the ecological level provides essential constraints on what the neural and molecular components need to accomplish, making the behavioural analysis the right starting point. In defending multilevel (and multidisciplinary) explanations in the neurosciences, Craver (2007) suggests “[i]t is not the case that theories at one level are reduced to theories at another. Rather, different fields add constraints that shape the space of possible mechanisms for a phenomenon. Constraints from different fields are the tiles that fill in the mechanism sketch to produce an explanatory mosaic.”
In an upcoming post, I will work through an example to illustrate how starting with an ecological behavioural model, based on real parts and operations, facilitates asking ontic questions about how the nervous system and other bodily systems support behavior.

Anderson, M. L. (2007). Massive redeployment, exaptation, and the functional integration of cognitive operations. Synthese, 159(3), 329-345.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245-266.

Bechtel, W. (2008). Mechanisms in cognitive psychology: What are the operations? Philosophy of Science, 75(5), 983-994.

Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421-441.

Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science Part A, 41(3), 321-333.

Chemero, A. (2011). Radical embodied cognitive science. MIT Press.

Craver, C. F. (2007). Explaining the brain. Oxford: Oxford University Press.

Cummins, R. (1975). Functional explanation. Journal of Philosophy, 72, 741-764.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Illari, P. (2013). Mechanistic explanation: Integrating the ontic and epistemic. Erkenntnis, 78(2), 237-255.

Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate circuit parameters. Nature Neuroscience, 7(12), 1345-1352.

Salmon, W. C. (1984). Scientific explanation: Three basic conceptions. In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association (pp. 293-305). Philosophy of Science Association.

Sporns, O. (2011). Networks of the Brain. MIT Press.

Weiskopf, D. A. (2011). Models and mechanisms in psychological explanation. Synthese, 183(3), 313-338.