The Brain as an Input–Output Model of the World

Minds and Machines

Abstract

An underlying assumption in computational approaches in the cognitive and brain sciences is that the nervous system is an input–output model of the world: its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I briefly compare the modelling explanation with mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.

Fig. 1

From Ramsey (2007: 81; Fig. 3c); with permission from Cambridge University Press

Fig. 2
Fig. 3

Adapted from Cannon and Robinson (1987: 1384; Fig. 1); reprinted by permission of the American Physiological Society (APS)

Fig. 4
Fig. 5

Reprinted by permission of Macmillan Publishers Ltd: Zipser and Andersen (1988: 679–684; Fig. 4)


Notes

  1. See, e.g., Frigg and Hartmann (2017), Weisberg (2013).

  2. See Swoyer (1991) for a general discussion about the relation between modelling and surrogative reasoning. See Grush (2004) for a discussion about modelling and surrogative reasoning in the brain.

  3. Less-than-isomorphism characterizations are in terms of partial isomorphism (French and Ladyman 1999; Da Costa and French 2003), homomorphism (Bartels 2006), and similarity (Giere 2004).

  4. The inputs and outputs need not be peripheral to the brain. In some of the examples discussed below, the relevant sub-systems receive their inputs from, and/or project their outputs to, other parts of the nervous system. The inputs and outputs are very often (magnitude) values of certain properties, such as voltages.

  5. The term input–output representation was coined by Ramsey (2007: 68–77), who associates it with task analysis.

  6. See also Gallistel and King: "Representations are functioning homomorphisms. They require structure-preserving mappings (homomorphisms) from states of the world (the represented system) to symbols in the brain (the representing system). These mappings preserve aspects of the formal structure of the world" (2009: x).

  7. See, e.g., Suárez (2010).

  8. Thus Griffiths et al. (2008) say that the big computational question that underlies the Bayesian approach is "How does the mind build rich, abstract, veridical models of the world given only the sparse and noisy data that we observe through our senses?". See also Clark (2015), who further emphasizes the central role of generative models in the hypothesis that the brain is a prediction machine.

  9. We can say that truth-preservation is just a special case of the morphism relation.

  10. See the reviews by Robinson (1968, 1989) and by Leigh and Zee (2006).

  11. See also Goldman et al. (2002).

  12. To keep things simple, I will use the terms distance and position interchangeably here. The new (horizontal) position is evaluated on the basis of the distance from the previous position.

  13. Note that in Fig. 3 the term E stands for both the representing (output) neural activity and the represented eye position. Similarly the term Ė stands for both the representing (input) neural activity and the represented eye velocity. This presentation is customary in neuroscience. This sort of presentation underscores (again) the modelling assumption, as it is apparent that the integration relation holds in both representing and represented domains.

  14. Many animals have this ability. A well-known example is the desert ant (Cataglyphis fortis), which returns home after an outward journey of hundreds of meters.

  15. See Mittelstaedt and Mittelstaedt (1982), Collett and Collett (2000), Etienne and Jeffery (2004), Conklin and Eliasmith (2005), McNaughton et al. (2006) and Gallistel and King (2009).

  16. It has been more recently suggested that path integration in rats is computed by the grid cells located in the dorsolateral medial entorhinal cortex (dMEC) (Hafting et al. 2005).

  17. See also Kaplan and Craver (2011), Piccinini and Craver (2011), Miłkowski (2013) and Boone and Piccinini (2016).

  18. They also point out that a "full-blown" mechanistic explanation need not specify all the properties of the mechanism, only those that are relevant to the explanandum phenomenon; in some cases (e.g., computational explanations) these properties might all be abstract (e.g., medium-independent) properties (Boone and Piccinini 2016).

  19. There is tension, however, over what counts as a computational explanation. Kaplan seems to claim that computational explanations in neuroscience are adequate to the extent that they describe relevant mechanisms (see also Piccinini 2015; Miłkowski 2013). We suggest that computational explanations of information-processing phenomena also involve a modelling, non-mechanistic, component (Bechtel and Shagrir 2015; Shagrir and Bechtel 2017).

  20. Chirimuuta argues that this minimality conflicts with the more chauvinistic statements about the dominance of mechanistic explanations. Talking about the normalization model, she says that “my key claim is that the use of the term ‘normalization’ in neuroscience retains much of its original mathematical-engineering sense. It indicates a mathematical operation—a computation—not a biological mechanism”, and that this model “departs fully from the model-to-mechanism mapping framework that has been proposed as the criterion for explanatory success” (Chirimuuta 2014); she refers here to Kaplan’s model-to-mechanism mapping (3M) requirement (Kaplan 2011; Kaplan and Craver 2011). For a reply see Kaplan (2017) who argues that the implementation of the normalization equation (in different species) is an essential part of the explanation.

  21. Colin Klein suggested that we might be dealing here with different why questions.

  22. A similar question arises for the different algorithms that compute the same function: why is one algorithm used rather than another?

  23. Marr writes:

    Up to now I have studiously avoided using the word edge, preferring instead to discuss the detection of intensity changes and their representation by using oriented zero-crossing segments. The reason is that the term edge has a partly physical meaning—it makes us think of a real physical boundary, for example—and all we have discussed so far are the zero values of a set of roughly band-pass second-derivative filters. We have no right to call these edges, or, if we do have a right, then we must say so and why (1982: 68).
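The Bayesian idea in note 8 — building a model of the world from sparse, noisy data — can be illustrated with a minimal discrete application of Bayes' rule. This is a toy sketch only, not any specific model from Griffiths et al.; the function name and numbers are illustrative:

```python
def posterior(prior, likelihood):
    """Bayes' rule over a discrete hypothesis space: P(h|d) is
    proportional to P(d|h) * P(h), renormalized to sum to 1."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# With a flat prior over two world-state hypotheses, the data's
# likelihoods alone determine the posterior:
print(posterior([0.5, 0.5], [0.75, 0.25]))  # [0.75, 0.25]
```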
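The integration relation in note 13 — output activity E as the integral of input activity Ė — can be made concrete with a simple Euler discretization. This sketches only the computed input–output function, not the Cannon–Robinson network; the function name, step size, and units are illustrative:

```python
def integrate_velocity(velocities, dt=0.001, e0=0.0):
    """Cumulatively integrate an eye-velocity signal E_dot (deg/s)
    into eye position E (deg) with an Euler step: E += E_dot * dt."""
    e = e0
    positions = []
    for e_dot in velocities:
        e += e_dot * dt
        positions.append(e)
    return positions

# A constant 100 deg/s velocity command lasting 50 ms shifts the
# represented eye position by 5 degrees:
trace = integrate_velocity([100.0] * 50, dt=0.001)
print(round(trace[-1], 6))  # 5.0
```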
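At the input–output level, the path-integration ability of notes 14–16 amounts to summing displacement vectors from heading and distance signals. The following dead-reckoning toy illustrates that function only; it is not a model of the ant's or the rat's neural machinery, and the names are mine:

```python
import math

def path_integrate(steps):
    """Dead reckoning: accumulate (heading_deg, distance) steps into
    a net displacement vector; the homing vector is its negation."""
    x = y = 0.0
    for heading_deg, dist in steps:
        h = math.radians(heading_deg)
        x += dist * math.cos(h)
        y += dist * math.sin(h)
    return x, y

# Outward path: 3 m east, then 4 m north; the home vector is 5 m long.
x, y = path_integrate([(0, 3.0), (90, 4.0)])
print(round(math.hypot(x, y), 6))  # 5.0
```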
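The normalization computation discussed in note 20 can be written, in simplified form, as R_i = I_i^n / (σ^n + Σ_j I_j^n) (Heeger 1992; Carandini and Heeger 2012). A toy version brings out Chirimuuta's point that it specifies a mathematical operation rather than a particular biological mechanism; the parameter values here are illustrative:

```python
def normalize(drives, sigma=1.0, n=2.0):
    """Divisive normalization (simplified): each unit's response is its
    driving input raised to the power n, divided by sigma**n plus the
    summed n-th powers of all inputs in the normalization pool."""
    pool = sigma ** n + sum(d ** n for d in drives)
    return [d ** n / pool for d in drives]

# The shared divisive pool cancels in within-pool response ratios, so
# scaling all inputs leaves the relative responses unchanged:
r1 = normalize([1.0, 2.0, 3.0])
r2 = normalize([2.0, 4.0, 6.0])
print(round(r1[2] / r1[1], 3), round(r2[2] / r2[1], 3))  # 2.25 2.25
```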
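The zero-crossing idea in Marr's remark (note 23) can be sketched in one dimension: filter the intensity profile with a discrete second derivative and mark where the filtered signal changes sign. This is a much-simplified analogue of the band-pass second-derivative filtering in Marr and Hildreth (1980), not their actual operator; the arrays are illustrative:

```python
def second_difference(signal):
    """Discrete second derivative: s[i-1] - 2*s[i] + s[i+1]."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def zero_crossings(values):
    """Indices i where the filtered signal changes sign between
    values[i] and values[i+1]."""
    return [i for i in range(len(values) - 1)
            if values[i] * values[i + 1] < 0]

# A smoothed step in intensity: the second derivative crosses zero in
# the middle of the ramp, which is where the "edge" would be placed.
d2 = second_difference([0, 0, 1, 3, 4, 4])
print(zero_crossings(d2))  # [1]
```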

References

  • Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456–458.

  • Bartels, A. (2006). Defending the structural concept of representation. THEORIA. Revista de Teoría, Historia y Fundamentos de la Ciencia, 21, 7–19.

  • Bassett, J. P., & Taube, J. S. (2001). Neural correlates for angular head velocity in the rat dorsal tegmental nucleus. Journal of Neuroscience, 21, 5740–5751.

  • Bechtel, W. (2012). Understanding endogenously active mechanisms: A scientific and philosophical challenge. European Journal for Philosophy of Science, 2, 233–248.

  • Bechtel, W., & Richardson, R. C. (1993). Discovering complexity: Decomposition and localization as strategies in scientific research. Princeton: Princeton University Press.

  • Bechtel, W., & Shagrir, O. (2015). The non-redundant contributions of Marr’s three levels of analysis for explaining information-processing mechanisms. Topics in Cognitive Science, 7, 312–322.

  • Boone, W., & Piccinini, G. (2016). Mechanistic abstraction. Philosophy of Science, 83, 686–697.

  • Cannon, S. C., & Robinson, D. (1987). Loss of the neural integrator of the oculomotor system from brain stem lesions in monkey. Journal of Neurophysiology, 57, 1383–1409.

  • Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13, 51–62.

  • Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191, 127–153.

  • Churchland, P. M. (2007). Neurophilosophy at work. Cambridge: Cambridge University Press.

  • Clark, A. (2015). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.

  • Collett, M., & Collett, T. S. (2000). How do insects use path integration for their navigation? Biological Cybernetics, 83, 245–259.

  • Conklin, J., & Eliasmith, C. (2005). Controlled attractor network model of path integration in the rat. Journal of Computational Neuroscience, 18, 183–203.

  • Craver, C. F. (2016). The explanatory power of network models. Philosophy of Science, 83, 698–709.

  • Cummins, R. (1989). Meaning and mental representation. Cambridge: MIT Press.

  • Da Costa, N. C. A., & French, S. (2003). Science and partial truth: A unitary understanding of models and scientific reasoning. Oxford: Oxford University Press.

  • Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge: MIT Press.

  • Eliasmith, C., & Anderson, C. H. (2003). Neural engineering: Computation, representation and dynamics in neurobiological systems. Cambridge: MIT Press.

  • Etienne, A. S., & Jeffery, K. J. (2004). Path integration in mammals. Hippocampus, 14, 180–192.

  • Fodor, J. A. (1994). The elm and the expert: Mentalese and its semantics. Cambridge: MIT Press.

  • French, S., & Ladyman, J. (1999). Reinflating the semantic approach. International Studies in the Philosophy of Science, 13, 103–121.

  • Frigg, R., & Hartmann, S. (2017). Models in science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/models-science.

  • Gallistel, C. R., & King, A. (2009). Memory and the computational brain: Why cognitive science will transform neuroscience. New York: Blackwell/Wiley.

  • Giere, R. N. (2004). How models are used to represent reality. Philosophy of Science, 71, 742–752.

  • Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69, S342–S353.

  • Glimcher, P. W. (1999). Oculomotor control. In R. A. Wilson & F. C. Keil (Eds.), MIT encyclopedia of cognitive science (pp. 618–620). Cambridge: MIT Press.

  • Goldman, M. S., Kaneko, C. R., Major, G., Aksay, E., Tank, D. W., & Seung, H. S. (2002). Linear regression of eye velocity on eye position and head velocity suggests a common oculomotor neural integrator. Journal of Neurophysiology, 88, 659–665.

  • Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. (2008). Bayesian models of cognition. In R. Sun (Ed.), The Cambridge handbook of computational cognitive modeling (pp. 59–100). Cambridge: Cambridge University Press.

  • Grush, R. (2001). The semantic challenge to computational neuroscience. In P. Machamer, R. Grush, & P. McLaughlin (Eds.), Theory and method in the neurosciences (pp. 155–172). Pittsburgh: University of Pittsburgh Press.

  • Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27, 377–442.

  • Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436, 801–806.

  • Haugeland, J. (1981). Semantic engines: An introduction to mind design. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, and artificial intelligence (pp. 1–34). Cambridge: MIT Press.

  • Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 181–197.

  • Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160, 106–154.

  • Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183, 339–373.

  • Kaplan, D. M. (2017). Neural computation, multiple realizability, and the prospects for mechanistic explanation. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science. Oxford: Oxford University Press (forthcoming).

  • Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–627.

  • Leigh, R. J., & Zee, D. S. (2006). The neurology of eye movements (4th ed.). New York: Oxford University Press.

  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.

  • Marr, D. C. (1982). Vision: A computational investigation into the human representation and processing of visual information. New York: Freeman.

  • Marr, D. C., & Hildreth, E. C. (1980). Theory of edge detection. Proceedings of the Royal Society of London, Series B: Biological Sciences, 207, 187–217.

  • McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., & Moser, M.-B. (2006). Path integration and the neural basis of the ‘cognitive map’. Nature Reviews Neuroscience, 7, 663–678.

  • Miłkowski, M. (2013). Explaining the computational mind. Cambridge: MIT Press.

  • Mittelstaedt, H., & Mittelstaedt, M.-L. (1982). Homing by path integration. In F. Papi & H. G. Wallraff (Eds.), Avian navigation (pp. 290–297). Berlin: Springer.

  • O’Brien, G., & Opie, J. (2009). The role of representation in computation. Cognitive Processing, 10, 53–62.

  • O’Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Clarendon Press.

  • Piccinini, G. (2015). Physical computation: A mechanistic account. Oxford: Oxford University Press.

  • Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311.

  • Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge: MIT Press.

  • Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.

  • Robinson, D. A. (1968). The oculomotor control system: A review. Proceedings of the IEEE, 56, 1032–1049.

  • Robinson, D. A. (1989). Integrating with neurons. Annual Review of Neuroscience, 12, 33–45.

  • Rusanen, A.-M., & Lappi, O. (2016). On computational explanations. Synthese, 193, 3931–3949.

  • Ryder, D. (2004). SINBAD neurosemantics: A theory of mental representation. Mind and Language, 19, 211–240.

  • Seung, H. S. (1998). Continuous attractors and oculomotor control. Neural Networks, 11, 1253–1258.

  • Shagrir, O. (2010). Marr on computational-level theories. Philosophy of Science, 77, 477–500.

  • Shagrir, O. (2012). Structural representations and the brain. British Journal for the Philosophy of Science, 63, 519–545.

  • Shagrir, O., & Bechtel, W. (2017). Marr’s computational level and delineating phenomena. In D. M. Kaplan (Ed.), Integrating mind and brain science: Mechanistic perspectives and beyond. Oxford: Oxford University Press (forthcoming).

  • Shapiro, L. A. (2016). Mechanism or bust? Explanation in psychology. British Journal for the Philosophy of Science (forthcoming).

  • Sharp, P. E., Tinkelman, A., & Cho, J. (2001). Angular velocity and head direction signals recorded from the dorsal tegmental nucleus of Gudden in the rat: Implications for path integration in the head direction cell circuit. Behavioral Neuroscience, 115, 571–588.

  • Suárez, M. (2010). Scientific representation. Philosophy Compass, 5, 91–101.

  • Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87, 449–508.

  • Weisberg, M. (2013). Simulation and similarity: Using models to understand the world. New York: Oxford University Press.

  • Woodward, J. (2003). Making things happen: A theory of causal explanation. New York: Oxford University Press.

  • Zipser, D., & Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679–684.


Acknowledgements

I am grateful to Lotem Elber-Dorozko, Jens Harbecke, Shahar Hechtlinger, David Kaplan, Colin Klein, Arnon Levy, Gal Patel and two anonymous referees for their comments. Early versions of the paper were presented at seminars in Macquarie University, Tel-Aviv University, University of Canterbury, University of Otago and at the following conferences: The Aims of Brain Research: Scientific and Philosophical Perspectives (Jerusalem), Conference of the International Association for Computing and Philosophy (Thessaloniki), and the 7th AISB Symposium on Computing and Philosophy (London). I thank the participants for stimulating discussion. This research was supported by a grant from GIF, the German-Israeli Foundation for Scientific Research and Development.

Author information

Corresponding author

Correspondence to Oron Shagrir.


About this article


Cite this article

Shagrir, O. The Brain as an Input–Output Model of the World. Minds & Machines 28, 53–75 (2018). https://doi.org/10.1007/s11023-017-9443-4




Keywords

Navigation