Abstract
It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphism mapping relation. The mechanistic relation, however, is that of part/whole; the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example from the cognitive and neural sciences, that of reinforcement learning (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and the other implementational, related by the implementation relation. This picture fits the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
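The implementation relation described above can be illustrated with a toy example (the automaton, voltage levels, and names here are our own illustration, not drawn from the paper): a two-state parity automaton implemented by a hypothetical voltage-based device, where a mapping between physical and abstract states makes the abstract and physical transitions commute.

```python
# Abstract computational system: a parity automaton over input bits.
ABSTRACT_STEP = {("even", 0): "even", ("even", 1): "odd",
                 ("odd", 0): "odd", ("odd", 1): "even"}

# Hypothetical implementing device: voltage levels stand in for states.
PHYS_STEP = {(0.2, 0): 0.2, (0.2, 1): 4.8,
             (4.8, 0): 4.8, (4.8, 1): 0.2}

# The mapping between implementing and abstract states.
LABEL = {0.2: "even", 4.8: "odd"}

def commutes(phys_state, inp):
    """Mapping then stepping abstractly equals stepping physically then mapping."""
    return ABSTRACT_STEP[(LABEL[phys_state], inp)] == LABEL[PHYS_STEP[(phys_state, inp)]]

# The homomorphism condition holds for every state/input pair.
ok = all(commutes(p, i) for p in LABEL for i in (0, 1))
```

The point of the sketch is structural: the implementation relation is a mapping between whole state spaces, not a decomposition of the automaton into the device's parts.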
Notes
Throughout this paper, when discussing mechanistic explanations, we refer only to constitutive explanations, which appeal to part-whole relations, and not to explanations that appeal to causal relations in general.
Computational, medium-independent, entities can include phenomena, capacities, states, properties, functions, operations, variables and so on. Medium-dependent entities can include the same types (though our account is also consistent with the view that the abstract/medium-dependent characterization does not pertain to the phenomena themselves but rather to their descriptions). We will usually refer to only one of these entities, but the account applies to the others as well. The term component will be used in the context of part/whole relation: A component is an essential part (but not necessarily a spatial one) of a phenomenon.
There are, however, different ways to account for the nature of these “medium-independent” properties. Fodor (1975) and Stich (1983) describe them as “syntactic” properties, and Fodor (1994) accounts for the latter in terms of high-level physical properties. Haugeland (1981) describes them as “formal” (see also Fodor 1980). Piccinini (2015) describes computational properties as “mathematical” or “formal”, and others have suggested that, regarding computations, the relevant physical properties of the implementing physical systems are only their degrees of freedom (Coelho Mollo 2018; Piccinini and Bahar 2013).
While it seems straightforward to associate the computational explanations discussed here with Marr's (1982) computational level, algorithmic descriptions of a system can also be abstract and computational in the sense discussed here, as long as they are "medium-independent". These algorithmic descriptions are more similar to mechanistic explanations in that they usually decompose the explanandum into its parts, whereas computational-level explanations describe 'what' function the system performs and 'why' (Shagrir and Bechtel 2017).
One can also ask how the implementational hierarchy is decomposed. Depending on one's view of a level of explanation, the implementational hierarchy will include different details. It can include merely a reference to the physical structures that underlie the computational function. Alternatively, this hierarchy can also describe functions executed by these structures, albeit medium-dependent ones. To illustrate, diodes, which are used on occasion to build logic gates in computers, have the function of passing electric current in exactly one direction. Description of such functions can be a part of the implementational hierarchy, because such functions are not abstract, but instead describe medium-dependent processes. In both cases, the decomposition of the implementational hierarchy will depend on some function. In the first case it is the computational function, and in the second it is the medium-dependent function (which may or may not coincide with the computational function).
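The diode example can be made concrete with a minimal sketch (the voltage levels and threshold are illustrative assumptions, not from the paper): the medium-dependent function is passing current in one direction only, while the computational function realized by wiring two such diodes to a shared output node is logical OR.

```python
FORWARD_DROP = 0.7  # typical silicon diode forward-voltage threshold (illustrative)

def diode_conducts(anode_v, cathode_v):
    """Medium-dependent function: conduct only when forward-biased."""
    return anode_v - cathode_v > FORWARD_DROP

def diode_or(a_v, b_v, high=5.0):
    """Diode-logic OR: each input drives the shared output node through a diode.

    Idealized: output snaps to the high rail if either diode conducts; a real
    circuit's output would sit roughly one diode drop below the input voltage.
    """
    out_v = 0.0  # output node pulled low through a resistor
    if diode_conducts(a_v, out_v) or diode_conducts(b_v, out_v):
        out_v = high
    return out_v
```

Note how the same lines describe two things: `diode_conducts` captures the medium-dependent process, while the truth table of `diode_or` captures the abstract computational function.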
As one of the reviewers has kindly pointed out, the levels in this hierarchical model of reinforcement learning are chosen somewhat arbitrarily. For example, the computation describing the RPE may also be included directly in the module for the calculation of the action-values. However, we believe that this does not undermine the possibility that this model is hierarchical, because the same issue arises in many mechanistic hierarchies. It is frequently possible to include a decomposition of some component directly at the higher level (e.g., the description of neurotransmitter release can be included directly in the explanation of the vestibulo-ocular reflex, rather than figuring at a lower level).
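The RPE and action-value computations mentioned here follow the standard Q-learning update (Watkins and Dayan 1992). The following minimal sketch, with illustrative state and action names, shows why the modular boundary is movable: the RPE can be computed by a separate function, as below, or inlined into the action-value update without changing the overall computation.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: compute the RPE, then use it to update the action-value.

    Q is a dict of dicts, Q[state][action] -> estimated action-value.
    """
    # Reward prediction error: delta = r + gamma * max_a' Q(s', a') - Q(s, a)
    rpe = r + gamma * max(Q[s_next].values()) - Q[s][a]
    Q[s][a] += alpha * rpe  # the action-value update consumes the RPE
    return rpe

# Illustrative two-state, two-action setting.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}
delta = q_update(Q, "s0", "right", r=1.0, s_next="s1")
# delta == 1.0 and Q["s0"]["right"] == 0.1
```

Whether `rpe` is a separate component or a line inside `q_update` leaves the input-output behavior untouched, which is exactly the arbitrariness of level-drawing the reviewer notes.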
Even if the reader disagrees about the specific hierarchy, the decomposition of reinforcement learning into modules that perform sub-computations should be enough to show that there is a computational model here, and we can ask how it is integrated with implementational details.
Some may argue that relations between computational components can already be considered causal relations. We discuss the possible outcomes of this position in Sect. 5.
If this is the case, some issues regarding this view should be resolved. Most importantly, how can functions remain medium-independent when it is necessary to specify the brain structure in which they occur (Haimovici 2013)?
The first view, according to which the computational properties are eliminated when the implementational details are in place, might be more fitting with the standard ("flat") view of realization. The second view might be more fitting with the dimensioned view of realization (Gillett 2002).
References
Bechtel, W. (2009). Looking down, around, and up: Mechanistic explanation in psychology. Philosophical Psychology, 22, 543–564. https://doi.org/10.1080/09515080903238948.
Bechtel, W., & Shagrir, O. (2015). The non-redundant contributions of Marr’s three levels of analysis for explaining information-processing mechanisms. Topics in Cognitive Science, 7, 312–322. https://doi.org/10.1111/tops.12141.
Behrens, T. E. J., Woolrich, M. W., Walton, M. E., & Rushworth, M. F. S. (2007). Learning the value of information in an uncertain world. Nature Neuroscience, 10, 1214–1221. https://doi.org/10.1038/nn1954.
Boone, W., & Piccinini, G. (2016). The cognitive neuroscience revolution. Synthese, 193, 1509–1534. https://doi.org/10.1007/s11229-015-0783-4.
Botvinick, M. M. (2012). Hierarchical reinforcement learning and decision making. Current Opinion in Neurobiology, 22, 956–962. https://doi.org/10.1016/j.conb.2012.05.008.
Botvinick, M. M., Niv, Y., & Barto, A. (2009). Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition, 113, 262–280. https://doi.org/10.1016/j.cognition.2008.08.011.
Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191, 127–153. https://doi.org/10.1007/s11229-013-0369-y.
Chirimuuta, M. (2018). Explanation in computational neuroscience: Causal and non-causal. The British Journal for the Philosophy of Science, 69, 849–880. https://doi.org/10.1093/bjps/axw034.
Coelho Mollo, D. (2018). Functional individuation, mechanistic implementation: The proper way of seeing the mechanistic view of concrete computation. Synthese, 195, 3477–3497. https://doi.org/10.1007/s11229-017-1380-5.
Craver, C. F. (2016). The explanatory power of network models. Philosophy of Science, 83, 698–709. https://doi.org/10.1086/687856.
Craver, C. F., & Povich, M. (2017). The directionality of distinctively mathematical explanations. Studies in History and Philosophy of Science, 63, 31–38. https://doi.org/10.1016/j.shpsa.2017.04.005.
Cummins, R. (1983). The nature of psychological explanation. Cambridge: MIT Press.
Cummins, R. (2000). “How does it work?” vs. “What are the laws?” Two conceptions of psychological explanation. In F. Keil & R. A. Wilson (Eds.), Explanation and cognition (pp. 117–145). Cambridge: MIT Press.
Dewhurst, J. (2018). Individuation without representation. The British Journal for the Philosophy of Science, 69, 103–116. https://doi.org/10.1093/bjps/axw018.
Doya, K. (2000). Complementary roles of basal ganglia and cerebellum in learning and motor control. Current Opinion in Neurobiology, 10, 732–739. https://doi.org/10.1016/S0959-4388(00)00153-7.
Doya, K. (2008). Modulators of decision making. Nature Neuroscience, 11, 410–416. https://doi.org/10.1038/nn2077.
Egan, F. (2017). Function-theoretic explanation and neural mechanisms. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science (pp. 145–163). Oxford: Oxford University Press.
Elber-Dorozko, L., & Loewenstein, Y. (2018). Striatal action-value neurons reconsidered. eLife, 7, e34248. https://doi.org/10.7554/eLife.34248.
Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. New York: Random House.
Fodor, J. A. (1975). The language of thought. Cambridge: Harvard University Press.
Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63–73. https://doi.org/10.1017/S0140525X00001771.
Fodor, J. A. (1994). The elm and the expert. Cambridge: MIT Press.
Gillett, C. (2002). The dimensions of realization: A critique of the standard view. Analysis, 62, 316–323. https://doi.org/10.1093/analys/62.4.316.
Gillett, C. (2016). Reduction and emergence in science and philosophy. Cambridge: Cambridge University Press.
Haimovici, S. (2013). A problem for the mechanistic account of computation. Journal of Cognitive Science, 14, 151–181. https://doi.org/10.17791/jcs.2013.14.2.151.
Harbecke, J. (in review). The methodological role of mechanistic-computational models in cognitive science.
Haugeland, J. (1981). Semantic engines: An introduction to mind design. In J. Haugeland (Ed.), Mind design, philosophy, psychology, artificial intelligence. Cambridge: MIT Press.
Hollerman, J. R., & Schultz, W. (1998). Dopamine neurons report an error in the temporal prediction of reward during learning. Nature Neuroscience, 1, 304–309. https://doi.org/10.1038/1124.
Hoshi, E., Tremblay, L., Féger, J., Carras, P. L., & Strick, P. L. (2005). The cerebellum communicates with the basal ganglia. Nature Neuroscience, 8, 1491–1493. https://doi.org/10.1038/nn1544.
Huneman, P. (2010). Topological explanations and robustness in biological sciences. Synthese, 177, 213–245. https://doi.org/10.1007/s11229-010-9842-z.
Ito, M., & Doya, K. (2009). Validation of decision-making models and analysis of decision variables in the rat basal ganglia. The Journal of Neuroscience, 29, 9861–9874. https://doi.org/10.1523/JNEUROSCI.6157-08.2009.
Ito, M., & Doya, K. (2011). Multiple representations and algorithms for reinforcement learning in the cortico-basal ganglia circuit. Current Opinion in Neurobiology, 21, 368–373. https://doi.org/10.1016/j.conb.2011.04.001.
Kable, J. W., & Glimcher, P. W. (2009). The neurobiology of decision: Consensus and controversy. Neuron, 63, 733–745. https://doi.org/10.1016/j.neuron.2009.09.003.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (2013). Principles of neural science (5th ed.). New York: McGraw-Hill.
Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183, 339–373. https://doi.org/10.1007/s11229-011-9970-0.
Kaplan, D. M. (2017). Neural computation, multiple realizability, and the prospects for mechanistic explanation. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science (pp. 164–189). Oxford: Oxford University Press.
Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–627. https://doi.org/10.1086/661755.
Kim, J. (1998). Mind in a physical world. Cambridge: MIT Press.
Lange, M. (2013). What makes a scientific explanation distinctively mathematical? The British Journal for the Philosophy of Science, 64, 485–511. https://doi.org/10.1093/bjps/axs012.
Lee, E., Seo, M., Monte, O. D., & Averbeck, B. B. (2015). Injection of a dopamine type 2 receptor antagonist into the dorsal striatum disrupts choices driven by previous outcomes, but not perceptual inference. The Journal of Neuroscience, 35, 6298–6306. https://doi.org/10.1523/JNEUROSCI.4561-14.2015.
Li, J., & Daw, N. D. (2011). Signals in human striatum are appropriate for policy update rather than value prediction. Journal of Neuroscience, 31, 5504–5511. https://doi.org/10.1523/JNEUROSCI.6316-10.2011.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman.
Miłkowski, M. (2013). Explaining the computational mind. Cambridge: MIT Press.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518, 529–533. https://doi.org/10.1038/nature14236.
Mongillo, G., Shteingart, H., & Loewenstein, Y. (2014). The misbehavior of reinforcement learning. Proceedings of the IEEE, 102, 528–541. https://doi.org/10.1109/JPROC.2014.2307022.
O’Doherty, J. P., Dayan, P., Schultz, J., Deichmann, R., Friston, K., & Dolan, R. J. (2004). Dissociable role of ventral and dorsal striatum in instrumental conditioning. Science, 304, 452–454. https://doi.org/10.1126/science.1094285.
Piccinini, G. (2015). Physical computation: A mechanistic account. Oxford: Oxford University Press.
Piccinini, G., & Bahar, S. (2013). Neural computation and the computational theory of cognition. Cognitive Science, 37, 453–488. https://doi.org/10.1111/cogs.12012.
Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311. https://doi.org/10.1007/s11229-011-9898-4.
Rathkopf, C. (2015). Network representation and complex systems. Synthese, 195, 55–78. https://doi.org/10.1007/s11229-015-0726-0.
Rusanen, A., & Lappi, O. (2016). On computational explanations. Synthese, 193, 3931–3949. https://doi.org/10.1007/s11229-016-1101-5.
Samejima, K., Ueda, Y., Doya, K., & Kimura, M. (2005). Representation of action-specific reward values in the striatum. Science, 310, 1337–1340. https://doi.org/10.1126/science.1115270.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599. https://doi.org/10.1126/science.275.5306.1593.
Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153, 393–416. https://doi.org/10.1007/s11229-006-9099-8.
Shagrir, O. (2016). Advertisement for the philosophy of the computational sciences. In P. Humphreys (Ed.), The Oxford handbook of philosophy of science (pp. 15–42). Oxford: Oxford University Press.
Shagrir, O., & Bechtel, W. (2017). Marr’s computational level and delineating phenomena. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science (pp. 190–214). Oxford: Oxford University Press.
Shapiro, L. A. (2017). Mechanism or bust? Explanation in psychology. The British Journal for the Philosophy of Science, 68, 1037–1059. https://doi.org/10.1093/bjps/axv062.
Shoemaker, S. (2001). Realization and mental causation. In C. Gillett & B. Loewer (Eds.), Physicalism and its discontents. Cambridge: Cambridge University Press.
Shteingart, H., & Loewenstein, Y. (2014). Reinforcement learning and human behavior. Current Opinion in Neurobiology, 25, 93–98. https://doi.org/10.1016/j.conb.2013.12.004.
Shteingart, H., Neiman, T., & Loewenstein, Y. (2013). The role of first impression in operant learning. Journal of Experimental Psychology: General, 142, 476–488. https://doi.org/10.1037/a0029550.
Sprevak, M. (2010). Computation, individuation, and the received view on representation. Studies in History and Philosophy of Science Part A, 41, 260–270. https://doi.org/10.1016/j.shpsa.2010.07.008.
Stich, S. (1983). From folk psychology to cognitive science: The case against belief. Cambridge: MIT Press.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.
Tai, L. H., Lee, A. M., Benavidez, N., Bonci, A., & Wilbrecht, L. (2012). Transient stimulation of distinct subpopulations of striatal neurons mimics changes in action value. Nature Neuroscience, 15, 1281–1289. https://doi.org/10.1038/nn.3188.
Wang, A. Y., Miura, K., & Uchida, N. (2013). The dorsomedial striatum encodes net expected return, critical for energizing performance vigor. Nature Neuroscience, 16, 639–647. https://doi.org/10.1038/nn.3377.
Watkins, C. J. C. H., & Dayan, P. (1992). Q-Learning. Machine Learning, 8, 279–292. https://doi.org/10.1007/BF00992698.
Weiskopf, D. A. (2011). Models and mechanisms in psychological explanation. Synthese, 183, 313–338. https://doi.org/10.1007/s11229-011-9958-9.
Acknowledgements
We thank Matteo Colombo, Nir Fresco, Arnon Levy, Corey J. Maley, Marcin Miłkowski, Gualtiero Piccinini, Mark Sprevak, the referees from Synthese journal, and the project members of the GIF project “Causation and computation in cognitive neuroscience” (Ori Hacohen, Jens Harbecke, Shahar Hechtlinger, Vera Hoffmann-Kolss, Jan Philipp Köster, and Carlos Zednik), as well as the participants in the IACAP2017 conference, the EPSA17 symposium on ‘The Computational Mind’, and the participants of The Third Jerusalem-MCMP Workshop in the Philosophy of Science, for helpful comments that greatly improved this manuscript. This paper was also presented at colloquium seminars at Tel-Hai College and Ben-Gurion University. We also thank Zehava Cohen for creating the original figures in this paper. This research was supported by a grant from the GIF, the German-Israeli Foundation for Scientific Research and Development. Lotem Elber-Dorozko is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship.
Elber-Dorozko, L., Shagrir, O. Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences. Synthese 199 (Suppl 1), 43–66 (2021). https://doi.org/10.1007/s11229-019-02230-9