Abstract
This paper addresses concerns raised recently by Datteri (Biol Philos 24:301–324, 2009) and Craver (Philos Sci 77(5):840–851, 2010) about the use of brain-extending prosthetics in experimental neuroscience. Since the operation of the implant induces plastic changes in neural circuits, it is reasonable to worry that operational knowledge of the hybrid system will not be an accurate basis for generalisation when modelling the unextended brain. I argue, however, that Datteri’s no-plasticity constraint unwittingly rules out numerous experimental paradigms in behavioural and systems neuroscience which also elicit neural plasticity. Furthermore, I propose that Datteri and Craver’s arguments concerning the limitations of prosthetic modelling in basic neuroscience, as opposed to neuroengineering, rest on too narrow a view of the ways models in neuroscience should be evaluated, and that a more pluralist approach is needed. I distinguish organisational validity of models from mechanistic validity. I argue that while prosthetic models may be deficient in the latter of these explanatory virtues because of neuroplasticity, they excel in the former, since organisational validity tracks the extent to which a model captures coding principles that are invariant with plasticity. Changing the brain, I conclude, is one viable route towards explaining the brain.
Notes
Illustrative videos are available as supplementary materials at the Nature website, http://www.nature.com/nature/journal/v453/n7198/suppinfo/nature06996.html.
This clarification is needed because there is a sense in which any skill learning extends the brain beyond its previous repertoire of functions—e.g. learning to type or play the violin. I do not assume that the brain plasticity and extension of function required for skill learning differ in kind from those observed following BCI use. It is just that the latter will not be observed in the absence of specific technological interventions because they rely on new kinds of brain-implant-body connections offered by the technology.
Scare quotes because I do not aim to reinforce the simplistic picture of the brain as sandwiched between sensory inputs and motor outputs (see Hurley 1998).
Thanks to an anonymous reviewer for raising this concern and for suggesting the electric plug metaphor.
To pursue the electric plug metaphor, imagine an electric motor built in the UK and designed to operate on a 240 V supply. Using a standard plug adaptor, the device is switched on in the USA and, because it now has only a 110 V supply, it doesn’t operate at full speed. But this device has an inherent capacity to modify internal components in response to the demands of the new electrical input, and in time it begins to run as it did in the UK. It behaves as if it has grown an internal step-up transformer. This is what the brain is like as it adapts to the BCI.
Note, however, that it need not be movement in an artificial body part that is generated, since many BCI experiments just require subjects to control the movement of a cursor on a computer monitor; and also, it has been shown that parts of the brain other than motor cortex can be co-opted for this purpose (Leuthardt et al. 2011).
It might be suggested that a prosthetic that used both accurate electrode placement and a more naturalistic decoding algorithm would have no need to rely on cortical plasticity. In a follow-up to this paper, I explain the practical and theoretical limitations on making decoding models maximally realistic in this way.
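To make concrete what such a decoding algorithm does, here is a minimal, hypothetical sketch of population-vector decoding under cosine tuning, the general scheme behind many of the motor-cortical BCI studies cited in this paper. The neuron count, tuning parameters, and function names are invented for illustration, and real decoders must be calibrated against noisy recorded spike rates rather than idealised ones.

```python
import numpy as np

# Hypothetical sketch only: population-vector decoding with cosine tuning.
# All parameter values below are invented for illustration.

n_neurons = 50
angles = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
# Each recorded unit is assigned a preferred movement direction (a 2D unit vector).
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

BASELINE, DEPTH = 10.0, 8.0  # spikes/s: baseline rate and modulation depth

def firing_rates(direction):
    """Cosine tuning: a unit fires fastest when movement matches its preference."""
    return BASELINE + DEPTH * preferred @ direction

def decode(rates):
    """Population vector: sum of preferred directions weighted by rate modulation."""
    pv = (rates - BASELINE) @ preferred
    return pv / np.linalg.norm(pv)

true_dir = np.array([np.cos(0.7), np.sin(0.7)])  # intended movement direction
decoded = decode(firing_rates(true_dir))         # direction read off the rates
```

In this idealised noise-free setting the decoded direction coincides with the intended one; the philosophical point at issue is what happens when the tuning model is less naturalistic and the cortex adapts to close the gap.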
One may also object to my claim that BCIs functionally extend the motor cortex by suggesting the alternative hypothesis that the co-opting of circuits for the new tasks is just normal reuse (see Anderson 2010; thanks to an anonymous reviewer for raising this suggestion). As it happens, there are grounds for thinking that some phenomena commonly attributed to plasticity may actually be instances of reuse—e.g. that M1 has been described by different labs as encoding abstract direction of movement or controlling muscle activity, depending on the experiments performed in those labs (Meel Velliste, personal communication). According to the simple plasticity account, one or both of these functions is not naturally performed by M1, and it must learn to do it; but it could be that M1 is able to perform and switch between both of these functions even under non-experimental conditions. Importantly, the reuse hypothesis does not predict that there will be structural changes in neural circuits called on to perform different functions. However, what is clear from the literature on BCIs, and on normal motor skill learning, is that such changes are also taking place, e.g. in the form of alteration of motor cortical neurons’ directional tuning preferences and domain of control (see Sanes and Donoghue 2000 for review), and so such effects are universally considered instances of plasticity. It is these phenomena that I focus on.
I will return to this issue in “Conclusions and questions” below.
“Second, as far as ArB2 is concerned, many studies show that bionic implantation is likely to produce long-term changes in the biological system. It has been widely demonstrated … that the implantation of a bionic interface and the connection with external devices typically produces plastic changes in parts of the biological system, such as long-term changes of neural connectivity. Other plastic changes affect the activity of neurons.” (313).
One could of course argue that Datteri ought to have treated the lamprey and the motor cortex cases differently because of the difference in degree of input–output matching, instead of lumping them together. In effect, that is to concede my point that Datteri’s framework is inappropriate for most BCI research. It does seem, however, that Datteri underestimates the prevalence of BCIs showing poor input–output matching, and the importance of plasticity for the working of most BCIs. He writes that in the case of M1 interfaces, ArB2 is likely to be contravened by undetected changes just because the initial state of the biological system is less well characterised and so “plastic changes may be hard to detect and predict due to the lack of adequate theoretical models” (315). But from what is known already about the way that such techniques extend brain function, there is no question of any researchers being unaware of the plastic changes they induce in motor cortex! Datteri neglects the importance of plasticity to the actual working of the BCI. To reiterate, functioning prosthetic implants are possible because the brain adapts to them.
Following p. 11 above, the sense of “identity” here is that of having anatomical components and physiological properties that are effectively indistinguishable for the scientists comparing the systems. A plastically modified system will not be identical, in this sense, to the original one.
For simplicity of exposition, and consistency with the rest of the paper, I focus on Craver’s example of the BCI for movement control, rather than the alternative case study of Berger’s prosthetic hippocampus. The conclusions he draws are not different for the two examples.
The dimensions of completeness and verification describe how exhaustively and faithfully the model or simulation reproduces features of the biological mechanism. As Craver writes “All models and simulations of mechanisms omit details to emphasize certain key features of a target mechanism over others. Models are useful in part because they commit such sins of omission” (842). I will return to this point in “Conclusions and questions” below, and in this section concentrate on the three kinds of validity.
Given that the topic is systems neuroscience, rather than cellular or molecular neuroscience which study sub-neuronal mechanisms, I understand the key “parts” here to be neurons, so that for a model of a brain circuit to be mechanistically valid it must be quite anatomically accurate, featuring the same number and type of neurons as in the actual mechanism.
One wonders if Craver is saying that if multiple realizability were to occur in a non-bionic experiment, this would cause the same epistemic problem. In fact, one cannot assume that mechanisms in systems neuroscience are not multiply realized across individuals and across the lifespan. No two brains are identical, and circuits controlling perceptions and actions are sculpted and personalized by genetics and experience. It seems that the problem of failing to achieve mechanistic and phenomenal validity generalizes to non-bionic systems neuroscience, on Craver’s analysis. This point is comparable to the one made above (“Neuroplasticity in non-bionic experiments” section) that Datteri’s no-plasticity constraint must apply to non-bionic experiments in systems neuroscience, if it is to apply to bionic ones. However, a more charitable reading of Craver takes up the point that the range of inputs and outputs used by nature is much narrower than that used by engineers (“The space of functional inputs and outputs is larger than the space of functional inputs and outputs that development and evolution have thus far had occasion to exploit.” p. 847). Basic neuroscience, in its quest for phenomenal validity, can be said to be targeting this subspace of the expanse of possible inputs and outputs. Likewise, systems neuroscientists could be said to be working towards a description of the small range of mechanisms employed by different people for a specific function.
One might object that this experiment works by intervening on a natural mechanism in the brain, not by modelling the hybrid mechanism as a route towards modelling the brain. I would disagree with this interpretation of the experiment. While the BCI is certainly a tool for intervening on the natural system, my central point is that findings from the hybrid system serve rather straightforwardly as the bases for hypotheses about the natural system. Scientists are modelling the hybrid system, but it turns out that coding in the hybrid system need not be characterised any differently from the natural one in spite of cortical reorganisation.
I will discuss this result in the next section.
Note that in Craver’s definition of mechanistic validity, the model’s representations of parts, activities, and organizational features must all be relevantly similar to the actual mechanism’s. The crucial point of this section was that validity with respect to organization can come apart from anatomical accuracy concerning parts (neurons), and so needs to be evaluated separately. For further discussion see “Conclusions and questions” section below.
Here is another example from (non-bionic) visual neuroscience: Freeman et al. (2011) present new fMRI data on orientation tuning of neurons in primary visual cortex, which they account for in terms of the retinotopic organisation of V1. They write that, “our results provide a mechanistic explanation” (p. 4804) of the pattern of findings. Again, what they describe is an organisational principle, rather than a detailed circuit model.
See Craver (2010: 842) quoted in note 16 above: incomplete models are primarily “useful”, and omissions are “sins” rather than explanatory virtues; cf. “How-possibly models are often heuristically useful in constructing and exploring the space of possible mechanisms, but they are not adequate explanations. How-actually models, in contrast, describe real components, activities, and organizational features of the mechanism that in fact produces the phenomenon. They show how a mechanism works, not merely how it might work” (2007: 112); and Datteri (2009: 308) “Underspecified models and mechanism sketches are progressively refined as model discovery proceeds, until a full-fledged mechanism model is worked out.”
This is obviously a very brief sketch of an alternative approach, which will be presented more fully in a follow-up to this paper.
References
Anderson ML (2010) Neural reuse: a fundamental organizational principle of the brain. Behav Brain Sci 33:245–313
Bach-y-Rita P (1972) Brain mechanisms in sensory substitution. Academic Press, London
Barlow HB (1972) Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1:371–394
Batterman RW (2002) Asymptotics and the role of minimal models. Br J Philos Sci 53(1):21–38
Bechtel W, Richardson RC (1993) Discovering complexity. Princeton University Press, Princeton
Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA (2003) Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol 1(2):193–208
Cartwright N (1983) How the laws of physics lie. Oxford University Press, Oxford
Chapin JK, Moxon KA, Markowitz RS, Nicolelis MA (1999) Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nat Neurosci 2(7):664–670
Chirimuuta M, Gold IJ (2009) The embedded neuron, the enactive field? In: Bickle J (ed) Handbook of philosophy and neuroscience. Oxford University Press, Oxford
Clark A (2004) Natural-born cyborgs: minds, technologies, and the future of human intelligence. Oxford University Press, Oxford
Clark A (2008) Supersizing the mind: embodiment, action, and cognitive extension. Oxford University Press, Oxford
Clark A, Chalmers DJ (1998) The extended mind. Analysis 58(1):7–19
Craver CF (2007) Explaining the brain. Oxford University Press, Oxford
Craver CF (2010) Prosthetic models. Philos Sci 77(5):840–851
Datteri E (2009) Simulation experiments in bionics: a regulative methodological perspective. Biol Philos 24:301–324
David SV, Vinje WE, Gallant JL (2004) Natural stimulus statistics alter the receptive field structure of V1 neurons. J Neurosci 24:6991–7006
de Weerd P, Pinaud R, Bertini G (2006) Plasticity in V1 induced by perceptual learning. In: Pinaud R, Tremere LA, de Weerd P (eds) Plasticity in the visual system: from genes to circuits. Springer, Berlin
Dretske F (1994) If you can’t make one, you don’t know how it works. Midwest Stud Philos 19(1):468–482
Farah MJ (1994) Neuropsychological inference with an interactive brain: a critique of the “locality” assumption. Behav Brain Sci 17(1):43–61
Freeman J, Brouwer GJ, Heeger DJ, Merriam EP (2011) Orientation decoding depends on maps, not columns. J Neurosci 31(13):4792–4804
Ganguly K, Carmena J (2009) Emergence of a stable cortical map for neuroprosthetic control. PLoS Biol 7(7):1–13
Harrison RV, Gordon KA, Mount RJ (2005) Is there a critical period for cochlear implantation in congenitally deaf children? Analyses of hearing and speech perception performance after implantation. Dev Psychobiol 46(3):252–261
Hochberg LR, Serruya MD, Friehs GM (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171
Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP (2012) Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485:372–377
Holmes DJ, Meese TS (2004) Grating and plaid masks indicate linear summation in a contrast gain pool. J Vision 4(12):7
Hurley S (1998) Consciousness in action. Harvard University Press, Cambridge
Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB (2008) Functional network reorganization during learning in a brain-computer interface paradigm. Proc Natl Acad Sci USA 105:19486–19491
Karniel A, Kositsky M, Fleming KM (2005) Computational analysis in vitro: dynamics and plasticity of a neuro-robotic system. J Neural Eng 2:S250–S265
Kenet T, Arieli A, Tsodyks M, Grinvald A (2006) Are single neurons soloists or are they obedient members of a huge orchestra? In: van Hemmen JL, Sejnowski TJ (eds) 23 problems in systems neuroscience. Oxford University Press, Oxford
Kourtzi Z (2010) Visual learning for perceptual and categorical decisions in the human brain. Vision Res 50(4):433–440
Koyama S, Chase SM, Whitford AS, Velliste M, Schwartz AB, Kass RE (2009) Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control. J Comput Neurosci 29:73–87
Legenstein R, Chase SM, Schwartz AB, Maass W (2010) A reward-modulated hebbian learning rule can explain experimentally observed network reorganization in a brain control task. J Neurosci 30(25):8400–8410
Legge GEF, Foley JM (1980) Contrast masking in human vision. J Opt Soc Am 70:1458–1470
Lenay C, Gapenne O, Hanneton S, Marque C, Genouelle C (2003) Sensory substitution: limits and perspectives. In: Hatwell Y, Streri A, Gentaz E (eds) Touching for knowing. John Benjamins Publishing Group, Amsterdam
Leuthardt EC, Gaona C, Sharma M, Szrama N, Roland J, Freudenberg Z, Solis J, Breshears J, Schalk G (2011) Using the electrocorticographic speech network to control a brain–computer interface in humans. J Neural Eng 8:1–11
Lutz D (2011) Epidural electrocorticography may finally allow enduring control of a prosthetic or paralyzed arm by thought alone. Retrieved 14 April 2012 from: http://news.wustl.edu/news/Pages/21876.aspx
Machery E (2011) Developmental disorders and cognitive architecture. In: Adriaens PR, Block AD (eds) Maladapting minds: philosophy, psychiatry, and evolutionary theory. Oxford University Press, Oxford
Mitchell SD (2002) Integrative pluralism. Biol Philos 17(1):55–70
Morrison MC (1998) Modelling nature: between physics and the physical world. Philosophia Naturalis 35:65–85
Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA (2004) Cognitive control signals for neural prosthetics. Science 305:258–262
Nicolelis M (2003) Brain-machine interfaces to restore motor function and probe neural circuits. Nat Rev Neurosci 4:417–422
Nicolelis M, Lebedev M (2009) Principles of neural ensemble physiology underlying the operation of brain–machine interfaces. Nat Rev Neurosci 10:530–540
Pinaud R, Tremere LA, de Weerd P (eds) (2006) Plasticity in the visual system from genes to circuits. Springer, Berlin
Ptito M, Moesgaard SM, Gjedde A, Kupers R (2005) Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128:606–614
Reger BD, Fleming KM, Sanguineti V (2000) Connecting brains to robots: an artificial body for studying the computational properties of neural tissues. Artif Life 6(4):307–324
Sagi D (2011) Perceptual learning in vision research. Vision Res 51:1552–1566
Sanes JN, Donoghue JP (2000) Plasticity and primary motor cortex. Ann Rev Neurosci 23:393–415
Schirber M (2005) Monkey’s brain runs robotic arm. Retrieved 25 June 2012, from http://www.biotele.com/Monkey.htm
Schwartz A (2007) Useful signals from motor cortex. J Physiol 579:581–601
Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP (2002) Instant neural control of a movement signal. Nature 416:141–142
Shaw CA, McEachern J (eds) (2001) Toward a theory of neuroplasticity. Psychology Press, Philadelphia
Taylor DM, Tillery SI, Schwartz AB (2002) Direct cortical control of 3D neuroprosthetic devices. Science 296:1829–1832
Thomas M, Karmiloff-Smith A (2002) Are developmental disorders like cases of adult brain damage? Implications from connectionist modelling. Behav Brain Sci 25(6):727–750
Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB (2008) Cortical control of a prosthetic arm for self-feeding. Nature 453:1098–1101
Wimsatt WC (1987) False models as means to truer theories. In: Nitecki M, Hoffman A (eds) Neutral models in biology. Oxford University Press, Oxford, pp 23–55
Zelenin PV, Deliagina TG, Grillner S (2000) Postural control in the lamprey: a study with a neuro-mechanical model. J Neurophysiol 84:2880–2887
Acknowledgements
I would like to thank Peter Machamer for numerous helpful comments on drafts of this paper, and the two anonymous referees for their thoughtful criticisms. Many thanks to Meel Velliste for answering countless queries. Thanks also to Carl Craver, Jim Bogen, and Sandra Mitchell for discussions and encouragement, and to the audience at the 2011 meeting of the Society for Philosophy of Science in Practice where this material was first presented.
Chirimuuta, M. Extending, changing, and explaining the brain. Biol Philos 28, 613–638 (2013). https://doi.org/10.1007/s10539-013-9366-2