We argue that dynamical and mathematical models in systems and cognitive neuroscience explain (rather than redescribe) a phenomenon only if there is a plausible mapping between elements in the model and elements in the mechanism for the phenomenon. We demonstrate how this model-to-mechanism-mapping constraint, when satisfied, endows a model with explanatory force with respect to the phenomenon to be explained. Several paradigmatic models, including the Haken-Kelso-Bunz model of bimanual coordination and the difference-of-Gaussians model of visual receptive fields, are explored.
The central aim of this paper is to shed light on the nature of explanation in computational neuroscience. I argue that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain. Conceiving computational explanation as a species of mechanistic explanation affords an important distinction between computational models that play genuine explanatory roles and those that merely provide accurate descriptions or predictions of phenomena. It also serves to clarify the pattern of model refinement and elaboration undertaken by computational neuroscientists.
Completeness is an important but misunderstood norm of explanation. It has recently been argued that mechanistic accounts of scientific explanation are committed to the thesis that models are complete only if they describe everything about a mechanism and, as a corollary, that incomplete models are always improved by adding more details. If so, mechanistic accounts are at odds with the obvious and important role of abstraction in scientific modelling. We respond to this characterization of the mechanist’s views about abstraction and articulate norms of completeness for mechanistic explanations that have no such unwanted implications.
1. Introduction
2. A Balancing Act: When Do Details Matter?
3. The Norms of Causal Explanation
4. The Norms of Constitutive Explanation
5. Salmon-Completeness
6. From More Details to More Relevant Details
7. Non-explanatory Virtues of Abstraction
8. From Explanatory Models to Explanatory Knowledge
9. Mechanistic Completeness Reconsidered
10. Conclusion
While agreeing that dynamical models play a major role in cognitive science, we reject Stepp, Chemero, and Turvey's contention that they constitute an alternative to mechanistic explanations. We review several problems dynamical models face as putative explanations when they are not grounded in mechanisms. Further, we argue that the opposition of dynamical models and mechanisms is a false one and that those dynamical models that characterize the operations of mechanisms overcome these problems. By briefly considering examples involving the generation of action potentials and circadian rhythms, we show how decomposing a mechanism and modeling its dynamics are complementary endeavors.
Advocates of extended cognition argue that the boundaries of cognition span brain, body, and environment. Critics maintain that cognitive processes are confined to a boundary centered on the individual. All participants to this debate require a criterion for distinguishing what is internal to cognition from what is external. Yet none of the available proposals are completely successful. I offer a new account, the mutual manipulability account, according to which cognitive boundaries are determined by relationships of mutual manipulability between the properties and activities of putative components and the overall behavior of the cognitive mechanism in which they figure. Among its main advantages, this criterion is capable of (a) distinguishing components of cognition from causal background conditions and lower-level correlates, and (b) showing how the core hypothesis of extended cognition can serve as a legitimate empirical hypothesis amenable to experimental test and confirmation. Conceiving the debate in these terms transforms the current clash over extended cognition into a substantive empirical debate resolvable on the basis of evidence from cognitive science and neuroscience.
Since its introduction, multivariate pattern analysis, or ‘neural decoding’, has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the decoder’s dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the dictum can be improved on, in order to better justify inferences about neural representation using MVPA.
1. Introduction
2. A Brief Primer on Neural Decoding: Methods, Application, and Interpretation
2.1 What is multivariate pattern analysis?
2.2 The informational benefits of multivariate pattern analysis
3. Why the Decoder’s Dictum Is False
3.1 We don’t know what information is decoded
3.2 The theoretical basis for the dictum
3.3 Undermining the theoretical basis
4. Objections and Replies
4.1 Does anyone really believe the dictum?
4.2 Good decoding is not enough
4.3 Predicting behaviour is not enough
5. Moving beyond the Dictum
6. Conclusion
This is a story about three of my favorite philosophers—Donnellan, Russell, and Frege—about how Donnellan’s concept of having in mind relates to ideas of the others, and especially about an aspect of Donnellan’s concept that has been insufficiently discussed: how this epistemic state can be transmitted from one person to another.
Recently, it has been provocatively claimed that dynamical modeling approaches signal the emergence of a new explanatory framework distinct from that of mechanistic explanation. This paper rejects this proposal and argues that dynamical explanations are fully compatible with, even naturally construed as, instances of mechanistic explanations. Specifically, it is argued that the mathematical framework of dynamics provides a powerful descriptive scheme for revealing temporal features of activities in mechanisms and plays an explanatory role to the extent it is deployed for this purpose. It is also suggested that more attention should be paid to the distinctive methodological contributions of the dynamical framework, including its usefulness as a heuristic for mechanism discovery and hypothesis generation in contemporary neuroscience and biology.
Part 1 sets out the logical/semantical background to ‘On Denoting’, including an exposition of Russell's views in Principles of Mathematics, the role and justification of Frege's notorious Axiom V, and speculation about how the search for a solution to the Contradiction might have motivated a new treatment of denoting. Part 2 consists primarily of an extended analysis of Russell's views on knowledge by acquaintance and knowledge by description, in which I try to show that the discomfiture between Russell's semantical and epistemological commitments begins as far back as 1903. I close with a non-Russellian critique of Russell's views on how we are able to make use of linguistic representations in thought and with the suggestion that a theory of comprehension is needed to supplement semantic theory.
Physicalism and antireductionism are the ruling orthodoxy in the philosophy of biology. But these two theses are difficult to reconcile. Merely embracing an epistemic antireductionism will not suffice, as both reductionists and antireductionists accept that given our cognitive interests and limitations, non-molecular explanations may not be improved, corrected or grounded in molecular ones. Moreover, antireductionists themselves view their claim as a metaphysical or ontological one about the existence of facts molecular biology cannot identify, express, or explain. However, this is tantamount to a rejection of physicalism and so causes the antireductionist discomfort. In this paper we argue that vindicating physicalism requires a physicalistic account of the principle of natural selection, and we provide such an account. The most important pay-off to the account is that it provides for the very sort of autonomy from the physical that antireductionists need without threatening their commitment to physicalism.
This paper praises and criticizes Peter-Paul Verbeek’s What Things Do (2006). The four things that Verbeek does well are: (1) remind us of the importance of technological things; (2) bring Karl Jaspers into the conversation on technology; (3) explain how technology “co-shapes” experience by reading Bruno Latour’s actor-network theory in light of Don Ihde’s post-phenomenology; (4) develop a material aesthetics of design. The three things that Verbeek does not do well are: (1) analyze the material conditions in which things are produced; (2) criticize the social-political design and use context of things; and (3) appreciate how liberal moral-political theory contributes to our evaluation of technology.
Is the relationship between psychology and neuroscience one of autonomy or mutual constraint and integration? This volume includes new papers from leading philosophers seeking to address this issue by deepening our understanding of the similarities and differences between the explanatory patterns employed across these domains.
Hempel and Oppenheim, in their paper 'The Logic of Explanation', have offered an analysis of the notion of scientific explanation. The present paper advances considerations in the light of which their analysis seems inadequate. In particular, several theorems are proved with roughly the following content: between almost any theory and almost any singular sentence, certain relations of explainability hold.
This book explores food from a philosophical perspective, bringing together sixteen leading philosophers to consider the most basic questions about food: What is it exactly? What should we eat? How do we know it is safe? How should food be distributed? What is good food? David M. Kaplan’s erudite and informative introduction grounds the discussion, showing how philosophers since Plato have taken up questions about food, diet, agriculture, and animals. However, until recently, few have considered food a standard subject for serious philosophical debate. Each of the essays in this book brings in-depth analysis to many contemporary debates in food studies—Slow Food, sustainability, food safety, and politics—and addresses such issues as “happy meat,” aquaculture, veganism, and table manners. The result is an extraordinary resource that guides readers to think more clearly and responsibly about what we consume and how we provide for ourselves, and illuminates the reasons why we act as we do.
In 'Hempel and Oppenheim on Explanation' (see preceding article), Eberle, Kaplan, and Montague criticize the analysis of explanation offered by Hempel and Oppenheim in their 'Studies in the Logic of Explanation'. These criticisms are shown to be related to the fact that Hempel and Oppenheim's analysis fails to satisfy simultaneously three newly proposed criteria of adequacy for any analysis of explanation. A new analysis is proposed which satisfies these criteria and thus is immune to the criticisms brought against the earlier analysis.
Policy makers from around the world are trying to emulate successful innovation systems in order to support economic growth. At the same time, innovation governance systems are being put in place to ensure a better integration of stakeholder views into the research and development process. In Europe, one of the most prominent and newly emerging governance frameworks is called Responsible Research and Innovation (RRI). This article aims to substantiate the following points: The concept of RRI and the concept of justice can be used to derive similar ethical positions on the nano-divide. Given the ambitious policy aims of RRI, the concept may be better suited to push for ethical outcomes on access to nanotechnology and its products than debates based on justice issues alone. It may thus serve as a mediator concept between those who push solely for competitiveness considerations and those who push solely for justice considerations in nanotechnology debates. The descriptive, non-normative Systems of Innovation approaches should be linked into RRI debates to provide more evidence on whether the approach advocated to achieve responsible and ethical governance of research and innovation can indeed deliver on competitiveness.
Readings in the Philosophy of Technology is a collection of the important works of both the forerunners of philosophy of technology and contemporary theorists, addressing a full range of topics on technology as it relates to ethics, politics, human nature, computers, science, and the environment.