We argue that dynamical and mathematical models in systems and cognitive neuroscience explain (rather than redescribe) a phenomenon only if there is a plausible mapping between elements in the model and elements in the mechanism for the phenomenon. We demonstrate how this model-to-mechanism-mapping constraint, when satisfied, endows a model with explanatory force with respect to the phenomenon to be explained. Several paradigmatic models, including the Haken-Kelso-Bunz model of bimanual coordination and the difference-of-Gaussians model of visual receptive fields, are explored.
The central aim of this paper is to shed light on the nature of explanation in computational neuroscience. I argue that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain. Conceiving computational explanation as a species of mechanistic explanation affords an important distinction between computational models that play genuine explanatory roles and those that merely provide accurate descriptions or predictions of phenomena. It also serves to clarify the pattern of model refinement and elaboration undertaken by computational neuroscientists.
Completeness is an important but misunderstood norm of explanation. It has recently been argued that mechanistic accounts of scientific explanation are committed to the thesis that models are complete only if they describe everything about a mechanism and, as a corollary, that incomplete models are always improved by adding more details. If so, mechanistic accounts are at odds with the obvious and important role of abstraction in scientific modelling. We respond to this characterization of the mechanist’s views about abstraction and articulate norms of completeness for mechanistic explanations that have no such unwanted implications.
While agreeing that dynamical models play a major role in cognitive science, we reject Stepp, Chemero, and Turvey's contention that they constitute an alternative to mechanistic explanations. We review several problems dynamical models face as putative explanations when they are not grounded in mechanisms. Further, we argue that the opposition of dynamical models and mechanisms is a false one and that those dynamical models that characterize the operations of mechanisms overcome these problems. By briefly considering examples involving the generation of action potentials and circadian rhythms, we show how decomposing a mechanism and modeling its dynamics are complementary endeavors.
Advocates of extended cognition argue that the boundaries of cognition span brain, body, and environment. Critics maintain that cognitive processes are confined to a boundary centered on the individual. All participants in this debate require a criterion for distinguishing what is internal to cognition from what is external. Yet none of the available proposals are completely successful. I offer a new account, the mutual manipulability account, according to which cognitive boundaries are determined by relationships of mutual manipulability between the properties and activities of putative components and the overall behavior of the cognitive mechanism in which they figure. Among its main advantages, this criterion is capable of (a) distinguishing components of cognition from causal background conditions and lower-level correlates, and (b) showing how the core hypothesis of extended cognition can serve as a legitimate empirical hypothesis amenable to experimental test and confirmation. Conceiving the debate in these terms transforms the current clash over extended cognition into a substantive empirical debate resolvable on the basis of evidence from cognitive science and neuroscience.
Since its introduction, multivariate pattern analysis, or ‘neural decoding’, has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the decoder’s dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the dictum can be improved on, in order to better justify inferences about neural representation using MVPA.

1 Introduction
2 A Brief Primer on Neural Decoding: Methods, Application, and Interpretation
2.1 What is multivariate pattern analysis?
2.2 The informational benefits of multivariate pattern analysis
3 Why the Decoder’s Dictum Is False
3.1 We don’t know what information is decoded
3.2 The theoretical basis for the dictum
3.3 Undermining the theoretical basis
4 Objections and Replies
4.1 Does anyone really believe the dictum?
4.2 Good decoding is not enough
4.3 Predicting behaviour is not enough
5 Moving beyond the Dictum
6 Conclusion
Recently, it has been provocatively claimed that dynamical modeling approaches signal the emergence of a new explanatory framework distinct from that of mechanistic explanation. This paper rejects this proposal and argues that dynamical explanations are fully compatible with, even naturally construed as, instances of mechanistic explanations. Specifically, it is argued that the mathematical framework of dynamics provides a powerful descriptive scheme for revealing temporal features of activities in mechanisms and plays an explanatory role to the extent that it is deployed for this purpose. It is also suggested that more attention should be paid to the distinctive methodological contributions of the dynamical framework, including its usefulness as a heuristic for mechanism discovery and hypothesis generation in contemporary neuroscience and biology.
Physicalism and antireductionism are the ruling orthodoxy in the philosophy of biology. But these two theses are difficult to reconcile. Merely embracing an epistemic antireductionism will not suffice, as both reductionists and antireductionists accept that given our cognitive interests and limitations, non-molecular explanations may not be improved, corrected or grounded in molecular ones. Moreover, antireductionists themselves view their claim as a metaphysical or ontological one about the existence of facts molecular biology cannot identify, express, or explain. However, this is tantamount to a rejection of physicalism and so causes the antireductionist discomfort. In this paper we argue that vindicating physicalism requires a physicalistic account of the principle of natural selection, and we provide such an account. The most important pay-off to the account is that it provides for the very sort of autonomy from the physical that antireductionists need without threatening their commitment to physicalism.
This paper praises and criticizes Peter-Paul Verbeek’s What Things Do (2006). The four things that Verbeek does well are: (1) remind us of the importance of technological things; (2) bring Karl Jaspers into the conversation on technology; (3) explain how technology “co-shapes” experience by reading Bruno Latour’s actor-network theory in light of Don Ihde’s post-phenomenology; (4) develop a material aesthetics of design. The three things that Verbeek does not do well are: (1) analyze the material conditions in which things are produced; (2) criticize the social-political design and use context of things; and (3) appreciate how liberal moral-political theory contributes to our evaluation of technology.
This book explores food from a philosophical perspective, bringing together sixteen leading philosophers to consider the most basic questions about food: What is it exactly? What should we eat? How do we know it is safe? How should food be distributed? What is good food? David M. Kaplan’s erudite and informative introduction grounds the discussion, showing how philosophers since Plato have taken up questions about food, diet, agriculture, and animals. However, until recently, few have considered food a standard subject for serious philosophical debate. Each of the essays in this book brings in-depth analysis to many contemporary debates in food studies—Slow Food, sustainability, food safety, and politics—and addresses such issues as “happy meat,” aquaculture, veganism, and table manners. The result is an extraordinary resource that guides readers to think more clearly and responsibly about what we consume and how we provide for ourselves, and illuminates the reasons why we act as we do.
Readings in the Philosophy of Technology is a collection of the important works of both the forerunners of philosophy of technology and contemporary theorists, addressing a full range of topics on technology as it relates to ethics, politics, human nature, computers, science, and the environment.
Jeffery et al. characterize the egocentric/allocentric distinction as discrete. But paradoxically, much of the neural and behavioral evidence they adduce undermines a discrete distinction. More strikingly, their positive proposal reflects a more complex interplay between egocentric and allocentric coding than they acknowledge. Properly interpreted, their proposal about three-dimensional spatial representation contributes to recent work on embodied cognition.
Richard Wolin questions the connection between the philosophy and politics of Paul Ricoeur to make three charges: 1) Ricoeur's version of hermeneutics slides into a relativism of incommensurable perspectives; 2) Ricoeur's "covert agenda" in his recent work, Memory, History, Forgetting, is to come to terms with the regrettable choices he made in his youth; 3) Ricoeur left us a written record of his pro-Vichy sympathies that raises questions about the political implications of hermeneutics. Each claim is, however, far from true. Ricoeur's hermeneutics is particularly sensitive to the charge of relativistic incommensurability and avoids it assiduously; his philosophical motivations in writing Memory, History, Forgetting are well known and are more important with respect to the work's merit than his personal motivations; and his early political writings need to be read in light of a broader, life-long attempt to find a balance between the universal and particular in hermeneutics, ethics, and politics.
This work traces the development of Paul Ricoeur's hermeneutic phenomenology since the late 1960s, and develops the critical element within Ricoeur's recent thought by examining his conceptions of ideology and utopia, and the relationship between hermeneutics and critical theory, in order to elaborate a critical and rationally justified interpretation of human action for the social sciences. Particular attention is paid to Ricoeur's works on metaphor, narrative, and ethics in the context of a critical theory of power, ideology, and history. Hermeneutics, if properly conceived, is, at the same time, a critical theory of society geared toward identifying ideological formations and utopian possibilities of liberation. The Habermas-Gadamer debate forms the backcloth for this study by functioning like a hidden dialogue partner that informs our reading of the development of Ricoeur's thought. I propose an extension of Ricoeur's conception of a critique of ideology in two directions, corresponding to a phenomenology of the will and a narrative theory of human action. The result of this extension, or interpolation, is to deepen and clarify a thought path begun by Ricoeur. The basis for a critique of ideological action is a conception of truth that incorporates a Husserlian notion of evidential experience and a Habermasian notion of truth as consensus. The normative basis for critique is Ricoeur's conception of discourse ethics, which incorporates an Aristotelian conception of the good life and a Kantian conception of autonomy and deontological moral norms. Ricoeur's model of interpretation and critique surpasses both Habermas and Gadamer by integrating the Habermasian validity basis of discourse within a broader, phenomenologically grounded conception of human experience and action that emphasizes the creative and imaginative uses of language for interpretation, critique, practical reason, and self-reflection.
Is the relationship between psychology and neuroscience one of autonomy or mutual constraint and integration? This volume includes new papers from leading philosophers seeking to address this issue by deepening our understanding of the similarities and differences between the explanatory patterns employed across these domains.