Andy Clark and David Chalmers claim that cognitive processes can and do extend outside the head. Call this the “hypothesis of extended cognition” (HEC). HEC has been strongly criticised by Fred Adams, Ken Aizawa and Robert Rupert. In this paper I argue for two claims. First, HEC is a harder target than Rupert, Adams and Aizawa have supposed. A widely held view about the nature of the mind, functionalism—a view to which Rupert, Adams and Aizawa appear to subscribe—entails HEC. Either HEC is true, or functionalism is false. The relationship between functionalism and HEC goes beyond support for the relatively uncontroversial claim that it is logically or nomologically possible for cognition to extend (the “can” part of HEC); functionalism entails that cognitive processes do extend in the actual world. Second, I argue that the version of HEC entailed by functionalism is more radical than the version that Clark and Chalmers suggest. I argue that it is so radical as to form a counterexample to functionalism. If functionalism is modified to prevent these consequences, then HEC falls victim to Rupert, Adams and Aizawa’s original criticism. An advocate of HEC has two choices: (1) accept functionalism and radical HEC; (2) give up HEC entirely. Clark and Chalmers’ intermediate position of a modest form of HEC is unsustainable. The argument of this paper, although initially appearing to support Clark and Chalmers, ultimately counts against their position. The price of HEC is rampant expansion of the mind into the world, and the implausibility of such expansion is indicative of deep-seated problems with functionalism. The argument of this paper consequently speaks to wider issues than just the status of HEC.
The ‘received view’ about computation is that all computations must involve representational content. Egan and Piccinini argue against the received view. In this paper, I focus on Egan’s arguments, claiming that they fall short of establishing that computations do not involve representational content. I provide positive arguments explaining why computation has to involve representational content, and how that representational content may be of any type. I also argue that there is no need for computational psychology to be individualistic. Finally, I draw out a number of consequences for computational individuation, proposing necessary conditions on computational identity and necessary and sufficient conditions on computational I/O equivalence of physical systems. Keywords: Computation; Representation; Computational identity; Explanation; Narrow content; Physical computation.
This paper explores a novel form of Mental Fictionalism: Fictionalism about talk of neural representations in cognitive science. This type of Fictionalism promises to (i) avoid the hard problem of naturalising representations, without (ii) incurring the high costs of eliminating useful representation talk. In this paper, I motivate and articulate this form of Fictionalism, and show that, despite its apparent advantages, it faces two serious objections. These objections are: (1) Fictionalism about talk of neural representations ultimately does not avoid the problem of naturalising representations; (2) Fictional representations cannot play the explanatory role required by cognitive science.
What is the relationship between information and representation? Dating back at least to Dretske (1981), an influential answer has been that information is a rung on a ladder that gets one to representation. Representation is information, or representation is information plus some other ingredient. In this paper, I argue that this approach oversimplifies the relationship between information and representation. If one takes current probabilistic models of cognition seriously, information is connected to representation in a new way. It enters as a property of the represented content as well as a property of the vehicles that carry that content. This offers a new, conceptually and logically distinct way in which information and representation are intertwined in cognition.
This paper examines the justification for the hypothesis of extended cognition. HEC claims that human cognitive processes can, and often do, extend outside our head to include objects in the environment. HEC has been justified by inference to the best explanation. Both advocates and critics of HEC claim that we can infer the truth value of HEC based on whether HEC makes a positive or negative explanatory contribution to cognitive science. I argue that IBE cannot play this epistemic role. A serious rival to HEC exists with a differing truth value, and this invalidates IBEs for both the truth and the falsity of HEC. Explanatory value to cognitive science is not a guide to the truth value of HEC. Keywords: Extended mind; Extended cognition; Embedded cognition; Externalism; Inference to the best explanation; Functionalism.
Computational approaches dominate contemporary cognitive science, promising a unified, scientific explanation of how the mind works. However, computational approaches raise major philosophical and scientific questions. In what sense is the mind computational? How do computational approaches explain perception, learning, and decision making? What kinds of challenges should computational approaches overcome to advance our understanding of mind, brain, and behaviour? The Routledge Handbook of the Computational Mind is an outstanding overview and exploration of these issues and the first philosophical collection of its kind. Comprising thirty-five chapters by an international team of contributors from different disciplines, the Handbook is organised into four parts: (1) history and future prospects of computational approaches; (2) types of computational approach; (3) foundations and challenges of computational approaches; (4) applications to specific parts of psychology. Essential reading for students and researchers in philosophy of mind, philosophy of psychology, and philosophy of science, The Routledge Handbook of the Computational Mind will also be of interest to those studying computational models in related subjects such as psychology, neuroscience, and computer science.
William Ramsey’s Representation Reconsidered is a superb, insightful analysis of the notion of mental representation in cognitive science. The book presents an original argument for a bold conclusion: partial eliminativism about mental representation in scientific psychology. According to Ramsey, once we examine the conditions that need to be satisfied for something to qualify as a representation, we can see those conditions are not fulfilled by the ‘representations’ posited by much of modern psychology. Cognitive science—or at least large swathes of it—has no warrant for positing representations. The structure of Ramsey’s argument repeats a familiar eliminativist strategy (cf. Churchland (1981); Stich (1983)). First step: argue that in order for something to be an X, it must satisfy a certain description D (say, beliefs must satisfy the description given in folk psychology). Second step: argue that to the best of our knowledge, nothing satisfies description D (e.g. folk psychology is false). Third step: conclude that since nothing satisfies description D, there are no Xs (no beliefs). Here is how the strategy is played out in the book. First, Ramsey argues for certain minimal conditions that a representation must satisfy (what he calls the ‘job description’). Second (this takes the bulk of the book), he considers the ways in which our best psychological theories use the notion of representation. Ramsey argues that none of these uses satisfy the job description associated with a genuine representation. (A wrinkle is that some representations—those posited by the classical computational theory of cognition—do qualify as true representations. But, Ramsey claims, classical theories are in a minority in cognitive science, and their hold on the field is shrinking.) Therefore, Ramsey concludes, in most of cognitive science, there are no mental representations.
I argue in this article that there is a mistake in Searle's Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church-Turing thesis. Searle assumes that the Church-Turing thesis licenses the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A number of possible objections are considered and rejected. My conclusion is that it is consistent with Searle's argument to hold onto the claim that understanding consists in the running of a program.
* Reinvigorates our understanding of Victorian and modernist works and society
* Offers a wide-ranging application of theories of distributed cognition to Victorian culture and Modernism
* Explores the distinctive nature and expression of notions of distributed cognition in Victorian culture and Modernism and considers their relation to current notions
* Reinvigorates our understanding of Western European works – including Wordsworth, T. S. Eliot and Virginia Woolf – and society by bringing to bear recent insights on the distributed nature of cognition
* Includes essays on literature, history, technology, science, philosophy and art by international specialists in Victorian culture and Modernist literature, including Andrew Michael Roberts, Jennifer Gosetti-Ferencei and Melba Cuddy-Keane

This book brings together 11 essays by international specialists in Victorian culture and modernism and provides a general and period-specific introduction to distributed cognition and the cognitive humanities. Together, they revitalise our reading of Victorian and modernist works in the fields of history of technology, science and medicine, material culture, philosophy, art and literary studies by bringing to bear recent insights in cognitive science and philosophy of mind on the ways in which cognition is distributed across brain, body and world.
This volume celebrates the various facets of Alan Turing (1912–1954), the British mathematician and computing pioneer, widely considered the father of computer science. It is aimed at the general reader, with additional notes and references for those who wish to explore the life and work of Turing more deeply.

The book is divided into eight parts, covering different aspects of Turing’s life and work.

Part I presents various biographical aspects of Turing, some from a personal point of view.

Part II presents Turing’s universal machine (now known as a Turing machine), which provides a theoretical framework for reasoning about computation. His 1936 paper on this subject is widely seen as providing the starting point for the field of theoretical computer science.

Part III presents Turing’s work on codebreaking during World War II. While the War was a disastrous interlude for many, for Turing it provided a nationally important outlet for his creative genius. It is not an overstatement to say that without Turing, the War would probably have lasted longer, and may even have been lost by the Allies. The sensitive nature of Turing’s wartime work meant that much of it has been revealed only relatively recently.

Part IV presents Turing’s post-War work on computing, both at the National Physical Laboratory and at the University of Manchester. He made contributions to both hardware design, through the ACE computer at the NPL, and software, especially at Manchester.

Part V covers Turing’s contribution to machine intelligence (now known as Artificial Intelligence or AI). Although Turing did not coin the term, he can be considered a founder of this field, which is still active today, having authored a seminal paper in 1950.

Part VI covers morphogenesis, Turing’s last major scientific contribution, on the generation of seemingly random patterns in biology and on the mathematics behind such patterns. Interest in this area has increased rapidly in recent times in the field of bioinformatics, with Turing’s 1952 paper on this subject being frequently cited.

Part VII presents some of Turing’s mathematical influences and achievements. Turing was remarkably free of external influences, with few co-authors – Max Newman was an exception and acted as a mathematical mentor in both Cambridge and Manchester.

Part VIII considers Turing in a wider context, including his influence and legacy to science and in the public consciousness.

Reflecting Turing’s wide influence, the book includes contributions by authors from a wide variety of backgrounds. Contemporaries provide reminiscences, while there are perspectives by philosophers, mathematicians, computer scientists, historians of science, and museum curators. Some of the contributors gave presentations at Turing Centenary meetings in 2012 at Bletchley Park, King’s College Cambridge, and Oxford University, and several of the chapters in this volume are based on those presentations – some through transcription of the original talks, especially for Turing’s contemporaries, now aged in their 90s. Sadly, some contributors died before the publication of this book, hence its dedication to them.

For those interested in personal recollections, Chapters 2, 3, 11, 12, 16, 17, and 36 will be of interest. For philosophical aspects of Turing’s work, see Chapters 6, 7, 26–31, and 41. Mathematical perspectives can be found in Chapters 35 and 37–39. Historical perspectives can be found in Chapters 4, 8, 9, 10, 13–15, 18, 19, 21–25, 34, and 40. With respect to Turing’s body of work, the treatment in Parts II–VI is broadly chronological. We have attempted to be comprehensive with respect to all the important aspects of Turing’s achievements, and the book can be read cover to cover, or the chapters can be tackled individually if desired. There are cross-references between chapters where appropriate, and some chapters will inevitably overlap.

We hope that you enjoy this volume as part of your library and that you will dip into it whenever you wish to enter the multifaceted world of Alan Turing.
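The universal machine described in Part II can be made concrete with a minimal simulator. The sketch below is a hypothetical illustration (not taken from the book): a machine is just a tape, a head, a state, and a finite transition table; the example table flips every bit on its tape and halts at the first blank.

```python
# Minimal Turing machine simulator: a tape of symbols, a read/write head,
# a current state, and a transition table mapping (state, symbol) to
# (symbol to write, head move, next state). This sketch only handles
# rightward tape growth; moves left of cell 0 are not guarded against.

def run_turing_machine(program, tape, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):          # extend the tape with a blank on demand
            tape.append(blank)
        symbol = tape[head]
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# The 'program' is the transition table: this one inverts bits.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "0110"))  # -> 1001
```

The point of the 1936 construction is that the transition table itself can be written onto the tape of a single fixed machine, which then simulates any other machine: one device, many programs.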
The choice between realism and instrumentalism is at the core of concerns about how our scientific models relate to reality: Do our models aim to be literally true descriptions of reality, or is their role only as useful instruments for generating predictions? Realism about X, roughly speaking, is the claim that X exists and has its nature independent of our interests, attitudes, and beliefs. An instrumentalist about X denies this. She claims that talk of X should be understood as no more than a useful locution for generating predictions; such talk should not be understood as taking on a commitment to the existence of X. According to an instrumentalist, we should either flatly not believe that X is out there, or else suspend judgement about the existence of X. The most we need acknowledge is that talk of X is useful in making predictions. The question of realism vs. instrumentalism can be asked about almost any theoretical entity in science. It is likely, and seems reasonable, that different answers will be given in different cases. Someone may wish to be a realist about certain theoretical entities (e.g. electrons), but an instrumentalist about others (e.g. centres of gravity). Not every noun-phrase in a scientific theory should be taken as expressing an ontological commitment. Psychological theories are no exception. Almost every theoretical posit in psychology has been questioned as to whether it is really out there or just a useful theoretical fiction. In this entry, I will focus on two major theoretical posits in psychology: (a) propositional attitudes (e.g. beliefs, desires) and (b) conscious states (qualia).
Nowadays, it has become almost a matter of course to say that the human mind is like a computer. Folks in all walks of life talk of ‘programming’ themselves, ‘multitasking’, running different ‘operating systems’, and sometimes of ‘crashing’ and being ‘rebooted’. Few who have used computers have not been touched by the appeal of the…
The general introduction, which is replicated across all four volumes, aims to orientate readers unfamiliar with this area of research. It provides an overview of the different approaches within distributed cognition and discussion of the value of a distributed cognitive approach to the humanities.
In contrast to many areas of contemporary philosophy, something like a carnival atmosphere surrounds Searle’s Chinese room argument. Not many recent philosophical arguments have exerted such a pull on the popular imagination, or have produced such strong reactions. People from a wide range of fields have expressed their views on the argument. The argument has appeared in Scientific American, television shows, newspapers, and popular science books. Preston and Bishop’s recent volume of essays reflects this interdisciplinary atmosphere. The volume includes essays from computer science, neuroscience, artificial intelligence, cognitive science, sociology, science studies, physics, mathematics, and philosophy. There are two sides to this interdisciplinary mix. On the one hand, it makes for interesting and fun reading for anyone interested in the Chinese room argument, but on the other, it raises the threat that the Chinese room argument might be left in some kind of interdisciplinary no man’s land. The Chinese room argument (CRA) is an argument against the possibility of Strong artificial intelligence (Strong AI). The thesis of Strong AI is that running a program is sufficient for, or constitutive of, understanding: it is merely in virtue of running a particular program that a system understands. Searle appreciates that understanding is a complex notion, and so he has a particular form of understanding in mind: the understanding of simple stories. It seems intuitively obvious that when I read a simple story in English, I understand that story. One could say that somewhere in my head there is understanding going on. However, if I read a simple story written in Chinese (a language I do not speak), then there is no understanding going on. What makes the difference between these two cases? The advocate of Strong AI says that the difference…
An effective method is a computational method that might, in principle, be executed by a human. In this paper, I argue that there are methods for computing that are not effective methods. The examples I consider are taken primarily from quantum computing, but these are only meant to be illustrative of a much wider class. Quantum interference and quantum parallelism involve steps that might be implemented in multiple physical systems, but cannot be implemented, or at least not at will, by an idealised human. Recognising that not all computational methods are effective methods is important for at least two reasons. First, it is needed to correctly state the results of Turing and other founders of computation theory. Turing is sometimes said to have offered a replacement for the informal notion of an effective method with the formal notion of a Turing machine. I argue that such a view only holds under limited circumstances. Second, not distinguishing between computational methods and effective methods can lead to mistakes when quantifying over the class of all possible computational methods. Such quantification is common in philosophy of mind in the context of thought experiments that explore the limits of computational functionalism. I argue that these ‘homuncular’ thought experiments should not be treated as valid.
Philosophy of mind is one of the core disciplines in philosophy. The questions that it deals with are profound, vexed and intriguing. This volume of 15 new cutting-edge essays gives young researchers a chance to stir up new ideas. The essays cover a wide range of topics, including the nature of consciousness, cognition, and action. A common theme in the essays is that the future of philosophy of mind lies in judicious use of resources from related fields, including epistemology, metaphysics, philosophy of language, philosophy of science, and cognitive neuroscience. Approaches that the researchers explore in this volume range from the use of armchair conceptual analysis to brain scanning techniques.
Bruineberg et al. argue that the formal notion of a Markov blanket fails to provide a single principled boundary between an agent and its environment. I argue that one should not expect a general theory of agenthood to provide a single boundary, and that the reliance on auxiliary assumptions is neither arbitrary nor a reason to suspect instrumentalism.
In this paper we offer an exegesis of Hilary Putnam’s classic argument against the brain-in-a-vat hypothesis offered in his Reason, Truth and History (1981). In it, Putnam argues that we cannot be brains in a vat because the semantics of the situation make it incoherent for anyone to wonder whether they are a brain in a vat. Putnam’s argument is that in order for ‘I am a brain in a vat’ to be true, the person uttering it would have to be able to refer successfully to those things: the vat, and the envatted brain. Putnam thinks that reference can’t be secured without relevant kinds of causal relations, which, if envatted, the brain would lack, and so it fails to be able to meaningfully utter ‘I am a brain in a vat’. We consider the implications of Putnam’s arguments for the traditional sceptic. In conclusion, we discuss the role of Putnam’s arguments against the brain-in-a-vat hypothesis in his larger defense of his own internal realism against metaphysical realism.
The frame problem is a problem in artificial intelligence that a number of philosophers have claimed has philosophical relevance. The structure of this paper is as follows: (1) An account of the frame problem is given; (2) The frame problem is distinguished from related problems; (3) The main strategies for dealing with the frame problem are outlined; (4) A difference between commonsense reasoning and prediction using a scientific theory is argued for; (5) Some implications for the…
Kripke (1982, Wittgenstein on rules and private language. Cambridge, MA: MIT Press) presents a rule-following paradox in terms of what we meant by our past use of “plus”, but the same paradox can be applied to any other term in natural language. Many responses to the paradox concentrate on fixing determinate meaning for “plus”, or for a small class of other natural language terms. This raises a problem: how can these particular responses be generalised to the whole of natural language? In this paper, I propose a solution. I argue that if natural language is computable in a sense defined below, and the Church–Turing thesis is accepted, then this auxiliary problem can be solved.
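Kripke's paradox is standardly introduced via the deviant function "quus", which agrees with addition on every case a speaker has computed so far but diverges beyond a bound (Kripke uses 57). A toy sketch makes the point vivid: no finite record of past use settles which function was meant.

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    # Agrees with addition whenever both arguments are below 57,
    # but returns 5 otherwise (Kripke's own threshold).
    return x + y if x < 57 and y < 57 else 5

# Both functions agree on every 'past' case below the threshold,
# so a finite usage history cannot discriminate between them.
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))

print(quus(68, 57))  # -> 5
```

The sceptical problem is that any finite set of past applications is compatible with indefinitely many such deviant rules, each diverging somewhere beyond the cases actually computed.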
The Edinburgh History of Distributed Cognition (Series Editors: Miranda Anderson, Douglas Cairns)

Questions the barriers between the humanities and the cognitive sciences.

Cognitive science is finding increasing evidence that cognition is distributed across brain, body and world. This series calls for a reappraisal of historical concepts of cognition in light of these findings. It engages with recent debates about the various strong or weak models of distributed cognition and brings them into discourse with research in the humanities. Together, the books in this series give a wide-ranging examination of the parallels (and divergences) from these models in cultural, philosophical and scientific works, from antiquity to the mid-20th century.

Key Features:
* Opens up our reading of Western European works in the fields of history of ideas, history of science, material culture and literary studies by bringing recent insights in cognitive science to bear on the distributed nature of cognition
* Explores how sociocultural and environmental contexts lead to the manifestation of particular forms of cognitive paradigms or to their suppression
* Traces the interconnections between and particular divergences across Western European history among the various concepts of distributed cognition
12 essays by international specialists in classical antiquity create a period-specific interdisciplinary introduction to distributed cognition and the cognitive humanities:
- The first book in an ambitious 4-volume set looking at distributed cognition in the history of thought
- Includes essays on archaeology, art history, rhetoric, literature, philosophy, science, medicine and technology
- For students and scholars in classics, cognitive humanities, philosophy of mind and ancient philosophy
- Includes essays by international specialists in classics, ancient history and archaeology

This collection explores how cognition is explicitly or implicitly conceived of as distributed across brain, body and world in Greek and Roman technology, science, medicine, material culture, philosophy and literary studies. A range of models emerge, which vary in terms of whether cognition is just embodied or also involves tools or objects in the world. As many of the texts and practices discussed have influenced Western European society and culture, this collection reveals the historical foundations of our theoretical and practical attempts to comprehend the distributed nature of human cognition.
Predictive coding – sometimes also known as ‘predictive processing’, ‘free energy minimisation’, or ‘prediction error minimisation’ – claims to offer a complete, unified theory of cognition that stretches all the way from cellular biology to phenomenology. However, the exact content of the view, and how it might achieve its ambitions, is not clear. This series of articles examines predictive coding and attempts to identify its key commitments and justification. The present article begins by focusing on possible confounds with predictive coding: claims that are often identified with predictive coding, but which are not predictive coding. These include the idea that the brain employs an efficient scheme for encoding its incoming sensory signals; that perceptual experience is shaped by prior beliefs; that cognition involves minimisation of prediction error; that the brain is a probabilistic inference engine; and that the brain learns and employs a generative model of the world. These ideas have garnered widespread support in modern cognitive neuroscience, but it is important not to conflate them with predictive coding.
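One of the confound ideas listed above, minimisation of prediction error, is often illustrated with a simple delta-rule scheme: an internal estimate is repeatedly nudged by the gap between what it predicts and what is observed. The sketch below is a schematic toy, not a claim about the brain or about any particular predictive coding model; the learning rate and sample values are invented for illustration.

```python
# Toy prediction-error minimisation: a single scalar estimate is updated
# by a fraction of the prediction error (observation minus prediction).
def update_estimate(estimate, observation, learning_rate=0.1):
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
# Noisy 'sensory' samples scattered around a true value of 2.0
observations = [2.0, 1.8, 2.2, 2.0, 1.9, 2.1] * 20
for obs in observations:
    estimate = update_estimate(estimate, obs)

print(round(estimate, 1))  # settles near 2.0
```

Note how little this commits one to: the same update rule appears in classical conditioning models and in ordinary online averaging, which is exactly why, as the article argues, prediction-error minimisation alone should not be equated with predictive coding.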
This paper explores the claim that explanation of a group's behaviour in terms of individual mental states is, in principle, superior to explanation of that behaviour in terms of group mental states. We focus on the supposition that individual-level explanation is superior because it is simpler than group-level explanation. In this paper, we consider three different simplicity metrics. We argue that on none of those metrics does individual-level explanation achieve greater simplicity than a group-level alternative. We conclude that an argument against group minds should not lay weight on concerns of explanatory simplicity.
Kanaan and McGuire elegantly describe three challenges facing the use of fMRI to uncover cognitive mechanisms. They show how these challenges ramify in the case of identifying the mechanisms responsible for psychiatric disorders. In this commentary, I would like to raise another difficulty for fMRI that also appears to ramify in similar cases. This is that there are good reasons for doubting one of the assumptions on which many fMRI studies are based: that neural mechanisms are always and everywhere sufficient for cognition. I suggest that in the case of the mechanisms underlying psychiatric disorders, this assumption should be doubted. I do not dispute that a malfunctioning neural mechanism is likely to be a necessary component of a psychiatric disorder—as Kanaan and McGuire say, the experimental evidence from cognitive neuropsychiatry gives us excellent reasons to think that this is so. My question is whether a story only in terms of these neural mechanisms is sufficient to explain the mechanism of a psychiatric disorder. Is the reduction, projected by cognitive neuropsychiatry, of psychiatric disorders to disorders in neural functioning even in principle possible? Drawing on recent concerns about the location of mental states, I argue that such a reduction is likely to fail. Even if the considerable problems raised by Kanaan and McGuire for fMRI could be addressed, we have no reason to think that the mechanisms involved in psychiatric disorders are entirely neural, and that fMRI, or even a perfect science-fiction brain-scanner, would be capable of uncovering them. Psychiatric disorders, like numerous other cognitive processes, are liable to cross the brain–world boundary in such a promiscuous way as to be resistant to neural reduction.