Panpsychism is an eminently sensible view of the world and its relation to mind. If God is a metaphysician, and regardless of the actual truth or falsity of panpsychism, it is certain that he regards the theory as an honest and elegant competitor on the field of ontologies. And if God didn’t create a panpsychist world, then there’s a fair chance that he wishes he had done so, or will do next time around. The difficulties panpsychism faces, then, are not metaphysical ones. They are, instead, difficulties of understanding, and of acceptance by philosophers. The main difficulty of this sort the theory faces is that its ontology – with consciousness in some sense at the heart of all that exists – is deemed too bizarre, frankly, too humano-centric to be taken seriously. Why should anyone think that consciousness, widely held to be the preserve only of ourselves, plus the most recently evolved organisms, infuses the basement level of all existence? Such a thought seems to many – especially, to scientifically scrupled philosophers of mind – a narcissistic (or at best hopelessly anti-realist) folly, which doesn’t even deserve its day in court.
In my book How the Mind Works, I defended the theory that the human mind is a naturally selected system of organs of computation. Jerry Fodor claims that 'the mind doesn't work that way' (in a book with that title) because (1) Turing Machines cannot duplicate humans' ability to perform abduction (inference to the best explanation); (2) though a massively modular system could succeed at abduction, such a system is implausible on other grounds; and (3) evolution adds nothing to our understanding of the mind. In this review I show that these arguments are flawed. First, my claim that the mind is a computational system is different from the claim Fodor attacks (that the mind has the architecture of a Turing Machine); therefore the practical limitations of Turing Machines are irrelevant. Second, Fodor identifies abduction with the cumulative accomplishments of the scientific community over millennia. This is very different from the accomplishments of human common sense, so the supposed gap between human cognition and computational models may be illusory. Third, my claim about biological specialization, as seen in organ systems, is distinct from Fodor's own notion of encapsulated modules, so the limitations of the latter are irrelevant. Fourth, Fodor's arguments dismissing the relevance of evolution to psychology are unsound.
The author presents an autobiographical story of serious peripheral motor nerve damage resulting from chemotoxicity induced as a side effect of Hodgkin’s Lymphoma treatment. The first-person, phenomenological account of the condition naturally leads to philosophical questions about consciousness, felt presence of oneself all over and within one’s body, and the felt constitutiveness of peripheral processes to one’s mental life. The first-person data only fit well with a philosophical approach to the mind that takes peripheral, bodily events and states at their face value, and not as a body-in-the-brain, which has been popular with most neuroscientists. Thus the philosophical tradition that comes closest to the idea of the peripheral mind is Maurice Merleau-Ponty’s bodily phenomenology.
Cognitive Science is in some sense the science of the mind. But an increasingly influential theme, in recent years, has been the role of the physical body, and of the local environment, in promoting adaptive success. No right-minded Cognitive Scientist, to be sure, ever claimed that body and world were completely irrelevant to the understanding of mind. But there was, nonetheless, an unmistakable tendency to marginalize such factors: to dwell on inner complexity whilst simplifying or ignoring the complex inner-outer interplays that characterize the bulk of basic biological problem-solving. This tendency was expressed in, for example, the development of planning algorithms that treated real-world action as merely a way of implementing solutions arrived at by pure cognition (more recent work, by contrast, allows such actions to play important computational and problem-solving roles). It was also expressed in David Marr’s depiction of the task of vision as the construction of a detailed three-dimensional image of the visual scene. For possession of such a rich inner model effectively allows the system to “throw away” the world and to focus current computational activity in the inner model alone.
Over the last two decades, debates over the viability of commonsense psychology have been center stage in both cognitive science and the philosophy of mind. Eliminativists have argued that advances in cognitive science and neuroscience will ultimately justify a rejection of our "folk" theory of the mind, and of its ontology. In the first half of this book Stich, who was at one time a leading advocate of eliminativism, maintains that even if the sciences develop in the ways that eliminativists foresee, none of the arguments for ontological elimination are tenable. Rather than being resolved by science, he contends, these ontological disputes will be settled by a pragmatic process in which social and political considerations have a major role to play. In later chapters, Stich argues that the widespread worry about "naturalizing" psychological properties is deeply confused, since there is no plausible account of what naturalizing requires on which the failure of the naturalization project would lead to eliminativism. He also offers a detailed analysis of the many different notions of folk psychology to be found in philosophy and psychology, and argues that simulation theory, which purports to be an alternative to folk psychology, is not supported by recent experimental findings.
I respond to an argument presented by Daniel Povinelli and Jennifer Vonk that the current generation of experiments on chimpanzee theory of mind cannot decide whether chimpanzees have the ability to reason about mental states. I argue that Povinelli and Vonk’s proposed experiment is subject to their own criticisms and that there should be a more radical shift away from experiments that ask subjects to predict behavior. Further, I argue that Povinelli and Vonk’s theoretical commitments should lead them to accept this new approach, and that experiments which offer subjects the opportunity to look for explanations for anomalous behavior should be explored.
Psychologists and philosophers have recently been exploring whether the mechanisms which underlie the acquisition of ‘theory of mind’ (ToM) are best characterized as cognitive modules or as developing theories. In this paper, we attempt to clarify what a modular account of ToM entails, and why it is an attractive type of explanation. Intuitions and arguments in this debate often turn on the role of _development_: traditional research on ToM focuses on various developmental sequences, whereas cognitive modules are thought to be static and ‘anti-developmental’. We suggest that this mistaken view relies on an overly limited notion of modularity, and we explore how ToM might be grounded in a cognitive module and yet still afford development. Modules must ‘come on-line’, and even fully developed modules may still develop _internally_, based on their constrained input. We make these points concrete by focusing on a recent proposal to capture the development of ToM in a module via _parameterization_.
The understanding of the interrelationship between brain and mind remains far from clear. It is well established that the brain's capacity to integrate information from numerous sources forms the basis for cognitive abilities. However, the core unresolved question is how information about the "objective" physical entities of the external world can be integrated, and how unified and coherent mental states (or Gestalts) can be established in the internal entities of distributed neuronal systems. The present paper offers a unified methodological and conceptual basis for a possible mechanism of how the transient synchronization of brain operations may construct the unified and relatively stable neural states, which underlie mental states. It was shown that the sequence of metastable spatial EEG mosaics does exist and probably reflects the rapid stabilization periods of the interrelation of large neuron systems. At the EEG level this is reflected in the stabilization of quasi-stationary segments on corresponding channels. Within the introduced framework, physical brain processes and psychological processes are considered as two basic aspects of a single whole informational brain state. The relations between operational processes of the brain, mental states and consciousness are discussed.
In this book, Mark Rowlands challenges the Cartesian view of the mind as a self-contained monadic entity, and offers in its place a radical externalist or environmentalist model of cognitive processes. Drawing on both evolutionary theory and a detailed examination of the processes involved in perception, memory, thought and language use, Rowlands argues that cognition is, in part, a process whereby creatures manipulate and exploit relevant objects in their environment. This innovative book provides a foundation for an unorthodox but increasingly popular view of the nature of cognition.
Primary Works

Ryle, Gilbert: The Concept of Mind, Penguin Books, 1978.
__________: Dilemmas, Cambridge: At the University Press, 1966.
__________: Collected Papers, Vols. I & II, London: Hutchinson; New York: Barnes and Noble, 1971.
__________: On Thinking, Edited by K. Kolenda, Oxford: Basil Blackwell Publishers, 1982.
__________: Aspects of Mind, Edited by René Meyer, Oxford: Blackwell, 1993.
I have posted my articles, published in different journals in India. This is an open resource to support our work.

GILBERT RYLE ON DESCARTES’ MYTH, Philosophical Mind Studies, Dec 13, 2010 (Published).
Ryle’s Dispositional Analysis of Mind and its Relevance, Philosophical Mind Studies, Dec 13, 2010 (Published).
The Official Doctrine and its Relevance Today, Philosophical Mind Studies, Dec 13, 2010 (Published).
The Concept of the Self in David Hume and the Buddha, Philosophical Mind Studies, Dec 13, 2010 (Published).
Human Beings Have No Identical Self, Philosophical Mind Studies, Dec 13, 2010 (Published).
Taking into account the difficulties that all attempts at a solution of the problem of causal-explanatory exclusion have experienced, we analyze in this paper the chances that mind-body causation is a case of overdetermination, a line of attack that has scarcely been explored. Our conclusion is that claiming that behaviors are causally overdetermined cannot solve the problem of causal-explanatory exclusion. The reason is the problem of massive coincidence, that can only be avoided by establishing a relation between mind and body; that is, by denying overdetermination. The only way to defend that mind-body causation is a case of overdetermination would be by denying any modal force whatever to the principle of the causal closure of the physical, and this is a claim we would not like to reject.
Abelson, Raziel (1977) Persons: A Study in Philosophical Psychology, The Macmillan Press Ltd., London and Basingstoke.
Ameriks, Karl (1982) Kant’s Theory of Mind, Clarendon Press, Oxford.
Armstrong, D.M. (1968) A Materialist Theory of the Mind, London: Routledge and Kegan Paul.
Ayer, A.J. (1974) The Central Questions of Philosophy, Holt, Rinehart and Winston, New York.
This is a collection of terms and definitions which I used in my research work entitled A Philosophical Study of the Concept of Mind (with special reference to René Descartes, David Hume and Gilbert Ryle). The reference abbreviation, with page number, appears at the end of each definition. Suggestions are invited for further improvement.
F.A. Hayek’s theory of cultural evolution has often been regarded as incompatible with his earlier works. Since it lacks an elaborated theory of individual learning, we try to back his arguments by starting with his thoughts on individual perception described in his Theory of Mind. With a focus on the current discussion concerning biological and cultural selection theories, we argue his Theory of Mind leads to two different stages of societal evolution, each with a well-defined learning process. The first learning process describes his Morality of Small Groups, in which Hayek’s thoughts coincide with learning theories that do not allow for the perception of behavior from outside the group. His second stage of cultural evolution, the Open Society, involves a different kind of learning behavior. We connect this notion with a model of local interaction in which the cultural learning aspect is addressed by a distinction between interaction and learning neighborhoods. This results in a situation in which individuals change their strategy and—depending on the radius of interaction and learning neighborhood—eventually may adopt new strategies that lead to higher payoffs.
On the extended mind hypothesis (EM), many of our cognitive states and processes are hybrids, unevenly distributed across biological and nonbiological realms. In certain circumstances, things - artifacts, media, or technologies - can have a cognitive life, with histories often as idiosyncratic as those of the embodied brains with which they couple. The realm of the mental can spread across the physical, social, and cultural environments as well as bodies and brains. My independent aims in this chapter are: first, to describe two compatible but distinct movements or "waves" within the EM literature, arguing for the priority of the second wave (and gesturing briefly toward a third); and, second, to defend and illustrate the interdisciplinary implications of EM as best understood, specifically for historical disciplines, by sketching two case studies.
Advocates of the computational theory of mind claim that the mind is a computer whose operations can be implemented by various computational systems. According to these philosophers, the mind is multiply realisable because—as they claim—thinking involves the manipulation of syntactically structured mental representations. Since syntactically structured representations can be made of different kinds of material while performing the same calculation, mental processes can also be implemented by different kinds of material. From this perspective, consciousness plays a minor role in mental activity. However, contemporary neuroscience provides experimental evidence suggesting that mental representations necessarily involve consciousness. Consciousness not only enables individuals to become aware of their own thoughts; it also constantly changes the causal properties of these thoughts. In light of these empirical studies, mental representations appear to be intrinsically dependent on consciousness. This discovery represents an obstacle to any attempt to construct an artificial mind.
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
Eliminative materialism is a popular view of the mind which holds that propositional attitudes, the typical units of our traditional understanding, are unsupported by modern connectionist psychology and neuroscience, and consequently that propositional attitudes are a poor scientific postulate, and do not exist. Since our traditional folk psychology employs propositional attitudes, the usual argument runs, it too represents a poor theory, and may in the future be replaced by a more successful neurologically grounded theory, resulting in a drastic improvement in our interpersonal relationships. I contend that these eliminativist arguments typically run together two distinct capacities: the folk psychological mechanisms which we use to understand one another, and scientific and philosophical guesses about the structure of those understandings. Both capacities are ontologically committed and therefore empirical. However, the commitments whose prospects look so dismal to the eliminativist, in particular the causal and logical image of propositional attitudes, belong to the guesses, and not necessarily to the underlying mechanisms. It is the commitments of traditional philosophical perspectives about the operation of our folk psychology which are contradicted by new evidence and modeling methods in connectionist psychology. Our actual folk psychology was not clearly committed to causal, sentential propositional attitudes, and thus is not directly threatened by connectionist psychology.
In this paper, I explore the implications of Fodor’s attacks on the Computational Theory of Mind (CTM), which get their most recent airing in The Mind Doesn’t Work That Way. I argue that if Fodor is right that the CTM founders on the global nature of abductive inference, then several of the philosophical views about the mind that he has championed over the years founder as well. I focus on Fodor’s accounts of mental causation, psychological explanation, and intentionality.
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of healthy that is applied to foods reveals that it means conducive to having a healthy body, and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.
We first discuss Michael Dummett’s philosophy of mathematics and Robert Brandom’s philosophy of language to demonstrate that inferentialism entails the falsity of Church’s Thesis and, as a consequence, the Computational Theory of Mind. This amounts to an entirely novel critique of mechanism in the philosophy of mind, one we show to have tremendous advantages over the traditional Lucas-Penrose argument.
It is plausible to think, as many developmental psychologists do, that joint attention is important in the development of getting a full grasp on psychological notions. This chapter argues that this role of joint attention is best understood in the context of the simulation theory about the nature of psychological understanding rather than in the context of the theory theory. Episodes of joint attention can then be seen not as good occasions for learning a theory of mind but rather as good occasions for developing skills of expressing and sharing thoughts. This approach suggests seeing language acquisition as learning how to focus and fine-tune joint attention already present in the normal basic relation of carer and infant. Philosophers in thinking about other minds have concentrated too much on the contrast of first and third person, I vs he/she, and forgotten the centrality of the contrast of first and second person, I vs you, and the related centrality of we.
What I call semiotic brains are brains that make up a series of signs and that are engaged in making or manifesting or reacting to a series of signs: through this semiotic activity they are at the same time engaged in “being minds” and so in thinking intelligently. An important effect of this semiotic activity of brains is a continuous process of disembodiment of mind that exhibits a new cognitive perspective on the mechanisms underlying the semiotic emergence of meaning processes. Indeed at the roots of sophisticated thinking abilities there is a process of disembodiment of mind that presents a new cognitive perspective on the role of external models, representations, and various semiotic materials. Taking advantage of Turing’s comparison between “unorganized” brains and “logical” and “practical” machines, this paper illustrates the centrality to cognition of the disembodiment of mind from the point of view of the interplay between internal and external representations, both mimetic and creative. The last part of the paper describes the concept of mimetic mind I have introduced to shed new cognitive and philosophical light on the role of computational modeling and on the decline of the so-called Cartesian computationalism.
Computer simulations can be useful tools to support philosophers in validating their theories, especially when these theories concern phenomena showing nontrivial dynamics. Such theories are usually informal, whilst for computer simulation a formally described model is needed. In this paper, a methodology is proposed to gradually formalise philosophical theories in terms of logically formalised dynamic properties. One outcome of this process is an executable logic-based temporal specification, which within a dedicated software environment can be used as a simulation model to perform simulations. This specification provides a logical formalisation at the lowest aggregation level of the basic mechanisms underlying a process. In addition, dynamic properties at a higher aggregation level that may emerge from the mechanisms specified by the lower level properties, can be specified. Software tools are available to support specification, and to automatically check such higher level properties against the lower level properties and against generated simulation traces. As an illustration, three case studies are discussed showing successful applications of the approach to formalise and analyse, among others, Clark’s theory on extended mind, Damasio’s theory on core consciousness, and Dennett’s perspective on intertemporal decision making and altruism.
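The pipeline this abstract describes (informal theory → executable temporal rules → generated simulation trace → higher-level property check) can be miniaturized in a toy sketch. Everything below is my own illustrative assumption — the names `step`, `simulate`, and `always_eventually`, and the stimulus–belief–action dynamics — not the authors' dedicated software environment:

```python
# Minimal sketch: lower-level mechanisms become executable update rules,
# a simulation trace is generated, and a higher-level dynamic property
# is checked against that trace.

def step(state):
    """Lower-level mechanism: a stimulus yields a belief, a belief an action."""
    return {
        "stimulus": state["stimulus"],
        "belief": state["stimulus"],  # belief follows the stimulus one step later
        "action": state["belief"],    # action follows the belief one step later
    }

def simulate(initial, steps):
    """Generate a simulation trace by iterating the lower-level rules."""
    trace = [initial]
    for _ in range(steps):
        trace.append(step(trace[-1]))
    return trace

def always_eventually(trace, antecedent, consequent, within):
    """Higher-level dynamic property: whenever `antecedent` holds at time t,
    `consequent` holds at some later time t' <= t + within. Only time points
    with a full look-ahead window are checked."""
    for t in range(len(trace) - within):
        if antecedent(trace[t]):
            window = trace[t + 1 : t + 1 + within]
            if not any(consequent(s) for s in window):
                return False
    return True

trace = simulate({"stimulus": True, "belief": False, "action": False}, 5)
# Emergent higher-level property: every stimulus leads to an action within two steps.
print(always_eventually(trace, lambda s: s["stimulus"], lambda s: s["action"], 2))  # True
```

The point of the sketch is the separation of aggregation levels: the property checker never mentions beliefs, yet the property it verifies emerges from the belief-mediated lower-level rules.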
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual mind’ real? This is the question addressed in this ‘virtual’ symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
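The layering the symposium trades on (binary interpreted as LISP, LISP as English, English as conversation) can be illustrated with a toy example of my own; the bit string and the particular levels below are illustrative assumptions, not anything from the symposium itself:

```python
# Minimal sketch: the same physical token sequence supports successively
# higher "virtual" levels of interpretation, none of which is visible as
# such at the level below it.

bits = "0100100001001001"  # lowest level: a bare string of binary digits

# Level 1: interpret the bit string as 8-bit integers.
numbers = [int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)]

# Level 2: interpret the integers as ASCII characters.
chars = [chr(n) for n in numbers]

# Level 3: interpret the characters as an English greeting.
message = "".join(chars)

print(numbers)   # [72, 73]
print(message)   # HI
```

Nothing in the bit string itself marks it as a greeting; "HI" exists only under a stack of interpretive conventions — which is exactly the status the disputants assign (or refuse to assign) to a 'virtual mind'.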
This paper questions the form and prospects of “extended theories” which have been simultaneously and independently advocated both in the philosophy of mind and in the philosophy of biology. It focuses on Extended Mind Theory (EMT) and Developmental Systems Theory (DST). It shows first that the two theories vindicate a parallel extension of received views, the former concerning extending cognition beyond the brain, the latter concerning extending evolution and development beyond the genes. It also shows that both arguments rely on the demonstration of causal parities, which have been undermined by the classical received view. Then I question whether the argument that there is an illegitimate inference from parities or coupling to constitution claims, which Adams and Aizawa raised against EMT in The Bounds of Cognition (2008), also holds against DST. To this end, I consider two defenses against DST that are parallel to two defenses against EMT, one about intrinsic content, the other about the difference between what’s in principle possible and what happens in practice. I conclude by claiming that the weaknesses and strengths of both theories are different regarding these two kinds of objections.
Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an _active externalism_, based on the active role of the environment in driving cognitive processes.
This classic work of recent philosophy was first published in 1968, and remains the most compelling and comprehensive statement of the view that the mind is material or physical. In A Materialist Theory of the Mind, D. M. Armstrong provided insight into the debate surrounding the relationship of the mind and body. He put forth a detailed materialist account of all the main mental phenomena, including perception, sensation, belief, the will, introspection, mental images, and consciousness. This causal analysis of mental concepts, along with the similar theory by David Lewis, has come to dominate all subsequent debates in the philosophy of mind. In the preface to this updated edition, Armstrong reflects on the impact of the book, and places it in the context of subsequent developments. A full bibliography of all the key writings that have appeared in the materialist debate is also provided.
In his Meditations, Rene Descartes asks, "what am I?" His initial answer is "a man." But he soon discards it: "But what is a man? Shall I say 'a rational animal'? No: for then I should inquire what an animal is, what rationality is, and in this way one question would lead down the slope to harder ones." Instead of understanding what a man is, Descartes shifts to two new questions: "What is Mind?" and "What is Body?" These questions develop into Descartes's main philosophical preoccupation: the Mind-Body distinction. How can Mind and Body be independent entities, yet joined--essentially so--within a single human being? If Mind and Body are really distinct, are human beings merely a "construction"? On the other hand, if we respect the integrity of humans, are Mind and Body merely aspects of a human being and not subjects in and of themselves? For centuries, philosophers have considered this classic philosophical puzzle. Now, in this compact, engaging, and long-awaited work, UCLA philosopher Joseph Almog closely decodes the French philosopher's argument for distinguishing between the human mind and body while maintaining simultaneously their essential integration in a human being. He argues that Descartes constructed a solution whereby the trio of Human Mind, Body, and Being are essentially interdependent yet remain each a genuine individual subject. Almog's reading not only steers away from the most popular interpretations of Descartes, but also represents a scholar coming to grips directly with Descartes himself. In doing so, Almog creates a work that Cartesian scholars will value, and that will also prove indispensable to philosophers of language, ontology, and the metaphysics of mind.
Intuitions based on the first-person perspective can easily mislead us about what is and is not conceivable. This point is usually made in support of familiar reductionist positions on the mind-body problem, but I believe it can be detached from that approach. It seems to me that the powerful appearance of contingency in the relation between the functioning of the physical organism and the conscious mind -- an appearance that depends directly or indirectly on the first-person perspective -- must be an illusion. But the denial of this contingency should not take the form of a reductionist account of consciousness of the usual type, whereby the logical gap between the mental and the physical is closed by conceptual analysis -- in effect, by analyzing the mental in terms of the physical (however elaborately this is done -- and I count functionalism as such a theory, along with the topic-neutral causal role analyses of mental concepts from which it descends).
The book is an extended study of the problem of consciousness. After setting up the problem, I argue that reductive explanation of consciousness is impossible (alas!), and that if one takes consciousness seriously, one has to go beyond a strict materialist framework. In the second half of the book, I move toward a positive theory of consciousness with fundamental laws linking the physical and the experiential in a systematic way. Finally, I use the ideas and arguments developed earlier to defend a form of strong artificial intelligence and to analyze some problems in the foundations of quantum mechanics.
This now-classic work challenges what Ryle calls philosophy's "official theory," the Cartesian "myth" of the separation of mind and matter. Ryle's linguistic analysis remaps the conceptual geography of mind, not so much solving traditional philosophical problems as dissolving them into the mere consequences of misguided language. His plain language and essentially simple purpose place him in the tradition of Locke, Berkeley, Mill, and Russell.
The mind-body problem arises because all theories about mind-brain connections are too deeply obscure to gain general acceptance. This essay suggests a clear, simple, mind-brain solution that avoids all these perennial obscurities. (1) It does so, first of all, by reworking Strawson and Stoljar’s views. They argue that while minds differ from observable brains, minds can still be what brains are physically like behind the appearances created by our outer senses. This could avoid many obscurities. But to clearly do so, it must first clear up its own deep obscurity about what brains are like behind appearances, and how they create the mind’s privacy, unity and qualia – all of which observable brains lack. (2) This can ultimately be done with a clear, simple assumption: our consciousness is the physical substance that certain brain events consist of beyond appearances. For example, the distinctive electrochemistry in nociceptor ion channels wholly consists of pain. This rejects the view that pain is a brain property: instead it’s a brain substance that occupies space in brains, and exerts forces by which it’s indirectly detectable via EEGs. (3) This assumption is justified because treating pains as physical substances avoids the perennial obscurities in mind-body theories. For example, this ‘clear physicalism’ avoids the obscure nonphysical pain of dualism and its spinoffs. Pain is instead an electrochemical substance. It isn’t private because it’s hidden in nonphysical minds, but instead because it’s just indirectly detected in the physical world in ways that leave its real nature hidden. (4) Clear physicalism also avoids puzzling reductions of private pains into more fundamental terms of observable brain activity. Instead pain is a hidden, private substance underlying this observable activity. Also, pain is fundamental in itself, for it’s what some brain activity fundamentally consists of.
This also avoids reductive idealist claims that the world just exists in the mind. They yield obscure views on why we see a world that isn’t really out there. (5) Clear physicalism also avoids obscure claims that pain is information processing which is realizable in multiple kinds of hardware (not just in electrochemistry). Molecular neuroscience now casts doubt on multiple realization. Also, it’s puzzling how abstract information gets ‘realized’ in brains and affects brains (compare ancient quandaries on how universals get embodied in matter). A related idea is that of supervenient properties in nonreductive physicalism. They involve obscure overdetermination and emergent consciousness. Clear physicalism avoids all this. Pain isn’t an abstract property obscurely related to brains – it’s simply a substance in brains. (6) Clear physicalism also avoids problems in neuroscience. Neuroscience explains the mind’s unity in problematic ways using synchrony, attention, etc. Clear physicalism explains unity in terms of intense neuroelectrical activity reaching continually along brain circuits as a conscious whole. This fits evidence that just highly active, highly connected circuits are fully conscious. Neuroscience also has problems explaining how qualia are actually encoded by brains, and how to get from these abstract codes to actual pain, fear, etc. Clear physicalism explains qualia electrochemically, using growing evidence that both sensory and emotional qualia correlate with very specific electrical channels in neural receptors. Multiple-realization advocates overlook this important evidence. (7) Clear physicalism thus bridges the mind-brain gulf by showing how brains can possess the mind’s qualia, unity and privacy – and how minds can possess features of brain activity like occupying space and exerting forces. This unorthodox nonreductive physicalism may be where physicalism leads when stripped of all its reductive and nonreductive obscurities.
It offers a clear, simple mind-body solution by just filling in what neuroscience is silent about, namely, what brain matter is like behind perceptions of it.
When Fodor titled his (1983) book _The Modularity of Mind_, he overstated his position. His actual view is that the mind divides into systems, some of which are modular and others of which are not. The book would have been more aptly, if less provocatively, called _The Modularity of Low-Level Peripheral Systems_. High-level perceptual and cognitive systems are non-modular on Fodor’s theory. In recent years, modularity has found more zealous defenders, who claim that the entire mind divides into highly specialized modules. This view has been especially popular among Evolutionary Psychologists. They claim that the mind is massively modular (Cosmides and Tooby, 1994; Sperber, 1994; Pinker, 1997; see also Samuels, 1998). Like a Swiss Army Knife, the mind is an assembly of specialized tools, each of which has been designed for some particular purpose. My goal here is to raise doubts about both peripheral modularity and massive modularity. To do that, I will rely on the criteria for modularity laid out by Fodor (1983). I will argue that neither input systems nor central systems are modular on any of these criteria.
There is no Argument that the Mind Extends
On the basis of two argumentative examples plus their 'parity principle', Clark and Chalmers argue that mental states like beliefs can extend into the environment. I raise two problems for the argument. The first problem is that it is more difficult than Clark and Chalmers think to set up the Tetris example so that application of the parity principle might render it a case of extended mind. The second problem is that, even when appropriate versions of the argumentative examples can be constructed, the availability of a second, internalist parity principle precludes the possibility of inferring that the mind extends. Choosing which parity principle we ought to wield would involve deciding beforehand whether or not the mind can extend. Thus Clark and Chalmers beg the question by employing their parity principle rather than the internalist one. I conclude that they fail to provide a proper argument to support the extended mind thesis.
* Argument from authoritative self-knowledge ("privileged access" to one's own mental states)
1. We have "privileged access" to our own mental states, in the sense that we are authoritative about what mental states we are in.
2. Through introspection, we are aware of our mental states, but not aware of them as physical states of any sort or as functional states.
3. Therefore, our mental states cannot be physical states.
In his "Meditations on First Philosophy", Descartes argues that there is a radical difference between mind and body. Yet we know that mind and body interact. How is this possible? Descartes's answer to this question is that human nature is a "substantial union" of mind and body. In this essay, Descartes's solution is explained and critically examined.
Jaegwon Kim is one of the most eminent and influential contributors to the philosophy of mind and metaphysics. This collection of essays presents the core of his work on supervenience and mind with two sets of postscripts especially written for the book. The essays focus on such issues as the nature of causation and events, what dependency relations other than causal relations connect facts and events, the analysis of supervenience, and the mind-body problem. A central problem in the philosophy of mind is the problem of explaining how the mind can causally influence bodily processes. Professor Kim explores this problem in detail, criticizes the nonreductionist solution to it, and offers a modified reductionist solution of his own. Both professional philosophers and their graduate students will find this an invaluable collection.
According to the Extended Mind thesis, the mind extends beyond the skull or the skin: mental processes can constitutively include external devices, like a computer or a notebook. The Extended Mind thesis has drawn both support and criticism. However, most discussions—including those by its original defenders, Andy Clark and David Chalmers—fail to distinguish between two very different interpretations of this thesis. The first version claims that the physical basis of mental features can be located spatially outside the body. Once we accept that the mind depends on physical events to some extent, this thesis, though not obvious, is compatible with a large variety of views on the mind. The second version applies to standing states only, and has to do with how we conceive the nature of such states. This second version is much more interesting, because it points to a potential tension in our conception of minds or selves. However, without properly distinguishing between the two theses, the significance of the second is obscured by the comparative triviality of the first.