What is consciousness? How does the subjective character of consciousness fit into an objective world? How can there be a science of consciousness? In this sequel to his groundbreaking and controversial The Conscious Mind, David Chalmers develops a unified framework that addresses these questions and many others. Starting with a statement of the "hard problem" of consciousness, Chalmers builds a positive framework for the science of consciousness and a nonreductive vision of the metaphysics of consciousness. He replies to many critics of The Conscious Mind, and then develops a positive theory in new directions. The book includes original accounts of how we think and know about consciousness, of the unity of consciousness, and of how consciousness relates to the external world. Along the way, Chalmers develops many provocative ideas: the "consciousness meter", the Garden of Eden as a model of perceptual experience, and The Matrix as a guide to the deepest philosophical problems about consciousness and the external world. This book will be required reading for anyone interested in the problems of mind, brain, consciousness, and reality.
This appeared in Philosophy and Phenomenological Research 59:473-93, as a response to four papers in a symposium on my book The Conscious Mind. Most of it should be comprehensible without having read the papers in question. This paper is for an audience of philosophers and so is relatively technical. It will probably also help to have read some of the book. (There is a corresponding precis of the book, written for the symposium.) The papers I'm responding to are: Chris Hill & Brian McLaughlin, "There are fewer things in reality than are dreamt of in Chalmers' philosophy"; Brian Loar, "David Chalmers' The Conscious Mind"; Sydney Shoemaker, "On David Chalmers' The Conscious Mind"; and Stephen Yablo, "Concepts and consciousness".
Why is two-dimensional semantics important? One can think of it as the most recent act in a drama involving three of the central concepts of philosophy: meaning, reason, and modality. First, Kant linked reason and modality, by suggesting that what is necessary is knowable a priori, and vice versa. Second, Frege linked reason and meaning, by proposing an aspect of meaning (sense) that is constitutively tied to cognitive significance. Third, Carnap linked meaning and modality, by proposing an aspect of meaning (intension) that is constitutively tied to possibility and necessity.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes (Chalmers 1996; Jackson 1994, 1998). Ned Block and Robert Stalnaker say no (Block and Stalnaker 1999).
A number of popular arguments for dualism start from a premise about an epistemic gap between physical truths and truths about consciousness, and infer an ontological gap between physical processes and consciousness. Arguments of this sort include the conceivability argument, the knowledge argument, the explanatory-gap argument, and the property dualism argument. Such arguments are often resisted on the grounds that epistemic premises do not entail ontological conclusions. My view is that one can legitimately infer ontological conclusions from epistemic premises, if one is very careful about how one reasons. The best way to do so is to reason first from epistemic premises to modal conclusions (about necessity and possibility), and from there to ontological conclusions. Here, the crucial issue is the link between the epistemic and modal domains. How can one reason from theses about what is knowable or conceivable to theses about what is necessary or possible? To bridge the epistemic and modal domains, the framework of two-dimensional semantics can play a central role. I have used this framework in earlier work (Chalmers 1996) to mount an argument against materialism. Here, I want to revisit the argument, laying it out in a more explicit and careful form, and responding to a number of objections. In what follows I will concentrate mostly on the conceivability argument. I think that very similar considerations apply to the other arguments mentioned above, however. In the final section of the paper, I show how this analysis might yield a unified treatment of a number of anti-materialist arguments.
*[[This paper is largely based on material in other papers. The first three sections and the appendix are drawn with minor modifications from Chalmers 2002c (which explores issues about phenomenal concepts and beliefs in much more depth, mostly independently of questions about materialism). The main ideas of the last three sections are drawn from Chalmers 1996, 1999, and 2002a, although with considerable revision and elaboration. ]].
John Searle's review of my book The Conscious Mind appeared in the March 6, 1997 edition of the New York Review of Books. I replied in a letter printed in their May 15, 1997 edition, and Searle's response appeared simultaneously. I set up this web page so that interested people can see my reply to Searle in turn, and to give access to other relevant materials.
John Perry's book Knowledge, Possibility, and Consciousness is a lucid and engaging defense of a physicalist view of consciousness against various anti-physicalist arguments. In what follows, I will address Perry's responses to the three main anti-physicalist arguments he discusses: the zombie argument (focusing on imagination), the knowledge argument (focusing on indexicals), and the modal argument (focusing on intensions).
More than a decade ago, philosopher John Searle started a long-running controversy with his paper "Minds, Brains, and Programs" (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract _causal structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power.
In my book _The Conscious Mind_, I deny a number of claims that John Searle finds "obvious", and I make some claims that he finds "absurd". But if the mind/body problem has taught us anything, it is that nothing about consciousness is obvious, and that one person's obvious truth is another person's absurdity. So instead of throwing around this sort of language, it is best to examine the claims themselves and the arguments that I give for them, to see whether Searle says anything of substance that touches them.
Scott Soames’ Reference and Description contains arguments against a number of different versions of two-dimensional semantics. After early chapters on descriptivism and on Kripke’s anti-descriptivist arguments, a chapter each is devoted to the roots of two-dimensionalism in “slips, errors, or misleading suggestions” by Kripke and Kaplan, and to the two-dimensional approaches developed by Stalnaker (1978) and by Davies and Humberstone (1981). The bulk of the book (about 200 pages) is devoted to “ambitious two-dimensionalism”, attributed to Frank Jackson, David Lewis, and me. After a quick overview of two-dimensional approaches, I will focus on Soames’ discussion of ambitious two-dimensionalism. I will then turn to a system advocated by Soames that is itself strikingly reminiscent of a two-dimensional approach. Two-dimensional semantic theories are varieties of possible-worlds semantics on which linguistic items can be evaluated relative to possibilities in two different ways, yielding two sorts of intensional semantic values, which can be seen as two “dimensions” of meaning. The second dimension is the familiar sort of Kripkean evaluation in metaphysically possible worlds, so that necessarily coextensive terms (such as ‘Hesperus’ and ‘Phosphorus’ or ‘water’ and ‘H2O’) always have the same semantic value. The first dimension behaves differently, so that there are typically at least some cases where necessarily coextensive terms have different semantic values on the first dimension. For this reason, the two-dimensional framework is sometimes seen as a way of granting many of the insights of a Kripkean approach to meaning (on the second dimension), while retaining elements of a Fregean approach to meaning (on the first dimension).
*[[This paper appears in _Toward a Science of Consciousness II: The Second Tucson Discussions and Debates_ (S. Hameroff, A. Kaszniak, and A. Scott, eds), published with MIT Press in 1998. It is a transcript of my talk at the second Tucson conference in April 1996, lightly edited to include the contents of overheads and to exclude some diversions with a consciousness meter. A more in-depth argument for some of the claims in this paper can be found in Chapter 6 of my book _The Conscious Mind_ (Chalmers, 1996). ]].
Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an _active externalism_, based on the active role of the environment in driving cognitive processes.
Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature. In twentieth-century philosophy, this dilemma is posed most acutely in C. D. Broad’s The Mind and its Place in Nature (1925). The phenomena of mind, for Broad, are the phenomena of consciousness. The central problem is that of locating mind with respect to the physical world. Broad’s exhaustive discussion of the problem culminates in a taxonomy of seventeen different views of the mental-physical relation. On Broad’s taxonomy, a view might see the mental as nonexistent (“delusive”), as reducible, as emergent, or as a basic property of a substance (a “differentiating” attribute). The physical might be seen in one of the same four ways. So a four-by-four matrix of views results. (The seventeenth entry arises from Broad’s division of the substance/substance view according to whether one substance or two is involved.) At the end, three views are left standing: those on which mentality is an emergent characteristic of either a physical substance or a neutral substance, where in the latter case, the physical might be either emergent or delusive.
To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that such methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance, and a double-aspect theory of information.
There is a long tradition in philosophy of using a priori methods to draw conclusions about what is possible and what is necessary, and often in turn to draw conclusions about matters of substantive metaphysics. Arguments like this typically have three steps: first an epistemic claim (about what can be known or conceived), from there to a modal claim (about what is possible or necessary), and from there to a metaphysical claim (about the nature of things in the world).
Confronted with the apparent explanatory gap between physical processes and consciousness, there are many possible reactions. Some deny that any explanatory gap exists at all. Some hold that there is an explanatory gap for now, but that it will eventually be closed. Some hold that the explanatory gap corresponds to an ontological gap in nature.
Zombies are hypothetical creatures of the sort that philosophers have been known to cherish. A zombie is physically identical to a normal human being, but completely lacks conscious experience. Zombies look and behave like the conscious beings that we know and love, but "all is dark inside." There is nothing it is like to be a zombie.
Consciousness and intentionality are perhaps the two central phenomena in the philosophy of mind. Human beings are conscious beings: there is something it is like to be us. Human beings are intentional beings: we represent what is going on in the world. Correspondingly, our specific mental states, such as perceptions and thoughts, very often have a phenomenal character: there is something it is like to be in them. And these mental states very often have intentional content: they serve to represent the world. On the face of it, consciousness and intentionality are intimately connected. Our most important conscious mental states are intentional states: conscious experiences often inform us about the state of the world. And our most important intentional mental states are conscious states: there is often something it is like to represent the external world. It is natural to think that a satisfactory account of consciousness must respect its intentional structure, and that a satisfactory account of intentionality must respect its phenomenological character. With this in mind, it is surprising that in the last few decades, the philosophical study of consciousness and intentionality has often proceeded in two independent streams. This was not always the case. In the work of philosophers from Descartes and Locke to Brentano and Husserl, consciousness and intentionality were typically analyzed in a single package. But in the second half of the twentieth century, the dominant tendency was to concentrate on one topic or the other, and to offer quite separate analyses of the two. On this approach, the connections between consciousness and intentionality receded into the background. In the last few years, this has begun to change. The interface between consciousness and intentionality has received increasing attention on a number of fronts.
This attention has focused on such topics as the representational content of perceptual experience, the higher-order representation of conscious states, and the phenomenology of thinking. Two distinct philosophical groups have begun to emerge. One group focuses on ways in which consciousness might be grounded in intentionality. The other group focuses on ways in which intentionality might be grounded in consciousness.
[[This paper is an edited transcription of a talk at the 1997 Montreal symposium on "Consciousness at the Frontiers of Neuroscience". There's not much here that isn't said elsewhere, e.g. in "Facing Up to the Problem of Consciousness" and "How Can We Construct a Science of Consciousness?"]].
It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing "over and above" the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws' antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.
Conscious experience is at once the most familiar thing in the world and the most mysterious. There is nothing we know about more directly than consciousness, but it is extraordinarily hard to reconcile it with everything else we know. Why does it exist? What does it do? How could it possibly arise from neural processes in the brain? These questions are among the most intriguing in all of science.
The term ‘emergence’ often causes confusion in science and philosophy, as it is used to express at least two quite different concepts. We can label these concepts _strong emergence_ and _weak emergence_. Both of these concepts are important, but it is vital to keep them separate.
There are many ways the world might be, for all I know. For all I know, it might be that there is life on Jupiter, and it might be that there is not. It might be that Australia will win the next Ashes series, and it might be that they will not. It might be that my great-grandfather was my great-grandmother's second cousin, and it might be that he was not. It might be that copper is a compound, and it might be that it is not.
The book is an extended study of the problem of consciousness. After setting up the problem, I argue that reductive explanation of consciousness is impossible (alas!), and that if one takes consciousness seriously, one has to go beyond a strict materialist framework. In the second half of the book, I move toward a positive theory of consciousness with fundamental laws linking the physical and the experiential in a systematic way. Finally, I use the ideas and arguments developed earlier to defend a form of strong artificial intelligence and to analyze some problems in the foundations of quantum mechanics.
Intro to what "first person" and "third person" mean. (outline the probs of the first person) (convenience of third person vs absoluteness of first person) (explain terminology) Dominance of third person, reasons. (embarrassment with first person) (division of reactions) (natural selection - those who can make the most noise) (analogy with behaviourism) Reductionism, hard line and soft line Appropriation of first person terms by reductionists.
Thanks to all the people who responded to my enquiry about the status of the Continuum Hypothesis. This is a really fascinating subject, which I could waste far too much time on. The following is a summary of some aspects of the feeling I got for the problems. This will be old hat to set theorists, and no doubt there are a couple of embarrassing misunderstandings, but it might be of some interest to non-professionals.
At any given time, a subject has a multiplicity of conscious experiences. A subject might simultaneously have visual experiences of a red book and a green tree, auditory experiences of birds singing, bodily sensations of a faint hunger and a sharp pain in the shoulder, and the emotional experience of a certain melancholy, all while having a stream of conscious thoughts about the nature of reality. These experiences are distinct from each other: a subject could experience the red book without the singing birds, and could experience the singing birds without the red book. But at the same time, the experiences seem to be tied together in a deep way. They seem to be unified, by being aspects of a single encompassing state of consciousness.
In the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were simply presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory.
We could have been characters in a huge computer simulation. It is a familiar idea that the whole world might be simulated on a computer, and things would seem exactly the same to us (and indeed, who is to say that we are not?).
The project that Dan Lloyd has undertaken is admirable and audacious. He has tried to boil down the substrate of information-processing that underlies conscious experience to some very simple elements, in order to gain a better understanding of the phenomenon. Some people will suspect that by considering a model as simple as a connectionist network, Dan has thrown away everything that is interesting about consciousness. Perhaps there is something to that complaint, but I will take a different tack. It seems to me that if we apply his own reasoning, we can see that Dan has not taken things far _enough_. When we have boiled things down to a system as simple as a connectionist network, it seems faint-hearted to stop there, and perhaps a little arbitrary as well. So I will take things further, and ask what seems to be the really interesting question in the vicinity: what is it like to be a thermostat?
When I say ‘Hesperus is Phosphorus’, I seem to express a proposition. And when I say ‘Joan believes that Hesperus is Phosphorus’, I seem to ascribe to Joan an attitude to the same proposition. But what are propositions? And what is involved in ascribing propositional attitudes?
The search for neural correlates of consciousness (or NCCs) is arguably the cornerstone of the recent resurgence of the science of consciousness. The search poses many difficult empirical problems, but it seems to be tractable in principle, and some ingenious studies in recent years have led to considerable progress. A number of proposals have been put forward concerning the nature and location of neural correlates of consciousness. A few of these include.
This is just a beginning categorization. I claim no 'objective correctness' for it. And of course the categories can be fluid, and the same joke can be a member of more than one category (and perhaps it will be funnier if it is). But thinking about the jokes which I can recall from the Humour Weekend, most seem to fall squarely into one or another category, indicating that perhaps this is a useful way of dividing jokes. It seems to me that the "causes of humour" in all 4 classes are different, coming from different parts of the brain.
This paper is a response to the 26 commentaries on my paper "Facing Up to the Problem of Consciousness". First, I respond to deflationary critiques, including those that argue that there is no "hard" problem of consciousness or that it can be accommodated within a materialist framework. Second, I respond to nonreductive critiques, including those that argue that the problems of consciousness are harder than I have suggested, or that my framework for addressing them is flawed. Third, I address positive proposals for addressing the problem of consciousness, including those based in neuroscience and cognitive science, phenomenology, physics, and fundamental psychophysical theories. Reply to: Baars, Bilodeau, Churchland, Clark, Clarke, Crick & Koch, Dennett, Hameroff & Penrose, Hardcastle, Hodgson, Hut & Shepard, Libet, Lowe, MacLennan, McGinn, Mills, O'Hara & Scutt, Price, Robinson, Rosenberg, Seager, Shear, Stapp, Varela, Velmans.
Two-dimensional approaches to semantics, broadly understood, recognize two "dimensions" of the meaning or content of linguistic items. On these approaches, expressions and their utterances are associated with two different sorts of semantic values, which play different explanatory roles. Typically, one semantic value is associated with reference and ordinary truth-conditions, while the other is associated with the way that reference and truth-conditions depend on the external world. The second sort of semantic value is often held to play a distinctive role in analyzing matters of cognitive significance and/or context-dependence.
[[This paper appears in my anthology _Philosophy of Mind: Classical and Contemporary Readings_ (Oxford University Press, 2002), pp. 608-633. It is a heavily revised version of a paper first written in 1994 and revised in 1995. Sections 1, 7, 8, and 10 are similar to the old version, but the other sections are quite different. Because the old version has been widely cited, I have made it available (in its 1995 version) at http://consc.net/papers/content95.html. ]].
Experiences and beliefs are different sorts of mental states, and are often taken to belong to very different domains. Experiences are paradigmatically phenomenal, characterized by what it is like to have them. Beliefs are paradigmatically intentional, characterized by their propositional content. But there are a number of crucial points where these domains intersect. One central locus of intersection arises from the existence of phenomenal beliefs: beliefs that are about experiences.
In article <firstname.lastname@example.org> email@example.com writes: Reminds me of a friend of mine who claims that the number 17 is "the most random" number. His proof ran as follows: pick a number. It's not really as good a random number as 17, is it? (Invariable Answer: "Umm, well, no...") This reminds me of a little experiment I did a couple of years ago. I stood on a busy street corner in Oxford, and asked passers-by to "name a random number between zero and infinity." I was wondering what this "random" distribution would look like.
It is fairly well-known that certain hard computational problems (that is, 'difficult' problems for a digital processor to solve) can in fact be solved much more easily with an analog machine. This raises questions about the true nature of the distinction between analog and digital computation (if such a distinction exists). I try to analyze the source of the observed difference in terms of (1) expanding parallelism and (2) more generally, infinite-state Turing machines. The issue of discreteness vs continuity will also be touched upon, although it is not so important for analyzing these particular problems.
This paper offers both a theoretical and an experimental perspective on the relationship between connectionist and Classical (symbol-processing) models. Firstly, a serious flaw in Fodor and Pylyshyn’s argument against connectionism is pointed out: if, in fact, a part of their argument is valid, then it establishes a conclusion quite different from that which they intend, a conclusion which is demonstrably false. The source of this flaw is traced to an underestimation of the differences between localist and distributed representation. It has been claimed that distributed representations cannot support systematic operations, or that if they can, then they will be mere implementations of traditional ideas. This paper presents experimental evidence against this conclusion: distributed representations can be used to support direct structure-sensitive operations, in a manner quite unlike the Classical approach. Finally, it is argued that even if Fodor and Pylyshyn’s argument that connectionist models of compositionality must be mere implementations were correct, then this would still not be a serious argument against connectionism as a theory of mind.