What are the philosophical views of contemporary professional philosophers? We surveyed many professional philosophers in order to help determine their views on 30 central philosophical issues. This article documents the results. It also reveals correlations among philosophical views and between these views and factors such as age, gender, and nationality. A factor analysis suggests that an individual's views on these issues factor into a few underlying components that predict much of the variation in those views. The results of a metasurvey also suggest that many of the results of the survey are surprising: philosophers as a whole have quite inaccurate beliefs about the distribution of philosophical views in the profession.
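The idea that views on many questions "factor into a few underlying components" can be illustrated with a simplified, hypothetical sketch (my own toy simulation, not the survey's actual analysis): simulate yes/no responses driven by a few latent factors, then check that a few principal components of the response correlations account for a large share of the variance.

```python
# Toy illustration (assumed numbers, not the survey's data): 500 simulated
# respondents answer 30 yes/no questions whose answers are driven by
# 3 latent factors plus noise.
import numpy as np

rng = np.random.default_rng(0)
n_philosophers, n_questions, n_factors = 500, 30, 3

# Latent factor scores and random factor loadings generate the responses.
latent = rng.normal(size=(n_philosophers, n_factors))
loadings = rng.normal(size=(n_factors, n_questions))
noise = rng.normal(size=(n_philosophers, n_questions))
responses = (latent @ loadings + noise) > 0  # binarized yes/no answers

# Eigendecomposition of the 30x30 inter-question correlation matrix:
# a few components capture a disproportionate share of the variance.
corr = np.corrcoef(responses.T.astype(float))
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals[:n_factors].sum() / eigvals.sum()
print(f"top {n_factors} components explain {explained:.0%} of the variance")
```

This is principal-component analysis rather than a full factor analysis, but it shows the same qualitative point: low-dimensional latent structure shows up as a few dominant components.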
The book is an extended study of the problem of consciousness. After setting up the problem, I argue that reductive explanation of consciousness is impossible, and that if one takes consciousness seriously, one has to go beyond a strict materialist framework. In the second half of the book, I move toward a positive theory of consciousness with fundamental laws linking the physical and the experiential in a systematic way. Finally, I use the ideas and arguments developed earlier to defend a form of strong artificial intelligence and to analyze some problems in the foundations of quantum mechanics.
Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an _active externalism_, based on the active role of the environment in driving cognitive processes.
What is consciousness? How does the subjective character of consciousness fit into an objective world? How can there be a science of consciousness? In this sequel to his groundbreaking and controversial The Conscious Mind, David Chalmers develops a unified framework that addresses these questions and many others. Starting with a statement of the "hard problem" of consciousness, Chalmers builds a positive framework for the science of consciousness and a nonreductive vision of the metaphysics of consciousness. He replies to many critics of The Conscious Mind, and then develops a positive theory in new directions. The book includes original accounts of how we think and know about consciousness, of the unity of consciousness, and of how consciousness relates to the external world. Along the way, Chalmers develops many provocative ideas: the "consciousness meter", the Garden of Eden as a model of perceptual experience, and The Matrix as a guide to the deepest philosophical problems about consciousness and the external world. This book will be required reading for anyone interested in the problems of mind, brain, consciousness, and reality.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes. Ned Block and Robert Stalnaker say no.
There is a long tradition in philosophy of using a priori methods to draw conclusions about what is possible and what is necessary, and often in turn to draw conclusions about matters of substantive metaphysics. Arguments like this typically have three steps: first an epistemic claim, from there to a modal claim, and from there to a metaphysical claim.
To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that such methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance, and a double-aspect theory of information.
Consciousness and intentionality are perhaps the two central phenomena in the philosophy of mind. Human beings are conscious beings: there is something it is like to be us. Human beings are intentional beings: we represent what is going on in the world. Correspondingly, our specific mental states, such as perceptions and thoughts, very often have a phenomenal character: there is something it is like to be in them. And these mental states very often have intentional content: they serve to represent the world. On the face of it, consciousness and intentionality are intimately connected. Our most important conscious mental states are intentional states: conscious experiences often inform us about the state of the world. And our most important intentional mental states are conscious states: there is often something it is like to represent the external world. It is natural to think that a satisfactory account of consciousness must respect its intentional structure, and that a satisfactory account of intentionality must respect its phenomenological character. With this in mind, it is surprising that in the last few decades, the philosophical study of consciousness and intentionality has often proceeded in two independent streams. This was not always the case. In the work of philosophers from Descartes and Locke to Brentano and Husserl, consciousness and intentionality were typically analyzed in a single package. But in the second half of the twentieth century, the dominant tendency was to concentrate on one topic or the other, and to offer quite separate analyses of the two. On this approach, the connections between consciousness and intentionality receded into the background. In the last few years, this has begun to change. The interface between consciousness and intentionality has received increasing attention on a number of fronts.
This attention has focused on such topics as the representational content of perceptual experience, the higher-order representation of conscious states, and the phenomenology of thinking. Two distinct philosophical groups have begun to emerge. One group focuses on ways in which consciousness might be grounded in intentionality. The other group focuses on ways in which intentionality might be grounded in consciousness.
In the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were simply presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory.
The philosophical interest of verbal disputes is twofold. First, they play a key role in philosophical method. Many philosophical disagreements are at least partly verbal, and almost every philosophical dispute has been diagnosed as verbal at some point. Here we can see the diagnosis of verbal disputes as a tool for philosophical progress. Second, they are interesting as a subject matter for first-order philosophy. Reflection on the existence and nature of verbal disputes can reveal something about the nature of concepts, language, and meaning. In this article I first characterize verbal disputes, spell out a method for isolating and resolving them, and draw out conclusions for philosophical methodology. I then use the framework to draw out consequences in first-order philosophy. In particular, I argue that the analysis of verbal disputes can be used to support the existence of a distinctive sort of primitive concept and that it can be used to reconstruct a version of an analytic/synthetic distinction, where both are characterized in dialectical terms alone.
Experiences and beliefs are different sorts of mental states, and are often taken to belong to very different domains. Experiences are paradigmatically phenomenal, characterized by what it is like to have them. Beliefs are paradigmatically intentional, characterized by their propositional content. But there are a number of crucial points where these domains intersect. One central locus of intersection arises from the existence of phenomenal beliefs: beliefs that are about experiences.
When I say ‘Hesperus is Phosphorus’, I seem to express a proposition. And when I say ‘Joan believes that Hesperus is Phosphorus’, I seem to ascribe to Joan an attitude to the same proposition. But what are propositions? And what is involved in ascribing propositional attitudes?
In Philosophy Without Intuitions, Herman Cappelen focuses on the metaphilosophical thesis he calls Centrality: contemporary analytic philosophers rely on intuitions as evidence for philosophical theories. Using linguistic and textual analysis, he argues that Centrality is false. He also suggests that because most philosophers accept Centrality, they have mistaken beliefs about their own methods. To put my own views on the table: I do not have a large theoretical stake in the status of intuitions, but unreflectively I find it fairly obvious that many philosophers, including myself, appeal to intuitions. Cappelen’s arguments make a provocative challenge to this unreflective background conception. So it is interesting to work through the arguments to see what they might and might not show. In what follows I aim to articulate a minimal notion of intuition that captures something of the core everyday philosophical usage of the term, and that captures the sense ...
Two-dimensional approaches to semantics, broadly understood, recognize two "dimensions" of the meaning or content of linguistic items. On these approaches, expressions and their utterances are associated with two different sorts of semantic values, which play different explanatory roles. Typically, one semantic value is associated with reference and ordinary truth-conditions, while the other is associated with the way that reference and truth-conditions depend on the external world. The second sort of semantic value is often held to play a distinctive role in analyzing matters of cognitive significance and/or context-dependence.
There are many ways the world might be, for all I know. For all I know, it might be that there is life on Jupiter, and it might be that there is not. It might be that Australia will win the next Ashes series, and it might be that they will not. It might be that my great-grandfather was my great-grandmother's second cousin, and it might be that he was not. It might be that copper is a compound, and it might be that it is not.
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors.
Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring into the Singularity”: “Computing speed doubles every two subjective years of work...”
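The arithmetic behind the "limit point" can be made concrete with a toy calculation (my own illustration, with an assumed two-subjective-year doubling period, not a claim from the articles cited): if the designers themselves run at the current hardware speed, each successive doubling takes half as much external time as the one before, and the external times form a convergent geometric series.

```python
# Toy model: speed doubles every 2 *subjective* years; the k-th doubling
# is carried out by designers running at relative speed 2**(k-1), so it
# takes 2 / 2**(k-1) external (wall-clock) years.

def external_time_to_nth_doubling(n, subjective_period=2.0):
    """Total external years elapsed after n successive doublings."""
    return sum(subjective_period / 2 ** (k - 1) for k in range(1, n + 1))

for n in (1, 2, 5, 10, 20):
    print(n, external_time_to_nth_doubling(n))
# The partial sums 2 + 1 + 1/2 + ... approach 2 * subjective_period = 4,
# so arbitrarily many doublings fit within four external years -- the
# "limit point" the speed-explosion argument gestures at.
```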
Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature. In twentieth-century philosophy, this dilemma is posed most acutely in C. D. Broad’s The Mind and its Place in Nature. The phenomena of mind, for Broad, are the phenomena of consciousness. The central problem is that of locating mind with respect to the physical world. Broad’s exhaustive discussion of the problem culminates in a taxonomy of seventeen different views of the mental-physical relation. On Broad’s taxonomy, a view might see the mental as nonexistent, as reducible, as emergent, or as a basic property of a substance. The physical might be seen in one of the same four ways. So a four-by-four matrix of views results. At the end, three views are left standing: those on which mentality is an emergent characteristic of either a physical substance or a neutral substance, where in the latter case, the physical might be either emergent or delusive.
Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
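The mirroring idea can be illustrated with a minimal sketch (my own toy example, not the paper's formal account): a "physical" system implements a finite-state automaton when some mapping from physical states to formal states commutes with both systems' transitions.

```python
# Toy illustration of the implementation relation: does an integer
# counter (the "physical" system) implement a parity automaton?

# Formal object: a finite-state automaton over inputs {0, 1}.
fsa = {('even', 0): 'even', ('even', 1): 'odd',
       ('odd', 0): 'odd', ('odd', 1): 'even'}

# "Physical" dynamics: the state is an integer, and the input is added.
def physical_step(state, inp):
    return state + inp

# Candidate mapping from physical states to formal states.
def interpret(state):
    return 'even' if state % 2 == 0 else 'odd'

def implements(sample_states, inputs):
    """The mirroring condition: interpreting the physical successor must
    match the automaton's transition from the interpreted state, for
    every sampled state and input."""
    return all(interpret(physical_step(s, i)) == fsa[(interpret(s), i)]
               for s in sample_states for i in inputs)

print(implements(range(100), (0, 1)))
```

The check passes because the counter's causal transitions, under the `interpret` mapping, track the automaton's formal transitions exactly; a mapping that failed to commute with the dynamics would not count as an implementation on this criterion.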
This appeared in Philosophy and Phenomenological Research 59:473-93, as a response to four papers in a symposium on my book The Conscious Mind. Most of it should be comprehensible without having read the papers in question. This paper is for an audience of philosophers and so is relatively technical. It will probably also help to have read some of the book. The papers I’m responding to are: Chris Hill & Brian McLaughlin, “There are fewer things in reality than are dreamt of in Chalmers’ philosophy”; Brian Loar, “David Chalmers’ The Conscious Mind”; Sydney Shoemaker, “On David Chalmers’ The Conscious Mind”; and Stephen Yablo, “Concepts and consciousness”.
Hilary Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the class of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind.
The term ‘emergence’ often causes confusion in science and philosophy, as it is used to express at least two quite different concepts. We can label these concepts _strong emergence_ and _weak emergence_. Both of these concepts are important, but it is vital to keep them separate.
A number of popular arguments for dualism start from a premise about an epistemic gap between physical truths and truths about consciousness, and infer an ontological gap between physical processes and consciousness. Arguments of this sort include the conceivability argument, the knowledge argument, the explanatory-gap argument, and the property dualism argument. Such arguments are often resisted on the grounds that epistemic premises do not entail ontological conclusions. My view is that one can legitimately infer ontological conclusions from epistemic premises, if one is very careful about how one reasons. To do so, the best way is to reason first from epistemic premises to modal conclusions, and from there to ontological conclusions. Here, the crucial issue is the link between the epistemic and modal domains. How can one reason from theses about what is knowable or conceivable to theses about what is necessary or possible? To bridge the epistemic and modal domains, the framework of two-dimensional semantics can play a central role. I have used this framework in earlier work to mount an argument against materialism. Here, I want to revisit the argument, laying it out in a more explicit and careful form, and responding to a number of objections. In what follows I will concentrate mostly on the conceivability argument. I think that very similar considerations apply to the other arguments mentioned above, however. In the final section of the paper, I show how this analysis might yield a unified treatment of a number of anti-materialist arguments.
Confronted with the apparent explanatory gap between physical processes and consciousness, there are many possible reactions. Some deny that any explanatory gap exists at all. Some hold that there is an explanatory gap for now, but that it will eventually be closed. Some hold that the explanatory gap corresponds to an ontological gap in nature.
The Matrix presents a version of an old philosophical fable: the brain in a vat. A disembodied brain is floating in a vat, inside a scientist’s laboratory. The scientist has arranged that the brain will be stimulated with the same sort of inputs that a normal embodied brain receives. To do this, the brain is connected to a giant computer simulation of a world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain, despite the fact that it lacks a body. From the brain’s point of view, things seem very much as they seem to you and me.
The objects of credence are the entities to which credences are assigned for the purposes of a successful theory of credence. I use cases akin to Frege's puzzle to argue against referentialism about credence: the view that objects of credence are determined by the objects and properties at which one's credence is directed. I go on to develop a non-referential account of the objects of credence in terms of sets of epistemically possible scenarios.
This is a reply to commentaries on my book, The Character of Consciousness, by Benj Hellie, Christopher Peacocke, and Susanna Siegel. The reply to Hellie focuses on issues about acquaintance and transparency. The reply to Peacocke focuses on externalism about spatial experience. The reply to Siegel focuses on whether there can be Frege cases in perceptual experience.
Why is two-dimensional semantics important? One can think of it as the most recent act in a drama involving three of the central concepts of philosophy: meaning, reason, and modality. First, Kant linked reason and modality, by suggesting that what is necessary is knowable a priori, and vice versa. Second, Frege linked reason and meaning, by proposing an aspect of meaning (sense) that is constitutively tied to cognitive significance. Third, Carnap linked meaning and modality, by proposing an aspect of meaning (intension) that is constitutively tied to possibility and necessity.
[This paper is largely based on material in other papers. The first three sections and the appendix are drawn with minor modifications from Chalmers 2002c. The main ideas of the last three sections are drawn from Chalmers 1996, 1999, and 2002a, although with considerable revision and elaboration.]
The search for neural correlates of consciousness (or NCCs) is arguably the cornerstone in the recent resurgence of the science of consciousness. The search poses many difficult empirical problems, but it seems to be tractable in principle, and some ingenious studies in recent years have led to considerable progress. A number of proposals have been put forward concerning the nature and location of neural correlates of consciousness. A few of these include.
John Perry's book Knowledge, Possibility, and Consciousness is a lucid and engaging defense of a physicalist view of consciousness against various anti-physicalist arguments. In what follows, I will address Perry's responses to the three main anti-physicalist arguments he discusses: the zombie argument, the knowledge argument, and the modal argument.
Zombies are hypothetical creatures of the sort that philosophers have been known to cherish. A zombie is physically identical to a normal human being, but completely lacks conscious experience. Zombies look and behave like the conscious beings that we know and love, but "all is dark inside." There is nothing it is like to be a zombie.
Graeme Forbes (2011) raises some problems for two-dimensional semantic theories. The problems concern nested environments: linguistic environments where sentences are nested under both modal and epistemic operators. Closely related problems involving nested environments have been raised by Scott Soames (2005) and Josh Dever (2007). Soames goes so far as to say that nested environments pose the “chief technical problem” for strong two-dimensionalism. We call the problem of handling nested environments within two-dimensional semantics “the nesting problem”. We show that the two-dimensional semantics for attitude ascriptions developed in Chalmers (2011a) has no trouble accommodating certain forms of the nesting problem that involve factive verbs such as “know” or “establish”. A certain form of the nesting problem involving apriority and necessity operators does raise an interesting puzzle, but we show how a generalized version of the nesting problem arises independently of two-dimensional semantics: it arises, in fact, for anyone who accepts the contingent a priori. We then provide a two-dimensional treatment of the apriority operator that fits the two-dimensional treatment of attitude verbs and apply it to the generalized nesting problem. We conclude that two-dimensionalism is not seriously threatened by cases involving the nesting of epistemic and modal operators.
Frank Ramsey (1931) wrote: If two people are arguing 'if p will q?' and both are in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q. We can say that they are fixing their degrees of belief in q given p. Let us take the first sentence the way it is often taken, as proposing the following test for the acceptability of an indicative conditional: ‘If p then q’ is acceptable to a subject S iff, were S to accept p and consider q, S would accept q. Now consider an indicative conditional of the form (1) If p, then I believe p. Suppose that you accept p and consider ‘I believe p’. To accept p while rejecting ‘I believe p’ is tantamount to accepting the Moore-paradoxical sentence ‘p and I do not believe p’, and so is irrational. To accept p while suspending judgment about ‘I believe p’ is irrational for similar reasons. So rationality requires that if you accept p and consider ‘I believe p’, you accept ‘I believe p’.
A content of a subject's mental state is narrow when it is determined by the subject's intrinsic properties: that is, when any possible intrinsic duplicate of the subject has a corresponding mental state with the same content. A content of a subject's mental state is..
I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second. A singularity (or an intelligence explosion) is a rapid increase in intelligence to superintelligence (intelligence of far greater than human levels), as each generation of intelligent systems creates more intelligent systems in turn. The target article argues that we should take the possibility of a singularity seriously, and argues that there will be superintelligent systems within centuries unless certain specific defeating conditions obtain.
Conscious experience is at once the most familiar thing in the world and the most mysterious. There is nothing we know about more directly than consciousness, but it is extraordinarily hard to reconcile it with everything else we know. Why does it exist? What does it do? How could it possibly arise from neural processes in the brain? These questions are among the most intriguing in all of science.
It is widely believed that for all p, or at least for all entertainable p, it is knowable a priori that (p iff actually p). It is even more widely believed that for all such p, it is knowable that (p iff actually p). There is a simple argument against these claims from four antecedently plausible premises.