In this paper I discuss the way in which physical science has come to claim a particular kind of hegemony over other subjects in the second half of this century. This claim to hegemony is generally known as "physicalism". I shall try to understand why this doctrine has come to prominence in recent decades; by placing it in a historical context, we will be better able to appreciate its strengths and weaknesses.
On the Friday afternoon of the 3rd test at Trent Bridge in 2001, the series was in the balance. The Australians had won the first two tests easily, but England now found themselves in a position of some strength. They had restricted Australia to a first-innings lead of just 5 runs, and had built a lead of 120 with six wickets in hand. Mark Ramprakash was in and had been batting steadily for well over an hour. Even though this Australian side was as strong as any in cricket history, England had real hopes of getting back into the series.
Since the publication of Elga's seminal paper in 2000, the Sleeping Beauty paradox has been the source of much discussion, particularly in this journal. Over the past few decades the Everettian interpretation of quantum mechanics has also been much debated. There is an interesting connection between the way these two topics raise issues about subjective probability assignments. This connection is often alluded to, but as far as we know Peter J. Lewis's ‘Quantum Sleeping Beauty’ is the first attempt to examine it explicitly. Lewis claims that the two debates are not independent: to be specific, he argues that accepting the Everettian interpretation of quantum mechanics requires you to be a ‘halfer’ about Sleeping Beauty, in opposition to the more widely accepted ‘thirder’ solution. This paper will argue that Lewis is wrong. Everettians do not have to be halfers. It is perfectly cogent to be both an Everettian and a thirder.
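As a purely illustrative aside (not part of the paper's argument), the ‘thirder’ answer at issue here can be checked with a small Monte Carlo sketch of the standard Sleeping Beauty protocol; the function name and trial count below are my own choices, not anything from the paper:

```python
import random

def sleeping_beauty_heads_fraction(n_trials, seed=0):
    """Simulate the Sleeping Beauty protocol n_trials times.

    A fair coin is flipped: heads -> Beauty is woken once,
    tails -> she is woken twice (with memory erased in between).
    Returns the fraction of all awakenings at which the coin
    landed heads -- the 'thirder' claim is that this is 1/3.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2  # one awakening on heads, two on tails
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings
    return heads_awakenings / total_awakenings

print(sleeping_beauty_heads_fraction(100_000))  # close to 1/3
```

This only shows that, counted per awakening, heads comes up a third of the time; whether that frequency is the right guide to Beauty's credence is exactly what halfers and thirders dispute.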
It is widely agreed among contemporary philosophers of mind that science leaves us with an ‘explanatory gap’—that even after we know everything that science can tell us about the conscious mind and the brain, their relationship still remains mysterious. I argue that this agreed view is quite mistaken. The feeling of an ‘explanatory gap’ arises only because we cannot stop ourselves thinking about the mind–brain relation in a dualist way.
I argue that philosophy is like science in three interesting and non-obvious ways. First, the claims made by philosophy are synthetic, not analytic: philosophical claims, just like scientific claims, are not guaranteed by the structure of the concepts they involve. Second, philosophical knowledge is a posteriori, not a priori: the claims established by philosophers depend on the same kind of empirical support as scientific theories. And finally, the central questions of philosophy concern actuality rather than necessity: philosophy is primarily aimed at understanding the actual world studied by science, not some further realm of metaphysical modality.
It is widely assumed that the normativity of conceptual judgement poses problems for naturalism. Thus John McDowell urges that 'The structure of the space of reasons stubbornly resists being appropriated within a naturalism that conceives nature as the realm of law' (1994, p. 73). Similar sentiments have been expressed by many other writers, for example Robert Brandom (1994, p. xiii) and Paul Boghossian (1989, p. 548).
The term ‘naturalism’ has no very precise meaning in contemporary philosophy. Its current usage derives from debates in America in the first half of the last century. The self-proclaimed ‘naturalists’ from that period included John Dewey, Ernest Nagel, Sidney Hook and Roy Wood Sellars. These philosophers aimed to ally philosophy more closely with science. They urged that reality is exhausted by nature, containing nothing ‘supernatural’, and that the scientific method should be used to investigate all areas of reality, including the ‘human spirit’ (Krikorian 1944, Kim 2003).
Peter J. Lewis argued that the Everettian interpretation of quantum mechanics implies the unpopular halfer position in the Sleeping Beauty debate. We retorted that it is perfectly coherent to be an Everettian and an ordinary thirder. In a recent reply to our paper, Lewis further clarifies the basis for his thinking. We think this brings out nicely where he goes wrong: he underestimates the importance of metaphysical considerations in determining rational credences.
In this paper we distinguish two issues that are often run together in discussions about physicalism. The first issue concerns levels. How do entities picked out by non-physical terminology, such as biological or psychological terminology, relate to physical entities? Are the former identical to, or metaphysically supervenient on, the latter? The second issue concerns physical parts and wholes. How do macroscopic physical entities relate to their microscopic parts? Are the former generally determined by the latter? We argue that views on these two issues are independent of one another and should not be conflated.
Functionalism faces a problem in accounting for the semantic powers of beliefs and other mental states. Simple causal considerations will not solve this problem, nor will any appeal to the social utility of semantic interpretations. The correct analysis of semantic representation is a teleological one, in terms of the biological purposes of mental states: whereas functionalism focuses, so to speak, only on the structure of the cognitive mechanism, the semantic perspective requires in addition that we consider the purposes of the cognitive mechanism's parts.
The relation between subjective consciousness and the physical brain is widely regarded as the last mystery facing science. David Papineau argues that there is no real puzzle here. Consciousness seems mysterious, not because of any hidden essence, but only because we think about it in a special way. Papineau exposes the confusion, and dispels the mystery: we see consciousness in its place in the material world, and we are on the way to a proper understanding of the mind.
The aim of this paper is to defend the teleological theory of representation against an objection by Jerry Fodor. I shall argue that previous attempts to answer this objection fail to recognize the importance of belief-desire structure for the teleological theory of representation.
The main puzzle about theoretical definitions is that nothing seems to decide which assumptions contribute to such definitions and which do not. I argue that theoretical definitions are indeed imprecise, but that this does not normally matter, since the definitional imprecision does not normally produce indeterminacy of referential value. Sometimes, however, the definitional imprecision is less benign, and does generate referential indeterminacy. In these special cases, but not otherwise, it is necessary to refine the term's definition.
On the first page of The Problem of Consciousness, Colin McGinn asks "How is it possible for conscious states to depend on brain states? How can technicolour phenomenology arise from soggy grey matter?" Many philosophers feel that questions like these pose an unanswerable challenge to physicalism. They argue that there is no way of bridging the "explanatory gap" between the material brain and the lived world of conscious experience, and that physicalism about the mind can therefore provide no answer to the "hard problem" of why brains give rise to consciousness.
In 'How Many Lives Has Schrödinger's Cat?' David Lewis argues that the Everettian no-collapse interpretation of quantum mechanics is in a tangle when it comes to probabilities. This paper aims to show that the difficulties that Lewis raises are insubstantial. The Everettian metaphysics contains a coherent account of probability. Indeed it accounts for probability rather better than orthodox metaphysics does.
This paper is about the nature of conscious sensory properties. My initial thesis is that these properties should not be equated with representational properties. I argue that any such representationalist view is in danger of implying that conscious sensory properties are constituted by relations to propositions or other abstract objects outside space and time; and I add that, even if this implication can be avoided, the broadness of representational properties in any case renders them unsuitable to constitute conscious properties. In place of the representational account, I then defend an equation of conscious sensory properties with intrinsic non-relational properties of subjects, and I show how this view deals naturally with all the difficulties facing representationalism. I conclude by defending this non-relational account of conscious experience against arguments from the ‘transparency’ and the ‘intrinsic intentionality’ of experience.
In this paper I argue that causation is an essentially macroscopic phenomenon, and that mental causes are therefore capable of outcompeting their more specific physical realizers as causes of physical effects. But I also argue that any causes must be type-identical with physical properties, on pain of positing inexplicable physical conspiracies. I therefore allow macroscopic mental causation, but only when it is physically reducible.
The newest addition to the successful Oxford Readings in Philosophy series, this collection contains the most important contributions to the recent debate on the philosophy of science. The contributors crystallize the often heated arguments of the last two decades, assessing the skeptical attitudes within philosophy of science and the counter-challenges of the scientific realists. Contributors include Nancy Cartwright, Brian Ellis, Arthur Fine, Clark Glymour, Larry Laudan, Peter Lipton, Alan Musgrave, Wesley C. Salmon, Lawrence Sklar, Bas C. van Fraassen, and John Worrall.
Teleosemantics seeks to explain meaning and other intentional phenomena in terms of their function in the life of the species. This volume of new essays from an impressive line-up of well-known contributors offers a valuable summary of the current state of the teleosemantics debate.
David Papineau presents a controversial view of human reason, portraying it as a normal part of the natural world, and drawing on the empirical sciences to illuminate its workings. In these six interconnected essays he discusses both theoretical and practical rationality, and shows how evolutionary theory, decision theory, and quantum mechanics offer fresh approaches to some long-standing problems.
It is very natural to suppose that conscious sensory experience is essentially representational. However this thought gives rise to any number of philosophical problems and confusions. I shall argue that it is quite mistaken. Conscious phenomena cannot be constructed out of representational materials.
This paper applies a teleosemantic perspective to the question of whether there is genuine representation outside the familiar realm of belief‐desire psychology. I first explain how teleosemantics accounts for the representational powers of beliefs and desires themselves. I then ask whether biological states which are simpler than beliefs and desires can also have representational powers. My conclusion is that such biologically simple states can be ascribed representational contents, but only in a system‐relative way: such states must be ascribed varying contents when viewed as components in different biological systems. I conclude by arguing that ‘the genetic code’ does not even embody this kind of system‐relative representation.
Identity theorists make claims like ‘pain = C-fibre stimulation’. These claims must be necessary if true, given that terms like ‘pain’ and ‘C-fibre stimulation’ are rigid. Yet there is no doubt that such claims appear contingent. It certainly seems that there could have been C-fibre stimulation without pains or vice versa. So identity theorists owe us an explanation of why such claims should appear contingent if they are in fact necessary.
By way of an example, Lewis imagines your being invited to join Schrödinger’s cat in its box for an hour. This box will either fill up with deadly poison fumes or not, depending on whether or not some radioactive atom decays, the probability of decay within an hour being 50%. The invitation is accompanied with some further incentive to comply (Lewis sets it up so there is a significant chance of some pretty bad but not life-threatening punishment if you don’t get in the box). Lewis argues that the many minds theory implies that you should get in the box with the cat, despite this making it 50% likely you will die.
Peter Urbach has argued, on Bayesian grounds, that experimental randomization serves no useful purpose in testing causal hypotheses. I maintain that he fails to distinguish general issues of statistical inference from specific problems involved in identifying causes. I concede the general Bayesian thesis that random sampling is inessential to sound statistical inference. But experimental randomization is a different matter, and often plays an essential role in our route to causal conclusions.
Galen Strawson (2006) thinks it is 'obviously' false that 'the terms of physics can fully capture the nature or essence of experience' (p. 4). He also describes this view as 'crazy' (p. 7). I think that he has been carried away by first impressions. It is certainly true that 'physicSalism', as he dubs this view, is strongly counterintuitive. But at the same time there are compelling arguments in its favour. I think that these arguments are sound and that the contrary intuitions are misbegotten. In the first two sections of my remarks I would like to spend a little time defending physicSalism, or 'straightforward' physicalism, as I shall call it ('S' for 'straightforward', if you like). I realize that the main topic of Strawson's paper is panpsychism rather than his rejection of straightforward physicalism. But the latter is relevant as his arguments for panpsychism depend on his rejection of straightforward physicalism, in ways I shall explain below.
According to an influential view in contemporary cognitive science, many human cognitive capacities are innate. The primary support for this view comes from ‘poverty of stimulus’ arguments. In general outline, such arguments contrast the meagre informational input to cognitive development with its rich informational output. Consider the ease with which humans acquire languages, become facile at attributing psychological states (‘folk psychology’), gain knowledge of biological kinds (‘folk biology’), or come to understand basic physical processes (‘folk physics’). In all these cases, the evidence available to a growing child is far too thin and noisy for it to be plausible that the underlying principles involved are derived from general learning mechanisms. The only alternative hypothesis seems to be that the child’s grasp of these principles is innate (cf. Laurence and Margolis, 2001).
Block shows that we can consciously see a scene without being able to identify all the individual items in it. But in itself this fails to drive a wedge between phenomenology and access. Once we distinguish scene phenomenology from item phenomenology, the link between phenomenology and access is restored.
Gary Marcus has written a very interesting book about mental development from a nativist perspective. For the general readership at which the book is largely aimed, it will be interesting because of its many informative examples of the development of cognitive structures and because of its illuminating explanations of ways in which genes can contribute to these developmental processes. However, the book is also interesting from a theoretical point of view. Marcus tries to make nativism compatible with the central arguments that anti-nativists use to attack nativism and with many recent discoveries about genetic activity and brain development. In so doing, he reconfigures the nativist position to a considerable extent.