In this paper, I criticize an influential understanding of naturalization according to which work on traditional problems in the philosophy of law should be replaced with sociological or psychological explanations of how judges decide cases. W.V. Quine famously proposed the “naturalization of epistemology.” Quine argued that we should replace certain traditional philosophical inquiries into the justification of our beliefs with empirical psychological inquiry into how we actually form beliefs. In a prominent series of papers and a forthcoming book, Brian Leiter has raised the intriguing idea that Quine’s naturalization of epistemology is a useful model for philosophy of law. I examine Quine’s naturalization of epistemology and Leiter’s suggested parallel. I argue that the parallel does not hold up. I show that, granting Leiter’s substantive assumption that the law is indeterminate, there is no philosophical confusion or overreaching in the legal case that is parallel to the philosophical overreaching of foundationalism in epistemology. Moreover, if we take seriously Leiter’s analogy between, on the one hand, the justification of belief in scientific theories and, on the other, the justification of decisions in legal cases, the result is almost the opposite of what Leiter suggests. The closest parallel in the legal case to Quine’s position would be the rejection of the philosophical positions that lead to the indeterminacy thesis. Finally, the conclusion that law is indeterminate could not establish the bankruptcy of philosophical investigation into the relation between the grounds of law and the content of the law. After all, the argument for that conclusion depends on a philosophical account of the relation between the grounds of law and the content of law. The argument therefore presupposes that that relation is an appropriate subject for philosophical inquiry.
Tyler Burge’s influential arguments have convinced most philosophers that a thinker can have a thought involving a particular concept without fully grasping or having mastery of that concept. In Burge’s (1979) famous example, a thinker who lacks mastery of the concept of arthritis nonetheless has thoughts involving that concept. It is generally supposed, however, that this phenomenon – incomplete understanding, for short – does not require us to reconsider in a fundamental way what it is for a thought to involve a particular concept. In this paper, I argue that the real significance of incomplete understanding has not been appreciated. To the extent that theorists of content address the phenomenon of thoughts involving incompletely grasped contents at all, they tend to assume that some hand-waving about deference to other thinkers who fully grasp the relevant concepts will take care of the inconvenient cases of incomplete understanding. The main lesson of Burge’s arguments is often taken to be that the content of language and thought is socially determined. On this picture, we do not need to change our basic view about what it is to have a concept; we just need to recognize that some thinkers can manage to have a concept by piggybacking on others. In contrast, on the view I defend, taking incomplete understanding seriously forces us to rethink some of our most basic assumptions about the nature of mental content. Deference is a red herring. The role of society in determining the content of thought is not the main lesson, but at most a useful clue as to the nature of mental and linguistic content.
John Leslie comes to tell us that the end of the world is closer than we think. His book is no ordinary millennial manifesto, however. Leslie is a sophisticated philosopher of science, and the source of his message is not divine revelation, apocalyptic fantasy or anxiety about the year-2000 computer problem, but ‘the Doomsday Argument’ – an a priori argument that seeks support in probability theory.
In “How Facts Make Law” (Greenberg 2004), I argue that non-normative contingent facts are not sufficient to determine the content of the law. In the present paper, I take up a challenge raised by Enrique Villanueva (2005). He suggests that, to put it very briefly, descriptive facts can be reasons of the relevant kind. Therefore, even if the content of the law depends on reasons, it does not follow that law practices cannot themselves determine the content of the law. Villanueva proposes a value-neutral criterion – textualism. In other words, he suggests that the descriptive facts about the meaning of legal texts are themselves reasons that determine the contribution of law practices to the content of the law. This suggestion depends on too shallow a conception of the requirement of reasons. For the law to be rationally determined, it is not enough that there be some value-neutral criterion that specifies that law practices have certain consequences for the content of the law. There have to be reasons that explain why that criterion, as opposed to all others, is the legally correct one – the one that, in the relevant legal system, determines the contribution of law practices to the content of the law. Normative facts are the best candidates for such reasons. And, in fact, Villanueva’s textualist criterion derives its appeal from normative facts. Reasons play a central role in the ontology of law. The determinants of the content of the law, which include law-determining practices such as statutes and judicial decisions, influence the content of the law in a systematic way. But their influence on the content of the law cannot be brute: the determining facts must constitute reasons why particular legal facts obtain. Descriptive facts cannot themselves provide the necessary reasons: for any descriptive fact that is a candidate reason, there are many possible models of its significance for the legal facts. Given the descriptive facts alone, it is arbitrary which of the possible models is correct, and therefore what the legal facts are. Descriptive facts therefore cannot alone determine the content of the law. Normative facts are the best candidates for what needs to be added to the law practices in order for the determining facts to make rationally intelligible why particular legal facts, as opposed to others, obtain.
Fodor’s asymmetric-dependence theory of content is probably the best known and most developed causal or informational theory of mental content. Many writers have attempted to provide counterexamples to Fodor’s theory. In this paper, I offer a more fundamental critique. I begin by attacking Fodor’s view of the dialectical situation. Fodor’s theory is cast in terms of laws covering the occurrence of an individual thinker’s mental symbols. I show that, contrary to Fodor’s view, we cannot restrict consideration to hypothetical cases in which his conditions for content are satisfied, but must consider whether the relevant laws exhibit the specified asymmetric-dependence relations in actual cases. My central argument is that the laws that the theory requires do not in fact exhibit the appropriate asymmetric-dependence relations. I show that, in general, part of the mechanism for the crucial, supposedly content-determining law for a mental symbol is not shared by the mechanisms for the other laws covering the occurrence of the same mental symbol. As a result, the former law can be eliminated (by eliminating the non-overlapping part of the mechanism) without eliminating the latter laws. The latter laws do not asymmetrically depend on the former law.
The view (most prominently advocated by Justice Scalia) that original meaning entails the constitutionality of original practices has strong intuitive appeal and has been broadly assumed by originalists and nonoriginalists alike. But the position is mistaken. We suggest that a failure to distinguish between two different notions of meaning accounts for the position's wide currency. According to the first notion, the meaning of a term is roughly what a dictionary definition attempts to convey – the semantic or linguistic understanding necessary to use the term, as opposed to nonlinguistic facts about the objects or activities to which the term applies. In contrast, according to the second, looser notion, the meaning of a term incorporates the objects or activities to which the term is applied. The first notion lies behind originalism's theoretical force; it is untenable that the meaning of the Constitution in the first sense could evolve. In sharp contrast, it is not only tenable but inevitable that changes occur over time in the class of things to which a constitutional provision is applied. Once recognized, the distinction undermines the seemingly natural move from the necessity of interpreting the Constitution in accordance with how it was originally understood to the necessity of upholding practices originally understood to be constitutional. By taking the distinction on board and rejecting the assumption, originalism can readily deflect the challenges based on unacceptable original practices; as a consequence, however, it will not be tenable for originalism, in any case challenging an original practice, simply to rule out the possibility of the practice's invalidity.
Most legal theorists, including almost all positivists and many others, take for granted or are implicitly committed to an assumption that is not an official part of positivism. The assumption is that the content of the law is determined by the contents of legally authoritative pronouncements. I call it the Pronouncement View (PV, for short). The kind of determination at issue here is constitutive, not epistemic. That is, PV concerns what makes the content of the law what it is, not how we ascertain the content of the law. PV is more of an organizing principle or core idea than a precise doctrine. I have introduced PV in an unqualified form, but there are a variety of ways in which its claim could be moderated. For example, a qualified version of PV could hold that the contents of legally authoritative pronouncements play a central or predominant role in determining the content of the law. The debate between positivists and anti-positivists is often framed in a way that takes PV for granted. For example, one typical characterization takes the debate to concern whether laws must pass a moral test in order to be valid. The use of “laws” here betrays an assumption that the starting point is legally authoritative pronouncements. Given this starting point, if morality is relevant it will have to be as a way of screening the content of authoritative pronouncements. I suggest that the issue of whether a PV-based view of law is correct is more fundamental than the issue that divides legal positivists and anti-positivists, at least as the latter issue is commonly understood. For example, one common understanding of the latter issue is that positivists claim, and anti-positivists deny, that the criteria of legal validity – the criteria that determine whether a norm is legally valid – need not include moral facts. As will become clear, however, the picture of law presupposed by this way of framing the debate – according to which candidate norms qualify as legal norms by satisfying criteria of validity – is closely associated with PV and need not be accepted by non-PV-based views of law. PV leads to a characteristic set of concerns and problems, and yields a distinctive way of thinking about how law is supposed to operate. In this paper, I will offer an alternative to PV, which I call the Dependence View (DV, for short). I begin to develop the picture of law that it yields. The discussion throughout is exploratory and tentative. The goal is to sketch the alternative picture and identify problems and possible ways of developing it, not to refute the picture associated with PV.
This paper was presented at the American Philosophical Association's 2007 Berger Prize session. It is a reply to Ken Himma's comment on my paper, "How Facts Make Law," which was awarded the 2007 Berger Prize for the outstanding paper in philosophy of law published during 2004 and 2005. In his thoughtful and thought-provoking paper, Himma claims that the argument of "How Facts Make Law" must go wrong somewhere because, if successful, the argument shows too much with too little. In particular, he claims that my argument, with very limited resources, reaches a conclusion that entails that subjectivist and non-cognitivist theories of morality are false. Himma insists that I should not be able to resolve such controversial debates in meta-ethics with no meta-ethical or even normative resources. My basic response has two parts. First, it is not correct that my conclusion entails that subjectivist and non-cognitivist theories of morality are false. My conclusion itself is neutral as to the metaphysics of morality. Second, it's not even true that my argument, if successful, shows that there must be moral facts. The reason is that I rely on the plausibility of the existence of moral facts (whatever their metaphysics) in arguing for my conclusion. In sum, my argument's conclusion doesn't get us nearly as far as Himma thinks. Nor are my argument's resources as meager as he claims.
In a circulated but heretofore unpublished 2001 paper, I argued that Leiter’s analogy to Quine’s “naturalization of epistemology” does not do the philosophical work Leiter suggests. I revisit the issues in this new essay. I first show that Leiter’s replies to my arguments fail. Most significantly, if – contrary to the genuinely naturalistic reading of Quine that I advanced – Quine is understood as claiming that we have no vantage point from which to address whether belief in scientific theories is ever justified, it would not help Leiter’s parallel. Given Leiter’s way of drawing the parallel, the analogous position in the legal case would be not the Legal Realists’ indeterminacy thesis, but the very different position that we have no vantage point from which to address whether legal decisions can ever be justified. I then go on to address the more important question of whether the indeterminacy thesis, if true, would support any replacement of important legal philosophical questions with empirical ones. Although Ronald Dworkin has argued against the indeterminacy thesis, if he were wrong on this issue, it would not in any way suggest that the questions with which Dworkin is centrally concerned cannot fruitfully be addressed. The indeterminacy thesis is a bone of contention in an ordinary philosophical debate between its proponents and Dworkin. Of course, if the indeterminacy thesis were true, no one should try to show that it is false, but this triviality lends no support to the kind of replacement proposal that Leiter proposes. I conclude with some general reflections on naturalism and philosophical methodology.
In this paper, I challenge an influential understanding of naturalization according to which work on traditional problems in the philosophy of law should be replaced with sociological or psychological explanations of how judges decide cases. W.V. Quine famously proposed the ‘naturalization of epistemology’. In a prominent series of papers and a book, Brian Leiter has raised the intriguing idea that Quine’s naturalization of epistemology is a useful model for philosophy of law. I examine Quine’s naturalization of epistemology and Leiter’s suggested parallel and argue that the parallel does not hold up. Even granting Leiter’s substantive assumption that the law is indeterminate, there is no philosophical confusion or overreaching in the legal case that is parallel to the philosophical overreaching of Cartesian foundationalism in epistemology. Moreover, if we take seriously Leiter’s analogy, the upshot is almost the opposite of what Leiter suggests. The closest parallel in the legal case to Quine’s position would be the rejection of the philosophical positions that lead to the indeterminacy thesis.
In this paper, I argue that there is a picture of how law works that most legal theorists are implicitly committed to and take to be common ground. This Standard Picture (SP, for short) is generally unacknowledged and unargued for. SP leads to a characteristic set of concerns and problems and yields a distinctive way of thinking about how law is supposed to operate. I suggest that the issue of whether SP is correct is a fundamental one for the philosophy of law, more basic, for example, than the issue that divides legal positivists and anti-positivists, at least as the latter issue is ordinarily understood. The goals of the paper are fourfold: 1) to identify and articulate in some detail the Standard Picture; 2) to show that SP is widely held and has important consequences for other debates in the philosophy of law; 3) to show that SP leads to a serious theoretical problem; 4) to sketch an alternative picture that promises to avoid this problem. I emphasize the modesty of these goals in one respect. I make no claim to refute SP or to fully develop and defend an alternative picture.
Conceptual role semantics (CRS) says that the meanings of expressions of a language or other symbol system, or the contents of mental states, are determined and explained by the way symbols are used in thinking.
I offer a new argument against the legal positivist view that non-normative social facts can themselves determine the content of the law. I argue that the nature of the determination relation in law is rational determination: the contribution of law-determining practices to the content of the law must be based on reasons. That is why it must be possible in principle to explain what makes the law have the content that it does. It follows, I argue, that non-normative facts about statutes, judicial decisions, and other practices cannot themselves determine the content of the law. A full account must appeal to considerations independent of the practices that determine the relevance of the practices to the content of the law. Normative facts are the best candidates.
In this paper, I deploy an argument that I have developed in a number of recent papers in the service of three projects. First, I show that the most influential version of legal positivism – that associated with H.L.A. Hart – fails. The argument’s engine is a requirement that a constitutive account of legal facts must meet. According to this rational-relation requirement, it is not enough for a constitutive account of legal facts to specify non-legal facts that modally determine the legal facts. The constitutive determinants of legal facts must provide reasons for the obtaining of the legal facts (in a sense of “reason” that I develop). I show that the Hartian account is unable to meet this requirement. That officials accept a rule of recognition does not by itself constitute a reason why the standards specified in that rule are part of the law of the community. I argue that it is false that understanding the explanatory significance of officials’ acceptance of a rule is part of our reflective understanding of the nature of law. The second project of the paper is to respond to a family of objections that challenge me to explain why normative facts and descriptive facts together are better placed to provide reasons for legal facts than descriptive facts alone. A unifying theme of the objections is that explanations have to stop somewhere; descriptive facts, it is suggested, are no worse a stopping place than normative facts. Third, the paper spells out a consequence of the rational-relation requirement: if an account of what, at the most basic level, determines legal facts is true in any possible legal system, it is true in all possible legal systems. For example, if a Hartian account of legal facts is true in any possible legal system, it is true in all possible legal systems. I use this all-or-nothing result in my critique of a Hartian account, but the result is of interest in its own right.
In this paper, I propose a new way of understanding the space of possibilities in the field of mental content. The resulting map assigns separate locations to theories of content that have generally been lumped together on the more traditional map. Conversely, it clusters together some theories of content that have typically been regarded as occupying opposite poles. I make my points concrete by developing a taxonomy of theories of mental content, but the main points of the paper concern not merely how to classify, but how to understand, the theories. Also, though the paper takes theories of mental content as a case study, much of the discussion is applicable to theories of other phenomena. To a first approximation, the difference between the traditional and the proposed taxonomies turns on whether we classify theories of content by, on the one hand, their implications for a non-redundant supervenience base for content facts (i.e., for facts about what contents thoughts have) or, on the other, by their constitutive accounts of content. By a "constitutive account," I mean the kind of elucidation of the nature of a phenomenon that theorists have tried to give for knowledge, justice, personal identity, consciousness, convention, heat, and limit. The tendency to taxonomize by supervenience base is encouraged, I suggest, by a failure to keep clearly in view a distinction between constitutive and modal determination. Many philosophers would accept that a constitutive account cannot be captured in purely modal terms. Giving a constitutive account is not the same as specifying modally necessary and sufficient conditions. Nevertheless, philosophers often try to cash constitutive claims in modal terms. A case in point is that theories of content tend to be conceptualized in terms of the theories' implications for a supervenience base for content facts. My thesis goes beyond the by-now somewhat familiar proposition that not all modal determinants of a phenomenon are constitutive determinants. One who has taken that point on board might nevertheless conceive of a philosophical account as an attempt to specify constitutive determinants of the target phenomenon that make up a non-redundant supervenience base for the phenomenon. Shoehorning a philosophical account into this form leaves out elements that are modally redundant, but may be explanatorily or ontologically significant. For example, when a constitutive account has multiple levels, the different levels will typically be modally redundant. Formulating the account as a specification of a supervenience base of constitutive determinants will therefore flatten the account into a single level. Many of my arguments can be illustrated by considering the place of normativity in the theory of content. The new taxonomy gives a distinct niche to normative theories of content - theories that explain a thought's having a certain content at least in part in terms of the obtaining of normative facts. By contrast, on a traditional map, normative theories are invisible as such because normative facts supervene on non-normative ones.
Darwinian theories of culture need to show that they improve upon the commonsense view that cultural change is explained by humans’ skillful pursuit of their conscious goals. In order for meme theory to pull its weight, it is not enough to show that the development and spread of an idea is, broadly speaking, Darwinian, in the sense that it proceeds by the accumulation of change through the differential survival and transmission of varying elements. It could still be the case that the best explanation of why the idea has developed and spread is the conscious pursuit of human goals. Meme theory has the potential to do explanatory work in diverse ways. It can challenge the goal-based account of cultural change directly. Other possibilities for meme theory include explaining the acquisition of our goals and showing that memes and genes evolve together, each affecting the selective forces acting on the other. Raising the question of meme theory’s explanatory payoff brings out the importance of the ‘selfish-meme’ idea and the idea of non-content biases. Both have the potential to challenge the claim that our goals are in the driver’s seat. In order to show that a Darwinian theory of culture is more than an idle redescription, however, it is necessary to make the case that it offers explanatory gain over its competitors, in particular over the commonsense goal-based account.