What accounts for how we know that certain rules of reasoning, such as reasoning by Modus Ponens, are valid? If our knowledge of validity must be based on some reasoning, then we seem to be committed to the legitimacy of rule-circular arguments for validity. This paper raises a new difficulty for the rule-circular account of our knowledge of validity. The source of the problem is that, contrary to traditional wisdom, a universal generalization cannot be inferred just on the basis of reasoning about an arbitrary object. I argue in favor of a more sophisticated constraint on reasoning by universal generalization, one which undermines a rule-circular account of our knowledge of validity.
Bootstrapping, evidentialist internalism, and rule circularity. Journal article, Philosophical Studies, pp. 1-7, DOI 10.1007/s11098-012-9876-9. Author: Anthony Brueckner, University of California, Santa Barbara, CA, USA. Online ISSN 1573-0883; Print ISSN 0031-8116.
I examine Paul Boghossian's recent attempt to argue for scepticism about logical rules. I argue that certain rule- and proof-theoretic considerations can avert such scepticism. Boghossian's 'Tonk Argument' seeks to justify the rule of tonk-introduction by using the rule itself. The argument is subjected here to more detailed proof-theoretic scrutiny than Boghossian undertook. Its sole axiom, the so-called Meaning Postulate for tonk, is shown to be false or devoid of content. It is also shown that the rules of Disquotation and of Semantic Ascent cannot be derived for sentences with tonk dominant. These considerations deprive Boghossian's scepticism of its support.
This article offers a critique of Karsten Stueber's account of rule following as presented in his article "How to Think about Rules and Rule Following." The task Stueber sets himself is that of defending the idea that human practices are bound and guided by rules (both causally and normatively) while avoiding the discredited "cognitive model of rule following." This article argues that Stueber's proposal is unconvincing because it falls foul of the very problems it sets out to avoid. Stueber's defense of rules as normative guides is shown to be either circular or burdened with an infinite regress, while his account of rules as causal determinants of our actions is shown to lapse back into the "cognitive model" that he explicitly rejects. Key words: rules, rule following, norms, causes, social science.
According to Gupta and Belnap, the “extensional behavior” of ‘true’ matches that of a circularly defined predicate. Besides promising to explain semantic paradoxicality, their general theory of circular predicates significantly liberalizes the framework of truth-conditional semantics. The authors’ discussions of the rationale behind that liberalization invoke two distinct senses in which a circular predicate’s semantic behavior is explained by a “revision rule” carrying hypothetical information about its extension. Neither attempted explanation succeeds. Their theory may however be modified to employ a relativized notion of extension. The resulting contextualist semantics for ‘true’ construes circularity as a pragmatic phenomenon.
In a recent paper, Michael Friedman and Hilary Putnam argued that the Lüders rule is ad hoc from the point of view of the Copenhagen interpretation but that it receives a natural explanation within realist quantum logic as a probability conditionalization rule. Geoffrey Hellman maintains that quantum logic cannot give a non-circular explanation of the rule, while Jeffrey Bub argues that the rule is not ad hoc within the Copenhagen interpretation. As I see it, all four are wrong. Given that there is to be a projection postulate, there are at least two natural arguments which the Copenhagen advocate can offer on behalf of the Lüders rule, contrary to Friedman and Putnam. However, the argument which Bub offers is not a good one. At the same time, contrary to Hellman, quantum logic really does provide an explanation of the Lüders rule, and one which is superior to that of the Copenhagen account, since it provides an understanding of why there should be a projection postulate at all.
We clarify the role of the Born rule in the Copenhagen Interpretation of quantum mechanics by deriving it from Bohr's doctrine of classical concepts, translated into the following mathematical statement: a quantum system described by a noncommutative C*-algebra of observables is empirically accessible only through associated commutative C*-algebras. The Born probabilities emerge as the relative frequencies of outcomes in long runs of measurements on a quantum system; it is not necessary to adopt the frequency interpretation of single-case probabilities (which will be the subject of a sequel paper). Our derivation of the Born rule uses ideas from a program begun by Finkelstein (1965) and Hartle (1968), intending to remove the Born rule as a separate postulate of quantum mechanics. Mathematically speaking, our approach refines previous elaborations of this program - notably the one due to Farhi, Goldstone, and Gutmann (1989) as completed by Van Wesep (2006) - in replacing infinite tensor products of Hilbert spaces by continuous fields of C*-algebras. In combination with our interpretational context, this technical improvement circumvents valid criticisms that earlier derivations of the Born rule have provoked, especially to the effect that such derivations were mathematically flawed as well as circular. Furthermore, instead of relying on the controversial eigenvector-eigenvalue link in quantum theory, our derivation just assumes that pure states in classical physics have the usual interpretation as truthmakers that assign sharp values to observables.
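In finite dimensions the Born rule itself is simply p_i = |⟨i|ψ⟩|²: the probability of outcome i is the squared modulus of the state's amplitude in the measurement basis. A minimal numerical sketch of that rule (the qubit state below is an illustrative assumption, not an example from the paper):

```python
import numpy as np

# Illustrative qubit state (assumed example): |psi> = (|0> + i|1>) / sqrt(2)
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born rule: p_i = |<i|psi>|^2 in the chosen measurement basis
born_probs = np.abs(psi) ** 2

# For a normalized state the probabilities are nonnegative and sum to 1
```

On this state both outcomes come out at probability 1/2; the long-run relative frequencies the abstract discusses are what these numbers are meant to predict.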
Kant’s Critique of Judgment and Schiller’s Letters on the Aesthetic Education of Man are generally recognized as crucial documents in the development of modern aesthetics away from rule-based conceptions of objectivity. This paper claims that they are also, in crucial ways, circular. In both Kant and Schiller, aesthetic taste turns out to be grounded in the realm of the social in a way that challenges the idealist notion that aesthetic evaluation and education would—or should—occur against the backdrop of humanity in general, rather than of concrete communities. The threat of conceptual circularity, I claim, is thus directly tied to the ineradicable significance of social circles for the articulation of Kant’s and Schiller’s aesthetics.
In §201 of Philosophical Investigations, Ludwig Wittgenstein puts forward his famous “rule-following paradox.” The paradox is this: how can one follow in accord with a rule – the applications of which are potentially infinite – when the instances from which one learns the rule and the instances in which one displays that one has learned the rule are only finite? How can one be certain of rule-following at all? In Wittgenstein on Rules and Private Language, Saul Kripke concedes the skeptical position that there are no facts in virtue of which we follow a rule, but holds that there are still conditions under which we are warranted in asserting of others that they are following a rule. In this paper, I explain why Kripke’s solution to the rule-following paradox fails. I then offer an alternative.
The dead donor rule justifies current practice in organ procurement for transplantation and states that organ donors must be dead prior to donation. The majority of organ donors are diagnosed as having suffered brain death and hence are declared dead by neurological criteria. However, a significant amount of unrest in both the philosophical and the medical literature has surfaced since this practice began forty years ago. I argue that, first, declaring death by neurological criteria is both unreliable and unjustified but further, the ethical principles which themselves justify the dead donor rule are better served by abandoning that rule and instead allowing individuals who have suffered severe and irreversible brain damage to become organ donors, even though they are not yet dead and even though the removal of their organs would be the proximal cause of death.
In this paper I argue that the most prominent and familiar features of Wittgenstein’s rule following considerations generate a powerful argument for the thesis that most of our concepts are innate, an argument that echoes a Chomskyan poverty of the stimulus argument. This argument has a significance over and above what it tells us about Wittgenstein’s implicit commitments. For, it puts considerable pressure on widely held contemporary views of concept learning, such as the view that we learn concepts by constructing prototypes. This should lead us to abandon our general default hostility to concept nativism and be much more sceptical of claims made on behalf of learning theories.
Non-cognitivists claim that thick concepts can be disentangled into distinct descriptive and evaluative components and that since thick concepts have descriptive shape they can be mastered independently of evaluation. In Non-Cognitivism and Rule-Following, John McDowell uses Wittgenstein’s rule-following considerations to show that such a non-cognitivist view is untenable. In this paper I do several things. I describe the non-cognitivist position in its various forms and explain its driving motivations. I then explain McDowell’s argument against non-cognitivism and the Wittgensteinian considerations upon which it relies, because it has been widely misunderstood by critics and rarely articulated by commentators. After clarifying McDowell’s argument against non-cognitivism, I extend the analysis to show that commentators of McDowell have failed to appreciate his argument and that critical responses have been weak. I argue against three challenges posed to McDowell, and show that the case of thick concepts should lead us to reject non-cognitivism.
A modest solution to the problem(s) of rule-following is defended against Kripkensteinian scepticism about meaning. Even though parts of it generalise to other concepts, the theory as a whole applies to response-dependent concepts only. It is argued that the finiteness problem is not nearly as pressing for such concepts as it may be for some other kinds of concepts. Furthermore, the modest theory uses a notion of justification as sensitivity to countervailing conditions in order to solve the justification problem. Finally, in order to solve the normativity problem, it relies on substantial specifications of normal conditions such as those that have been proposed by Crispin Wright and Mark Johnston, rather than on Philip Pettit's functionalist specification. This theory is modest in that it does not meet the demands of Kripke's sceptic in full. Arguments are provided as to why this is not needed.
According to a standard criticism, Robert Brandom's “normative pragmatics”, i.e. his attempt to explain normative statuses in terms of practical attitudes, faces a dilemma. If practical attitudes and their interactions are specified in purely non-normative terms, then they underdetermine normative statuses; but if normative terms are allowed into the account, then the account becomes viciously circular. This paper argues that there is no dilemma, because the feared circularity is not vicious. While normative claims do exhibit their respective authors' practical attitudes and thereby contribute towards establishing the normative statuses they are about, this circularity is not a mark of Brandom's explanatory strategy but a feature of social practice of which we theorists partake.
The aim of this paper is to discover whether or not a solitary individual, a human being isolated from birth, could become a rule-follower. The argumentation against this possibility rests on the claim that such an isolate could not become aware of a normative standard, with which her actions could agree or disagree. As a consequence, theorists impressed by this argumentation adopt a view on which the normativity of rules arises from corrective practices in which agents engage in a community. However, it has been suggested that an isolated individual could engage in such a practice by herself. Three prospective examples of such cases are considered, and the possibility of solitary rule-following is vindicated. Furthermore, the nature of the goals at which rule-following practices generally aim is clarified.
The paper explicates a version of dispositionalism and defends it against Kripke's objections (in his "Wittgenstein on Rules and Private Language") that 1) it leaves out the normative aspect of a rule, 2) it cannot account for the directness of the knowledge one has of what one meant, and 3) regarding rules for computable functions of numbers, a) there are numbers beyond one's capacity to consider and b) there are people who are disposed to make systematic mistakes in computing values of functions they understand perfectly well.
What is objectivity? What is the rule of law? Are the operations of legal systems objective? If so, in what ways and to what degrees are they objective? Does anything of importance depend on the objectivity of law? These are some of the principal questions addressed by Matthew H. Kramer in this lucid and wide-ranging study that introduces readers to vital areas of philosophical enquiry.
A simple method is provided for translating proofs in Gentzen's LK into proofs in Gentzen's LJ with the Peirce rule adjoined. A consequence is a simpler cut elimination operator for LJ + Peirce that is primitive recursive.
Many philosophers believe that agents are self-ruled only when ruled by their (authentic) selves. Though this view is rarely argued for explicitly, one tempting line of thought suggests that self-rule is just obviously equivalent to rule by the self. However, the plausibility of this thought evaporates upon close examination of the logic of ‘self-rule’ and similar reflexives. Moreover, attempts to rescue the account by recasting it in negative terms are unpromising. In light of these problems, this paper instead proposes that agents are self-ruled only when not ruled by others. One reason for favouring this negative social view is its ability to yield plausible conclusions concerning various manipulation cases that are notoriously problematic for nonsocial accounts of self-rule. A second reason is that the account conforms with ordinary usage. It is concluded that self-rule may be best thought of as an essentially social concept.
The view that psychological episodes have a physical nature (physicalism) and the view that they have a mental nature (Cartesian dualism) can be distinguished from the view that they have a purely normative nature. I explore some strands of a distinct, fourth view: psychological episodes are what they are because of the actual and possible relations of defeasible justification in which they stand; defeasible justification is an internal relation; it is not at bottom a normative matter; rule-following presupposes such internal relations; to follow a rule is not to break it.
Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, even all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision.
The article evaluates the Domain Postulate of the Classical Model of Science and the related Aristotelian prohibition rule on kind-crossing as interpretative tools in the history of the development of mathematics into a general science of quantities. Special reference is made to Proclus' commentary on the first book of Euclid's Elements, to the sixteenth-century translations of Euclid's work into Latin and to the works of Stevin, Wallis, Viète and Descartes. The prohibition rule on kind-crossing formulated by Aristotle in the Posterior Analytics is used to distinguish between conceptions that share the same name but are substantively different: for example the search for a broader genus including all mathematical objects; the search for a common character of different species of mathematical objects; and the effort to treat magnitudes as numbers.
In recent works, Chomsky has once more endorsed a computational view of rule-following, whereby to follow a rule is to operate certain computations on a subject’s mental representations. As is well known, this picture does not conform to what we may call the grammatical conception of rule-following outlined by Wittgenstein, whereby an elucidation of the concept of rule-following is aimed at by isolating grammatical statements regarding the phrase ‘to follow a rule’. As a result, Chomskyan and Wittgensteinian treatments of topics immediately connected with rule-following, namely linguistic competence and understanding, are utterly different from one another. There are two possible stances that computationalists like Chomsky may adopt with regard to the discrepancy between the two aforementioned modes of dealing with rule-following, namely a conciliatory and a non-conciliatory attitude. According to the former attitude, grammatical remarks on and computationally-oriented theories of rule-following investigate one and the same topic although admittedly at different levels, namely a conceptual and an empirical one. According to the latter attitude, grammatical remarks are just a preliminary step in the investigation of rule-following which scientific advancement, presently represented by computationally-oriented theories on this matter, is well entitled to put aside. In what follows, however, I will try to show that both stances are problematic. The conciliatory attitude simply does not work, for it hardly copes with the fact that the concept of rule-following does not supervene, even weakly, on the property of rule-following, namely the property instantiated in the mental/cerebral phenomena that computationally-oriented theories of rule-following study. To take the contrary attitude, on the other hand, is to end up with another disappointing result, namely that the computational treatment of rule-following ultimately deals with something different from that which we wished to gain knowledge of when we began our inquiry into rule-following.
Agents which perform inferences on the basis of unreliable information need an ability to revise their beliefs if they discover an inconsistency. Such a belief revision algorithm ideally should be rational, should respect any preference ordering over the agent’s beliefs (removing less preferred beliefs where possible) and should be fast. However, while standard approaches to rational belief revision for classical reasoners allow preferences to be taken into account, they typically have quite high complexity. In this paper, we consider belief revision for agents which reason in a simpler logic than full first-order logic, namely rule-based reasoners. We show that it is possible to define a contraction operation for rule-based reasoners, which we call McAllester contraction, which satisfies all the basic Alchourrón, Gärdenfors and Makinson (AGM) postulates for contraction (apart from the recovery postulate) and at the same time can be computed in polynomial time. We prove a representation theorem for McAllester contraction with respect to the basic AGM postulates (minus recovery), and two additional postulates. We then show that our contraction operation removes a set of beliefs which is least preferred, with respect to a natural interpretation of preference. Finally, we show how McAllester contraction can be used to define a revision operation which is also polynomial time, and prove a representation theorem for the revision operation.
I argue that rule consequentialism sometimes requires us to act in ways that we lack sufficient reason to act. And this presents a dilemma for Parfit. Either Parfit should concede that we should reject rule consequentialism (and, hence, Triple Theory, which implies it) despite the putatively strong reasons that he believes we have for accepting the view or he should deny that morality has the importance he attributes to it. For if morality is such that we sometimes have decisive reason to act wrongly, then what we should be concerned with, practically speaking, is not with the morality of our actions, but with whether our actions are supported by sufficient reasons. We could, then, for all intents and purposes just ignore morality and focus on what we have sufficient reason to do, all things considered. So if my arguments are cogent, they show that Parfit’s Triple Theory is either false or relatively unimportant in that we can, for all intents and purposes, simply ignore its requirements and just do whatever it is that we have sufficient reason to do, all things considered.
As is known, there is no rule satisfying Additivity on the complete domain of bankruptcy problems. This paper proposes a notion of partial Additivity in this context, to be called µ-additivity. We find that µ-additivity, together with two quite compelling axioms, anonymity and continuity, identifies the Minimal Overlap rule, introduced by O'Neill (1982).
Syntactic logics do not suffer from the problems of logical omniscience but are often thought to lack interesting properties relating to epistemic notions. By focusing on the case of rule-based agents, I develop a framework for modelling resource-bounded agents and show that the resulting models have a number of interesting properties.
In attempting to defend an interpretation of David Hume's moral and political philosophy connected to classical utilitarianism, a key role is played by the so-called problem of the "Sensible Knave", raised by Hume at the end of his most utilitarian work, the Enquiry Concerning the Principles of Morals. According to the classic interpretation of this passage, utilitarian rationality in politics would clash with morality, rendering the latter useless; in the political arena, then, the defense of a moral utilitarianism would be a self-contradictory task. In order to show, first, that Hume says nothing of the sort and, second, that he even indicates a way of overcoming this apparent contradiction between morality and rationality, we briefly analyze the arguments from which this "anti-utilitarian" standard interpretation derives, and we defend an interpretation of the Humean discussion of the supposed conflict between morality and rationality, or of rational incentives for immoral behavior, which better explains Hume's position on this problem. Finally, we propose a way of overcoming the morality/rationality contradiction through a rule-adjusted utilitarianism centered on the idea of the "progressive development of artificial institutions for the reinforcement of morality", which Hume himself suggests in other places where he approaches the topic of the apparent contradiction between "morality" and "knavery". We also propose possible lines of future development of this idea, among them its use in clarifying the relation of David Hume's thought to certain forms of contemporary liberalism.
In an incendiary 2010 Nature article, M. A. Nowak, C. E. Tarnita and E. O. Wilson present a savage critique of the best known and most widely used framework for the study of social evolution, W. D. Hamilton’s theory of kin selection. Over a hundred biologists have since rallied to the theory’s defence, but Nowak et al. maintain that their arguments ‘stand unrefuted’. Here I consider the most contentious claim Nowak et al. defend: that Hamilton’s rule, the core explanatory principle of kin selection theory, ‘almost never holds’. I first distinguish two versions of Hamilton’s rule in contemporary theory: a special version (HRS) that requires restrictive assumptions, and a general version (HRG) that does not. I then show that Nowak et al. are most charitably construed as arguing that HRS almost never holds, while HRG buys its generality at the expense of explanatory power. While their arguments against HRS are fairly uncontroversial, their arguments against HRG are more contentious, yet these have been largely overlooked in the ensuing furore. I consider the arguments for and against the explanatory value of HRG, with a view to assessing what exactly is at stake in the debate. I suggest that the debate hinges on issues concerning the causal interpretability of regression coefficients, and concerning the explanatory function Hamilton’s rule is intended to serve. (shrink)
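The special version of Hamilton's rule (HRS) mentioned here can be stated compactly: an altruistic trait is favoured when rb > c, where r is relatedness, b the benefit to the recipient and c the cost to the actor. A toy sketch under that special-form reading (the numeric values are illustrative assumptions):

```python
def hamilton_favored(r, b, c):
    """Special-form Hamilton's rule (HRS): altruism is favoured when r*b > c.

    r -- genetic relatedness between actor and recipient
    b -- fitness benefit conferred on the recipient
    c -- fitness cost borne by the actor
    """
    return r * b > c

# Helping a full sibling (r = 0.5) at cost 1 for benefit 3: favoured.
sibling_case = hamilton_favored(0.5, 3.0, 1.0)

# Helping an unrelated individual (r = 0) at any positive cost: not favoured.
stranger_case = hamilton_favored(0.0, 3.0, 1.0)
```

The general version (HRG) instead defines b and c as partial regression coefficients, which is precisely where the worries about causal interpretability mentioned in the abstract arise.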
People often have a strong intuitive sense that we ought to rescue those in serious need, even in cases where we could produce better outcomes by acting in other ways. It has become common in such cases to refer to this as the Rule of Rescue. Within the medical field this rule has predominantly been discussed in relation to decisions about whether to fund particular treatments. Whilst in this setting the arguments in favour of the Rule of Rescue have generally been found to be unconvincing, there are some reasons for thinking that it may have more of a role to play at the clinical level. In this article we examine three lines that such reasoning might take. In each case we argue that the reasons given do not support the adoption of a Rule of Rescue in clinical practice.
Usual derivations of Lüders's projection rule show that Lüders's rule is the rule required by quantum statistics to calculate the final state after an ideal (minimally disturbing) measurement. These derivations are at best inconclusive, however, when it comes to interpreting Lüders's rule as a description of individual state transformations. In this paper, I show a natural way of deriving Lüders's rule from well-motivated and explicit physical assumptions referring to individual systems. This requires, however, the introduction of a concept of individual state which is not standard.
David Gauthier and Edward McClennen have claimed that it could be rational to form an intention to A because it maximizes utility to intend to A, and that acting on such an intention could be rational even if it maximizes utility not to A. Michael Bratman has objected to this way of thinking, claiming that it is equivalent to the familiar rule-utilitarian mistake of rule-worship. The purpose of this paper is to argue that, so long as one is aware at the time of forming an intention to A that it maximizes utility not to A, then acting on that intention need not be rule worship, but the result of a rational refusal to reconsider an issue which has already been adequately considered.
The general chemistry curriculum includes a prelude that consumes nearly all of the first semester and occupies the first third of the typical textbook. This necessary prelude to the main event is comparable in scope to precalculus though not broken out as a formal ‘prechemistry’ course. Atomic orbitals account for much of this prelude-to-chemistry. By tradition, orbital theory is conveyed to the student in three disjunct pieces, presented in the following illogical order: the Pauli principle, the Aufbau principle, and Hund’s rule. (Often the n + l rule is tossed into the mix as well, though with no fixed place in the scheme). In the early twentieth century, as various researchers announced new insights into the atom at unpredictable intervals, no one could have been faulted for teaching orbitals in such a manner, catch-as-catch-can. A hundred years on, the vestiges of that (presumed) practice look wrong, and are indefensible. In the approach advocated here, orbitals would be taught as a single hierarchical rule-set, with the parts coherently sequenced as Aufbau–Hund–Pauli (and with Madelung’s n + l rule rehabilitated as part of Aufbau, no longer a free-floating mnemonic aid only). Logic aside, pragmatism offers its own argument for adopting this scheme: A tighter approach to Aufbau can lighten the ‘prechemistry’ burden significantly and bring the student that much sooner to chemistry itself.
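The Madelung n + l rule mentioned here is itself just a two-key sort: fill subshells in order of increasing n + l, breaking ties by increasing n. A short sketch of that ordering (the helper name is my own):

```python
def madelung_order(max_n):
    """List subshells (n, l) in Madelung filling order:
    ascending n + l, ties broken by ascending n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

# Conventional spectroscopic letters for l = 0..3
labels = "spdf"
order = [f"{n}{labels[l]}" for n, l in madelung_order(4)]
# The familiar filling sequence 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, ... falls out
```

Folding this single explicit ordering into Aufbau, as the author advocates, replaces the free-floating diagonal-arrow mnemonic with a stated rule.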
This essay investigates Xunzi’s political philosophy of ba dao (Hegemonic Rule). It argues that Xunzi’s practical philosophy of ba dao was developed in the course of resolving the tension between theory and practice latent in Mencius’s account of ba dao. Its central claim is that contra Mencius, who remained torn between his ideal political theory of ba dao and the practical utility and moral value of ba dao, Xunzi creatively re-appropriated ba dao as a “morally decent” (if not morally ideal) statecraft, within the parameters of practical Confucian philosophy. After examining the moral and political value of ba dao in both domestic and international governance, the essay concludes by arguing that Xunzi’s defense of ba dao should be understood in the context of what I call “negative Confucianism,” without which the realization of the Confucian moral-political ideal (or positive Confucianism) is impossible.
This paper offers an appraisal of Philip Pettit’s approach to the problem of how a finite set of examples can serve to represent a determinate rule, given that indefinitely many rules can be extrapolated from any such set. Negatively, I argue that Pettit’s so-called ethocentric theory of rule-following fails to deliver the solution to this problem that he sets out to provide. More constructively, I consider what further provisions are needed in order to advance Pettit’s distinctive general approach to the problem. I conclude that what is needed is a ‘no-priority’ account of rule-exemplification: that is, an account that (a) affirms the constitutive role of agents’ responses in the exemplification of rules but (b) denies the explanatory priority given to such responses in Pettit’s theory.
Kant’s Critique of Pure Reason contains an original and powerful semantics of singular cognitive reference which has important implications for epistemology and for philosophy of science. Here I argue that Kant’s semantics directly and strongly supports Newton’s Rule 4 of Philosophy in ways which support Newton’s realism about gravitational force. I begin with Newton’s Rule 4 of Philosophy and its role in Newton’s justification of realism about gravitational force (§2). Next I briefly summarize Kant’s semantics of singular cognitive reference (§3), and then show that it is embedded in and strongly supports Newton’s Rule 4, and that it rules out not only Cartesian physics (per Harper) but also Cartesian, infallibilist presumptions about empirical justification generally (§4). This result exposes a key fallacy in Bas van Fraassen’s original argument for his anti-realist Constructive Empiricism (§5).
Pretheoretically we hold that we cannot gain justification or knowledge through an epistemically circular reasoning process. Epistemically circular reasoning occurs when a subject forms the belief that p on the basis of an argument A, where at least one of the premises of A already presupposes the truth of p. It has often been argued that process reliabilism does not rule out that this kind of reasoning leads to justification or knowledge (cf. the so-called bootstrapping-problem or the easy-knowledge-problem). For some philosophers, this is a reason to reject reliabilism. Those who try to defend reliabilism have two basic options: (I) accept that reliabilism does not rule out circular reasoning (or bootstrapping), but argue that this kind of reasoning is not as epistemically “bad” as it seems, or (II) hold on to the view that circular reasoning (or bootstrapping) is epistemically “bad”, but deny that reliabilism really allows this kind of reasoning. Option (I) has been spelled out in several ways, all of which have been found to be problematic. Option (II) has not been discussed very widely. Vogel (J Philos 97:602–623, 2000) considers and quickly dismisses it on the basis of three reasons. Weisberg (Philos Phenomenol Res 81:525–548, 2010) has shown in detail that one of these reasons is unconvincing. In this paper I argue that the other two reasons are unconvincing as well and that therefore option (II) might in fact be a more promising starting point to defend reliabilism than option (I).
I. Recent years have witnessed a great resurgence of interest in the writings of the later Wittgenstein, especially those passages (roughly, Philosophical Investigations §§138–242 and Remarks on the Foundations of Mathematics, section VI) that are concerned with the topic of rules. Much of the credit for all this excitement, unparalleled since the heyday of Wittgenstein scholarship in the early 1960s, must go to Saul Kripke's Wittgenstein on Rules and Private Language. It is easy to explain why. To begin with, the dialectic Kripke uncovered from Wittgenstein's..
Semantic holists view what one's terms mean as a function of all of one's usage. Holists will thus be coherentists about semantic justification: showing that one's usage of a term is semantically justified involves showing how it coheres with the rest of one's usage. Semantic atomists, by contrast, understand semantic justification in a foundationalist fashion. Saul Kripke has, on Wittgenstein's behalf, famously argued for a type of skepticism about meaning and semantic justification. However, Kripke's argument has bite only if one understands semantic justification in foundationalist terms. Consequently, Kripke's arguments lead not to a type of skepticism about meaning, but rather to the conclusion that one should be a coherentist about semantic justification, and thus a holist about semantic facts.
This paper argues that most of the alleged straight solutions to the sceptical paradox which Kripke (1982) ascribed to Wittgenstein can be regarded as the first horn of a dilemma whose second horn is the paradox itself. The dilemma is proved to be a by-product of a foundationalist assumption on the notion of justification, as applied to linguistic behaviour. It is maintained that the assumption is unnecessary and that the dilemma is therefore spurious. To this end, an alternative conception of the justification of linguistic behaviour is outlined, a conception that vindicates some of the insights behind Kripke's Wittgenstein's sceptical solution of the paradox. This alternative conception is defended against two objections (both familiar from McDowell's works): (1) that it would imply that for the linguistic community there is no authority, no standard to meet and, therefore, no possibility of error and (2) that it would lead to a kind of idealism.
This paper employs some outcomes (for the most part due to David Lewis) of the contemporary debate on the metaphysics of dispositions to evaluate those dispositional analyses of meaning that make use of the concept of a disposition in ideal conditions. The first section of the paper explains why one may find appealing the notion of an ideal-condition dispositional analysis of meaning and argues that Saul Kripke’s well-known argument against such analyses is wanting. The second section focuses on Lewis’ work in the metaphysics of dispositions in order to call attention to some intuitions about the nature of dispositions that we all seem to share. In particular, I stress the role of what I call the ‘Actuality Constraint’. The third section of the paper maintains that the Actuality Constraint can be used to show that the dispositions with which ideal-condition dispositional analyses identify my meaning addition by ‘+’ do not exist (in so doing, I develop a suggestion put forward by Paul Boghossian). This immediately implies that ideal-condition dispositional analyses of meaning cannot work. The last section discusses a possible objection to my argument. The point of the objection is that the argument depends on an illicit assumption. I show (1) that, in fact, the assumption in question is far from illicit and (2) that even without this assumption it is possible to argue that the dispositions with which ideal-condition dispositional analyses identify my meaning addition by ‘+’ do not exist.
The inferentialist account of the a priori says that basic logical beliefs can be justified by way of rule circular inference. I argue that this account of the a priori fails to skirt the charge of begging the question, that the reasons offered in support of it are weak and that it makes justifying logical beliefs too easy. I also argue that recent modifications to inferentialism spell doom for it as a general theory of a priori justification.
Epistemic circularity occurs when a subject forms the belief that a faculty F is reliable through the use of F. Although this is often thought to be vicious, externalist theories generally don't rule it out. For some philosophers, this is a reason to reject externalism. However, Michael Bergmann defends externalism by drawing on the tradition of common sense in two ways. First, he concedes that epistemically circular beliefs cannot answer a subject's doubts about her cognitive faculties. But, he argues, subjects don't have such doubts, so epistemically circular beliefs are rarely called upon to play this role. Second, following Thomas Reid, Bergmann argues that we have noninferential, though epistemically circular, knowledge that our faculties are reliable. I argue, however, that Bergmann's view is undermined by doubts a subject should have and that there is no plausible explanation for how we can have noninferential knowledge that our faculties are reliable.
Many different modes of definition have been proposed over time, but none of them allows for circular definitions, since, according to the prevalent view, the term defined would then be lacking a precise signification. I argue that although circular definitions may at times fail uniquely to pick out a concept or an object, sense still can be made of them by using a rule of revision in the style adopted by Anil Gupta and Nuel Belnap in the theory of truth.
Sometimes the fact that something is the law can be justified by the law. For example, the Sarbanes-Oxley Act is the law because it was enacted by Congress pursuant to the Commerce Clause. But eventually legal justification of law ends. The ultimate criteria of validity in a legal system cannot themselves be justified by law. According to H.L.A. Hart, justification of these ultimate criteria is still available, by reference to social facts concerning official acceptance - facts about what Hart calls the "rule of recognition" for the system. Drawing upon criticisms of sociological accounts of the law that can be found in the writings of Hans Kelsen, I argue in this essay that Hart's approach cannot account for statements about the law that assert the independence of legal validity from rule of recognition facts. I offer as an alternative a legal quietist approach, which can account for such statements. For the quietist, legal justification exhausts the possible justification for law. If our judgments about the law are fundamental, in the sense that they cannot be justified by other judgments about the law, then they have no justification (which is not to say that they should be abandoned). I argue that legal quietism is exemplified - if somewhat imperfectly - in Kelsen's writings, and I end the essay by exploring some difficulties that the quietist approach must face.
In sec. 1.1 I emphasize the meliorative purpose of epistemology, and I characterize Goldman's epistemology as reliabilistic, cognitive, social, and meliorative. In sec. 1.2 I point out that Goldman's weak notion of knowledge is in conflict with our ordinary usage of 'knowledge'. In sec. 2 I argue for an externalist-internalist hybrid conception of justification which adds reliability-indicators to externalist knowledge. Reliability-indicators produce a veritistic surplus value for the social spread of knowledge. In sec. 3 I analyze some particular meliorative rules which have been proposed by Goldman. I prove that obedience to the rule of maximally specific evidence increases expected veritistic value (sec. 3.1), and I argue that rule-circular arguments are epistemically worthless (sec. 3.2). In the final sec. 3.3 I report a non-circular justification of meta-induction which has been developed elsewhere.
This book discusses theories of legal reasoning and provides an overall view of the rhetoric of legal justification. It shows how and why lawyers' arguments can be rationally persuasive even though rarely, if ever, logically conclusive or compelling. It examines the role of the "legal syllogism" and the universality of legal reasoning, looking at arguments of consequentialism and principle, and concludes by questioning the infallibility of judges as lawmakers.
What are the appropriate criteria for assessing a theory of morality? In this enlightening work, Brad Hooker begins by answering this question. He then argues for a rule-consequentialist theory which, in part, asserts that acts should be assessed morally in terms of impartially justified rules. In the end, he considers the implications of rule-consequentialism for several current controversies in practical ethics, making this clearly written, engaging book the best overall statement of this approach to ethics.
Neuroscience of Rule-Guided Behavior brings together, for the first time, the experiments and theories that have created the new science of rules. Rules are central to human behavior, but until now the field of neuroscience lacked a synthetic approach to understanding them. How are rules learned, retrieved from memory, maintained in consciousness and implemented? How are they used to solve problems and select among actions and activities? How are the various levels of rules represented in the brain, ranging from simple conditional ones ("if a traffic light turns red, then stop") to rules and strategies of such sophistication that they defy description? And how do brain regions interact to produce rule-guided behavior? These are among the most fundamental questions facing neuroscience, but until recently there was relatively little progress in answering them. It was difficult to probe brain mechanisms in humans, and expert opinion held that animals lacked the capacity for such high-level behavior. However, rapid progress in neuroimaging technology has allowed investigators to explore brain mechanisms in humans, while increasingly sophisticated behavioral methods have revealed that animals can and do use high-level rules to control their behavior. The resulting explosion of information has led to a new science of rules, but it has also produced a plethora of overlapping ideas and terminology and a field sorely in need of synthesis. In this book, Silvia Bunge and Jonathan Wallis bring together the world's leading cognitive and systems neuroscientists to explain the most recent research on rule-guided behavior. Their work covers a wide range of disciplines and methods, including neuropsychology, functional magnetic resonance imaging, neurophysiology, electroencephalography, neuropharmacology, near-infrared spectroscopy, and transcranial magnetic stimulation.
This unprecedented synthesis is a must-read for anyone interested in how complex behavior is controlled and organized by the brain.
This is a short, and therefore necessarily very incomplete discussion of one of the great questions of modern philosophy. I return to a station at which an interpretative train of thought of mine came to a halt in a paper written almost 20 years ago, about Wittgenstein and Chomsky, hoping to advance a little bit further down the track. The rule-following passages in the Investigations and Remarks on the Foundations of Mathematics in fact raise a number of distinct (though connected) issues about rules, meaning, objectivity, and reasons, whose conflation is encouraged by the standard caption, "the Rule-following Considerations". So, let me begin by explaining my focus here.
Anandi Hattiangadi packs a lot of argument into this lucid, well-informed and lively examination of the meaning scepticism which Kripke ascribes to Wittgenstein. Her verdict on the success of the sceptical considerations is mixed. She concludes that they are sufficient to rule out all accounts of meaning and mental content proposed so far. But she believes that they fail to constitute, as Kripke supposed they did, a fully general argument against the possibility of meaning or content. Even though we are not now in a position to specify facts in which meaning consists, the view that there are such facts, and more specifically that they satisfy the intuitive conception of meaning which she labels ‘semantic realism’, remains a live option. Moreover, given..
Moral particularists have seen Wittgenstein as a close ally. One of the main reasons for this is that particularists such as Jonathan Dancy and John McDowell have argued that Wittgenstein's so-called "rule-following considerations" (RFCs) provide support for their skepticism about the existence and/or role of rules and principles in ethics. In this paper, I show that while Wittgenstein's RFCs challenge the notion that competence with language, i.e., the ability to apply concepts properly, is like mechanically following a rule, he does not reject the idea that there are rules that govern proper use of language. I then argue that while the RFCs may, at best, support a weak form of particularism that denies that moral competence is dependent on an explicit grasp of rules, they do not support a stronger version of particularism that denies that there are any true rules or principles in ethics.
I examine Plato's claim in the Republic that philosophers must rule in a good city and Aristotle's attitude towards this claim in his early, and little discussed, work, the Protrepticus. I argue that in the Republic, Plato's main reason for having philosophers rule is that they alone understand the role of philosophical knowledge in a good life and how to produce characters that love such knowledge. He does not think that philosophic knowledge is necessary for getting right the vast majority of judgments about actions open to assessment as virtuous or vicious. I argue that in the Protrepticus Aristotle accepts similar reasons for the rule of philosophers, but goes beyond the Republic and seems to suggest that philosophic knowledge is required for getting right ethical and political judgments in general. I close by noting some connections with Aristotle's later views in the Eudemian Ethics, the Nicomachean Ethics, and the Politics. Footnote: For comments on an earlier draft of this paper, I thank the other contributors to this volume, as well as Aditi Iyer and Rachana Kamtekar.
The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is framed with respect to the good of mankind; but our practice must be always shaped immediately by the rule.” (Berkeley 1712, section 31) Writers often classed as rule-consequentialists include Austin 1832; Harrod 1936; Toulmin 1950; Urmson 1953; Harrison 1953; Mabbott 1953; Singer 1955; 1961; and most prominently Brandt 1959; 1963; 1967; 1979; 1989; 1996; and Harsanyi 1977; 1982; 1993. See also Rawls 1955; Hospers 1972; Haslett 1987; 1994, ch. 1; 2000; Attfield 1987, 103-12; Barrow 1991, ch. 6; Johnson 1991; Riley 1998; 2000; Shaw 1999; and Hooker 2000. Whether J. S. Mill's ethics was rule-consequentialist is controversial (Urmson 1953; Crisp 1997, 102-33).
The purpose of this paper is to look at the problem of rule-following—notably discussed by Kripke (Wittgenstein on rules and private language, 1982) and Wittgenstein (Philosophical investigations, 1953)—from the perspective of the study of generics. Generics are sentences that express generalizations that tolerate exceptions. I first suggest that meaning ascriptions be viewed as habitual sentences, which are a sub-set of generics. I then seek a proper semantic analysis for habitually construed meaning sentences. The quantificational approach is rejected, due to its persistent difficulties. Instead, a cognitive approach is adopted, where psychological considerations of meaning attributors play a crucial role. This account is then compared with the picture of meaning offered by Kripke and Wittgenstein, respectively. I show how this fresh way of conceiving of meaning sentences respects some of their insights while avoiding some of the drawbacks, and serves to improve the framework in which the current debate and inquiry about rule-following are conducted.
John McDowell has suggested recently that there is a route from his favoured solution to Kripke's Wittgenstein's "sceptical paradox" about rule-following to a particular form of cognitive externalism. In this paper, I argue that this is not the case: even granting McDowell his solution to the rule-following paradox, his preferred version of cognitive externalism does not follow.
One of the principal lessons of The Concept of Law is that legal systems are not only comprised of rules, but founded on them as well. As Hart painstakingly showed, we cannot account for the way in which we talk and think about the law - that is, as an institution which persists over time despite turnover of officials, imposes duties and confers powers, enjoys supremacy over other kinds of practices, resolves doubts and disagreements about what is to be done in a community and so on - without supposing that it is at bottom regulated by what he called the secondary rules of recognition, change and adjudication. Given this incontrovertible demonstration that every legal system must contain rules constituting its foundation, it might seem puzzling that many philosophers have contested Hart's view. In particular, they have objected to his claim that every legal system contains a rule of recognition. More surprisingly, these critiques span different jurisprudential schools. Positivists such as Joseph Raz, as well as natural lawyers such as Ronald Dworkin and John Finnis, have been among Hart's most vocal critics. In this essay, I would like to examine the opposition to the rule of recognition. What is objectionable about Hart's doctrine? Why deny that every legal system necessarily contains a rule setting out the criteria of legal validity? And are these objections convincing? Does the rule of recognition actually exist? This essay has five parts. In Part One, I try to state Hart's doctrine of the rule of recognition with some precision. As we will see, this task is not simple, insofar as Hart's position on this crucial topic is often frustratingly unclear. I also explore in this part whether the United States Constitution, or any of its provisions, can be considered the Hartian rule of recognition for the United States legal system. In Part Two, I attempt to detail the many roles that the rule of recognition plays within Hart's theory of law.
In addition to the function that Hart explicitly assigned to it, namely, the resolution of normative uncertainty within a community, I argue that the rule of recognition, and the secondary rules more generally, also account for the law's dexterity, efficiency, normativity, continuity, persistence, supremacy, independence, identity, validity, content and existence. In Part Three, I examine three important challenges to Hart's doctrine of the rule of recognition. They are: 1) Hart's rule of recognition is under- and over-inclusive; 2) Hart cannot explain how social practices are capable of generating rules that confer powers and impose duties and hence cannot account for the normativity of law; 3) Hart cannot explain how disagreements about the criteria of legal validity that occur within actual legal systems, such as in American law, are possible. In Parts Four and Five, I address these various objections. I argue that although Hart's particular account of the rule of recognition is flawed and should be rejected, a related notion can be fashioned and should be substituted in its place. The idea, roughly, is to treat the rule of recognition as a shared plan which sets out the constitutional order of a legal system. As I try to show, understanding the rule of recognition in this new way allows the legal positivist to overcome the challenges lodged against Hart's version while still retaining the power of the original idea.
David Bloor's challenging new evaluation of Wittgenstein's account of rules and rule-following brings together the rare combination of philosophical and sociological viewpoints. Wittgenstein enigmatically claimed that the way we follow rules is an "institution" without ever explaining what he meant by this term. Wittgenstein's contribution to the debate has since been subject to sharply opposed interpretations by "collectivist" and "individualist" readings by philosophers; in the light of this controversy, Bloor argues convincingly for a collectivist, sociological understanding of Wittgenstein's later work. Accessible and simply written, this book provides the first consistent sociological reading of Wittgenstein's work for many years.
In Making it Explicit, Brandom aims to articulate an account of conceptual content that accommodates its normativity--a requirement on theories of content that Brandom traces to Wittgenstein's rule following considerations. It is widely held that the normativity requirement cannot be met, or at least not with ease, because theories of content face an intractable dilemma. Brandom proposes to evade the dilemma by adopting a middle road--one that uses normative vocabulary, but treats norms as implicit in practices. I argue that this proposal fails to evade the dilemma, as Brandom himself understands it. Despite his use of normative vocabulary, Brandom's theory fares no better than the reductionist theories he criticises. I consider some responses that Brandom might make to my charges, and finally conclude that his proposal founders on his own criteria.
Democracy is commonly associated with political equality and/or majority rule. This essay shows that these three ideas are conceptually separate, so the transition from any one to another stands in need of further substantive argument, which is not always adequately given. It does this by offering an alternative decision-making mechanism, called lottery voting, in which all individuals cast votes for their preferred options but, instead of these being counted, one is randomly selected and that vote determines the outcome. This procedure is democratic and egalitarian, since all have an equal chance to influence outcomes, but obviously not majoritarian.
Fixed-rate versions of rule-consequentialism and rule-utilitarianism evaluate rules in terms of the expected net value of one particular level of social acceptance, but one far enough below 100% social acceptance to make salient the complexities created by partial compliance. Variable-rate versions of rule-consequentialism and rule-utilitarianism instead evaluate rules in terms of their expected net value at all different levels of social acceptance. Brad Hooker has advocated a fixed-rate version. Michael Ridge has argued that the variable-rate version is better. The debate continues here. Of particular interest is the difference between the implications of Hooker's and Ridge's rules about doing good for others.
The popularity of rule-consequentialism among philosophers has waxed and waned. Waned, mostly; at least lately. The idea that the morality that ought to claim allegiance is the ideal code of rules whose acceptance by everybody would bring about best consequences became the object of careful analysis about half a century ago, in the writings of J. J. C. Smart, John Rawls, David Lyons, Richard Brandt, Richard Hare, and others.1 They considered utilitarian versions of rule consequentialism but discovered flaws in the view that attach to the wider consequentialist doctrine. In the eyes of many, the flaws were decisive. Brad Hooker has produced brilliant work that unsettles this complacent consensus.2 Over a period of several years he has produced a sustained and powerful defense of a version of rule consequentialism that does not obviously succumb to the criticisms that have been thought to render this doctrine a nonstarter. He acknowledges intellectual debts to Richard Brandt. But Hooker avoids certain excrescences in Brandt’s efforts to conceive of morality as an ideal code of rules. Most notably, Hooker eschews Brandt’s misguided attempt to derive some version of rule utilitarianism from an underlying commitment to some form of contractualism. Moreover, Hooker has worked to articulate a version of rule consequentialism in sufficient detail that one can see how the different parts of the doctrine hang together and how the best version of the..
This paper proposes a causal-dispositional account of rule-following as it occurs in reasoning and intentional agency. It defends this view against Kripke’s (1982) objection to dispositional accounts of rule-following, and it proposes a solution to the problem of deviant causal chains. In the first part, I will outline the causal-dispositional approach. In the second part, I will follow Martin and Heil’s (1998) realist response to Kripke’s challenge. I will propose an account that distinguishes between two kinds of rule-conformity and two kinds of rule-following, and I will defend the realist approach against two challenges that have recently been raised by Handfield and Bird (2008). In the third part, I will turn to the problem of deviant causal chains, and I will propose a new solution that is partly based on the realist account of rule-following.
We argue that the dead donor rule, which states that multiple vital organs should only be taken from dead patients, is justified neither in principle nor in practice. We use a thought experiment and a guiding assumption in the literature about the justification of moral principles to undermine the theoretical justification for the rule. We then offer two real world analogues to this thought experiment, voluntary active euthanasia and capital punishment, and argue that the moral permissibility of terminating any patient through the removal of vital organs cannot turn on whether or not the practice violates the dead donor rule. Next, we consider practical justifications for the dead donor rule. Specifically, we consider whether there are compelling reasons to promulgate the rule even though its corresponding moral principle is not theoretically justified. We argue that there are no such reasons. In fact, we argue that promulgating the rule may actually decrease public trust in organ procurement procedures and medical institutions generally, even in states that do not permit capital punishment or voluntary active euthanasia. Finally, we examine our case against the dead donor rule in the light of common arguments for it. We find that these arguments are often misplaced: they do not support the dead donor rule. Instead, they support the quite different rule that patients should not be killed for their vital organs.
Ideas on meaning, rules and mathematical proofs abound in Wittgenstein’s writings. The undeniable fact that they are present together, sometimes intertwined in the same passage of Philosophical Investigations or Remarks on the Foundations of Mathematics, does not show, however, that the connection between these ideas is necessary or inextricable. The possibility remains, and ought to be checked, that they can be plausibly and consistently separated. I am going to examine two views detectable in Wittgenstein’s works: one about proofs, the other about meaning and rules. The first is the denial of the objectivity of proof. The second is a conception of meaning stemming from the rule-following considerations. I shall argue that, though Wittgenstein seems to conjoin the two views, they can be, and should be, separated.
In her (1996) Kadri Vihvelin argues that autoinfanticide is nomologically impossible and so that there is no sense in which time travelers are able to commit it. In response, Theodore Sider (2002) defends the original Lewisian verdict (Lewis 1976) whereby, on a common understanding of ability, time travelers are able to kill their earlier selves and their failure to do so is merely coincidental. This paper constitutes a critical note on arguments put forward by both Sider and Vihvelin. I argue that although Sider’s criticism starts out promisingly he doesn’t succeed in establishing that Vihvelin’s analysis fails, because (a) he neglects to rule out a class of counterfactuals to which Vihvelin’s sample-case may belong; and (b) (together with Lewis) he is wrong to suggest that future facts are irrelevant in the evaluation of time travelers’ abilities. I show instead that Vihvelin’s argument is viciously circular, indicating that even if there are nomological constraints on autoinfanticide these cannot be established a priori.
Pierre Bourdieu has developed a philosophy of social science, grounded in the phenomenological tradition, which treats knowledge as a practical ability embodied in skilful behaviour, rather than an intellectual capacity for the representation and manipulation of propositional knowledge. He invokes Wittgenstein’s remarks on rule-following as one way of explicating the idea that knowledge is a skill. Bourdieu’s conception of tacit knowledge is a dispositional one, adopted to avoid a perceived dilemma for methodological individualism. That dilemma requires either the explanation of regularities in social behaviour as the result of the tacit representation of procedural rules (‘legalism’) or the self-conscious representation of behavioural goals (‘voluntarism’) by individuals. After explaining the apparent dilemma, I then argue that Wittgenstein’s remarks on rule following actually undermine, rather than support, a dispositional solution. Nonetheless, the philosophy of social science can survive without a dispositional account of knowledge. Such a social science needs, firstly, to embrace one horn of the dilemma, voluntarism, provided that the relevant regularities can be explained as unintended consequences of agents’ self-represented intentions. Secondly, such a social science should treat theorists’ interpretations as unifying generalizations, not hypotheses about the acquisition of tacit knowledge. Finally, where appeal to cognitive psychology can distinguish otherwise equivalent theories in social science, social science should incorporate the data of cognitive psychology concerning tacit mental processes.
The duty to keep promises has many aspects associated with deontological moral theories. The duty to keep promises is non-welfarist, in that the obligation to keep a promise need not be conditional on there being a net benefit from keeping the promise—indeed need not be conditional on there being at least someone who would benefit from its being kept. The duty to keep promises is more closely connected to autonomy than directly to welfare: agents have moral powers to give themselves certain obligations to others. And these moral powers, which enable promisors to create agent-relative obligations to promisees, correlate with rights the promisees acquire in the process, such as rights to waive the duty or insist on its performance. As a result of promises, promisees acquire (not only rights but also) a special status: the promisees are the ones wronged when promises to them that they have not waived are not kept. One more aspect of the duty to keep promises that is associated with deontological moral theories is that what actions the duty requires is at least partly backward-looking: what actions the duty requires depends on facts about the past, namely facts about what promises were made and then waived or not. This paper surveys these aspects of the duty to keep promises and then explores whether rule-consequentialism can be reconciled with them.
I argue that a target of the rule-following considerations is the thought that there are mental episodes in which a consciously accessible item guides me in my decision to respond in a certain way when I follow a rule. I contend that Wittgenstein’s position on this issue invokes a distinction between a literal and a symbolic reading of the claim that these processes of guidance take place. In the literal sense he rejects the claim, but in the symbolic sense he sees nothing wrong with it. I consider some arguments that Wittgenstein deploys against the literal sense of the claim.
When faced with a rule that they take to be true, and a recalcitrant example, people are apt to say: “The exception proves the rule”. When pressed on what they mean by this, though, things are often less than clear. A common response is to dredge up some once-heard etymology: ‘proves’ here, it is often said, means ‘tests’. But this response—its frequent appearance even in some reference works notwithstanding—makes no sense of the way in which the expression is used. To insist that the exception proves the rule is to insist that whilst this is an exception, the rule still stands; and furthermore, that, rather than undermining the rule, the exception serves to confirm it. This second claim may seem paradoxical, but it should not, once it is realized that what does the confirming is not the exception itself, but rather the fact that we judge it to be an exception; and that what is confirmed is not the rule itself, but rather the fact that we judge it to be a rule. To treat something as an exception is not to treat it as a counterexample that refutes the existence of the rule. Rather it is to treat it as special, and so to concede the rule from which it is excepted. The point comes out clearly in the original (probably 17th Century) Latin form: Exceptio probat (figit) regulam in casibus non exceptis. Exception (i.e. the act of excepting) proves (establishes) the rule in the cases not excepted. Clearly this form of reasoning cannot apply when the rule that we are considering has the form of a simple universal generalization. Here there can be no exceptions, only counterexamples. So what we need, and what will be developed...
It is often argued that the rule of law is only instrumentally morally valuable, valuable when and to the extent that a legal system is used to pursue morally valuable ends. In this paper, I defend Lon Fuller’s view that the rule of law has conditional non-instrumental as well as instrumental moral value. I argue, along Fullerian lines, that the rule of law is conditionally non-instrumentally valuable in virtue of the way a legal system structures political relationships. The rule of law specifies a set of requirements which lawmakers must respect if they are to govern legally. As such, the rule of law restricts the illegal or extra-legal use of power. When a society rules by law, there are clear rules articulating the behavior appropriate for citizens and officials. Such rules ideally determine the particular contours political relationships will take. When the requirements of the rule of law are respected, the political relationships structured by the legal system constitutively express the moral values of reciprocity and respect for autonomy. The rule of law is instrumentally valuable, I argue, because in practice the rule of law limits the kind of injustice which governments pursue. There is in practice a deeper connection between ruling by law and the pursuit of moral ends than advocates of the standard view recognize. The next part of this paper outlines Lon Fuller’s conception of the rule of law and his explanation of its moral value. The third...