Solidarity, the reciprocal relations of trust and obligation between citizens that are essential for a thriving polity, is a basic goal of all political communities. Yet it is extremely difficult to achieve, especially in multiracial societies. In an era of increasing global migration and democratization, the issue is perhaps more pressing than ever before. In the past few decades, racial diversity and the problems of justice that often accompany it have risen dramatically throughout the world. The issue features prominently nearly everywhere: from the United States, where it has been a perennial social and political problem, to Europe, which has experienced an unprecedented influx of Muslim and African immigrants, to Latin America, where the rise of vocal black and indigenous movements has brought the question to the fore. Political theorists have long wrestled with the topic of political solidarity, but they have had little to say about the impact of race on such solidarity, except to claim that what is necessary is to move beyond race. The prevailing approach has been to ask: How can a multicultural and multiracial polity, with all of the different allegiances inherent in it, be transformed into a unified, liberal one? Juliet Hooker flips this question around. In multiracial and multicultural societies, she argues, the practice of political solidarity has been indelibly shaped by the social fact of race. The starting point should thus be the existence of racialized solidarity itself: How can we create political solidarity when racial and cultural diversity are more or less permanent? Rather than claiming that the best way to deal with the problem of racism is to abandon the concept of race altogether, Hooker stresses the importance of coming to terms with racial injustice, and she explores the role that it plays in both the United States and Latin America.
Coming to terms with the lasting power of racial identity, she contends, is the starting point for any political project attempting to achieve solidarity.
The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is framed with respect to the good of mankind; but our practice must be always shaped immediately by the rule.” (Berkeley 1712, section 31) Writers often classed as rule-consequentialists include Austin 1832; Harrod 1936; Toulmin 1950; Urmson 1953; Harrison 1953; Mabbott 1953; Singer 1955; 1961; and most prominently Brandt 1959; 1963; 1967; 1979; 1989; 1996; and Harsanyi 1977; 1982; 1993. See also Rawls 1955; Hospers 1972; Haslett 1987; 1994, ch. 1; 2000; Attfield 1987, 103-12; Barrow 1991, ch. 6; Johnson 1991; Riley 1998; 2000; Shaw 1999; and Hooker 2000. Whether J. S. Mill's ethics was rule-consequentialist is controversial (Urmson 1953; Crisp 1997, 102-33).
Fixed-rate versions of rule-consequentialism and rule-utilitarianism evaluate rules in terms of the expected net value of one particular level of social acceptance, but one far enough below 100% social acceptance to make salient the complexities created by partial compliance. Variable-rate versions of rule-consequentialism and rule-utilitarianism instead evaluate rules in terms of their expected net value at all different levels of social acceptance. Brad Hooker has advocated a fixed-rate version. Michael Ridge has argued that the variable-rate version is better. The debate continues here. Of particular interest is the difference between the implications of Hooker's and Ridge's rules about doing good for others.
What are the appropriate criteria for assessing a theory of morality? In this enlightening work, Brad Hooker begins by answering this question. He then argues for a rule-consequentialist theory which, in part, asserts that acts should be assessed morally in terms of impartially justified rules. In the end, he considers the implications of rule-consequentialism for several current controversies in practical ethics, making this clearly written, engaging book the best overall statement of this approach to ethics.
The term ‘moral particularism’ has been used to refer to different doctrines. The main body of this paper begins by identifying the most important doctrines associated with the term, at least as the term is used by Jonathan Dancy, on whose work I will focus. I then discuss whether holism in the theory of reasons supports moral particularism, and I call into question the thesis that particular judgements have epistemological priority over general principles. Dancy’s recent book Ethics without Principles (Dancy 2004) makes much of a distinction between reasons, enablers, disablers, intensifiers, and attenuators. I will suggest that the distinction is unnecessary, and I will argue that, even if there is such a distinction, it does not entail moral particularism. In the final two sections, I try to give improved versions of arguments against particularism that I put forward in my paper ‘Moral Particularism: Wrong and Bad’ (Hooker 2000b: 1–22, esp. pp. 7–11, 15–22).
This paper replies to Carson's attacks on an earlier paper of Hooker's. Carson argued that rule-consequentialism--the theory that an act is morally right if and only if it is allowed by the set of rules and corresponding virtues the having of which by everyone would bring about the best consequences considered impartially--can and does require the comfortably off to make enormous sacrifices in order to help the needy. Hooker defends rule-consequentialism against Carson's arguments.
The purpose of this paper and its sister paper (Farrell and Hooker, b) is to present, evaluate and elaborate a proposed new model for the process of scientific development: self-directed anticipative learning (SDAL). The vehicle for its evaluation is a new analysis of a well-known historical episode: the development of ape-language research. In this first paper we outline five prominent features of SDAL that will need to be realized in applying SDAL to science: (1) interactive exploration of possibility space; (2) self-directedness; (3) localization of success and error; (4) synergistic increase in learning capacity; and (5) continuity of the SDAL process across scientific change. In this paper we examine the first three features of SDAL in relation to the early history of ape-language research. We show that this history is readily explicated as a self-directed, ever-finer delineation of possibility space that enables the localization of both success and error. Paper II examines the last two features against this history.
The purpose of this paper and its sister paper I (Farrell and Hooker, a) is to present, evaluate and elaborate a proposed new model for the process of scientific development: self-directed anticipative learning (SDAL). The vehicle for its evaluation is a new analysis of a well-known historical episode: the development of ape-language research. Paper I examined the basic features of SDAL in relation to the early history of ape-language research. In this second paper we examine the reconceptualization of ape-language research following what many conceived to be Terrace's refutation of ape language. We show that the apparent 'revolution' in our understanding of ape linguistic capacities was not based upon 'revolutionary' research different in kind from 'normal' research. The same process of self-directed interactive exploration of possibility space, which enables a homing-in upon both error and success, is present in all phases of productive science. Moreover, conceiving science as an SDAL process explains how scientists learn how to learn about their research domain.
With respect to morality, the term ‘impartiality’ is used to refer to quite different things. My paper will focus on three: (1) impartial application of good (first-order) moral rules; (2) impartial benevolence as the direct guide to decisions about what to do; and (3) impartial assessment of (first-order) moral rules. What are the relations among these three? Suppose there was just one good (first-order) moral rule, namely, that one should choose whatever one thinks will maximize aggregate good. If there were just this one moral rule, then impartial application of that one rule might be compatible with impartial benevolence as the direct guide to decisions about what to do. But now suppose there are other good moral rules, such as ones that prohibit certain kinds of act, ones that permit some degree of preferential concern for oneself, and ones that require some degree of preference for one’s friends and family in one’s decisions about how to allocate one’s time, attention, and other resources. If there are these other good rules, then at least sometimes impartially applying and complying with them will conflict with letting impartial benevolence dictate what to do. More importantly, we can reject impartial benevolence as the direct guide to decisions about what to do while endorsing impartial application of good (first-order) moral rules. Likewise, rejecting impartial benevolence as the direct guide to decisions about what to do does not entail rejecting impartial assessment of (first-order) moral rules. Section 1 of this paper argues that impartiality in the application of good moral rules is always appropriate. Section 2 argues that impartial benevolence as a direct guide to decisions about what to do is appropriate only sometimes. Section 3 argues that impartiality in the assessment of rules is or is not appropriate, depending on how plausible the impartially selected rules are.
An international line-up of fourteen distinguished philosophers presents new essays in honor of James Griffin, White's Professor of Moral Philosophy at Oxford University. The essays take up topics relating to well-being and morality, prominent themes in contemporary ethics and particularly in Griffin's work. Griffin himself provides replies to these essays, offering a fascinating development of his own thinking on these topics.
Virtue ethics is normally taken to be an alternative to consequentialist and Kantian moral theories. I shall discuss what I think is the most interesting version of virtue ethics – Rosalind Hursthouse's. I shall then argue that her version is inadequate in ways that suggest revision in the direction of a kind of rule-consequentialism.
The main body of this paper assesses a leading recent theory of fairness, a theory put forward by John Broome. I discuss Broome's theory partly because of its prominence and partly because I think it points us in the right direction, even if it takes some missteps. In the course of discussing Broome's theory, I aim to cast light on the relation of fairness to consistency, equality, impartiality, desert, rights, and agreements. Indeed, before I start assessing Broome's theory, I discuss two very popular conceptions of fairness that contrast with his. One of these very popular conceptions identifies fairness with the equal and impartial application of rules. The other identifies fairness with all-things-considered moral rightness.
The duty to keep promises has many aspects associated with deontological moral theories. The duty to keep promises is non-welfarist, in that the obligation to keep a promise need not be conditional on there being a net benefit from keeping the promise—indeed need not be conditional on there being at least someone who would benefit from its being kept. The duty to keep promises is more closely connected to autonomy than directly to welfare: agents have moral powers to give themselves certain obligations to others. And these moral powers, which enable promisors to create agent-relative obligations to promisees, correlate with rights the promisees acquire in the process, such as rights to waive the duty or insist on its performance. As a result of promises, promisees acquire (not only rights but also) a special status: the promisees are the ones wronged when promises to them that they have not waived are not kept. One more aspect of the duty to keep promises that is associated with deontological moral theories is that what actions the duty requires is at least partly backward-looking: what actions the duty requires depends on facts about the past, namely facts about what promises were made and then waived or not. This paper surveys these aspects of the duty to keep promises and then explores whether rule-consequentialism can be reconciled with them.
A timely and penetrating investigation, this book seeks to transform moral philosophy. In the face of continuing disagreement about which general moral principles are correct, there has been a resurgence of interest in the idea that correct moral judgements can be only about particular cases. This view--moral particularism--forecasts a revolution in ordinary moral practice that has until now consisted largely of appeals to general moral principles. Moral particularism also opposes the primary aim of most contemporary normative moral theory that attempts to show that either one general principle, or a set of general principles, is superior to all its rivals.
This paper’s first section invokes a relevant meta-ethical principle about what a moral theory needs in order to be plausible and superior to its rivals. In subsequent sections, I try to pinpoint exactly what the demandingness objection has been taken to be. I try to explain how the demandingness objection developed in reaction to impartial act-consequentialism’s requirement of beneficence toward strangers. In zeroing in on the demandingness objection, I distinguish it from other, more or less closely related, objections. In particular, I discuss arguments put forward by Bernard Williams concerning integrity, Samuel Scheffler on prerogatives, and Liam Murphy on fairness. The final part of the paper acknowledges some ways in which vagueness bedevils my own rule-consequentialism’s rules about doing good and preventing disasters.
Rule-consequentialism has been accused of either collapsing into act-consequentialism or being internally inconsistent. I have tried to develop a form of rule-consequentialism without these flaws. In this June's issue of Utilitas, Robert Card argued that I have failed. Here I assess his arguments.
All the major inter-theoretic relations of fundamental science are asymptotic ones, e.g. quantum theory as Planck's constant h → 0, yielding (roughly) Newtonian mechanics. Thus asymptotics ultimately grounds claims about inter-theoretic explanation, reduction and emergence. This paper examines four recent, central claims by Batterman concerning asymptotics and reduction. While these claims are criticised, the discussion is used to develop an enriched, dynamically-based account of reduction and emergence, to show its capacity to illuminate the complex variety of inter-theory relationships in physics, and to provide a principled resolution to such persistent philosophical problems as multiple realisability and the nature of the special sciences.
This paper employs (and defends where needed) a familiar four-part methodology for assessing moral theories. This methodology makes the most popular kind of moral pluralism--here called Ross-style pluralism--look extremely attractive. The paper contends, however, that, if rule-consequentialism's implications match our considered moral convictions as well as Ross-style pluralism's implications do, the methodology makes rule-consequentialism look even more attractive than Ross-style pluralism. The paper then attacks two arguments recently put forward in defence of Ross-style pluralism. One of these arguments is that no moral theory containing some single normative principle to justify general pro tanto duties can do justice to the ineliminable role of judgment in moral thinking. The other argument is that no such theory is plausible in light of the fact that our moral ideas come from disparate historical sources.
Donald Campbell has long advocated a naturalist epistemology based on a general selection theory, with the scope of knowledge restricted to vicarious adaptive processes. But being a vicariant is problematic because it involves an unexplained epistemic relation. We argue that this relation is to be explicated organizationally in terms of the regulation of behavior and internal state by the vicariant, but that Campbell's selectionist approach can give no satisfactory account of it because it is opaque to organization. We show how organizational constraints and capacities are crucial to understanding both evolution and cognition and conclude with a proposal for an enriched, generalized model of evolutionary epistemology that places high-order regulatory organization at the center.
This essay explores the reasons for thinking that Scanlon's contractualist principle serves merely as a 'spare wheel', an element that spins along nicely but bears no real weight, because it presupposes too much of what it should be explaining. The ambitions and scope of Scanlon's contractualism are discussed, as is Scanlon's thesis that contractualism will assess candidate moral principles individually rather than as sets. The final third of the paper criticizes Scanlon's account of fairness and his approach to cases where agents can save either one person or many people.
It is argued that fundamental to Piaget's life works is a biologically based naturalism in which the living world is a nested complex of self-regulating, self-organising (constructing) adaptive systems. A structuralist-rationalist overlay on this core position is distinguished and it is shown how it may be excised without significant loss of content or insight. A new and richer conception of the nature of Piaget's genetic epistemology emerges, one which enjoys rich interrelationships with evolutionary epistemology. These are explored and it is shown how a regulatory systems evolutionary epistemology may be embedded within genetic epistemology.
Analytic moral philosophy's strong divide between empirical and normative restricts facts to providing information for the application of norms and does not allow them to confront or challenge norms. So any genuine attempt to incorporate experience and empirical research into bioethics – to give the empirical more than the status of mere 'descriptive ethics'– must make a sharp break with the kind of analytic moral philosophy that has dominated contemporary bioethics. Examples from bioethics and science are used to illustrate the problems with the method of application that philosophically prevails in both domains and with the conception of rationality that underlies this method. Cues from how these problems can be handled in science then introduce summaries of richer, more productive naturalist and constructivist accounts of reason and normative knowledge. Liberated by a naturalist approach to ethics and an enlarged conception of rationality, empirical work can be recognized not just as essential to bioethics but also as contributing to normative knowledge.
Most of us believe morality requires us to help the desperately needy. But most of us also believe morality doesn't require us to make enormous sacrifices in order to help people who have no special connection with us. Such self-sacrifice is of course praiseworthy, but it isn't morally mandatory. Rule-consequentialism might seem to offer a plausible grounding for such beliefs. Tim Mulgan has recently argued in Analysis and Pacific Philosophical Quarterly that rule-consequentialism cannot do so. This paper replies to Mulgan's arguments.
In his book Minimal Rationality (1986), Christopher Cherniak draws deep and widespread conclusions from our finitude, and not only for philosophy but also for a wide range of science as well. Cherniak's basic idea is that traditional philosophical theories of rationality represent idealisations that are inaccessible to finite rational agents. It is the purpose of this paper to apply a theory of idealisation in science to Cherniak's arguments. The heart of the theory is a distinction between idealisations that represent reversible, solely quantitative simplifications and those that represent irreversible, degenerate idealisations which collapse out essential theoretical structure. I argue that Cherniak's position is best understood as assigning the latter status to traditional rationality theories and that, so understood, his arguments may be illuminated, expanded, and certain common criticisms of them rebutted. The result, however, is a departure from traditional, formalist theories of rationality of a more radical kind than Cherniak contemplates, with widespread ramifications for philosophical theory, especially philosophy of science itself.
Richard Arneson and Alison McIntyre have done me a great honor by reading my book Ideal Code, Real World so carefully. In addition, they have done me a great kindness by reading it sympathetically. Nevertheless, they each find the book ultimately unconvincing, though in very different ways. But the cause of their dissatisfaction with the book is not mistaken interpretation. They have interpreted the book accurately, and they have advanced penetrating criticisms of it. One group of their criticisms definitely draws blood. To treat the wound, my formulation of rule-consequentialism will have to be revised. A second group of their criticisms seems to me fatal only if certain considerations are ignored. I will highlight the considerations that I think inoculate rule-consequentialism against these criticisms. In reaction to a third group of their criticisms, however, I have to accept that Arneson and McIntyre simply have quite different intuitions from mine, such that the prospects of agreement between the three of us are dim.
This paper begins by celebrating Sidgwick's Methods of Ethics. It then discusses Sidgwick's moral epistemology and in particular the coherentist element introduced by his argument from common-sense morality to utilitarianism. The paper moves on to a discussion of how common-sense morality seems more appealing if its principles are formulated as picking out pro tanto considerations rather than all-things-considered demands. The final section of the paper considers the question of which version of utilitarianism follows from Sidgwick's arguments.
Error is protean, ubiquitous and crucial in scientific process. In this paper it is argued that understanding scientific process requires what is currently absent: an adaptable, context-sensitive functional role for error in science that naturally harnesses error identification and avoidance to positive, success-driven, science. This paper develops a new account of scientific process of this sort, error and success driving Self-Directed Anticipative Learning (SDAL) cycling, using a recent re-analysis of ape-language research as test example. The example shows the limitations of other accounts of error, in particular Mayo’s (Error and the growth of experimental knowledge, 1996) error-statistical approach, and SDAL cycling shows how they can be fruitfully contextualised.
The Aristotle-Kant tradition requires that autonomous activity must originate within the self and points toward a new type of causation (different from natural efficient causation) associated with teleology. Notoriously, it has so far proven impossible to uncover a workable model of causation satisfying these requirements without an increasingly unsatisfying appeal to extra-physical elements tailor-made for the purpose. In this paper we first provide the essential reason why the standard linear model of efficient causation cannot support the required model of agency: its causal thread model of efficient causation cannot support the core requirement that an action is determined by, and thus an expression of, the agent’s nature. We then provide a model that corrects these deficiencies, constructed naturalistically from within contemporary biology, and argue that it provides an appropriate foundation for all the features of genuine agency. Further, we provide general characterisations of freedom and reason suitable to this bio-context (but that also capture the core classical conceptions) and show how this model reconciles them.
It is now commonly accepted that N. Goodman's predicate "grue" presents the theory of confirmation of C. G. Hempel (and other such theories) with grave difficulties. The precise nature and status of these "difficulties" has, however, never been made clear. In this paper it is argued that it is very unlikely that "grue" raises any formal difficulties for Hempel and appearances to the contrary are examined, rejected and an explanation of their intuitive appeal offered. However "grue" is shown to raise an informal, "over-arching" difficulty of great magnitude for all theories of confirmation, including Hempel's theory.
This paper has been about the question of what there is most reason to do in situations in which either there are no moral considerations to be taken into account or the moral considerations to be taken into account are equally balanced. I have assessed all Parfit's arguments for concluding that the Present-aim Theory is right and the Self-interest Theory wrong about this question. In § III, I showed how Parfit's argument from personal identity leads not to the abandonment of the Self-interest Theory, but merely to a revision of it. In § IV, I argued that a premiss relied on by Parfit's argument from incomplete relativity - the premiss that theoretical and practical reason are relevantly similar - is too weak to support the conclusion that knowingly doing what is against one's long-term self-interest is rational (when no moral considerations are in play). In § V, I addressed Parfit's argument that we must reject the Self-interest Theory because we believe that it is rational to care more about certain things (such as achievement) than about one's overall welfare. I suggested that he misdescribed what we believe: for what we really believe is that it is not irrational to care more about these things than about either having the most pleasant life possible or having the life with the strongest desires fulfilled. This thought is consistent with Objective List versions of the Self-interest Theory. In § VI, I suggested Parfit's argument from our bias towards the future might be answered by making a second revision to the Self-interest Theory. Therefore, for all Parfit has argued, a version of the Self-interest Theory might be the most plausible theory of what we have most reason to do when moral considerations do not decide the issue.
An explicit philosophy and meta-philosophy of positivism, empiricism and Popperianism is provided. Early Popperianism is argued to be essentially a form of empiricism; the deviations from empiricism are traced. In contrast, the meta-philosophy and philosophy of an evolutionary naturalistic realism is developed, and it is shown how the maximal conflict of this doctrine with all forms of empiricism at the meta-philosophical level accounts both for the form of its development at the philosophical level and for its defense against attack from nonrealist quarters. Following an earlier article on realism of similar theme (Synthese 26 (1974), 409), this paper then further explores the ramifications of a thoroughgoing realist position.
Consider the idea that moral rules must be suitable for public acknowledgement and acceptance, i.e., that moral rules must be suitable for being ‘widely known and explicitly recognized’, suitable for teaching as part of moral education, suitable for guiding behaviour and reactions to behaviour, and thus suitable for justifying one’s behaviour to others. This idea is now most often associated with John Rawls, who traces it back through Kurt Baier to Kant. My book developing rule-consequentialism, Ideal Code, Real World, accepted the ‘publicity requirement’ on moral rules. Katarzyna de Lazari-Radek and Peter Singer attack my moral theory on precisely this matter. Here I reply to their attack. The question under discussion is whether moral rightness is a matter of the application of principles or rules that must be suitable for public acceptance. No, answered Henry Sidgwick, holding that perhaps the principles that determine moral right and wrong should be kept secret, because publicizing these principles would not maximize utility. Since I think not-purely utilitarian forms of consequentialism may be more plausible than purely utilitarian forms, let me make the point in terms of consequentialism instead of utilitarianism. The standard form of act-consequentialism is maximizing and ‘global’, i.e., direct about everything. This act-consequentialism includes, among the acts to be evaluated by their consequences, instances of espousing principles, teaching morality, blaming, feeling indignation, feeling guilt, and punishing. On this form of act-consequentialism, an act that maximizes good consequences might be one that others should blame and even punish, since blaming and punishing the agent of the good-maximizing act might also for some reason maximize good consequences. Likewise, on this standard form of act-consequentialism, it may be right to do what it would be right neither to advocate openly nor even to recommend privately.
All these ideas are entailed by the kind of act-consequentialism that evaluates, by their consequences, all ‘acts’—in a very broad sense of the term that takes in not only acts of doing or allowing but also acts of blaming, punishing, and recommending. De Lazari-Radek and Singer accept that there are strong consequentialist considerations in support of broad support for transparency in ethics and avoiding esoteric morality in most circumstances.
The role of interaction in learning is essential and profound: it must provide the means to solve open problems (those only vaguely specified in advance), but cannot be captured using our familiar formal cognitive tools. This presents an impasse to those confined to present formalisms; but interaction is fundamentally dynamical, not formal, and with its importance thus underlined it invites the development of a distinctively interactivist account of life and mind. This account is provided, from its roots in the interactivist biological constitution of life, through the evolution of the dual internal regulatory capacities expressed as intentionality and intelligence, to its expression in self-directed anticipative learning in persons and in science.