Solidarity, the reciprocal relations of trust and obligation between citizens that are essential for a thriving polity, is a basic goal of all political communities. Yet it is extremely difficult to achieve, especially in multiracial societies. In an era of increasing global migration and democratization, the issue is more pressing than perhaps ever before. In the past few decades, racial diversity and the problems of justice that often accompany it have risen dramatically throughout the world. The issue features prominently nearly everywhere: from the United States, where it has been a perennial social and political problem, to Europe, which has experienced an unprecedented influx of Muslim and African immigrants, to Latin America, where the rise of vocal black and indigenous movements has brought the question to the fore. Political theorists have long wrestled with the topic of political solidarity, but they have not had much to say about the impact of race on such solidarity, except to claim that what is necessary is to move beyond race. The prevailing approach has asked: How can a multicultural and multiracial polity, with all of the different allegiances inherent in it, be transformed into a unified, liberal one? Juliet Hooker flips this question around. In multiracial and multicultural societies, she argues, the practice of political solidarity has been indelibly shaped by the social fact of race. The starting point should thus be the existence of racialized solidarity itself: How can we create political solidarity when racial and cultural diversity are more or less permanent? Unlike those who claim that the best way to deal with the problem of racism is to abandon the concept of race altogether, Hooker stresses the importance of coming to terms with racial injustice, and explores the role that it plays in both the United States and Latin America.
Coming to terms with the lasting power of racial identity, she contends, is the starting point for any political project attempting to achieve solidarity.
Rule-consequentialism has been accused of either collapsing into act-consequentialism or being internally inconsistent. I have tried to develop a form of rule-consequentialism without these flaws. In this June's issue of Utilitas, Robert Card argued that I have failed. Here I assess his arguments.
What are the appropriate criteria for assessing a theory of morality? In this enlightening work, Brad Hooker begins by answering this question. He then argues for a rule-consequentialist theory which, in part, asserts that acts should be assessed morally in terms of impartially justified rules. In the end, he considers the implications of rule-consequentialism for several current controversies in practical ethics, making this clearly written, engaging book the best overall statement of this approach to ethics.
The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is framed with respect to the good of mankind; but our practice must be always shaped immediately by the rule.” Writers often classed as rule-consequentialists include Austin 1832; Harrod 1936; Toulmin 1950; Urmson 1953; Harrison 1953; Mabbott 1953; Singer 1955; 1961; and most prominently Brandt 1959; 1963; 1967; 1979; 1989; 1996; and Harsanyi 1977; 1982; 1993. See also Rawls 1955; Hospers 1972; Haslett 1987; 1994, ch. 1; 2000; Attfield 1987, 103-12; Barrow 1991, ch. 6; Johnson 1991; Riley 1998; 2000; Shaw 1999; and Hooker 2000. Whether J. S. Mill's ethics was rule-consequentialist is controversial.
The purpose of this paper and its sister paper (Farrell and Hooker, b) is to present, evaluate and elaborate a proposed new model for the process of scientific development: self-directed anticipative learning (SDAL). The vehicle for its evaluation is a new analysis of a well-known historical episode: the development of ape-language research. In this first paper we outline five prominent features of SDAL that will need to be realized in applying it to science: 1) interactive exploration of possibility space; 2) self-directedness; 3) localization of success and error; 4) synergistic increase in learning capacity; and 5) continuity of the SDAL process across scientific change. We then examine the first three features in relation to the early history of ape-language research. We show that this history is readily explicated as a self-directed, ever-finer delineation of possibility space that enables the localization of both success and error. Paper II examines the last two features against this history.
Fixed-rate versions of rule-consequentialism and rule-utilitarianism evaluate rules in terms of the expected net value of one particular level of social acceptance, but one far enough below 100% social acceptance to make salient the complexities created by partial compliance. Variable-rate versions of rule-consequentialism and rule-utilitarianism instead evaluate rules in terms of their expected net value at all different levels of social acceptance. Brad Hooker has advocated a fixed-rate version. Michael Ridge has argued that the variable-rate version is better. The debate continues here. Of particular interest is the difference between the implications of Hooker's and Ridge's rules about doing good for others.
The purpose of this paper and its sister paper I (Farrell and Hooker, a) is to present, evaluate and elaborate a proposed new model for the process of scientific development: self-directed anticipative learning (SDAL). The vehicle for its evaluation is a new analysis of a well-known historical episode: the development of ape-language research. Paper I examined the basic features of SDAL in relation to the early history of ape-language research. In this second paper we examine the reconceptualization of ape-language research following what many conceived to be Terrace's refutation of ape language. We show that the apparent 'revolution' in our understanding of ape linguistic capacities was not based upon 'revolutionary' research different in kind from 'normal' research. The same processes of self-directed interactive exploration of possibility space, which enable a homing-in upon both error and success, are present in all phases of productive science. Moreover, conceiving science as an SDAL process explains how scientists learn how to learn about their research domain.
What are appropriate criteria for assessing a theory of morality? In Ideal Code, Real World, Brad Hooker begins by answering this question, and then argues for a rule-consequentialist theory. According to rule-consequentialism, acts should be assessed morally in terms of impartially justified rules, and rules are impartially justified if and only if the expected overall value of their general internalization is at least as great as for any alternative rules. In the course of developing his rule-consequentialism, Hooker discusses impartiality, well-being, fairness, equality, the question of how the 'general internalization' of rules is to be interpreted by rule-consequentialism, and the main objections to rule-consequentialism. He also discusses the social contract theory of morality, act-consequentialism, and the question of which moral prohibitions and which duties to help others rule-consequentialism endorses. The last part of the book considers the implications of rule-consequentialism for some current controversies in practical ethics.
The term ‘moral particularism’ has been used to refer to different doctrines. The main body of this paper begins by identifying the most important doctrines associated with the term, at least as the term is used by Jonathan Dancy, on whose work I will focus. I then discuss whether holism in the theory of reasons supports moral particularism, and I call into question the thesis that particular judgements have epistemological priority over general principles. Dancy’s recent book Ethics without Principles (Dancy 2004) makes much of a distinction between reasons, enablers, disablers, intensifiers, and attenuators. I will suggest that the distinction is unnecessary, and I will argue that, even if there is such a distinction, it does not entail moral particularism. In the final two sections, I try to give improved versions of arguments against particularism that I put forward in my paper ‘Moral Particularism: Wrong and Bad’ (Hooker 2000b: 1-22, esp. pp. 7-11, 15-22).
This paper replies to Carson's attacks on an earlier paper of Hooker's. Carson argued that rule-consequentialism (the theory that an act is morally right if and only if it is allowed by the set of rules and corresponding virtues the having of which by everyone would bring about the best consequences considered impartially) can and does require the comfortably off to make enormous sacrifices in order to help the needy. Hooker defends rule-consequentialism against Carson's arguments.
The role of interaction in learning is essential and profound: it must provide the means to solve open problems (those only vaguely specified in advance), but cannot be captured using our familiar formal cognitive tools. This presents an impasse to those confined to present formalisms; but interaction is fundamentally dynamical, not formal, and with its importance thus underlined it invites the development of a distinctively interactivist account of life and mind. This account is provided, from its roots in the interactivist biological constitution of life, through the evolution of the dual internal regulatory capacities expressed as intentionality and intelligence, to its expression in self-directed anticipative learning in persons and in science.
This book presents a clear and critical view of the orthodox logical empiricist tradition, pointing the way to significant developments for the understanding of science both as research and as culture.
This essay contends that the constitutive elements of well-being are plural, partly objective, and separable. The essay argues that these elements are pleasure, friendship, significant achievement, important knowledge, and autonomy, but not either the appreciation of beauty or the living of a morally good life. The essay goes on to attack the view that elements of well-being must be combined in order for well-being to be enhanced. The final section argues against the view that, because anything important to say about well-being could be reduced to assertions about these separable elements, the concept of well-being or personal good is ultimately unimportant.
This paper outlines an original interactivist-constructivist (I-C) approach to modelling intelligence and learning as a dynamical embodied form of adaptiveness and explores some applications of I-C to understanding the way cognitive learning is realized in the brain. Two key ideas for conceptualizing intelligence within this framework are developed: intelligence is centrally concerned with the capacity for coherent, context-sensitive, self-directed management of interaction; and the primary model for cognitive learning is anticipative skill construction. Self-directedness is a capacity for integrative process modulation which allows a system to "steer" itself through its world by anticipatively matching its own viability requirements to interaction with its environment. Because the adaptive interaction processes required of intelligent systems are too complex for effective action to be prespecified, learning is an important component of intelligence. A model of self-directed anticipative learning (SDAL) is formulated, based on interactive skill construction, and argued to constitute a central constructivist process involved in cognitive development. SDAL illuminates the capacity of intelligent learners to start with the vague, poorly defined problems typically posed in realistic learning situations and progressively refine them, transforming them into problems with sufficient structure to guide the construction of a solution. Finally, some of the implications of I-C for modelling the neuronal basis of intelligence and learning are explored; in particular, Quartz and Sejnowski's recent neural constructivism paradigm, enriched by Montague and Sejnowski's dopaminergic model of anticipative-predictive neural learning, is assessed as a promising, but incomplete, contribution to this approach. The paper concludes with a fourfold reflection on the divergence in cognitive modelling philosophy between the I-C and traditional computational information-processing approaches.
Analytic moral philosophy's strong divide between empirical and normative restricts facts to providing information for the application of norms and does not allow them to confront or challenge norms. So any genuine attempt to incorporate experience and empirical research into bioethics – to give the empirical more than the status of mere 'descriptive ethics' – must make a sharp break with the kind of analytic moral philosophy that has dominated contemporary bioethics. Examples from bioethics and science are used to illustrate the problems with the method of application that philosophically prevails in both domains and with the conception of rationality that underlies this method. Cues from how these problems can be handled in science then introduce summaries of richer, more productive naturalist and constructivist accounts of reason and normative knowledge. Liberated by a naturalist approach to ethics and an enlarged conception of rationality, empirical work can be recognized not just as essential to bioethics but also as contributing to normative knowledge.
All the major inter-theoretic relations of fundamental science are asymptotic ones, e.g. quantum theory as Planck's constant h → 0, yielding (roughly) Newtonian mechanics. Thus asymptotics ultimately grounds claims about inter-theoretic explanation, reduction and emergence. This paper examines four recent, central claims by Batterman concerning asymptotics and reduction. While these claims are criticised, the discussion is used to develop an enriched, dynamically-based account of reduction and emergence, to show its capacity to illuminate the complex variety of inter-theory relationships in physics, and to provide a principled resolution to such persistent philosophical problems as multiple realisability and the nature of the special sciences. Contents: Introduction; Exposition; Examination I: claims (1) and (2), asymptotic explanation and reference; Examination II: claim (3), reduction and singular asymptotics; Examination III: claim (4), emergence and multiple realisability; Conclusion.
Derek Parfit’s On What Matters endorses Kantian Contractualism, the normative theory that everyone ought to follow the rules that everyone could rationally will that everyone accept. This paper explores Parfit’s argument that Kantian Contractualism converges with Rule Consequentialism. A pivotal concept in Parfit’s argument is the concept of impartiality, which he seems to equate with agent-neutrality. This paper argues that equating impartiality and agent-neutrality is insufficient, since some agent-neutral considerations are silly and some are not impartial. Perhaps more importantly, there is little realistic prospect of Kantian Contractualism converging with Rule Consequentialism unless the same impartial reasons drive rule selection in the two theories.
The Aristotle-Kant tradition requires that autonomous activity must originate within the self and points toward a new type of causation (different from natural efficient causation) associated with teleology. Notoriously, it has so far proven impossible to uncover a workable model of causation satisfying these requirements without an increasingly unsatisfying appeal to extra-physical elements tailor-made for the purpose. In this paper we first provide the essential reason why the standard linear model of efficient causation cannot support the required model of agency: its causal thread model of efficient causation cannot support the core requirement that an action is determined by, and thus an expression of, the agent’s nature. We then provide a model that corrects these deficiencies, constructed naturalistically from within contemporary biology, and argue that it provides an appropriate foundation for all the features of genuine agency. Further, we provide general characterisations of freedom and reason suitable to this bio-context (but that also capture the core classical conceptions) and show how this model reconciles them.
Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.
Physician assistants (PAs), nurse practitioners (NPs), and medical residents constitute an increasingly significant part of the American health care workforce, yet patient assent to be seen by nonphysicians is only presumed and seldom sought. In order to assess the willingness of patients to receive medical care provided by nonphysicians, we administered provider preference surveys to a random sample of patients attending three emergency departments (EDs). Concurrently, a survey was sent to a random selection of ED residents and PAs. All respondents were to assume the role of patient when presented with hypothetical clinical scenarios and standardized provider definitions. Despite presumptions to the contrary, ED patients are generally unwilling to be seen by PAs, NPs, and residents. While seldom asked in practice, 79.5% of patients fully expect to see a physician regardless of acuity or potential for cost savings by seeing another provider. Patients are more willing to see residents than nonphysicians.
The point of this paper is to provide a principled framework for a naturalistic, interactivist-constructivist model of rational capacity and a sketch of the model itself, indicating its merits. Being naturalistic, it takes its orientation from scientific understanding. In particular, it adopts the developing interactivist-constructivist understanding of the functional capacities of biological organisms as a useful naturalistic platform for constructing such higher order capacities as reason and cognition. Further, both the framework and model are marked by the finitude and fallibility that science attributes to organisms, with their radical consequences, and also by the individual and collective capacities to improve their performances that learning organisms display. Part A prepares the ground for the exposition through a critique of the dominant Western analytic tradition in rationalising science, followed by a brief exposition of the naturalist framework that will be employed to frame the construction. This results in two sets of guidelines for constructing an alternative. Part B provides the new conception of reason as a rich complex of processes of improvement against epistemic values, and argues its merits. It closes with an account of normativity and our similarly developing rational knowledge of it, including (reflexively) of reason itself.
The main body of this paper assesses a leading recent theory of fairness, a theory put forward by John Broome. I discuss Broome's theory partly because of its prominence and partly because I think it points us in the right direction, even if it takes some missteps. In the course of discussing Broome's theory, I aim to cast light on the relation of fairness to consistency, equality, impartiality, desert, rights, and agreements. Indeed, before I start assessing Broome's theory, I discuss two very popular conceptions of fairness that contrast with his. One of these very popular conceptions identifies fairness with the equal and impartial application of rules. The other identifies fairness with all-things-considered moral rightness.
Virtue ethics is normally taken to be an alternative to consequentialist and Kantian moral theories. I shall discuss what I think is the most interesting version of virtue ethics – Rosalind Hursthouse's. I shall then argue that her version is inadequate in ways that suggest revision in the direction of a kind of rule-consequentialism.
Both natural and engineered systems are fundamentally dynamical in nature: their defining properties are causal, and their functional capacities are causally grounded. Among dynamical systems, an interesting and important sub-class are those that are autonomous, anticipative and adaptive (AAA). Living systems, intelligent systems, sophisticated robots and social systems belong to this class, and the use of these terms has recently spread rapidly through the scientific literature. Central to understanding these dynamical systems is their complicated organisation and their consequent capacities for re- and self-organisation. But there is at present no general analysis of these capacities or of the requisite organisation involved. We define what distinguishes AAA systems from other kinds of systems by characterising their central properties in a dynamically interpreted information theory.
Error is protean, ubiquitous and crucial in scientific process. In this paper it is argued that understanding scientific process requires what is currently absent: an adaptable, context-sensitive functional role for error in science that naturally harnesses error identification and avoidance to positive, success-driven, science. This paper develops a new account of scientific process of this sort, error and success driving Self-Directed Anticipative Learning (SDAL) cycling, using a recent re-analysis of ape-language research as test example. The example shows the limitations of other accounts of error, in particular Mayo’s (Error and the growth of experimental knowledge, 1996) error-statistical approach, and SDAL cycling shows how they can be fruitfully contextualised.
Every essay in this book is original, often highly original, and they will be of interest to practising scientists as much as they will be to philosophers of science — not least because many of the essays are by leading scientists who are currently creating the emerging new complex systems paradigm. This is no accident. The impact of complex systems on science is a recent, ongoing and profound revolution. But with a few honourable exceptions, it has largely been ignored by scientists and philosophers alike as an object of reflective study.
Papers presented cover new approaches to evolutionary epistemology, new applications, critical evaluations, and the nature of the mind.
Katarzyna de Lazari-Radek and Peter Singer’s wonderful book, The Point of View of the Universe: Sidgwick and Contemporary Ethics, contains a wealth of intriguing arguments and compelling ideas. The present paper focuses on areas of continuing dispute. The paper first attacks de Lazari-Radek’s and Singer’s evolutionary debunking arguments against both egoism and parts of common-sense morality. The paper then addresses their discussion of the role of rules in utilitarianism. De Lazari-Radek and Singer concede that rules should constitute our moral decision procedure and our public morality. This paper argues that, if no one should be blamed for complying with the optimal decision procedure and optimal public rules, there are strong reasons to accept that these same rules distinguish what is morally permissible from what is morally wrong.
In this paper we articulate a growing awareness within the field of the ways in which medical humanities could be deemed expressive of Western cultural values. The authors suggest that medical humanities is culturally limited by a pedagogical and scholarly emphasis on Western cultural artefacts, as well as a tendency to enact an uncritical reliance upon foundational concepts (such as ‘patient’ and ‘experience’) within Western medicine. Both these tendencies within the field, we suggest, are underpinned by a humanistic emphasis on appreciative or receptive encounters with ‘difference’ among patients that may unwittingly contribute to the marginalisation of some patients and healthcare workers. While cultural difference should be acknowledged as a central preoccupation of medical humanities, we argue that the discipline must continue to expand its scholarly and critical engagements with processes of Othering in biomedicine. We suggest that such improvements are necessary in order to reflect the cultural diversification of medical humanities students, and the geographical expansion of the discipline within non-Western and/or non-Anglophone locations.
In his book Minimal Rationality (1986), Christopher Cherniak draws deep and widespread conclusions from our finitude, not only for philosophy but for a wide range of science as well. Cherniak's basic idea is that traditional philosophical theories of rationality represent idealisations that are inaccessible to finite rational agents. It is the purpose of this paper to apply a theory of idealisation in science to Cherniak's arguments. The heart of the theory is a distinction between idealisations that represent reversible, solely quantitative simplifications and those that represent irreversible, degenerate idealisations which collapse out essential theoretical structure. I argue that Cherniak's position is best understood as assigning the latter status to traditional rationality theories and that, so understood, his arguments may be illuminated, expanded, and certain common criticisms of them rebutted. The result, however, is a departure from traditional, formalist theories of rationality of a more radical kind than Cherniak contemplates, with widespread ramifications for philosophical theory, especially philosophy of science itself.