This review focuses on Pekka Hämäläinen’s characterization and analysis of the Comanche empire as a spatial category in The Comanche Empire and discusses how this work relates to broader discussions about space and power in borderlands and imperial histories. Although empires have long been central actors in borderlands histories, “empire” has not necessarily been a category of spatial organization and analysis and certainly not one used to describe spaces controlled by Native peoples. By contrast, while Hämäläinen emphasizes the imperial characteristics of the economic, political, and cultural dimensions of Comanche history, he also uses “empire” to characterize Comanche dominance spatially. Hämäläinen helps us to rethink the spatial dynamics that both shaped and were produced by the encounters between Comanches and Spaniards, French, Mexicans, Americans, and other Native peoples in the Great Plains during the eighteenth and nineteenth centuries. By analyzing how Comanches came to control vast stretches of the southern plains, The Comanche Empire challenges our assumptions about how Native polities and imperial powers thought about territorial claims and how they employed more nuanced spatial strategies to assert their authority, extend their cultural influence, and control trade and resources.
How should historians write Native history? To what extent should one privilege Native terms, sources, chronologies, and epistemologies? And to what extent should historians align Native history with concepts developed for other peoples and places? These crucial questions about emic and etic approaches to the past are cast into sharp relief in Pekka Hämäläinen’s award-winning The Comanche Empire. This essay charts the perils and possibilities of each position. It then explores possible ways to move beyond the emic/etic division that has dominated many of the recent debates about Native history through a rereading of an episode in which Comanche history collides with US and Mexican history.
In addition to thin concepts like the good, the bad and the ugly, our evaluative thought and talk appeals to thick concepts like the lewd and the rude, the selfish and the cruel, the courageous and the kind -- concepts that somehow combine evaluation and non-evaluative description. Thick concepts are almost universally assumed to be inherently evaluative in content, and many philosophers have claimed them to have deep and distinctive significance in ethics and metaethics. In this first book-length treatment of thick concepts, Pekka Väyrynen argues that all this is mistaken. Through detailed attention to the language of thick concepts, he defends a novel theory on which the relationship between thick words and evaluation is best explained by general conversational and pragmatic norms. Drawing on general principles in philosophy of language, he argues that many prominent features of thick words and concepts can be explained by general factors that have nothing in particular to do with being evaluative. If evaluation is not essential to the sort of thinking we do with thick concepts, claims for the deep and distinctive significance of the thick are undermined. The Lewd, the Rude and the Nasty is a fresh and innovative treatment of an important topic in moral philosophy and sets a new agenda for future work. It will be essential reading for anyone interested in the analysis and the broader philosophical significance of evaluative and normative language.
Although the history of centrally planned economies has been widely studied, the development of socialist thinking on the subject has remained largely uncharted. In this 1991 work, Pekka Sutela presents a detailed analysis of Soviet economic thought and theory. Dr Sutela traces the competing currents in the Marxist tradition of socialist economies from the Revolution to the present day. In particular he shows how the Gorbachev economic reform programme of 1987 arose from the work of Nobel Prize-winning economist L. V. Kantorovich and his followers. However, this programme failed, and the author explains in some detail why this happened. Since then, Soviet economists have tried to abandon their traditional theory of central planning and move along the path of reform. Drawing on long-established contacts with leading Soviet economists, Pekka Sutela is able to show how Soviet economic thinking has moved from dogmatism through reformism to pragmatism.
Normative explanations of why things are wrong, good, or unfair are ubiquitous in ordinary practice and normative theory. This paper argues that normative explanation is subject to a justification condition: a correct complete explanation of why a normative fact holds must identify features that would go at least some way towards justifying certain actions or attitudes. I first explain and motivate the condition I propose. I then support it by arguing that it fits well with various theories of normative reasons, makes good sense of certain legitimate moves in ordinary normative explanatory discourse, and helps to make sense of our judgments about explanatory priority in certain cases of normative explanation. This last argument also helps to highlight respects in which normative explanation won’t be worryingly discontinuous with explanations in other domains even though these other explanations aren’t subject to the justification condition. Thus the paper aims not only to do some constructive theorizing about the relatively neglected topic of normative explanation but also to cast light on the broader question of how normative explanation may be similar to and different from explanations in other domains.
The use of evidence in medicine is something we should continuously seek to improve. This book seeks to develop our understanding of evidence of mechanism in evaluating evidence in medicine, public health, and social care; and also offers tools to help implement improved assessment of evidence of mechanism in practice. In this way, the book offers a bridge between more theoretical and conceptual insights and worries about evidence of mechanism and practical means to fit the results into evidence assessment procedures.
This paper offers a simple response to the Moral Twin Earth (MTE) objection to Naturalist Moral Realism (NMR). NMR typically relies on an externalist metasemantics such as a causal theory of reference. The MTE objection is that such a theory predicts that terms like ‘good’ and ‘right’ have a different reference in certain twin communities where it’s intuitively clear that the twins are talking about the same thing when using ‘good’. I argue that Boyd’s causal regulation theory, the original target of the MTE objection, was never vulnerable to this objection. The theory contains an epistemic constraint on reference which implies that either the property that causally regulates uses of ‘good’ isn’t different for the twin communities or, in scenarios where the reference is different, the communities diverge in ways where it’s not intuitively clear that ‘good’ has the same reference for them.
First-order normative theories concerning what’s right and wrong, good and bad, etc. and metanormative theories concerning the nature of first-order normative thought and talk are widely regarded as independent theoretical enterprises. This paper argues that several debates in metanormative theory involve views that have first-order normative implications, even as the implications in question may not be immediately recognizable as normative. I first make my claim more precise by outlining a general recipe for generating this result. I then apply this recipe to three debates in metaethics: the modal status of basic normative principles, normative vagueness and indeterminacy, and the determination of reference for normative predicates. In each case I argue that certain views on each issue carry first-order normative commitments, in accordance with my recipe.
Normative explanations, which specify why things have the normative features they do, are ubiquitous in normative theory and ordinary thought. But there is much less work on normative explanation than on scientific or metaphysical explanation. Skow (2016) argues that a complete answer to the question why some fact Q occurs consists in all of the reasons why Q occurs. This paper explores this theory as a case study of a general theory that promises to offer us a grip on normative explanation which is independent of particular normative theories. I first argue that the theory doesn’t give an adequate account of certain enablers of reasons which are important in normative explanation. I then formulate and reject three responses on behalf of the theory. But I suggest that since theories of this general sort have the right kind of resources to illuminate how normative explanation might be similar to and different from explanations in other domains, they nonetheless merit further exploration by normative theorists.
We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. We will also study whether humans and technological artifacts like robots can form hybrid collective agents that could be morally responsible for their actions and give an argument against such a possibility.
This paper concerns non-causal normative explanations such as ‘This act is wrong because/in virtue of__’. The familiar intuition that normative facts aren't brute or ungrounded but anchored in non-normative facts seems to be in tension with the equally familiar idea that no normative fact can be fully explained in purely non-normative terms. I ask whether the tension could be resolved by treating the explanatory relation in normative explanations as the sort of ‘grounding’ relation that receives extensive discussion in recent metaphysics. I argue that this would help only under controversial assumptions about the nature of normative facts, and perhaps not even then. I won't try to resolve the tension, but draw a distinction between two different sorts of normative explanations which helps to identify constraints on a resolution. One distinctive constraint on normative explanations in particular might be that they should be able to play a role in normative justification.
This paper argues that the recent metaethical turn to reasons as the fundamental units of normativity offers no special advantage in explaining a variety of other normative and evaluative phenomena, unless perhaps a form of reductionism about reasons is adopted which is rejected by many of those who advocate turning to reasons.
I defend moral generalism against particularism. Particularism, as I understand it, is the negation of the generalist view that particular moral facts depend on the existence of a comprehensive set of true moral principles. Particularists typically present "the holism of reasons" as powerful support for their view. While many generalists accept that holism supports particularism but dispute holism, I argue that generalism accommodates holism. The centerpiece of my strategy is a novel model of moral principles as a kind of "hedged" principles that incorporate an independently plausible "basis thesis" concerning the explanation of moral reasons. The model implies that moral reasons require the existence of a comprehensive set of true hedged principles, and so it captures generalism. But the model also offers an alternative explanation of holism, and so it undercuts much of the motivation for particularism. I defend this moderate (because holism-tolerating) form of generalism against a number of objections, and show how it can be used to defeat three distinct arguments from holism to particularism.
A particular tradition in medicine claims that a variety of evidence is helpful in determining whether an observed correlation is causal. In line with this tradition, it has been claimed that establishing a causal claim in medicine requires both probabilistic and mechanistic evidence. This claim has been put forward by Federica Russo and Jon Williamson. As a result, it is sometimes called the Russo–Williamson thesis. In support of this thesis, Russo and Williamson appeal to the practice of the International Agency for Research on Cancer. However, this practice presents some problematic cases for the Russo–Williamson thesis. One response to such cases is to argue in favour of reforming these practices. In this paper, we propose an alternative response according to which such cases are in fact consistent with the Russo–Williamson thesis. This response requires maintaining that there is a role for mechanism-based extrapolation in the practice of the IARC. However, the response works only if this mechanism-based extrapolation is reliable, and some have argued against the reliability of mechanism-based extrapolation. Against this, we provide some reasons for believing that reliable mechanism-based extrapolation is going on in the practice of the IARC. The reasons are provided by appealing to the role of robustness analysis.
Ethicists are typically willing to grant that thick terms (e.g. ‘courageous’ and ‘murder’) are somehow associated with evaluations. But they tend to disagree about what exactly this relationship is. Does a thick term’s evaluation come by way of its semantic content? Or is the evaluation pragmatically associated with the thick term (e.g. via conversational implicature)? In this paper, I argue that thick terms are semantically associated with evaluations. In particular, I argue that many thick concepts (if not all) conceptually entail evaluative contents. The Semantic View has a number of outspoken critics, but I shall limit discussion to the most recent--Pekka Väyrynen--who believes that objectionable thick concepts present a problem for the Semantic View. After advancing my positive argument in favor of the Semantic View (section II), I argue that Väyrynen’s attack is unsuccessful (section III). One reason ethicists cite for not focusing on thick concepts is that such concepts are supposedly not semantically evaluative whereas traditional thin concepts (e.g. good and wrong) are. But if my view is correct, then this reason must be rejected.
This paper is a survey of the supervenience challenge to non-naturalist moral realism. I formulate a version of the challenge, consider the most promising non-naturalist replies to it, and suggest that no fully effective reply has yet been given.
This paper defends doubts about the existence of genuine moral perception, understood as the claim that at least some moral properties figure in the contents of perceptual experience. Standard examples of moral perception are better explained as transitions in thought whose degree of psychological immediacy varies with how readily non-moral perceptual inputs, jointly with the subject's background moral beliefs, training, and habituation, trigger the kinds of phenomenological responses that moral agents are normally disposed to have when they represent things as being morally a certain way.
The core doctrine of ethical intuitionism is that some of our ethical knowledge is non-inferential. Against this, Sturgeon has recently objected that if ethical intuitionists accept a certain plausible rationale for the autonomy of ethics, then their foundationalism commits them to an implausible epistemology outside ethics. I show that irrespective of whether ethical intuitionists take non-inferential ethical knowledge to be a priori or a posteriori, their commitment to the autonomy of ethics and foundationalism does not entail any implausible non-inferential knowledge in areas outside ethics (such as the past, the future, or the unobservable). However, each form of intuitionism does require a controversial stand on certain unresolved issues outside ethics.
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus responsibility.
This paper offers a general model of substantive moral principles as a kind of hedged moral principles that can (but don't have to) tolerate exceptions. I argue that the kind of principles I defend provide an account of what would make an exception to them permissible. I also argue that these principles are nonetheless robustly explanatory with respect to a variety of moral facts; that they make sense of error, uncertainty, and disagreement concerning moral principles and their implications; and that one can grasp these principles without having to grasp any particular list of their permissibly exceptional instances. I conclude by pointing out various advantages that this model of principles has over several of its rivals. The bottom line is that we should find nothing peculiarly odd or problematic about the idea of exception-tolerating and yet robustly explanatory moral principles.
Evaluative terms and concepts are often divided into “thin” and “thick”. We don’t evaluate actions and persons merely as good or bad, or right or wrong, but also as kind, courageous, tactful, selfish, boorish, and cruel. The latter evaluative concepts are "descriptively thick": their application somehow involves both evaluation and a substantial amount of non-evaluative description. This article surveys various attempts to answer four fundamental questions about thick terms and concepts. (1) A “combination question”: how exactly do thick terms and concepts relate evaluation and non-evaluative description? (2) A “location question”: is evaluation somehow inherent to thick terms and concepts, such as perhaps an aspect of their meaning, or merely a feature of their use? (3) A “delineation question”: how do thick terms differ from the thin and from other kinds of evaluative terms? (4) Given answers to these questions, what broader philosophical significance and applications might thick concepts have?
The role of mechanistic evidence tends to be under‐appreciated in current evidence‐based medicine, which focusses on clinical studies, tending to restrict attention to randomized controlled studies when they are available. The EBM+ programme seeks to redress this imbalance, by suggesting methods for evaluating mechanistic studies alongside clinical studies. Drug approval is a problematic case for the view that mechanistic evidence should be taken into account, because RCTs are almost always available. Nevertheless, we argue that mechanistic evidence is central to all the key tasks in the drug approval process: in drug discovery and development; assessing pharmaceutical quality; devising dosage regimens; assessing efficacy, harms, external validity, and cost‐effectiveness; evaluating adherence; and extending product licences. We recommend that, when preparing for meetings in which any aspect of drug approval is to be discussed, mechanistic evidence should be systematically analysed and presented to the committee members alongside analyses of clinical studies.
So-called "thick" moral concepts are distinctive in that they somehow "hold together" evaluation and description. But how? This paper argues against the standard view that the evaluations which thick concepts may be used to convey belong to sense or semantic content. That view cannot explain linguistic data concerning how thick concepts behave in a distinctive type of disagreement and denial which arises when one speaker regards another's thick concept as "objectionable" in a certain sense. The paper also briefly considers contextualist, presuppositional, and implicature accounts of the evaluative contents of thick concepts, but finds none clearly superior to the others.
Surprisingly, many ethical realists and anti-realists, naturalists and not, all accept some version of the following normative appeal to the natural (NAN): evaluative and normative facts hold solely in virtue of natural facts, where their naturalness is part of what fits them for the job. This paper argues not that NAN is false but that NAN has no adequate non-parochial justification (a justification that relies only on premises which can be accepted by more or less everyone who accepts NAN) to back up this consensus. I show that we cannot establish versions of NAN which are interesting in their own right (and not merely as instances of a general naturalistic ontology) by appealing to the nature of natural properties or the kind of in-virtue-of relation to which NAN refers, plus other plausible non-parochial assumptions. On the way, I distinguish different types of 'in virtue of' claims. I conclude by arguing that the way in which assessment of meta-ethical hypotheses is theory-dependent predicts the failure of non-parochial justifications of NAN.
There is a pervasive trend in Western theology to identify imago Dei with human intellectual and cognitive capacities. However, several contemporary theologians have criticized this view because, according to the critics, it leads to a truncated view of humanity. In this article, I shall concentrate on the question of rationality, first, through theologies of Thomas Aquinas and contemporary Lutheran Robert Jenson, and second, in some branches of recent cognitive psychology. I will argue that there is a significant overlap between contemporary scientific interpretations of rationality and both a traditional Thomistic view and a contemporary ecumenical interpretation of imago Dei. Consequently, it is possible to give an account of imago Dei which takes structural features as central and which is in accord with contemporary science, without falling prey to the dangers that the critics of structuralism point out.
Some philosophers hold that so-called "thick" terms and concepts in ethics (such as 'cruel,' 'selfish,' 'courageous,' and 'generous') are contextually variable with respect to the valence (positive or negative) of the evaluations that they may be used to convey. Some of these philosophers use this variability claim to argue that thick terms and concepts are not inherently evaluative in meaning; rather their use conveys evaluations as a broadly pragmatic matter. I argue that one sort of putative example of contextual variability in evaluative valence found in the literature fails to support the variability claim and that another sort of putative example is open to a wide range of explanations that have different implications for the relationship between thick terms and concepts and evaluation. I conclude that considerations of contextual variability fail to settle whether thick terms and concepts are inherently evaluative in meaning. In closing I suggest a more promising line of research.
Evaluative and normative terms and concepts are often said to be "essentially contestable". This notion has been used in political and legal theory and applied ethics to analyse disputes concerning the proper usage of terms like democracy, freedom, genocide, rape, coercion, and the rule of law. Many philosophers have also thought that essential contestability tells us something important about the evaluative in particular. Gallie (who coined the term), for instance, argues that the central structural features of essentially contestable concepts secure their evaluativeness. I'll argue that these (widely held) central features are exemplified by many evaluative and non-evaluative terms alike, owing to more general factors (such as multidimensionality) which have nothing in particular to do with being evaluative. The role of these factors in semantic interpretation is subject to a certain kind of "metasemantic" disputes which have the features of the disputes characteristically admitted by essentially contestable concepts (whether evaluative or not) and which can be similarly substantive and worthwhile. In closing I'll discuss how my argument shows that our understanding of evaluative disagreement needs refinement. The overall upshot is that essential contestability shows nothing deep or distinctive about the evaluative in particular.
This paper presents an alternative to the standard view that the evaluations that the so-called "thick" terms and concepts in ethics may be used to convey belong to their sense or semantic meaning. I describe a large variety of linguistic data that are well explained by the alternative view that the evaluations that (at least a very wide range of) thick terms and concepts may be used to convey are a certain kind of defeasible implications of their utterances which can be given a conversational explanation. I then provide some reasons to think that this explanation of the data is superior to the standard view, but a fuller assessment must await further work. In closing I briefly survey the largely deflationary consequences of this account regarding the significance of thick terms and concepts for evaluative thought and judgment.
Group agents are able to act but are not literally agents. Some group agents, e.g., we-mode groups and corporations, can, however, be regarded as functional group agents that do not have “intrinsic” mental states and phenomenal features comparable to what their individual members on biological and psychological grounds have. But they can have “extrinsic” mental states, states collectively attributed to them—primarily by their members. In this paper, we discuss the responsibility of such group agents. We defend the view that if the group members have accepted the group agent’s attitudes and are committed to them, we can favorably compare the situation with the case of individual human agents and a group agent can be regarded as morally responsible for its intentional activities.
I first distinguish between different forms of the buck-passing account of value and clarify my target in other respects on buck-passers' behalf. I then raise a number of problems for the different forms of the buck-passing view that I have distinguished.
One prominent strand in contemporary moral particularism concerns the claim of "principle abstinence" that we ought not to rely on moral principles in moral judgment because they fail to provide adequate moral guidance. I argue that moral generalists can vindicate this traditional and important action-guiding role for moral principles. My strategy is to argue, first, that, for any conscientious and morally committed agent, the agent's acceptance of (true) moral principles shapes their responsiveness to (right) moral reasons and, second, that if so, then those principles can contribute non-trivially to some reliable strategy for acting well that is available for use in the agent's practical thinking. My defense of these two claims appeals to an account of moral principles as a kind of hedged principles which I defend elsewhere, but my general line of argument should be acceptable to many other forms of generalism as well. I defend the epistemic significance of hedged principles in moral deliberation, and argue that the need for sensitivity to particulars in moral judgment doesn't supplant principles in moral guidance. I finish by arguing that the generalist model of moral guidance developed here isn't undermined by evidence from cognitive science about how we make moral judgments in actual practice, and that it compares favorably to particularism with respect to its capacity to offer adequate moral guidance.
What are moral principles? The assumption underlying much of the generalism–particularism debate in ethics is that they are (or would be) moral laws: generalizations or some special class thereof, such as explanatory or counterfactual-supporting generalizations. I argue that this law conception of moral principles is mistaken. For moral principles do at least three things that moral laws cannot do, at least not in their own right: explain certain phenomena, provide particular kinds of support for counterfactuals, and ground moral necessities, “necessary connections” between obligating reasons and obligations. Moreover, neither a best-systems theory of moral principles nor any of the competing theories of moral principles proposed by Sean McKeever and Michael Ridge, Pekka Väyrynen, and Mark Lance and Margaret Little could vindicate the law conception of moral principles. I conclude with some brief remarks about what moral principles might be if they are not moral laws.
In this essay, I propose a standard of practical rationality and a grounding for the standard that rests on the idea of autonomous agency. This grounding is intended to explain the “normativity” of the standard. The basic idea is this: To be autonomous is to be self-governing. To be rational is at least in part to be self-governing; it is to do well in governing oneself. I argue that a person's values are aspects of her identity—of her “self-esteem identity”—in a way that most of her ends are not, and that it therefore is plausible to view action governed by one's values as self-governed. This is also plausible on independent grounds. Given this, I say, rational agents comply with a standard—the “values standard”—that requires them to serve their values, and to seek what they need in order to continue to be able to serve their values.