Germline gene editing (GGE) has enormous potential both as a research tool and as a therapeutic intervention. While other types of gene editing are relatively uncontroversial, GGE has been strongly resisted. In this article, we analyse the ethical arguments for and against pursuing GGE by allowing and funding its development. We argue that there is a strong case for pursuing GGE for the prevention of disease. We then examine objections that have been raised against pursuing GGE and argue that they fail. We conclude that the moral case in favour of pursuing GGE is stronger than the case against. This suggests that pursuing GGE is morally permissible and indeed morally desirable.
The first book-length study of property-owning democracy, Republic of Equals, argues that a society in which capital is universally accessible to all citizens is uniquely placed to meet the demands of justice. Arguing from a basis in liberal-republican principles, this expanded conception of the economic structure of society contextualizes the market to make its transactions fair. It shows that a property-owning democracy structures economic incentives such that the domination of one agent by another in the market is structurally impossible. The result is a renovated form of capitalism in which the free market is no longer a threat to social democratic values but is potentially convergent with them. It is argued that a property-owning democracy has advantages that give it priority over rival forms of social organization such as welfare-state capitalism and market socialist institutions. The book also addresses the currently high levels of inequality in the societies of the developed West to suggest a range of policies that target the “New Inequality” of the twenty-first century. For this reason, the work engages not only with political philosophers such as John Rawls, Philip Pettit, and John Tomasi but also with the work of economists and historians such as Anthony B. Atkinson, François Bourguignon, Jacob S. Hacker, Lane Kenworthy, and Thomas Piketty.
It is notoriously difficult to find an intuitively satisfactory rule for evaluating populations based on the welfare of the people in them. Standard examples, like total utilitarianism, either entail the Repugnant Conclusion or in some other way contradict common intuitions about the relative value of populations. Several philosophers have presented formal arguments that seem to show that this happens of necessity: our core intuitions stand in contradiction. This paper assesses the state of play, focusing on the most powerful of these ‘impossibility theorems’, as developed by Gustaf Arrhenius. I highlight two ways in which these theorems fall short of their goal: some appeal to a supposedly egalitarian condition which, however, does not properly reflect egalitarian intuitions; the others rely on a background assumption about the structure of welfare which cannot be taken for granted. Nonetheless, the theorems remain important: they give insight into the difficulty, if not perhaps the impossibility, of constructing a satisfactory population axiology. We should aim for reflective equilibrium between intuitions and more theoretical considerations. I conclude by highlighting one possible ingredient in this equilibrium, which, I argue, leaves open a still wider range of acceptable theories: the possibility of vague or otherwise indeterminate value relations.
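To see why total utilitarianism entails the Repugnant Conclusion, it may help to have the usual toy arithmetic explicit (a sketch in standard notation, not the paper's own): total utilitarianism values a population by summed welfare, so a population A of n lives at high welfare level w is ranked below a population Z of m lives at barely positive welfare ε whenever

\[
V(Z) = m\varepsilon \;>\; nw = V(A), \qquad \text{i.e. whenever } m > \frac{nw}{\varepsilon},
\]

and for any fixed n, w, and ε > 0, such an m always exists.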
What are the processes, from conception to adulthood, that enable a single cell to grow into a sentient adult? Neuroconstructivism is a pioneering two-volume work that sets out a new framework for considering the complex topic of development, integrating data from cognitive studies, computational work, and neuroimaging.
The exploration of chemical periodicity over the past 250 years led to the development of the Periodic System of Elements and demonstrates the value of vague ideas that ignored early scientific anomalies and instead allowed for extended periods of normal science in which new methodologies and concepts were developed. The concept of the basic chemical element gives this exploration direction and explanation and has been shown to be a central and historically adaptable concept for a theory of matter far from the reductionist frontier. This is explored through the histories of Prout’s hypothesis, Döbereiner’s triads, the element inversions necessary when ordering chemical elements by atomic weight, and van den Broek’s ad hoc proposal to switch to nuclear charges instead. The development of more accurate methods to determine atomic weights, Rayleigh and Ramsay’s gas separation and analytical techniques, Moseley’s X-ray spectroscopy for identifying chemical elements, and more recent accelerator-based cold fusion methods for creating new elements at the end of the Periodic Table point to the importance of methodological development complementing conceptual advances. I propose to frame the crossover from physics to chemistry not as a loss of accuracy and precision but as an increased application of vague concepts, such as similarity, which permit classification. This approach provides the epistemic flexibility to adapt to scientific anomalies and to the continued growth of chemical compound space, and it rejects the Procrustean philosophy of reductionist physics. Furthermore, it establishes chemistry, with its explanatory and operational autonomy epitomized by the periodic system of elements, as a gateway to other experimental sciences.
Political realism criticises the putative abstraction, foundationalism, and neglect of the agonistic dimension of political practice in the work of John Rawls. This paper argues that, had Rawls not fully specified the implementation of his theory of justice in one particular form of political economy, he would indeed be vulnerable to a realist critique. But he did present such an implementation: a property-owning democracy. An appreciation of Rawls's specificationist method thus undercuts the realist critique of his conception of justice as fairness.
It is only in the last few decades that analytic philosophers in particular have begun to pay any serious attention to the topic of life’s meaning. Such philosophers, however, do not usually attempt to answer or analyse the traditional question ‘What is the meaning of life?’, but rather the subtly different question ‘What makes a life meaningful?’, and it is generally assumed that the latter can be discussed independently of the former. Nevertheless, this paper will argue that the two questions are indeed connected, and that identifying and expanding upon the most plausible analysis of the former will provide the resources necessary to determine the most plausible answer to the latter. Specifically, this paper will argue that the traditional question is simply a request for the information which constitutes a coherent answer to one or more of a certain set of questions regarding human existence that were salient to the asker. In simpler language, the meaning of life itself is the information a person needs to make sense of it. This analysis can then also be applied to individual lives, such that asking for the meaning of X’s life is an analogous request for the information necessary to make sense of that life in particular. Running with this concept of the ‘meaning’ of something as its ‘sense’, the paper then outlines an accompanying theory of ‘meaningfulness’ as ‘sensefulness’: a measure of the richness of certain aspects of the life, multiplied by their intelligibility.
Smoke and Mirrors is a passionate, richly nuanced work that shows television as a circus, a wishing well, and a cure for loneliness. Ranging from Ed Sullivan to cyberspace, from kid shows to cable, and from the cheap thrills of "action adventure" to the solemn boredom of PBS pledge week, Leonard argues for a whole new way of thinking about television. For Leonard, the situation comedy is a socializing agency, the talk show is a legitimating agency, the made-for-television movie is the last redoubt of social conscience, and television criticism itself is the last refuge of time-serving thugs and postmodernists. Instead of scapegoating television as the cause of crime in our streets, stupidity in our schools, and spectacle rather than substance in our government, Leonard sees something else inside the box: an echo chamber and a feedback loop, a medium neither wholly innocent of nor entirely responsible for the frantic disorder it brings into our homes.
We set out an account of how self-domestication plays a crucial role in the evolution of language. In doing so, we focus on the growing body of work that treats language structure as emerging from the process of cultural transmission. We argue that a full recognition of the importance of cultural transmission fundamentally changes the kind of questions we should be asking regarding the biological basis of language structure. If we think of language structure as reflecting an accumulated set of changes in our genome, then we might ask something like, “What are the genetic bases of language structure and why were they selected?” However, if cultural evolution can account for language structure, then this question no longer applies. Instead, we face the task of accounting for the origin of the traits that enabled that process of structure-creating cultural evolution to get started in the first place. In light of work on cultural evolution, then, the new question for biological evolution becomes, “How did those precursor traits evolve?” We identify two key precursor traits: the transmission of the communication system through learning; and the ability to infer the communicative intent associated with a signal or action. We then describe two comparative case studies—the Bengalese finch and the domestic dog—in which parallel traits can be seen emerging following domestication. Finally, we turn to the role of domestication in human evolution. We argue that the cultural evolution of language structure has its origin in an earlier process of self-domestication.
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
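For reference, strong independence, the axiom on which the first of these further results turns, is standardly stated as follows (a sketch in the usual lottery notation, which may differ in detail from the paper's own formulation): for a preorder \(\succsim\) on a convex set \(X\) of prospects,

\[
x \succsim y \iff \alpha x + (1-\alpha)z \succsim \alpha y + (1-\alpha)z
\quad \text{for all } x, y, z \in X \text{ and } \alpha \in (0,1].
\]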
We have synthesized a 582,970-base pair Mycoplasma genitalium genome. This synthetic genome, named M. genitalium JCVI-1.0, contains all the genes of wild-type M. genitalium G37 except MG408, which was disrupted by an antibiotic marker to block pathogenicity and to allow for selection. To identify the genome as synthetic, we inserted "watermarks" at intergenic sites known to tolerate transposon insertions. Overlapping "cassettes" of 5 to 7 kilobases (kb), assembled from chemically synthesized oligonucleotides, were joined by in vitro recombination to produce intermediate assemblies of approximately 24 kb, 72 kb ("1/8 genome"), and 144 kb ("1/4 genome"), which were all cloned as bacterial artificial chromosomes in Escherichia coli. Most of these intermediate clones were sequenced, and clones of all four 1/4 genomes with the correct sequence were identified. The complete synthetic genome was assembled by transformation-associated recombination cloning in the yeast Saccharomyces cerevisiae, then isolated and sequenced. A clone with the correct sequence was identified. The methods described here will be generally useful for constructing large DNA molecules from chemically synthesized pieces and also from combinations of natural and synthetic DNA segments. doi:10.1126/science.1151721.
Many believe that the ethical problems of donation after cardiocirculatory death (DCD) have been "worked out" and that it is unclear why DCD should be resisted. In this paper we will argue that DCD donors may not yet be dead, and therefore that organ donation during DCD may violate the dead donor rule. We first present a description of the process of DCD and the standard ethical rationale for the practice. We then present our concerns with DCD, including the following: irreversibility of absent circulation has not occurred, and the many attempts to claim that it has have failed; conflicts of interest at all steps in the DCD process, including the decision to withdraw life support before DCD, are simply unavoidable; potentially harmful premortem interventions to preserve organ utility are not justifiable, even with the help of the principle of double effect; claims that DCD conforms with the intent of the law and current accepted medical standards are misleading and inaccurate; and consensus statements by respected medical groups do not change these arguments, given their low quality, including being plagued by conflicts of interest. Moreover, some arguments in favor of DCD, while likely true, are "straw man" arguments, such as appeals to the great benefit of organ donation. The truth is that honesty and trustworthiness require that we face these problems instead of avoiding them. We believe that DCD is not ethically allowable because it abandons the dead donor rule, has unavoidable conflicts of interest, and implements premortem interventions that can hasten death. These important points have not been, but need to be, fully disclosed to the public and incorporated into fully informed consent. These are tall orders, and they require open public debate. Until this debate occurs, we call for a moratorium on the practice of DCD.
In Value and Context, Alan Thomas articulates and defends the view that human beings do possess moral and political knowledge, but that it is historically and culturally contextual knowledge in ways that, say, mathematical or chemical knowledge is not. In his exposition of "cognitive contextualism" in ethics and politics he makes wide-ranging use of contemporary work in epistemology, moral philosophy, and political theory.
This thesis consists of several independent papers in population ethics. I begin in Chapter 1 by critiquing some well-known 'impossibility theorems', which purport to show there can be no intuitively satisfactory population axiology. I identify axiological vagueness as a promising way to escape or at least mitigate the effects of these theorems. In particular, in Chapter 2, I argue that certain of the impossibility theorems have little more dialectical force than sorites arguments do. From these negative arguments I move to positive ones. In Chapter 3, I justify the use of a 'veil of ignorance', starting from three more basic normative principles. This leads to positive arguments for various kinds of utilitarianism - the best such arguments I know. But in general the implications of the veil depend on how one answers what I call 'the risky existential question': what is the value to an individual of a chance of non-existence? I chart out the main options, and raise some puzzles for non-comparativism, the view that life is incomparable to non-existence. Finally, in Chapter 4, I consider the consequences for population ethics of the idea that what is normatively relevant is not personal identity, but a degreed relation of psychological connectedness. In particular, I pursue a strategy based in population ethics for understanding the controversial 'time-relative interests' account of the badness of death.
Mental imagery (varieties of which are sometimes colloquially referred to as “visualizing,” “seeing in the mind's eye,” “hearing in the head,” “imagining the feel of,” etc.) is quasi-perceptual experience; it resembles perceptual experience, but occurs in the absence of the appropriate external stimuli. It is also generally understood to bear intentionality (i.e., mental images are always images of something or other), and thereby to function as a form of mental representation. Traditionally, visual mental imagery, the most discussed variety, was thought to be caused by the presence of picture-like representations (mental images) in the mind, soul, or brain, but this is no longer universally accepted.
I consider paradoxical spectrum arguments involving transitive relations like 'better than'. I argue that, despite being formally different from sorites arguments, at least some spectrum arguments arise from vagueness, and that vagueness might often be the most natural diagnosis.
Duncan Purves and Nicolas Delon have argued that one’s life will be meaningful to the extent that one contributes to valuable states of affairs and this contribution is a result of one’s intentional actions. They then argue, contrary to some theorists’ intuitions, that non-human animals are capable of fulfilling these requirements, and that this finding might entail important things for the animal ethics movement. In this paper, I also argue that things besides human beings can have meaningful existences, but I disagree with Purves and Delon’s theory of meaning, and some of the practical implications they suggest arise from their conclusion. Specifically, I argue that Purves and Delon are wrong to suggest that intentional agency is necessary for one’s life to be meaningful; contributing to valuable states of affairs can be sufficient by itself. Purves and Delon’s objection to such a claim is that it would allow even inanimate objects’ existences to count as meaningful. However, while I accept this...
This study examined the effect of various antecedent variables on marketers’ perceptions of the role of ethics and social responsibility in the overall success of the firm. Variables examined included Hofstede’s cultural dimensions, as well as corporate ethical values and enforcement of an ethics code. Additionally, individual variables such as ethical idealism and relativism were included. Results indicated that most of these variables impacted marketers’ perceptions of the importance of ethics and social responsibility, although to varying degrees.
What is time? This is one of the most fundamental questions we can ask. Emily Thomas explores how a new theory of time emerged in the seventeenth century. The 'absolute' theory of time held that it is independent of material bodies or human minds, so even if nothing else existed, there would be time.
The growing block view of time holds that the past and present are real whilst the future is unreal; as future events become present and real, they are added on to the growing block of reality. Surprisingly, given the recent interest in this view, there is very little literature on its origins. This paper explores those origins, and advances two theses. First, I show that although C. D. Broad’s Scientific Thought provides the first defence of the growing block theory, the theory receives its first articulation in Samuel Alexander’s Space, Time, and Deity. Further, Alexander’s account of deity inclines towards the growing block view. Second, I argue that Broad shifted towards the growing block theory as a result of his newfound conviction that time has a direction. By way of tying these theses together, I argue that Broad’s views on the direction of time – and possibly even his growing block theory – are sourced in Alexander.
We provide conditions under which an incomplete strongly independent preorder on a convex set X can be represented by a set of mixture-preserving real-valued functions. We allow X to be infinite dimensional. The main continuity condition we focus on is mixture continuity. This is sufficient for such a representation provided X has countable dimension or satisfies a condition that we call Polarization.
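The two technical notions in play here have standard formulations, which it may help to have explicit (a sketch of the usual definitions; the paper's own may differ in detail): a function \(u : X \to \mathbb{R}\) is mixture preserving iff

\[
u(\alpha x + (1-\alpha)y) = \alpha u(x) + (1-\alpha)u(y)
\quad \text{for all } x, y \in X,\ \alpha \in [0,1],
\]

and a preorder \(\succsim\) on \(X\) is mixture continuous iff, for all \(x, y, z \in X\), the sets \(\{\alpha \in [0,1] : \alpha x + (1-\alpha)y \succsim z\}\) and \(\{\alpha \in [0,1] : z \succsim \alpha x + (1-\alpha)y\}\) are closed in \([0,1]\).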
By definition, pain is a sensory and emotional experience that is felt in a particular part of the body. The precise relationship between somatic events at the site where pain is experienced, and central processing giving rise to the mental experience of pain remains the subject of debate, but there is little disagreement in scholarly circles that both aspects of pain are critical to its experience. Recent experimental work, however, suggests a public view that is at odds with this conceptualisation. By demonstrating that the public does not necessarily endorse central tenets of the “mental” view of pain (subjectivity, privacy, and incorrigibility), experimental philosophers have argued that the public holds a more “body-centric” view than most clinicians and scholars. Such a discrepancy would have important implications for how the public interacts with pain science and clinical care. In response, we tested the hypothesis that the public is capable of a more “mind-centric” view of pain. Using a series of vignettes, we demonstrate that in situations which highlight mental aspects of pain the public can, and does, recognize pain as a mental phenomenon. We also demonstrate that the public view is subject to context effects, by showing that the public’s view is modified when situations emphasizing mental and somatic aspects of pain are presented together.
Kriegel described the problem of intentional inexistence as one of the ‘perennial problems of philosophy’ (2007: 307). In the same paper, Kriegel alluded to a modal realist solution to the problem of intentional inexistence, but he does not state by name who defends the kind of modal realist solution he has in mind. Kriegel also points out that even what he believes to be the strongest version of modal realism does not pass the ‘principle of representation’, and thus that modal realism is not an adequate solution to the problem of intentional inexistence. In this paper, I respond to Kriegel by defending a modal realist solution that he did not consider in 2007: ‘extended modal realism’ (EMR). EMR is a version of modal realism in which possible worlds are not completely isolated, as they are on the Lewisian model; rather, EMR worlds are, in a way, spatiotemporally related. The fact that EMR worlds are related allows EMR to pass the principle of representation, and thus EMR can be deemed a legitimate solution to the problem of intentional inexistence. I conclude that either EMR can pass the principle of representation in some cases or, and I think this the more sensible option, we should give up on the principle of representation altogether.
What makes us conscious? Many theories that attempt to answer this question have appeared recently in the context of widespread interest in consciousness in the cognitive neurosciences. Most of these proposals are formulated in terms of the information processing conducted by the brain. In this overview, we survey and contrast these models. We first delineate several notions of consciousness, addressing what it is that the various models are attempting to explain. Next, we describe a conceptual landscape that addresses how the theories attempt to explain consciousness. We then situate each of several representative models in this landscape and indicate which aspect of consciousness they try to explain. We conclude that the search for the neural correlates of consciousness should be usefully complemented by a search for the computational correlates of consciousness.
Super-substantivalism is the thesis that space is identical to matter; it is currently under discussion in contemporary philosophy of physics and metaphysics (see Sklar 1977, 221–4; Earman 1989, 115–6; Schaffer 2009). Given this current interest, it is worth investigating the thesis in the history of philosophy. This paper examines the super-substantivalism of Samuel Alexander, an early twentieth-century metaphysician primarily associated with (the movement now known as) British Emergentism. Alexander argues that spacetime is ontologically fundamental and that it gives rise to an ontological hierarchy of emergence, involving novel properties such as matter, life, and mind. Alexander's super-substantivalism is interesting not just because of its historical importance but also because Alexander unusually attempts to explain why spacetime is identical to matter. This paper carefully unpacks that explanation and shows that Alexander is best read as conceiving of spacetime as a Spinozistic substance worked upon by evolution.
For centuries, philosophers of time have produced texts containing words and pictures. Although some historians study visual representations of time, I have not found any history of philosophy on pictures of time within texts. This paper argues that studying such pictures can be rewarding. I will make this case by studying pictures of time in the works of Leibniz, Arthur Eddington and C. D. Broad, and argue they play subtle roles. Further, I will argue that historians of philosophy more widely could benefit from paying more attention to pictures.
Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.
Using Hofstede's culture theory (1980; 2001, Culture's Consequences: Comparing Values, Behaviors, Institutions, and Organizations Across Nations, Sage, New York), the current study incorporates the moral development (e.g. Thorne, 2000; Thorne and Magnan, 2000; Thorne et al., 2003) and multidimensional ethics scale (e.g. Cohen et al., 1993; Cohen et al., 1996b; Cohen et al., 2001; Flory et al., 1992) approaches to compare the ethical reasoning and decisions of Canadian and Mainland Chinese final-year undergraduate accounting students. The results indicate that Canadian accounting students' formulation of an intention to act on a particular ethical dilemma (deliberative reasoning), as measured by the moral development approach (Thorne, 2000), was higher than that of Mainland Chinese accounting students. The current study proposes that the five factors identified by the multidimensional ethics scale (MES) as being relevant to ethical decision making can be placed into the three levels of ethical reasoning identified by Kohlberg's (1958, The Development of Modes of Moral Thinking and Choice in the Years Ten to Sixteen, doctoral dissertation, University of Chicago) theory of cognitive moral development. Canadian accounting students used post-conventional MES factors (moral equity, contractualism, and utilitarianism) more frequently and made more ethical audit decisions than Chinese accounting students.
A theory of the structure and cognitive function of the human imagination that attempts to do justice to traditional intuitions about its psychological centrality is developed, largely through a detailed critique of the theory propounded by Colin McGinn. Like McGinn, I eschew the highly deflationary views of imagination, common amongst analytical philosophers, that treat it either as a conceptually incoherent notion, or as psychologically trivial. However, McGinn fails to develop his alternative account satisfactorily because (following Reid, Wittgenstein and Sartre) he draws an excessively sharp, qualitative distinction between imagination and perception, and because of his flawed, empirically ungrounded conception of hallucination. His arguments in defense of these views are rebutted in detail, and the traditional, passive, Cartesian view of visual perception, upon which several of them implicitly rely, is criticized in the light of findings from recent cognitive science and neuroscience. It is also argued that the apparent intuitiveness of the passive view of visual perception is a result of mere historical contingency. An understanding of perception (informed by modern visual science) as an inherently active process enables us to unify our accounts of perception, mental imagery, dreaming, hallucination, creativity, and other aspects of imagination within a single coherent theoretical framework.
Virtue ethics has long provided fruitful resources for the study of issues in medical ethics. In particular, study of the moral virtues of the good doctor (kindness, fairness, good judgement, and the like) has provided insights into the nature of medical professionalism and the ethical demands on the medical practitioner as a moral person. Today, a substantial literature exists exploring the virtues in medical practice, and many commentators advocate an emphasis on the inculcation of the virtues of good medical practice in medical education and throughout the medical career. However, until very recently, no empirical studies have attempted to investigate which virtues, in particular, medical doctors and medical students tend to have or not to have, nor how these virtues influence how they think about or practise medicine. The question of what virtuous medical practice is, is vast and, as we have written elsewhere, the question of how to study doctors’ moral character is fraught with difficulty. In this paper, we report the results of a first-of-its-kind study that attempted to explore these issues at three medical schools in the United Kingdom. We identify which character traits medical students and doctors consider important in the good doctor, and which virtues they say of themselves that they possess or lack. Moreover, we identify how thinking about the virtues contributes to doctors’ and medical students’ thinking about common moral dilemmas in medicine. In closing, we remark on the implications for medical education.
The Cartesian view that animals are automata sparked a major controversy in early modern European philosophy. This paper studies an early contribution to this controversy. I provide an interpretation of an influential objection to Cartesian animal automatism raised by Ignace-Gaston Pardies (1636–1673). Pardies objects that the Cartesian arguments show only that animals lack ‘intellectual perception’ but do not show that animals lack ‘sensible perception.’ According to Pardies, the difference between these two types of perception is that the former is reflexive, such that we perceive both an object and the perception itself, whereas sensible perception lacks this reflexivity. This notion of sensible perception was criticized by the Cartesian Antoine Dilly for violating the doctrine that all thought is conscious. However, I argue that sensible perceptions are not unconscious for Pardies. Rather, they are conscious perceptions that are unaccompanied by a kind of reflexive perception that is constitutive of attention. Moreover, I argue that, when understood in this way, Pardies raises a compelling objection to Cartesian animal automatists.
Moral distress is one of the core topics of clinical ethics. Although there is a large and growing empirical literature on the psychological aspects of moral distress, scholars and empirical investigators of moral distress have recently called for greater conceptual clarity. To meet this recognized need, we provide a philosophical taxonomy of the categories of what we call ethically significant moral distress: the judgment that one is not able, to differing degrees, to act on one’s moral knowledge about what one ought to do. We begin by unpacking the philosophical components of Andrew Jameton’s original formulation from his landmark 1984 work and identify two key respects in which that formulation remains unclear: the origins of moral knowledge and the impediments to acting on that moral knowledge. We then selectively review subsequent literature, which shows that there is more than one concept of moral distress and explores the origin of the values implicated in moral distress and the impediments to acting on those values. This review sets the stage for identifying the elements of a philosophical taxonomy of ethically significant moral distress. The taxonomy uses these elements to create six categories of ethically significant moral distress: challenges to, threats to, and violations of professional integrity; and challenges to, threats to, and violations of individual integrity. We close with suggestions about how the proposed taxonomy sheds light on the concepts of moral residue and the crescendo effect of moral distress, and how it might usefully guide the prevention of, and future qualitative and quantitative empirical research on, ethically significant moral distress.