Actual causes - e.g. Suzy's being exposed to asbestos - often bring about their effects - e.g. Suzy's suffering mesothelioma - probabilistically. I use probabilistic causal models to tackle one of the thornier difficulties for traditional accounts of probabilistic actual causation: namely probabilistic preemption.
I argue that there are non-trivial objective chances (that is, objective chances other than 0 and 1) even in deterministic worlds. The argument is straightforward. I observe that there are probabilistic special scientific laws even in deterministic worlds. These laws project non-trivial probabilities for the events that they concern. And these probabilities play the chance role and so should be regarded as chances as opposed, for example, to epistemic probabilities or credences. The supposition of non-trivial deterministic chances might seem to land us in contradiction. The fundamental laws of deterministic worlds project trivial probabilities for the very same events that are assigned non-trivial probabilities by the special scientific laws. I argue that any appearance of tension is dissolved by recognition of the level-relativity of chances. There is therefore no obstacle to accepting non-trivial chance-role-playing deterministic probabilities as genuine chances.
In Making Things Happen, James Woodward influentially combines a causal modeling analysis of actual causation with an interventionist semantics for the counterfactuals encoded in causal models. This leads to circularities, since interventions are defined in terms of both actual causation and interventionist counterfactuals. Circularity can be avoided by instead combining a causal modeling analysis with a semantics along the lines of that given by David Lewis, on which counterfactuals are to be evaluated with respect to worlds in which their antecedents are realized by miracles. I argue, pace Woodward, that causal modeling analyses perform just as well when combined with the Lewisian semantics as when combined with the interventionist semantics. Reductivity therefore remains a reasonable hope.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
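The naïve probability-raising intuition mentioned above can be stated as a simple comparative condition. The following sketch makes it concrete; the function name and numbers are purely illustrative, not taken from any analysis discussed in the abstract:

```python
def raises_probability(p_e_given_c, p_e_given_not_c):
    """Naive probability-raising test: C counts as a cause of E
    iff P(E | C) > P(E | not-C)."""
    return p_e_given_c > p_e_given_not_c

# Illustrative numbers only: exposure raises the probability of disease.
print(raises_probability(0.15, 0.01))   # True: probability-raising
print(raises_probability(0.01, 0.15))   # False: probability-lowering
```

The counterexamples the abstract mentions are precisely cases where this naive comparative test and intuitive causal judgment come apart.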
An influential tradition in the philosophy of causation has it that all token causal facts are, or are reducible to, facts about difference-making. Challenges to this tradition have typically focused on pre-emption cases, in which a cause apparently fails to make a difference to its effect. However, a novel challenge to the difference-making approach has recently been issued by Alyssa Ney. Ney defends causal foundationalism, which she characterizes as the thesis that facts about difference-making depend upon facts about physical causation. She takes this to imply that causation is not fundamentally a matter of difference-making. In this paper, I defend the difference-making approach against Ney’s argument. I also offer some positive reasons for thinking, pace Ney, that causation is fundamentally a matter of difference-making.
In this book, Mumford and Anjum advance a theory of causation based on a metaphysics of powers. The book is for the most part lucidly written, and contains some interesting contributions: in particular on the necessary connection between cause and effect and on the perceivability of the causal relation. I do, however, have reservations about some of the book’s central theses: in particular, that cause and effect are simultaneous, and that causes can fruitfully be represented as vectors.
An actual cause of some token effect is itself a token event that helped to bring about that effect. The notion of an actual cause is different from that of a potential cause – for example a pre-empted backup – which had the capacity to bring about the effect, but which wasn't in fact operative on the occasion in question. Sometimes actual causes are also distinguished from mere background conditions: as when we judge that the struck match was a cause of the fire, while the presence of oxygen was merely part of the relevant background against which the struck match operated. Actual causation is also to be distinguished from type causation: actual causation holds between token events in a particular, concrete scenario; type causation, by contrast, holds between event kinds in scenario kinds.
In an illuminating article, Claus Beisbart argues that the recently-popular thesis that the probabilities of statistical mechanics (SM) are Best System chances runs into a serious obstacle: there is no one axiomatization of SM that is robustly best, as judged by the theoretical virtues of simplicity, strength, and fit. Beisbart takes this 'no clear winner' result to imply that the probabilities yielded by the competing axiomatizations simply fail to count as Best System chances. In this reply, we express sympathy for the 'no clear winner' thesis. However, we argue that an importantly different moral should be drawn from this. We contend that the implication for Humean chances is not that there are no SM chances, but rather that SM chances fail to be sharp.
The discovery of high-level causal relations seems a central activity of the special sciences. Those same sciences are less successful in formulating strict laws. If causation must be underwritten by strict laws, we are faced with a puzzle, which might be dubbed the 'no strict laws' problem for high-level causation. Attempts have been made to dissolve this problem by showing that leading theories of causation do not in fact require that causation be underwritten by strict laws. But this conclusion has been too hastily drawn. Philosophers have tended to equate non-strict laws with ceteris paribus laws. I argue that there is another category of non-strict law that has often not been properly distinguished: namely, minutis rectis laws. If, as it appears, many special science laws are minutis rectis laws, then this poses a problem for their ability to underwrite causal relations in a way that their typically ceteris paribus nature does not. I argue that the best prospect for resolving the resurgent 'no strict laws' problem is to argue that special science laws are in fact typically probabilistic, rather than being minutis rectis laws.
Though almost forty years have elapsed since its first publication, it is a testament to the philosophical acumen of its author that 'The Matter of Chance' contains much that is of continued interest to the philosopher of science. Mellor advances a sophisticated propensity theory of chance, arguing that this theory makes better sense than its rivals (in particular subjectivist, frequentist, logical and classical theories) of ‘what professional usage shows to be thought true of chance’ (p. xi) – in particular ‘that chance is objective, empirical and not relational, and that it applies to the single case’ (ibid.). The book is short and dense, with the serious philosophical content delivered thick and fast. There is little by way of road-mapping or summarising to assist the reader: the introduction is hardly expansive and the concluding paragraph positively perfunctory. The result is that the book is often difficult going, and the reader is made to work hard to ensure correct understanding of the views expressed. On the other hand, the author’s avoidance of unnecessary use of formalism and jargon ensures that the book is still reasonably accessible. In the following, I shall first summarise the key features of Mellor’s propensity theory, and then offer a few critical remarks.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In their simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy: peer testimony about credences can provide mutually supporting evidence, raising an individual's posterior credence above any peer's initially reported credence. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and the failure of other updating rules to yield synergy is a strike against them.
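The abstract does not spell out the rules, but one simple rule exhibiting the listed symmetry and synergy properties is multiplicative odds pooling (here assuming a uniform prior); the function and numbers below are an illustrative sketch, not reproduced from the paper:

```python
def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1.0 - p)

def pool(credences):
    """Pool peers' credences by multiplying their odds (symmetric in the
    agents, assuming a uniform prior). Agreeing peers reinforce one
    another, so the pooled credence can exceed every individual report:
    the 'synergy' property."""
    o = 1.0
    for p in credences:
        o *= odds(p)
    return o / (1.0 + o)

# Two peers each reporting 0.7 pool to roughly 0.845 - above either report.
print(round(pool([0.7, 0.7]), 3))  # 0.845
```

A single report is left unchanged by this rule (`pool([0.6])` returns 0.6), so the synergy shows up only when multiple agreeing peers are combined.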
We investigate whether standard counterfactual analyses of causation (CACs) imply that the outcomes of space-like separated measurements on entangled particles are causally related. Although it has sometimes been claimed that standard CACs imply such a causal relation, we argue that a careful examination of David Lewis’s influential counterfactual semantics casts doubt on this. We discuss ways in which Lewis’s semantics and standard CACs might be extended to the case of space-like correlations.
In their article 'Causes and Explanations: A Structural-Model Approach. Part I: Causes', Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation. 1 Introduction 2 Preemption 3 Structural Equation Models 4 The Halpern and Pearl Definition of ‘Actual Cause’ 5 Preemption Again 6 The Probabilistic Case 7 Probabilistic Causal Models 8 A Proposed Probabilistic Extension of Halpern and Pearl’s Definition 9 Twardy and Korb’s Account 10 Probabilistic Fizzling 11 Conclusion.
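A structural equation model of the kind Halpern and Pearl work with can be sketched for a standard preemption case. The variable names and equations below encode the classic Suzy/Billy rock-throwing example and are illustrative, not reproduced from the paper:

```python
def bottle_shatters(st=1, bt=1):
    """Structural equations for late preemption.
    ST: Suzy throws; BT: Billy throws; SH: Suzy's rock hits;
    BH: Billy's rock hits (only if Suzy's missed); BS: bottle shatters."""
    sh = st                   # SH := ST
    bh = int(bt and not sh)   # BH := BT and not SH (Suzy's rock arrives first)
    bs = int(sh or bh)        # BS := SH or BH
    return bs

print(bottle_shatters())             # 1: both throw, the bottle shatters
print(bottle_shatters(st=0))         # 1: without Suzy, Billy's rock shatters it
print(bottle_shatters(st=0, bt=0))   # 0: no throws, no shattering
```

Because the bottle still shatters under the intervention ST = 0, simple counterfactual dependence fails to identify Suzy's throw as a cause; Halpern and Pearl's definition recovers it by additionally holding BH fixed at its actual value of 0.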
Special science generalizations admit of exceptions. Among the class of non-exceptionless special science generalizations, I distinguish minutis rectis generalizations from the more familiar category of ceteris paribus generalizations. I argue that the challenges involved in showing that mr generalizations can play the law role are underappreciated, and quite different from those involved in showing that cp generalizations can do so. I outline a strategy for meeting the challenges posed by mr generalizations.
Much recent philosophical attention has been devoted to the prospects of the Best System Analysis of chance for yielding high-level chances, including statistical mechanical and special science chances. But a foundational worry about the BSA lurks: there don’t appear to be uniquely correct measures of the degree to which a system exhibits theoretical virtues, such as simplicity, strength, and fit. Nor does there appear to be a uniquely correct exchange rate at which the theoretical virtues trade off against one another in the determination of an overall best system. I argue that there’s no robustly best system for our world – no system that comes out best under every reasonable measure of the theoretical virtues and exchange rate between them – but rather a set of ‘tied-for-best’ systems: a set of very good systems, none of which is robustly best. Among the tied-for-best systems are systems that entail differing high-level probabilities. I argue that the advocate of the BSA should conclude that the high-level chances for our world are imprecise.
Much recent philosophical attention has been devoted to variants on the Best System Analysis of laws and chance. In particular, philosophers have been interested in the prospects of such Best System Analyses for yielding *high-level* laws and chances. Nevertheless, a foundational worry about BSAs lurks: there do not appear to be uniquely appropriate measures of the degree to which a system exhibits theoretical virtues, such as simplicity and strength. Nor does there appear to be a uniquely correct exchange rate at which the theoretical virtues of simplicity, strength, and likelihood trade off against one another in the determination of a best system. Moreover, it may be that there is no *robustly* best system: no system that comes out best under *every* reasonable measure of the theoretical virtues and exchange rate between them. This worry has been noted by several philosophers, with some arguing that there is indeed plausibly a set of tied-for-best systems for our world. Some have even argued that this entails that there are no Best System laws or chances in our world. I argue that, while it *is* plausible that there is a set of tied-for-best systems for our world, it doesn't follow from this that there are no Best System chances. Rather, it follows that the Best System chances for our world are *unsharp*.
From Amazon.com: Love, fear, hope, calculus, and game shows - how do all these spring from a few delicate pounds of meat? Neurophysiologist Ian Glynn lays the foundation for answering this question in his expansive An Anatomy of Thought, but stops short of committing to one particular theory. The book is a pleasant challenge, presenting the reader with the latest research and thinking about neuroscience and how it relates to various models of consciousness. Combining the aim of a textbook with the style of a popularization, it provides all the lay reader needs to know to participate in the philosophical debate that is redefining our attitudes about our minds. Drawing on the rich history of neurological case studies, Glynn picks through the building blocks of our nervous system, examines our visual and linguistic systems, and probes deeply into our higher thought processes. The stories of great scientists, like Ramon y Cajal, and famous patients, like Sperry's split-brained epileptics, illuminate the scientific issues Glynn selects as essential for understanding consciousness. Some might argue that his lengthy explorations of natural selection overemphasize evolutionary explanations of psychological phenomena, but they must also agree that evolutionary psychology has distanced itself mightily from social Darwinism in recent years and merits a reappraisal. The great consciousness debate may form the core of the 21st-century Zeitgeist; get ready for it with An Anatomy of Thought. - Rob Lightner
From Publishers Weekly: How do we know? What do we think? How could a philosophical problem - 'the mind-body problem,' say - induce a headache? What can evolutionary theory, molecular biology, the history of medicine and experimental psychology tell us about the features of human consciousness, and (once again) how do we know? Glynn, a physician and Cambridge University professor, meticulously attempts to answer these questions and more, setting forth the results of all sorts of research relevant to our brains - from 19th-century dissections to Oliver Sacks-like case studies, work with monkeys and supercomputers, and the enduring puzzles of philosophy, which he rightly saves for near the end. After explaining evolution by natural selection and 'clearing away much dross,' Glynn lays out the experiments and theories that have shown 'how nerve cells can carry information about the body, how they can interact' and how sense organs work; demonstrates the 'mixture of parallel and hierarchical organization' in our brains and 'the striking localization of function within it'; considers where neuroscience is likely to go; and admits that, among the many fields of exciting research just ahead, 'we can be least confident of progress toward a complete, scientific explanation of our sensations and thoughts and feelings.' Other recent explaining-the-brain books have sometimes advanced simplistic, or implausibly grand, claims about the nature and features of consciousness in general. Instead, Glynn offers a patient, informative, well-laid-out researcher's-eye view of what we have learned, how we figured it out and what we still don't know about neurons, senses, feelings, brains and minds. (Apr.) Copyright 2000 Reed Business Information, Inc.
From Library Journal: The nature of consciousness, which perennially troubles the minds of scientists and philosophers, is the subject of an ever-growing body of literature. Two of the latest entries approach the topic from different perspectives. Glynn, a professor of physiology and head of the Physiological Laboratory at Cambridge, offers a comprehensive summary of what we know about the brain - both its evolution and its mechanisms. Among the topics he covers are natural selection, molecular evolution, nerves and the nervous system, sensory perception, and the specific structures responsible for our intellect. Using the mechanisms involved in vision and speech as models, Glynn skillfully describes various neurological deficiencies that can lead to 'disordered seeing' and problems with the use of language. He carefully distinguishes what we know through experimental evidence from what we know through the observation of patients with neurological damage. He also describes some of the major theories that attempt to explain why these structures arose. While his book concentrates on the structures that make up the mind, Glynn is well aware that some physical events appear explicable only in terms of conscious mental events - a situation that conflicts with the laws of modern physics. Only briefly, however, does he consider the various approaches that have been taken to deal with the issues of mind/body and free will. In contrast, this is the primary focus of The Physics of Consciousness. After reviewing the fundamentals of classic physics, Walker (who has a Ph.D. in physics) summarizes elements of the new physics in which our knowledge of space, time, matter, and energy are all dependent on the moment of observation. Walker explores the meaning of consciousness as a characteristic of the observer. In this context both the observer and the act of measurement are critical. In essence, Walker leads his reader on a journey through his concept of a 'quantum mind,' which can both affect matter (including other minds) and can be affected by other distant matter/minds. To break up what would otherwise be an extremely dense text, Walker also relates the very touching story of the loss of his high-school sweetheart to leukemia. Indeed, it is his memory of their relationship that drives Walker to seek an understanding of ultimate reality. At times, he has a tendency to be dogmatic - as when he concludes, 'our consciousness, our mind, and the will of God are the same mind.' While An Anatomy of Thought is appropriate for most academic libraries, The Physics of Consciousness will be most accessible to readers with some knowledge of advanced physics. - Laurie Bartolini, Illinois State Lib., Springfield
From Booklist: The codiscoverers of natural selection - Charles Darwin and Alfred Wallace - disagreed over the possibility of finding an evolutionary explanation for the human mind. Glynn here argues Darwin's side of the debate, tracing an eons-long path of development starting from simple amino acids floating in primal seas and extending through the erect hominids in which the powers of a massive brain first manifest themselves. Patiently adducing evidence of an evolutionary origin for the underlying molecular machinery, Glynn dissects the nerve centers that make possible speech and hearing, sight, and reading. Pressing deeper, he lays bare the cortical foundations of personality. But those who deal with the mind must attend also to the arguments advanced by philosophers. And it is when he turns from dendrites to syllogisms (especially the vexing mind-body paradox) that Glynn's empirical reasoning fails him. In the end, he concedes his perplexity in trying to conceive of an evolutionary origin for human consciousness. This concession may set the shade of Alfred Wallace to chortling, but it will draw readers into an honest confrontation with a profound enigma. - Bryce Christensen.
At a time when global warming due to greenhouse gas emissions poses a clear and present threat to the environment, the nuclear energy industry is gearing up to provide a solution to this problem, trading upon a number of fallacies to argue that it neither makes, nor will in future make, any significant contribution to these or to other radiation-linked diseases. This paper exposes these fallacies and argues, to the contrary, that even should the industry be able to avoid all accidents, routine radioactive emissions to the environment during power production and fuel reprocessing threaten to destroy all human life on the planet. Dr. Glynn is a Full Professor of Philosophy at Florida Atlantic University, having earned his doctorate at The University of Manchester, England. His many publications include Continental and Postmodern Perspectives in the Philosophy of Science, eds. Babette Babich, Debra Bergoffen and Simon Glynn, (Vermont: Avebury, 1995). This paper was presented at the Comparative Studies Association 2008 Conference: Interdisciplinarity and Environmental Sustainability.
The idea of elegance in science is not necessarily a familiar one, but it is an important one. The use of the term is perhaps most clear-cut in mathematics - the elegant proof - and this is where Ian Glynn begins his exploration. Scientists often share a sense of admiration and excitement on hearing of an elegant solution to a problem, an elegant theory, or an elegant experiment. The idea of elegance may seem strange in a field of endeavour that prides itself on its objectivity, but only if science is regarded as a dull, dry activity of counting and measuring. It is, of course, far more than that, and elegance is a fundamental aspect of the beauty and imagination involved in scientific activity. Ian Glynn, a distinguished scientist, selects historical examples from a range of sciences to draw out the principles of science, including Kepler's Laws, the experiments that demonstrated the nature of heat, and the action of nerves, and of course the several extraordinary episodes that led to Watson and Crick's discovery of the structure of DNA. With a highly readable selection of inspiring episodes highlighting the role of beauty and simplicity in the sciences, the book also relates to important philosophical issues of inference, and Glynn ends by warning us not to rely on beauty and simplicity alone - even the most elegant explanation can be wrong.
In this cross-cultural exploration of the comparative experiences of Asian and Western women in higher education management, leading feminist theorist Carmen Luke constructs a provocative framework that situates her own standpoint and experiences alongside those of Asian women she studied over a three-year period. She conveys some of the complexity of global sweeps and trends in education and feminist discourse as they intersect with local cultural variations but also dovetail into patterns of regional similarities. Western feminist research has established that relatively few women hold senior positions in universities and colleges. Using the now common metaphor of the "glass ceiling," this research has developed a range of social, cultural, and institutional explanations for women's underrepresentation in academic life. International studies show that women in non-Western countries are also underrepresented in higher education. Yet do Western explanations and strategies for change hold for academic women working in non-Western universities? The very diversity among women's experiences calls into question many of the analytic tools, terms, claims, and solutions formulated by Western feminism. This is the first study to show how cultural differences figure into the institutional dynamics of "glass ceilings." It raises important theoretical and practical, strategic, and tactical questions about issues of cultural difference and institutional power.
A Bayesian network (BN) is a graphical model of uncertainty that is especially well suited to legal arguments. It enables us to visualize and model dependencies between different hypotheses and pieces of evidence and to calculate the revised probability beliefs about all uncertain factors when any piece of new evidence is presented. Although BNs have been widely discussed and recently used in the context of legal arguments, there is no systematic, repeatable method for modeling legal arguments as BNs. Hence, where BNs have been used in the legal context, they are presented as completed pieces of work, with no insights into the reasoning and working that must have gone into their construction. This means the process of building BNs for legal arguments is ad hoc, with little possibility for learning and process improvement. This article directly addresses this problem by describing a method for building useful legal arguments in a consistent and repeatable way. The method complements and extends recent work by Hepler, Dawid, and Leucari (2007) on object-oriented BNs for complex legal arguments and is based on the recognition that such arguments can be built up from a small number of basic causal structures (referred to as idioms). We present a number of examples that demonstrate the practicality and usefulness of the method.
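The simplest of the causal structures the article calls idioms is an evidence idiom: a hypothesis node with evidence nodes conditioned on it. The hand-rolled sketch below shows the kind of belief revision involved; all probabilities are illustrative, and the evidence items are assumed conditionally independent given the hypothesis:

```python
def posterior(prior_h, likelihoods, observations):
    """Revised belief in hypothesis H after observing evidence, via Bayes'
    theorem. likelihoods[i] = (P(Ei=true | H), P(Ei=true | not-H));
    observations[i] says whether evidence item Ei was observed as true."""
    num = prior_h        # accumulates P(H) * P(evidence | H)
    den = 1.0 - prior_h  # accumulates P(not-H) * P(evidence | not-H)
    for (p_h, p_not_h), seen in zip(likelihoods, observations):
        num *= p_h if seen else (1.0 - p_h)
        den *= p_not_h if seen else (1.0 - p_not_h)
    return num / (num + den)

# Prior P(guilty) = 0.01; two matching forensic traces, each far likelier
# under guilt than under innocence.
p = posterior(0.01, [(0.95, 0.05), (0.90, 0.10)], [True, True])
print(round(p, 3))  # 0.633
```

In a full BN, idioms like this are composed into larger graphs and inference is done over the whole structure; this fragment only illustrates the single-hypothesis case.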
Among the "hard cases" of captive animal research is the continued use of chimpanzees in harmful experimental science. In a recent article I contend that contemporary animal welfare science and chimpanzee behavioral studies permit, if not require, a reappraisal of the moral significance of chimpanzee dissent from participation in certain experiments. In what follows, I outline my earlier argument, provide a brief survey of some central concepts in pediatric research ethics, and use these to enrich an understanding of chimpanzee dissent useful for research ethics.
Locked-in syndrome (LIS) is a severe neurological condition that typically leaves a patient unable to move, talk and, in many cases, initiate communication. Brain Computer Interfaces (or BCIs) promise to enable individuals with conditions like LIS to re-engage with their physical and social worlds. In this paper we will use extended mind theory to offer a way of seeing the potential of BCIs when attached to, or implanted in, individuals with LIS. In particular, we will contend that functionally integrated BCIs extend the minds of individuals with LIS beyond their bodies, allowing them greater autonomy than they can typically hope for in living with their condition. This raises important philosophical questions about the implications of BCI technology, particularly the potential to change selves, and ethical questions about whether society has a responsibility to aid these individuals in re-engaging with their physical and social worlds. It also raises some important questions about when these interventions should be offered to individuals with LIS and respecting the rights of these individuals to refuse intervention. By aiding willing individuals in re-engaging with their physical and social worlds, BCIs open up avenues of opportunity taken for granted by able individuals and introduce new ways in which these individuals can be harmed. These latter considerations serve to highlight our emergent social responsibilities to those individuals who will be suitable for, and receive, BCIs.
Ingmar Persson and Julian Savulescu argue that non-traditional forms of cognitive enhancement (those involving genetic engineering or pharmaceuticals) present a serious threat to humanity, since the fruits of such enhancement, accelerated scientific progress, will give the morally corrupt minority of humanity new and more effective ways to cause great harm. And yet it is scientific progress, accelerated by non-traditional cognitive enhancement, which could allow us to dramatically morally enhance human beings, thereby eliminating, or at least reducing, the threat from the morally corrupt minority. I argue that this apparently intractable dilemma is less difficult to resolve than Persson and Savulescu suppose. Their analysis of non-traditional cognitive enhancement overstates the risks and undervalues the benefits that such enhancement might provide. Once the benefits are better described, it is clear that non-traditional cognitive enhancement could be the means of our survival, not of our destruction.
In this article, we critically examine some of the ethical challenges and interpretive difficulties with possible future non-clinical applications of pediatric fMRI with a particular focus on applications in the classroom and the courtroom - two domains in which children come directly in contact with the state. We begin with a general overview of anticipated clinical and non-clinical applications of pediatric fMRI. This is followed by a detailed analysis of a range of ethical challenges and interpretive difficulties that trouble the use of fMRI and are likely to be especially acute with non-clinical uses of the technology. We conclude that knowledge of these challenges and difficulties should influence policy decisions regarding the non-clinical uses of fMRI. Our aim is to encourage the development of future policies prescribing the responsible use of this neuroimaging technology as it develops both within and outside the clinical setting.
The meaning of elegance -- Celestial mechanics : the route to Newton -- Bringing the heavens down to earth -- So what is heat? -- Elegance and electricity -- Throwing light on light : with the story of Thomas Young -- How do nerves work? -- Information handling in the brain -- The genetic code -- Epilogue : a cautionary tale.
This article describes the nature of animal abuse and the response of the criminal justice system to all cruelty cases prosecuted by the Massachusetts Society for Prevention of Cruelty to Animals between 1975 and 1996. Dogs were the most common target; when combined with cats, these domestic animals composed the vast majority of incidents. Almost all of these animals were owned, and females were the majority of complainants. Suspects were almost always young males, and most of the time they allegedly shot, beat, stabbed, or threw their victims. Reportedly, adults were more likely than minors to abuse dogs, shoot them, and commit such acts alone rather than in a group, while minors were more likely to abuse cats, beat them, and commit such acts with peers present. Less than half of the alleged abusers were found guilty in court, one-third were fined, less than one-quarter had to pay restitution, one-fifth were put on probation, one-tenth were sent to jail, and an even smaller percent were required to undergo counseling or perform community service.
“Neurodiversity” is associated with the struggle for the civil rights of all those diagnosed with neurological or neurodevelopmental disorders. Two basic approaches in the struggle for what might be described as “neuro-equality” are taken up in the literature: there is a challenge to current nosology that pathologizes all of the phenotypes associated with neurological or neurodevelopmental disorders; and there is a challenge to those extant social institutions that either expressly or inadvertently model a social hierarchy in which the interests or needs of individuals are ranked relative to what is regarded as properly functioning cognitive capacities. In this paper, we explore some of the reasons that make neurodiversity an important tool for achieving greater neuro-equality, while still recognizing its limitations for achieving this goal. In particular, we explore how an appeal to functionality and neurological diversity can support a re-seeing of at least certain forms of ASD.
In moral psychology, it has long been argued that empathy is a necessary capacity of both properly developing moral agents and developed moral agency. This view stands in tension with the belief that some individuals diagnosed with autism—which is typically characterized as a deficiency in social reciprocity—are moral agents. In this paper we propose to explore this tension and perhaps trouble how we commonly see those with autism. To make this task manageable, we will consider whether high functioning (...) individuals diagnosed with an autism spectrum disorder are capable of empathetic responses. If they are, then they possess a capacity that, on the view above, is required for moral agency. If they are not so capable, and yet sometimes engage in moral behaviour, this casts some doubt on the claim that empathy is required for moral agency. This second possibility will necessitate an exploration of the capacity of some individuals with autism to engage in moral behaviour, giving us further grounds to re-see these individuals as moral agents. (shrink)
The call has been made for global bioethics. In an age of pandemics, international drug trials, and genetic technology, health has gone global, and bioethics must follow suit. George Annas is one among a number of thinkers to recommend that bioethics expand beyond its traditional domain of patient–physician interactions to encompass a broader range of health-related matters. Medicine, Annas argues, must “develop a global language and a global strategy that can help to improve the health of all of the world's (...) citizens.” Individual countries cannot address global health issues, and culturally specific principles are inadequate for addressing global bioethics concerns. We will need a language and moral framework grounded in a foundation of universally shared, transcultural judgments about humankind that will also recognize moral pluralism. The claim has been made that such a foundation already exists in human rights, and that human rights should, therefore, be the new lingua franca of bioethics. (shrink)
This work was supported, in part, by a Stem Cell Network grant to Françoise Baylis and Jason Scott Robert and a CIHR grant to Françoise Baylis. We sincerely thank Alan Fine, Rich Campbell, Cynthia Cohen, and Tim Krahn for helpful comments on an earlier draft of this paper. Thanks are also owed to Tim Krahn for his research assistance. An earlier version of this paper was presented to the Department of Bioethics and the Novel Tech Ethics research team. We thank (...) the participants at each of these meetings for their helpful comments. (shrink)
Citing grounds of conscience, pharmacists are increasingly refusing to fill prescriptions for emergency contraception, or the "morning-after pill." Whether correctly or not, these pharmacists believe that emergency contraception either constitutes the destruction of post-conception human life, or poses a significant risk of such destruction. We argue that the liberty of conscientious refusal grounds a strong moral claim, one that cannot be defeated solely by consideration of the interests of those seeking medication. We examine, and find lacking, five arguments for requiring (...) pharmacists to fill prescriptions. However, we argue that in their professional context, pharmacists benefit from liberty restrictions on those seeking medication. What would otherwise amount to very strong claims can be defeated if they rest on some prior restriction of the liberty of others. We conclude that the issue of what policy should require pharmacists to do must be settled by way of a theory of second best. Asking "What is second best?" rather than "What is best?" offers a way to navigate the liberty restrictions that may be fixed obstacles to optimality. (shrink)
Some researchers and autistic activists have recently suggested that because some ‘autism-related’ behavioural atypicalities have a function or purpose, they may be desirable rather than undesirable. Examples of such behavioural atypicalities include hand-flapping, repeatedly ordering objects (e.g., toys) in rows, and profoundly restricted routines. A common view, as represented in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) (APA, 2000), is that many of these behaviours lack adaptive function or purpose, interfere with learning, and constitute the non-social behavioural (...) dysfunctions of those disorders making up the Autism Spectrum. As the DSM-IV-TR continues to be the reference source of choice for professionals working with individuals with psychiatric difficulties, its characterization of the Autism Spectrum holds significant sway. We will suggest Extended Mind and Enactive Cognition Theories, which theorize that mind (or cognition) is embodied and environmentally embedded, as coherent conceptual and theoretical spaces within which to investigate the possibility that certain repetitive behaviours exhibited by autistics possess functions or purposes that make them desirable. As lenses through which to re-examine ‘autism-related’ behavioural atypicalities, these theories not only open up explanatory possibilities underdeveloped in the research literature, but also cohere with how some autistics describe their own experience. Our position navigates a middle way between the view of autism as understood in terms of impairment, deficit and dysfunction and one that seeks to de-pathologize the Spectrum. In so doing we seek to contribute to a continuing dialogue between researchers, clinicians and self- or parent advocates. (shrink)
The time is ripe for a greater interrogation of assumptions and commitments underlying an emerging common ground on the ethics of animal research, as well as on the 3Rs (replacement, refinement, reduction) approach that parallels, and perhaps even further shapes, it. Recurring pressures to re-evaluate the moral status of some animals in research come as much from within the relevant sciences as from without. It seems incredible, in the light of what we now know of such animals as chimpanzees, to deny (...) that these animals are properly accorded high moral status. Barring the requirement that they be human, it is difficult to see what more animals such as chimpanzees would have to possess to acquire it. If the grounds for ascribing high moral status are to be non-arbitrary and responsive to our best knowledge of those individuals who possess the relevant features, we should expect that a sound ethical experimental science will periodically reassess the moral status of its research subjects as the relevant knowledge demands. We already can observe this reassessment as scientists committed to humane experimental science incorporate discoveries of enrichment tools and techniques into their housing and use of captive research animals. No less should this reassessment include a critical reflection on the possible elevation of moral status of certain research animals in light of what is discovered regarding their morally significant properties, characteristics or capacities, or so I will argue. To do anything short of this threatens the social and moral legitimacy of animal research. (shrink)
It is well known that Bayes’ theorem (with likelihood ratios) can be used to calculate the impact of evidence, such as a ‘match’ of some feature of a person. Typically the feature of interest is the DNA profile, but the method applies in principle to any feature of a person or object, including not just DNA, fingerprints, or footprints, but also more basic features such as skin colour, height, hair colour or even name. Notwithstanding concerns about the extensiveness of databases (...) of such features, a serious challenge to the use of Bayes in such legal contexts is that its standard formulaic representations are not readily understandable to non-statisticians. Attempts to get round this problem usually involve representations based around some variation of an event tree. While this approach works well in explaining the most trivial instance of Bayes’ theorem (involving a single hypothesis and a single piece of evidence) it does not scale up to realistic situations. In particular, even with a single piece of match evidence, if we wish to incorporate the possibility that there are potential errors (both false positives and false negatives) introduced at any stage in the investigative process, matters become very complex. As a result we have observed expert witnesses (in different areas of speciality) routinely ignore the possibility of errors when presenting their evidence. To counter this, we produce what we believe is the first full probabilistic solution of the simple case of generic match evidence incorporating both classes of testing errors. Unfortunately, the resultant event tree solution is too complex for intuitive comprehension. And, crucially, the event tree also fails to represent the causal information that underpins the argument. In contrast, we also present a simple-to-construct graphical Bayesian Network (BN) solution that automatically performs the calculations and may also be intuitively simpler to understand. 
Although there have been multiple previous applications of BNs for analysing forensic evidence—including very detailed models for the DNA matching problem—these models have not widely penetrated the expert witness community. Nor have they addressed the basic generic match problem incorporating the two types of testing error. Hence we believe our basic BN solution provides an important mechanism for convincing experts—and eventually the legal community—that it is possible to rigorously analyse and communicate the full impact of match evidence on a case, in the presence of possible errors. (shrink)
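The core calculation this abstract describes—updating the probability that a suspect is the true source of a trace, given a reported “match” and allowing for both false positives and false negatives—can be sketched directly from Bayes’ theorem. The function and all numerical rates below are illustrative assumptions for exposition, not the authors’ model or values from the paper:

```python
def posterior_source_given_match(prior, false_positive, false_negative):
    """Bayes' theorem for generic match evidence with testing errors.

    prior          -- P(suspect is the true source) before the test
    false_positive -- P(reported match | suspect is NOT the source)
    false_negative -- P(no reported match | suspect IS the source)

    Returns P(suspect is the source | a match was reported).
    """
    p_match_if_source = 1.0 - false_negative     # sensitivity of the test
    p_match_if_not_source = false_positive       # error on a non-source
    numerator = p_match_if_source * prior
    # Total probability of a reported match (source or not).
    evidence = numerator + p_match_if_not_source * (1.0 - prior)
    return numerator / evidence

# Illustrative figures only: a 1-in-1000 prior, a 0.1% false-positive
# rate, and a 1% false-negative rate.  Even a small error rate makes the
# reported match far weaker than a naive reading of the random-match
# probability would suggest.
p = posterior_source_given_match(prior=0.001,
                                 false_positive=0.001,
                                 false_negative=0.01)
print(round(p, 3))
```

With these assumed rates the posterior is only about 0.5, which illustrates the abstract’s point that ignoring testing errors—as the authors observe expert witnesses routinely doing—can drastically overstate the strength of match evidence. A Bayesian Network, as the paper proposes, performs this same propagation automatically across many pieces of evidence and error sources.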