Euthanasia and physician-assisted suicide are terms used to describe the process in which a doctor of a sick or disabled individual engages in an activity that directly or indirectly leads to the patient's death. The healthcare provider engages in this behavior out of a humanistic desire to end suffering and pain. The psychiatrist's involvement may be requested in several distinct situations, including evaluation of patient capacity when an appeal for euthanasia is made on grounds of terminal somatic illness, or when the patient requests euthanasia due to mental suffering. We compare the attitudes of 49 psychiatrists towards euthanasia and assisted suicide with those of a group of 54 other physicians by means of a questionnaire describing different patients who either requested physician-assisted suicide or in whom euthanasia was considered as a treatment option, followed by a set of questions relating to euthanasia implementation. When controlled for religious practice, psychiatrists expressed more conservative views regarding euthanasia than did physicians from other medical specialties. Similarly, female physicians and orthodox physicians indicated more conservative views. These differences may be due to factors inherent in subspecialty education. We suggest that, in light of the unique complexity and context of patient euthanasia requests, psychiatrists are well suited, by their training and professional expertise, to take a prominent role in evaluating such requests to die and in weighing the relative importance of competing variables.
The relations between philosophy, science and religion preoccupied S.H. Bergman for many years. He wanted to corroborate, by belief, a personal God to whom, and not only about whom, one can speak. This should follow from authentic religious experience, making it independent of philosophy. Furthermore, according to Bergman, religion can do what philosophical reasoning is incapable of doing, since he considers belief to be stronger than knowledge. A critical scrutiny of these assumptions involves some interesting implications concerning toleration, freedom of thought and dogmatism. The final conclusion is that belief cannot refute philosophical knowledge but can reject it, while philosophy can refute belief but cannot reject it.
The following is a transcript of the interview I (Yasuko Kitano) conducted with Neil Levy (The Centre for Applied Philosophy and Public Ethics, CAPPE) on 23 July 2009, while he was in Tokyo to give a series of lectures on neuroethics at The University of Tokyo Center for Philosophy. I edited his words for publication with his approval.
Compatibilists often think they can afford to be complacent with regard to scientific findings. But there are apparent threats to free will besides determinism. Robert Kane has recently claimed that if consciousness does not initiate action, all accounts of free will go down, compatibilist and incompatibilist. Some cognitive scientists argue that in fact consciousness does not initiate action. In this paper I argue that they are right (though not for the reasons they advance): as a matter of fact consciousness does not initiate action. But, I contend, Kane is wrong in thinking that it follows that we have no free will. I sketch how we might have free will in spite of the finding that consciousness does not initiate action, and remark on the implications for several well-known accounts of responsibility, including Clarke's agent-causal theory and Fischer and Ravizza's reasons-responsiveness account.
The concept of luck has played an important role in debates concerning free will and moral responsibility, yet participants in these debates have relied upon an intuitive notion of what luck is. Neil Levy develops an account of luck, which is then applied to the free will debate. He argues that the standard luck objection succeeds against common accounts of libertarian free will, but that it is possible to amend libertarian accounts so that they are no more vulnerable to luck than is compatibilism. But compatibilist accounts of luck are themselves vulnerable to a powerful luck objection: historical compatibilisms cannot satisfactorily explain how agents can take responsibility for their constitutive luck; non-historical compatibilisms run into insurmountable difficulties with the epistemic condition on control over action. Levy argues that the epistemic conditions on control are so demanding that they are rarely satisfied; agents are therefore not blameworthy for performing actions that they take to be best in a given situation. It follows that if there are any actions for which agents are responsible, they are akratic actions; but even these are unacceptably subject to luck. Levy goes on to discuss recent non-historical compatibilisms, and argues that they do not offer a viable alternative to control-based compatibilisms. He suggests that luck undermines our freedom and moral responsibility no matter whether determinism is true or not.
In this highly original book, Donald Levy considers the most important and persuasive of these philosophical criticisms, as articulated by four figures: Ludwig Wittgenstein, William James, Alasdair MacIntyre, and Adolf Grunbaum.
The author comments on the article "The Neurobiology of Addiction: Implications for Voluntary Control of Behavior," by S. E. Hyman. Hyman's article suggests that addicted individuals have impairments in cognitive control of behavior. The author agrees with Hyman's view that addiction weakens the addict's ability to align his actions with his judgments. The author states that neuroethics may focus on brains and highlight key aspects of behavior, but we still risk missing explanatory elements.
This article summarizes the theory of federalism as non-domination that Iris Marion Young began to develop in her final years, a theory of self-government that tried to recognize interconnectedness. Levy also poses an objection to that theory: non-domination cannot do the work Young needed of it, because it is a theory about the merits of decisions, not about jurisdiction over them. The article concludes with an attempt to give Young the last word.
The presence of gene–environment statistical interaction (G x E) and correlation (rGE) in biological development has led both practitioners and philosophers of science to question the legitimacy of heritability estimates. The paper offers a novel approach to assessing the impact of G x E and rGE on the way genetic and environmental causation can be partitioned. A probabilistic framework is developed, based on a quantitative genetic model that incorporates G x E and rGE, offering a rigorous way of interpreting heritability estimates. Specifically, given an estimate of heritability and the variance components associated with estimates of G x E and rGE, I arrive at a probabilistic account of the relative effect of genes and environment.
Structuralism reached the peak of its influence in French thought in the 1960s and 1970s, when Lévinas wrote his most important books. I therefore want to examine his contention with the philosophical implications of this theoretical-methodological current, to whose impact on the sciences humaines almost no French thinker of the time remained indifferent. Lévinas accused structuralism of holding that subjective spontaneity is no more than an illusion, whereby impulses and instincts are described as values of practical reason. However, notwithstanding the divergence between Lévinas, according to whom science must serve ethics, and Lévi-Strauss, according to whom ethics is at most a result of scientific research and not its end, both thinkers established their ethics on the same assumption: to respect the otherness of the Other, of every person, every society and every culture. Lévinas' criticism did not aim at refuting structuralism but at wrestling with its theoretical assumptions. If they could be ratified, one might justify structuralist methodology. So the keyword here is "if". His critique is no absolute denial. His main criticism was that structuralism was a scientific theory that left no place for ethics; he therefore also considered structuralism a danger to Judaism, where ethics occupies an important place. Rosenzweig at the beginning of the 20th century, like Lévinas at its end, endeavored to propose a way out of the conception of totality, because it left no adequate place for man's status as a subject. For Rosenzweig this had been first of all a revolt against Hegel's idealistic philosophy, while for Lévinas it comprised a critique of structuralism, which awarded priority to unconscious structures over human subjectivity. Only the latter can serve as a basis for the ethics that was the chief goal of his philosophy. KEY WORDS – Structuralism. French Philosophy. Lévinas. Lévi-Strauss. Rosenzweig.
In this article we survey six recent developments in the philosophical literature on free will and moral responsibility: (1) Harry Frankfurt's argument that moral responsibility does not require the freedom to do otherwise; (2) the heightened focus upon the source of free actions; (3) the debate over whether moral responsibility is an essentially historical concept; (4) recent compatibilist attempts to resurrect the thesis that moral responsibility requires the freedom to do otherwise; (5) the role of the control condition in free will and moral responsibility, and finally (6) the debate centering on luck.
In 1997, a Scottish surgeon by the name of Robert Smith was approached by a man with an unusual request: he wanted his apparently healthy lower left leg amputated. Although details about the case are sketchy, the would-be amputee appears to have desired the amputation on the grounds that his left foot wasn't part of him – it felt alien. After consultation with psychiatrists, Smith performed the amputation. Two and a half years later, the patient reported that his life had been transformed for the better by the operation. A second patient was also reported as having been satisfied with his amputation.
I develop an account of weakness of the will that is driven by experimental evidence from cognitive and social psychology. I will argue that this account demonstrates that there is no such thing as weakness of the will: no psychological kind corresponds to it. Instead, weakness of the will ought to be understood as depletion of System II resources. Neither the explanatory purposes of psychology nor our practical purposes as agents are well-served by retaining the concept. I therefore suggest that we ought to jettison it, in favour of the vocabulary and concepts of cognitive psychology.
The question of the psychopath's responsibility for his or her wrongdoing has received considerable attention. Much of this attention has been directed toward whether psychopaths are a counterexample to motivational internalism (MI): Do they possess normal moral beliefs, which fail to motivate them? In this paper, I argue that this question remains conceptually and empirically intractable, and that we ought to settle the psychopath's responsibility in some other way. I argue that recent empirical work on the moral judgments of psychopaths provides us with good reason to think that they are not fully responsible agents, because their actions cannot express the kinds of ill-will toward others that ground attributions of distinctively moral responsibility. I defend this view against objections, especially those due to an influential account of moral responsibility that holds that moral knowledge is not necessary for responsibility.
Neuroethics is a rapidly growing subfield, straddling applied ethics, moral psychology and philosophy of mind. It has clear affinities to bioethics, inasmuch as both are responses to new developments in science and technology, but its scope is far broader and more ambitious because neuroethics is as much concerned with how the sciences of the mind illuminate traditional philosophical questions as it is with questions concerning the permissibility of using technologies stemming from these sciences. In this article, I sketch the two branches of neuroethics, the applied and the philosophical, and illustrate how they interact. I also consider representative themes from each: the ethics of dampening memory and of cognitive enhancement, on the one hand, and the attack upon the reliability of deontological intuitions and upon free will, on the other.
The extended mind thesis is the claim that mental states extend beyond the skulls of the agents whose states they are. This seemingly obscure and bizarre claim has far-reaching implications for neuroethics, I argue. In the first half of this article, I sketch the extended mind thesis and defend it against criticisms. In the second half, I turn to its neuroethical implications. I argue that the extended mind thesis entails the falsity of the claim that interventions into the brain are especially problematic just because they are internal interventions, but that many objections to such interventions rely, at least in part, on this claim. Further, I argue that the thesis alters the focus of neuroethics, away from the question of whether we ought to allow interventions into the mind, and toward the question of which interventions we ought to allow and under what conditions. The extended mind thesis dramatically expands the scope of neuroethics: because interventions into the environment of agents can count as interventions into their minds, decisions concerning such interventions become questions for neuroethics.
Libertarianism in all its varieties is widely taken to be vulnerable to a serious problem of present luck, inasmuch as it requires indeterminism somewhere in the causal chain leading to action. Genuine indeterminism entails luck, and lack of control over the ensuing action. Compatibilism, by contrast, is generally taken to be free of the problem of present luck, inasmuch as it does not require indeterminism in the causal chain. I argue that this view is false: compatibilism is subject to a problem of present luck. Taken by itself, the compatibilist problem with present luck is less serious than the analogous problem confronting libertarianism. However, its effects are just as devastating for the entire account of freedom: the present luck confronting compatibilism is sufficient to undermine the compatibilist response to distant – constitutive – luck.
Doxastic responsibility matters, morally and epistemologically. Morally, because many of our intuitive ascriptions of blame seem to track back to agents’ apparent responsibility for beliefs; epistemologically because some philosophers identify epistemic justification with deontological permissibility. But there is a powerful argument which seems to show that we are rarely or never responsible for our beliefs, because we cannot control them. I examine various possible responses to this argument, which aim to show either that doxastic responsibility does not require that we control our beliefs, or that as a matter of fact we do exercise the right kind of control over our beliefs. I argue that the existing arguments are all wanting: in fact, our lack of control over our beliefs typically excuses us of responsibility for them.
A number of writers have tackled the task of characterizing the differences between analytic and Continental philosophy. I suggest that these attempts have indeed captured the most important divergences between the two styles but have left the explanation of the differences mysterious. I argue that analytic philosophy is usefully seen as philosophy conducted within a paradigm, in Kuhn’s sense of the word, whereas Continental philosophy assumes much less in the way of shared presuppositions, problems, methods and approaches. This important opposition accounts for all those features that have rightly been held to constitute the difference between the two traditions. I finish with some reflections on the relative superiority of each tradition and by highlighting the characteristic deficiencies of each.
Frankfurt-style cases are widely taken to show that agents do not need alternative possibilities to be morally responsible for their actions. Many philosophers take these cases to constitute a powerful argument for compatibilism: if we do not need alternative possibilities for moral responsibility, it is hard to see what the attraction of indeterminism might be. I defend the claim that even though Frankfurt-style cases establish that agents can be responsible for their actions despite lacking alternatives, agents can only be responsible if they possess certain powers, and possession of these powers is, arguably, incompatible with determinism. Because this is the case, Frankfurt-style cases fail to advance the debate between compatibilism and incompatibilism.
The United States Supreme Court has recently ruled that virtual child pornography is protected free speech, partly on the grounds that virtual pornography does not harm actual children. I review the evidence for the contention that virtual pornography might harm children, and find that it is, at best, inconclusive. Saying that virtual child pornography does not harm actual children is not to say that it is completely harmless, however. Child pornography, actual or virtual, necessarily eroticizes inequality; in a sexist society it therefore contributes to the subordination of women.
Disorders of volition are often accompanied by, and may even be caused by, disruptions in the phenomenology of agency. Yet the phenomenology of agency is at present little explored. In this paper we attempt to describe the experience of normal agency, in order to uncover its representational content.
Recent findings in neuroscience, evolutionary biology and psychology seem to threaten the existence or the objectivity of morality. Moral theory and practice is founded, ultimately, upon moral intuition, but these empirical findings seem to show that our intuitions are responses to nonmoral features of the world, not to moral properties. They therefore might be taken to show that our moral intuitions are systematically unreliable. I examine three cognitive scientific challenges to morality, and suggest possible lines of reply to them. I divide these replies into two groups: we might confront the threat, showing that it does not have the claimed implications for morality; or we might bite the bullet, accepting that the claims have moral implications, but incorporating these claims into morality. I suggest that unless we are able to bite the bullet, when confronted by cognitive scientific challenges, there is a real possibility that morality will be threatened. This fact gives us a weighty reason to adopt a metaethics that makes it relatively easy to bite cognitive scientific bullets. Moral constructivism, in one of its many forms, makes these bullets more palatable; therefore, the cognitive scientific challenges provide us with an additional reason to adopt a constructivist metaethics.
Recent work in neuroimaging suggests that some patients diagnosed as being in the persistent vegetative state are actually conscious. In this paper, we critically examine this new evidence. We argue that though it remains open to alternative interpretations, it strongly suggests the presence of consciousness in some patients. However, we argue that its ethical significance is less than many people seem to think. There are several different kinds of consciousness, and though all kinds of consciousness have some ethical significance, different kinds underwrite different kinds of moral value. Demonstrating that patients have phenomenal consciousness — conscious states with some kind of qualitative feel to them — shows that they are moral patients, whose welfare must be taken into consideration. But only if they are subjects of a sophisticated kind of access consciousness — where access consciousness entails global availability of information to cognitive systems — are they persons, in the technical sense of the word employed by philosophers. In this sense, being a person is having the full moral status of ordinary human beings. We call for further research which might settle whether patients who manifest signs of consciousness possess the sophisticated kind of access consciousness required for personhood.
Gregory Kavka's 'Toxin Puzzle' suggests that I cannot intend to perform a counter-preferential action A even if I have a strong self-interested reason to form this intention. The 'Rationalist Solution,' however, suggests that I can form this intention. For even though it is counter-preferential, A-ing is actually rational given that the intention behind it is rational. Two arguments are offered for this proposition that the rationality of the intention to A transfers to A-ing itself: the 'Self-Promise Argument' and David Gauthier's 'Rational Self-Interest Argument.' But both arguments – and therefore the Rationalist Solution – fail. The Self-Promise Argument fails because my intention to A does not constitute a promise to myself that I am obligated to honor. And Gauthier's Rational Self-Interest Argument fails to rule out the possibility of rational irrationality.
Carl Craver’s recent book offers an account of the explanatory and theoretical structure of neuroscience. It depicts the field as centered around the idea of achieving mechanistic understanding, i.e., obtaining knowledge of how a set of underlying components interacts to produce a given function of the brain. Its core account of mechanistic explanation and relevance is causal-manipulationist in spirit, and offers substantial insight into causal explanation in brain science and the associated notion of levels of explanation. However, the focus on mechanistic explanation leaves some open questions regarding the role of computation and cognition.
In a series of articles, Terry Horgan and Mark Timmons have argued that Richard Boyd’s defence of moral realism, utilizing a causal theory of reference, fails. Horgan and Timmons construct a Twin Earth-style thought experiment which, they claim, generates intuitions inconsistent with the realist account. In their thought experiment, the use of (allegedly) moral terms at a world is causally regulated by some property distinct from that regulating their use here on Earth; nevertheless, Horgan and Timmons claim, it is intuitive that the inhabitants of this world disagree with us in their moral claims. Since any disagreement would be merely verbal were the alleged moral facts identical to or constituted by different natural facts, the identity or constitution claim must be false. I argue that their argument fails. Horgan and Timmons’ thought experiment is underdescribed; when we fill out the details, I claim, we shall see that the challenge to moral realism fades away. I sketch two possible interpretations of the (apparently) moral claims of the inhabitants of moral Twin Earth. On one interpretation, they fail to disagree with us because they actually agree with us; on the other, they fail to disagree with us because they are not moralizers at all. Which interpretation is true, I argue, will depend on the facts that explain the differences between us and the inhabitants of moral Twin Earth.
Ned Block has influentially distinguished two kinds of consciousness, access and phenomenal consciousness. He argues that these two kinds of consciousness can dissociate, and therefore we cannot rely upon subjective report in constructing a science of consciousness. I argue that none of Block's evidence better supports his claim than the rival view, that access and phenomenal consciousness are perfectly correlated. Since Block's view is counterintuitive, and has wildly implausible implications, the fact that there is no evidence that better supports it than the rival view should lead us to reject it.
So-called downshifters seek more meaningful lives by decreasing the amount of time they devote to work, leaving more time for the valuable goods of friendship, family and personal development. But though these are indeed meaning-conferring activities, they do not have the right structure to count as superlatively meaningful. Only in work – of a certain kind – can superlative meaning be found. It is by active engagement in projects, activities of the right structure dedicated to the achievement of goods beyond ourselves, that we make our lives superlatively meaningful.
Theories of self-deception divide into those that hold that the state is characterized by some kind of synchronic tension or conflict between propositional attitudes and those that deny this. Proponents of the latter, like Al Mele, claim that their theories are more parsimonious, because they do not require us to postulate any psychological mechanisms beyond those which have been independently verified. If we can show that there are real cases of motivated believing which are characterized by conflicting propositional attitudes, however, the parsimony argument against incongruent mental state accounts is undermined. I argue that anosognosia presents us with a real-life example of motivated belief together with (sub)-doxastic conflict.
This introduction to the special issue on empirically informed moral theory sketches the more important contributions to the field in the past several years. Attention is paid to experimental philosophy, the work of philosophers like Harman and Doris, and that of psychologists like Haidt and Hauser.
Children, even very young children, distinguish moral from conventional transgressions, inasmuch as they hold that the former, but not the latter, would still be wrong if there were no rule prohibiting them. Many people have taken this finding as evidence that morality is objective, and therefore universal. I argue that reflection on the phenomenon of imaginative resistance will lead us to question these claims. If a concept applies in virtue of the obtaining of a set of more basic facts, then it is authority independent, and we therefore resist the attempts of authorities to claim that it does not apply. Thus, the moral/conventional distinction is a product of imaginative resistance to claims that a concept does not apply when its supervenience base is in place (or vice versa). All we can rightfully conclude from the fact that children are disposed to make the moral/conventional distinction is that our moral concepts belong to the class of authority-independent concepts. Though the set of basic facts in virtue of which an authority-independent concept obtains must be objective, the concept itself might be conventional, inasmuch as we could easily draw its boundaries wider or narrower, or fail to have a concept that corresponds to these properties at all.
The Surprise Exam Paradox continues to perplex and torment despite the many solutions that have been offered. This paper proposes to end the intrigue once and for all by refuting one of the central pillars of the Surprise Exam Paradox, the 'No Friday Argument,' which concludes that an exam given on the last day of the testing period cannot be a surprise. This refutation consists of three arguments, all of which are borrowed from the literature: the 'Unprojectible Announcement Argument,' the 'Wright & Sudbury Argument,' and the 'Epistemic Blindspot Argument.' The reason that the Surprise Exam Paradox has persisted this long is not because any of these arguments is problematic. On the contrary, each of them is correct. The reason that it has persisted so long is because each argument is only part of the solution. The correct solution requires all three of them to be combined together. Once they are, we may see exactly why the No Friday Argument fails and therefore why we have a solution to the Surprise Exam Paradox that should stick.
According to one influential view, advanced by Jonathan Adler, David Owens and Susan Hurley, epistemic akrasia is impossible because when we form a full belief, any apparent evidence against that belief loses its power over us. Thus theoretical reasoning is quite unlike practical reasoning, in that in the latter our desires continue to exert a pull, even when they are outweighed by countervailing considerations. I call this argument against the possibility of epistemic akrasia the subsumption view. The subsumption view accurately reflects the nature of reasoning in a range of everyday cases. But, as I show, it is quite false with regard to controversial questions, like philosophical disputes. In these, evidence against our best judgments continues to exert a hold on us. Thus, the claimed disanalogy between practical and theoretical reasoning fails.
This paper argues that the accelerating pace of life is reducing the time for thoughtful reflection, and in particular for contemplative scholarship, within the academy. It notes that the loss of time to think is occurring at exactly the moment when scholars, educators, and students have gained access to digital tools of great value to scholarship. It goes on to explore how and why both of these facts might be true, what it says about the nature of scholarship, and what might be done to address this state of affairs.
It is, as Dana Nelkin (2004) says, a rare point of agreement among participants in the free will debate that rational deliberation presupposes a belief in freedom. Of course, the precise content of that belief – and, indeed, the nature of deliberation – is controversial, with some philosophers claiming that deliberation commits us to a belief in libertarian free will (Taylor 1966; Ginet 1966), and others claiming that, on the contrary, deliberation presupposes nothing more than an epistemic openness that is entirely compatible with determinism (Dennett 1984; Kapitan 1986). Since, however, the claim that deliberation presupposes freedom is accepted by all sides in the free will debate, it ought to be possible to frame a minimal version that is neutral between compatibilism and incompatibilism, and which therefore can be accepted by everyone. Peter van Inwagen has advanced the best-known such claim: ‘all philosophers who have thought about deliberation agree on one point: one cannot deliberate about whether to perform a certain act unless one believes it is possible for one to perform it’ (van Inwagen 1983: 154). It is the purpose of this paper to argue that van Inwagen, and the many philosophers who have followed him in this regard, is wrong.
Some philosophers have criticized the use of psychopharmaceuticals on the grounds that even if these drugs enhance the person using them, they threaten their authenticity. Others have replied by pointing out that the conception of authenticity upon which this argument rests is contestable; on a rival conception, psychopharmaceuticals might be used to enhance our authenticity. Since, however, it is difficult to decide between these competing conceptions of authenticity, the debate seems to end in a stalemate. I suggest that we need not resolve this debate to end the stalemate. New technologies which alter the self can be understood within the framework of the first conception of authenticity, I suggest, not as threatening the authentic self, but rather as bringing the outward appearance of the self into line with its deepest essence. Since psychopharmaceutical use can plausibly be understood on this model, it can be seen as enhancing our authenticity on either conception.
In this paper, I introduce the notion of a Frankfurt Enabler, a counterfactual intervener poised, should a signal for intervention be received, to enable an agent to perform a mental or physical action. Frankfurt enablers demonstrate, I claim, that merely counterfactual conditions are sometimes relevant to assessing what capacities agents possess. Since this is the case, we are not entitled to conclude that agents in standard Frankfurt-style cases retain their responsibility-ensuring capacities. There is no principled rationale for bracketing counterfactual interveners in standard Frankfurt-style cases, but admitting their relevance when they are Frankfurt enablers. I argue that the intuition that we ought to bracket counterfactual interveners is, at bottom, an expression of a mistaken internalist view about the mental.
Libertarianism seems vulnerable to a serious problem concerning present luck, because it requires indeterminism somewhere in the causal chain leading to directly free action. Compatibilism, by contrast, is thought to be free of this problem, as not requiring indeterminism in the causal chain. I argue that this view is false: compatibilism is subject to a problem of present luck. This is less of a problem for compatibilism than for libertarianism. However, its effects are just as devastating for one kind of compatibilism, the kind of compatibilism which is history-sensitive, and therefore must take the problem of constitutive luck seriously. The problem of present luck confronting compatibilism is sufficient to undermine the history-sensitive compatibilist's response to remote – constitutive – luck.
In a recent article, George Sher argues that a realistic conception of human agency, which recognizes the limited extent to which we are conscious of what we do, makes the task of specifying a conception of the kind of control that underwrites ascriptions of moral responsibility much more difficult than is commonly appreciated. Sher suggests that an adequate account of control will not require that agents be conscious of their actions; we are responsible for what we do, in the absence of consciousness, so long as our obliviousness is explained by some subset of the mental states constitutive of the agent. In this response, I argue that Sher is wrong on every count. First, the account of moral responsibility in the absence of consciousness he advocates does not preserve control at all; rather, it ought to be seen as a variety of attributionism (a kind of account of moral responsibility which holds that control is unnecessary for responsibility, so long as the action is reflective of the agent’s real self). Second, I argue that a realistic conception of agency, that recognizes the limited role that consciousness plays in human life, narrows the scope of moral responsibility. We exercise control over our actions only when consciousness has played a direct or indirect role in their production. Moreover, we cannot escape this conclusion by swapping a volitionist account of moral responsibility for an attributionist account: our actions are deeply reflective of our real selves only when consciousness has played a causal role in their production.
According to a common philosophical distinction, the 'original' intentionality, or 'aboutness', possessed by our thoughts, beliefs and desires is categorically different from the 'derived' intentionality manifested in some of our artifacts: our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that 'metaphysically original intentionality' is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers', so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.
Michael Strevens has produced an ambitious and comprehensive new account of scientific explanation. This review discusses its main themes, focusing on regularity explanation and a number of methodological concerns.
In Plato’s Gorgias, Socrates argues that philosophy is superior to rhetoric in part because the former is a techne while the latter is not. I argue that the Socratic practice of philosophy within this dialogue fails to qualify as a techne for exactly the same reasons that rhetoric fails to qualify as a techne. In doing so, I introduce a new kind of Socratic ignorance: methodological ignorance. I reject both Charles Kahn’s account of the relationship between the dialogue’s dramatic and philosophical contents, and Thomas Brickhouse and Nicholas Smith’s claim that Socrates never regarded his practice as a techne.
The typical explanation of an event or process which attracts the label ‘conspiracy theory’ is an explanation that conflicts with the account advanced by the relevant epistemic authorities. I argue that both for the layperson and for the intellectual, it is almost never rational to accept such a conspiracy theory. Knowledge is not merely shallowly social, in the manner recognized by social epistemology; it is also constitutively social: many kinds of knowledge only become accessible thanks to the agent's embedding in an environment that includes other epistemic agents. Moreover, advances in knowledge typically require ongoing immersion in this social environment. But the intellectual who embraces a conspiracy theory risks cutting herself off from this environment, and therefore epistemically disabling herself. Embracing a conspiracy theory therefore places at risk the ability to engage in genuine enquiry, including the enquiry needed properly to evaluate the conspiracy theory.
This short article is a reply to Fine's criticisms of Haidt's social intuitionist model of moral judgement. After situating Haidt in the landscape of meta-ethical views, I examine Fine's argument, against Haidt, that the processes which give rise to moral judgements are amenable to rational control: first-order moral judgements, which are automatic, can nevertheless deliberately be brought to reflect higher-order judgements. However, Haidt's claims about the arationality of moral judgements seem to apply equally well to these higher-order judgements; showing that we can exercise higher-order control over first-order judgements therefore does not show that our judgements are rational. I conclude by sketching an alternative strategy for vindicating the rationality of moral judgements: by viewing moral argument as a community-wide and distributed enterprise, in which knowledge is produced by debate and transferred to individuals via testimony.
Both libertarian and compatibilist approaches have been unsuccessful in providing an acceptable account of free will. Recent developments in cognitive neuroscience, including the connectionist theory of mind and empirical findings regarding modularity and integration of brain functions, provide the basis for a new approach: neural holism. This approach locates free will in fully integrated behavior in which all of a person's beliefs and desires, implicitly represented in the brain, automatically contribute to an act. Deliberation, the experience of volition, and cognitive and behavioral shortcomings are easily understood under this model. Assigning moral praise and blame, often seen as grounded in the notion that a person has the ability to have done otherwise, will be shown to reflect instead important aspects of signaling in social interactions. Thus, important aspects of the traditional notion of free will can be accounted for within the proposed model, which has interesting implications for lifelong cognitive development.
David Hume's sympathetic principle applies to physical equals. In his account, we sympathize with those like us. By contrast, Adam Smith's sympathetic principle induces equality. We consider Hume's “other rational species” problem to see whether Smith's wider sympathetic principle would alter Hume's conclusion that “superior” beings will enslave “inferior” beings. We show that Smith introduces the notion of “generosity,” which functions as if it were Hume's justice even when there is no possibility of contract. An earlier version was presented at the 18th-Century Scottish Studies Society, Arlington meeting in June 2001. We benefited from conversations with and comments from Gordon Schochet, Roger Emerson and Silvia Sebastiana. A letter from Leon Montes helped sharpen the argument. The readers for the journal contributed to the output. We remain responsible for the errors and omissions.
In a recent article in this journal, Storrs McCall and E.J. Lowe sketch an account of indeterminist free will designed to avoid the luck objection that has been wielded to such effect against event-causal libertarianism. They argue that if decision-making is an indeterministic process and not an event or series of events, the luck objection will fail. I argue that they are wrong: the luck objection is equally successful against their account as against existing event-causal libertarianisms. Like the event-causal libertarianism their account is meant to supplant, the process view cannot offer a reasons explanation of the agent's choice itself; that choice is explained by nothing except chance. The agent therefore fails to exercise freedom-level control over it.
Proponents of evolutionary psychology take the existence of human universals to constitute decisive evidence in favor of their view. If the same social norms are found in culture after culture, we have good reason to believe that they are innate, they argue. In this paper I propose an alternative explanation for the existence of human universals, which does not depend on their being the product of inbuilt psychological adaptations. Following the work of Brian Skyrms, I suggest that if a particular convention possesses even a very small advantage over competitors, whatever the reason for that advantage, we should expect it to become the norm almost everywhere. Tiny advantages are translated into very large basins of attraction, in the language of game theory. If this is so, universal norms are not evidence for innate psychological adaptations at all. Having shown that the existence of universals is consistent with the so-called Standard Social Science Model, I turn to a consideration of the evidence, to show that this style of explanation is preferable to the evolutionary explanation, at least with regard to patterns of gender inequality.
In 'What Luck Is Not', Lackey presents counterexamples to the two most prominent accounts of luck: the absence of control account and the modal account. I offer an account of luck that conjoins absence of control to a modal condition. I then show that Lackey's counterexamples mislocate the luck: the agents in her cases are lucky, but the luck precedes the event upon which Lackey focuses, and that event is itself only fortunate, not lucky. Finally I offer an account of fortune. Fortune is luck-involving, and therefore easily confused with luck, but it is not itself lucky.
Repression has remained controversial for nearly a century on account of the lack of well-controlled evidence validating it. Here we argue that the conceptual and methodological tools now exist for a rigorous scientific examination of repression, and that a nascent cognitive neuroscience of repression is emerging. We review progress in this area and highlight important questions for this field to address.
To the extent that indeterminacy intervenes between our reasons for action and our decisions, intentions and actions, our freedom seems to be reduced, not enhanced. Free will becomes nothing more than the power to choose irrationally. In recognition of this problem, some recent libertarians have suggested that free will is paradigmatically manifested only in actions for which we have reasons for both or all the alternatives. In these circumstances, however we choose, we choose rationally. Against this kind of account, most fully developed by Robert Kane, critics have pressed the demand for contrastive explanations. Kane has responded by arguing that the demand does not need to be met: responsibility for an action does not require that there be a contrastive explanation of that action. However, this response proves too much: it implies that agents are responsible not only for the actions they choose, but also for the counterfactual actions which were equally available to them.
For over a century now, American scholars (among others) have been debating the merits of “bad-samaritan” laws – laws punishing people for failing to attempt “easy rescues.” Unfortunately, the opponents of bad-samaritan laws have mostly prevailed. In the United States, the “no-duty-to-rescue” rule dominates. Only four states even have bad-samaritan laws, and these laws impose only the most minimal punishment – either sub-$500 fines or short-term imprisonment. This Article argues that this situation needs to be remedied. Every state should criminalize bad samaritanism. For, first, criminalization is required by the supreme value that we place on protecting human life, a value that motivates laws against both homicide and manslaughter. Second, criminalization is recommended by the “proportionality principle” – i.e., the principle that a law’s level of punishment should be directly proportional to the moral severity of the offense. Third, criminalization would yield a number of significant benefits, including helping to minimize needless deaths and injuries and providing society with an institutional outlet for its outrage against bad samaritans. Still, many objections have been leveled against bad-samaritan laws. This Article will argue that while some of these objections – namely, the objections involving foundational criminal law principles such as the actus-reus requirement, the harm principle, and causation – are all easily refuted, five other objections are not. These five objections involve pragmatic considerations such as the difficulties with obtaining evidence against bad samaritans and psychological considerations such as people’s understandable reasons for not wanting to “get involved.” This Article will then put these five objections into reflective equilibrium with the moral arguments for bad-samaritan laws and conclude that while bad samaritanism should indeed be criminalized, the punishment that convicted bad samaritans receive should be mild – certainly milder than the level of punishment recommended by the “proportionality” principle. The corollary of this conclusion is that the criminal law should sometimes abandon the proportionality principle.
Libertarian restrictivists hold that agents are rarely directly free. However, they seek to reconcile their views with common intuitions by arguing that moral responsibility, or indirect freedom (depending on the version of restrictivism), is much more common than direct freedom. I argue that restrictivists must give up either the claim that agents are rarely free, or the claim that indirect freedom or responsibility is much more common than direct freedom. Focusing on Kane’s version of restrictivism, I show that the view holds people responsible for actions when (merely) compatibilist conditions are met. Since this is unacceptable by libertarian lights, they must either accept that compatibilist conditions on moral responsibility are sufficient, or make their restrictivism more extreme than it already is.
Contrary to the claim that measurement standards are absolutely accurate by definition, I argue that unit definitions do not completely fix the referents of unit terms. Instead, idealized models play a crucial semantic role in coordinating the theoretical definition of a unit with its multiple concrete realizations. The accuracy of realizations is evaluated by comparing them to each other in light of their respective models. The epistemic credentials of this method are examined and illustrated through an analysis of the contemporary standardization of time. I distinguish among five senses of ‘measurement accuracy’ and clarify how idealizations enable the assessment of accuracy in each sense.
Peter Baumann uses the Monty Hall game to demonstrate that probabilities cannot be meaningfully applied to individual games. Baumann draws from this first conclusion a second: in a single game, it is not necessarily rational to switch from the door that I have initially chosen to the door that Monty Hall did not open. After challenging Baumann’s particular arguments for these conclusions, I argue that there is a deeper problem with his position: it rests on the false assumption that what justifies the switching strategy is its leading me to win a greater percentage of the time. In fact, what justifies the switching strategy is not any statistical result over the long run but rather the “causal structure” intrinsic to each individual game itself. Finally, I argue that an argument by Hilary Putnam will not help to save Baumann’s second conclusion above.
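The long-run two-thirds success rate of switching, which this abstract argues does not itself justify the strategy, is easy to exhibit by simulation. A minimal sketch; the function name and setup are illustrative, not drawn from Baumann's paper:

```python
import random

def play_monty_hall(switch, trials=100_000, seed=0):
    """Estimate the win rate of the stick-or-switch strategy over many games."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)   # door hiding the car
        choice = rng.randrange(3)  # contestant's initial pick
        # Monty opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Move to the one door that is neither the pick nor the opened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(play_monty_hall(switch=True))   # close to 2/3
print(play_monty_hall(switch=False))  # close to 1/3
```

When the initial pick happens to be the car, Monty has two doors he could open; this sketch lets him open the lower-numbered one, which does not affect the estimate, since switching wins exactly when the initial pick misses the car.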
This anthology mixes together previously published and new work in experimental philosophy, by many of its leading figures (among whom the editors feature prominently). Experimental philosophy is a burgeoning movement that urges philosophers to leave their armchairs and test their philosophical claims empirically. It builds upon but goes further than the movement that Jesse Prinz, in his contribution, calls empirical philosophy: philosophy that turns to existing scientific literature to find evidence for philosophical claims. Experimental philosophy involves philosophers actually getting their hands dirty by conducting experiments.
Janna Thompson has outlined ‘the apology paradox’, which arises whenever people apologize for an action or event upon which their existence is causally dependent. She argues that a sincere apology seems to entail a wish that the action or event had not occurred, but that we cannot sincerely wish that events upon which our existence depends had not occurred. I argue that Thompson’s paradox is a backward-looking version of Parfit’s (forward-looking) ‘non-identity problem’, where backward- and forward-looking refer to the perspective of an agent apologizing for or contemplating an action. The temporal perspective of the agent gives us the tools with which to dissolve the air of paradox which surrounds these problems. Each is best grasped from one temporal perspective, but the paradoxes arise when we attempt to examine it simultaneously from another. The evaluations appropriate to the apology paradox and the non-identity problem are therefore time-indexed.
BAT - the belief in ability thesis - states, roughly, that for an agent to be able rationally to deliberate between two or more alternatives, she must believe that she is metaphysically free to perform each alternative. I show, by way of a counterexample, that BAT is false.
Michael Walzer has made great contributions to the appreciation of both moral and cultural pluralism in political theory. Nonetheless, there are ways in which Walzer's arguments appear anti-pluralistic. The question of this essay is: why is there so little pluralism in Walzer's political theory, or why does its pluralism run out so soon? Focusing on Spheres of Justice and Nation and Universe, it examines the effect of Walzer's nationalism/statism on his theory, and the constraints his theory faces in considering multiculturalism or political pluralist regimes such as federalism within a state.
The authors attempt to show that certain forms of behavior of the human immune system are illuminatingly regarded as errors in that system's operation. Since error-ascription can occur only within the context of an intentional/teleological characterization of the system, it follows that such a characterization is illuminating. It is argued that error-ascription is objective, non-anthropomorphic, irreducible to any purely causal form of explanation of the same behavior, and further that it is wrong to regard all errors of the immune system as due to malfunction or maladaptation.
With a few notable exceptions formal semantics, as it originated from the seminal work of Richard Montague, Donald Davidson, Max Cresswell, David Lewis and others in the late sixties and early seventies of the previous century, does not consider Wittgenstein as one of its ancestors. That honour is bestowed on Frege, Tarski, Carnap. And so it has been in later developments. Most introductions to the subject will refer to Frege and Tarski (Carnap less frequently), in addition to the pioneers just mentioned, of course, and discuss the main elements of their work that helped shape formal semantics in some detail. But Wittgenstein is conspicuously absent whenever the history of the subject is mentioned (usually briefly, if at all). Of course, if one thinks of Wittgenstein’s later work, this is obvious: nothing, it seems, could be more antithetic to what formal semantics aims for and to how it pursues those aims than the views on meaning and language that Wittgenstein expounds in, e.g., Philosophical Investigations, with its insistence on particularity and diversity, and its rejection of explanation and formal modelling. But what about his earlier work, the Tractatus (henceforth )? At first sight, that seems much more congenial, as it develops a conception of language and meaning that is both general and uniform, and explanatory.
Libertarians like Robert Kane believe that indeterminism is necessary for free will. They think this in part because they hold both (1) that my being the ultimate cause of at least part of myself is necessary for free will and (2) that indeterminism is necessary for this 'ultimate self-causation'. But seductive and intuitive as this 'USC Libertarianism' may sound, it is untenable. In the end, no metaphysically coherent (not to mention empirically valid) conception of ultimate self-causation is available. So the basic intuition motivating the USC Libertarian is ultimately impossible to fulfill.
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
Can a heritability value tell us something about the weight of genetic versus environmental causes that have acted in the development of a particular individual? Two possible questions arise. Q1: what portion of the phenotype of X is due to its genes and what portion to its environment? Q2: what portion of X’s phenotypic deviation from the mean is a result of its genetic deviation and what portion a result of its environmental deviation? An answer to Q1 provides the full information about X’s development, while an answer to Q2 leaves out a large portion unexplained—that portion which corresponds to the phenotypic mean. Q1 is unanswerable, but I show it is nevertheless legitimate under certain quantitative genetics models. With regard to Q2, opinions in the philosophical and biological literature differ as to its legitimacy. I argue that not only is it legitimate, but in particular, under a few simplifying assumptions, it allows for a quantitative probabilistic answer: for normally distributed quantitative traits with no G-E correlation or statistical G × E interaction, we can assess the probability that X’s genes had a greater effect than its environment on its deviation from the mean population value. This probability is expressed as a function of the heritability and the individual’s phenotypic value; we can also provide a quantitative probabilistic answer to Q2 for an arbitrary individual, where the probability is a function only of heritability.
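Under the simplifying assumptions this abstract lists (a standardized trait P = G + E with independent, normally distributed genetic and environmental deviations, no G-E correlation, no G × E interaction), one natural way to cash out the arbitrary-individual version of Q2 is the probability that the genetic deviation exceeds the environmental deviation in magnitude, P(|G| > |E|). A hedged sketch; the function names and the choice of |G| > |E| as the comparison are my illustration, not necessarily the paper's exact formulation:

```python
import math
import random

def p_genes_dominate(h2, trials=200_000, seed=1):
    """Monte Carlo estimate of P(|G| > |E|) for a standardized trait P = G + E,
    with G ~ N(0, h2) and E ~ N(0, 1 - h2) independent."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        g = rng.gauss(0.0, math.sqrt(h2))
        e = rng.gauss(0.0, math.sqrt(1.0 - h2))
        hits += (abs(g) > abs(e))
    return hits / trials

def p_closed_form(h2):
    """Same probability in closed form: G/E is a scaled standard Cauchy
    variable, so P(|G| > |E|) = (2/pi) * atan(sqrt(h2 / (1 - h2)))."""
    return (2.0 / math.pi) * math.atan(math.sqrt(h2 / (1.0 - h2)))

print(round(p_closed_form(0.5), 6))  # 0.5: equal variances make it a coin flip
```

As the abstract says of the arbitrary-individual case, this probability depends only on the heritability h2: it rises above one half exactly when h2 exceeds one half.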
Libet’s famous experiments, showing that apparently we become aware of our intention to act only after we have unconsciously formed it, have widely been taken to show that there is no such thing as free will. If we are not conscious of the formation of our intentions, many people think, we do not exercise the right kind of control over them. I argue that the claim this view presupposes, that only consciously initiated actions could be free, places a condition upon freedom of action which it is in principle impossible to fulfil, for reasons that are conceptual and not merely contingent. Exercising this kind of control would require that we control our control system, which would simply cause the same problem to arise at a higher level or initiate an infinite regress of controllings. If the unconscious initiation of actions, as well as the takings of decisions, is incompatible with control over them, then free will is impossible on conceptual grounds. Thus, Libet’s experiments do not constitute a separate, empirical, challenge to our freedom.
As he recognizes, Taylor's view of practical reasoning commits him to the existence of incommensurable world-views. However, he holds that it is in principle possible to overcome these incommensurabilities. He has two major arguments for this conclusion, which I label the argument from the human condition, and the transition argument. I show that the first argument, though perhaps successful in the case Taylor takes as an example, cannot be generalized. The second argument is even less successful, since all the evidence it produces is compatible with a thoroughgoing relativism. I point out, moreover, that even if Taylor's arguments were successful, they would not demonstrate that someone who chose to continue to reject the practice that had been vindicated would be irrational to do so. I conclude that there seems no way to circumvent the relativism to which Taylor's picture of practical reasoning leads. Key Words: incommensurability • practical reason • relativism • Charles Taylor.
The article explains why Soviet dissidents and the reformers of the Gorbachev era chose to characterize the Soviet system as totalitarian. The dissidents and the reformers strongly disagreed among themselves about the origins of Soviet totalitarianism. But both groups stressed the effects of totalitarianism on the individual personality; in doing so, they revealed themselves to be the heirs of the tsarist intelligentsia. Although the concept of totalitarianism probably obscures more than it clarifies when it is applied to regimes like the Nazi and the Soviet, the decision of the dissidents and the reformers to use the term enabled them to clarify their own values and the reasons they felt compelled to criticize the Soviet Union and to call for its radical reform.
One of the major conflicts in the social sciences since the Second World War has concerned whether, and to what extent, human beings have a nature. One view, traditionally associated with the political left, has rejected the notion that we have a contentful nature, and hoped thereby to underwrite the possibility that we can shape social institutions by reference only to norms of justice, rather than our innate dispositions. This view has been in rapid retreat over the past three decades, in the face of an onslaught from several different strands of psychology purporting to show that human nature has a content. In this paper, I argue for a third view: that human beings have a contentful nature, but that nature is uniquely flexible and therefore places relatively few constraints on the shape of our social institutions. Human beings are shaped, by nature, to be cultural animals. We are innately disposed to imitate the behavior of those around us, to a far greater degree than other animals: this disposition toward overimitation allows us to construct local traditions of behavior and thereby to adapt to an enormous variety of environments. These facts, in turn, ensure that human populations live embedded in local forms of life; thus our nature entails that we are deeply cultural animals.