The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences, we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making.
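The computational contrast the abstract above relies on (a model-free learner driven by prediction errors versus a weighted blend with model-based planning) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the parameter names and values (`W`, `ALPHA`, the transition table) are assumptions chosen for clarity.

```python
ALPHA = 0.5   # assumed model-free learning rate
W = 0.6       # assumed weight on the model-based estimate in the blend

# Model-free first-stage action values, updated from prediction errors
q_mf = {"left": 0.0, "right": 0.0}

# Assumed transition model: first-stage actions lead probabilistically
# to second-stage states A and B, whose values are fixed for this sketch
trans = {"left": {"A": 0.7, "B": 0.3}, "right": {"A": 0.3, "B": 0.7}}
q_stage2 = {"A": 1.0, "B": 0.0}

def q_mb(action):
    # Model-based value: expected second-stage value under the transition model
    return sum(p * q_stage2[s] for s, p in trans[action].items())

def q_hybrid(action):
    # Blended value: the proportions of model-based and model-free influence
    # are governed by a single weight, as in hybrid accounts of such tasks
    return W * q_mb(action) + (1 - W) * q_mf[action]

def update_mf(action, reward):
    # Model-free update: learning rate times the reward prediction error
    delta = reward - q_mf[action]
    q_mf[action] += ALPHA * delta
```

On this picture, a "pure" model-free BOLD signal would track only `delta`, whereas the abstract reports a signal tracking the blended quantity `q_hybrid` in the same proportion `W` that best fit choices.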
Michel Seymour fills an important gap in Rawlsian theory. In fact, his Rawls-inspired normative theory of collective rights is unprecedented. Likewise, his ideal theory of a primary right to internal self-determination (ISD) is a welcome contribution to the issue of collective rights. That said, his non-ideal theory – a remedial right only to secession – seems rather toothless in cases of noncompliance. In particular, Seymour leaves us with no guidance in the case of transition countries and situations of tension where we need to know whether the ISD of the minority (the stateless) people is enabling or disabling for the ISD of the majority (the state-owning) people. The paper concludes that borrowing from Aristotle, as Rawls does, offers more in the way of guidance when it comes to these issues.
Over the past few decades, Seymour Feldman has contributed important studies on the philosophy of Levi ben Gershom, better known as Gersonides (1288-1344), as well as a highly acclaimed annotated translation of Gersonides' philosophical opus, The Wars of the Lord. Feldman now offers a succinct conspectus of Gersonides' positions on the pivotal issues of medieval Jewish philosophy and the arguments he offers in their favor: creation; God and His attributes; divine omniscience, providence, and omnipotence; prophecy; humanity; and the Torah. Feldman's guiding thesis is encapsulated in the book's subtitle: Judaism within the Limits of Reason. Gersonides is fully committed to the authority of Scripture and ...
Mill's most famous departure from Bentham is his distinction between higher and lower pleasures. This article argues that quality and quantity are independent and irreducible properties of pleasures that may be traded off against each other – as in the case of quality and quantity of wine. I argue that Mill is not committed to thinking that there are two distinct kinds of pleasure, or that ‘higher pleasures’ lexically dominate lower ones, and that the distinction is compatible with hedonism. I show how this interpretation not only makes sense of Mill but allows him to respond to famous problems, such as Crisp's Haydn and the oyster and Nozick's experience machine.
Consequentialists typically think that the moral quality of one's conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one's conduct can still be affected by whether one is participating in an endeavour that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call ‘participation’ – in the chapter of Reasons and Persons that he devotes to what he calls ‘moral mathematics’. In my paper, I expose an inconsistency in Parfit's discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit's error sheds some light on consequentialist thought generally, and on the debate between act- and rule-consequentialists specifically.
_Well-Being and Death_ addresses philosophical questions about death and the good life: what makes a life go well? Is death bad for the one who dies? How is this possible if we go out of existence when we die? Is it worse to die as an infant or as a young adult? Is it bad for animals and fetuses to die? Can the dead be harmed? Is there any way to make death less bad for us? Ben Bradley defends the following views: pleasure, rather than achievement or the satisfaction of desire, is what makes life go well; death is generally bad for its victim, in virtue of depriving the victim of more of a good life; death is bad for its victim at times after death, in particular at all those times at which the victim would have been living well; death is worse the earlier it occurs, and hence it is worse to die as an infant than as an adult; death is usually bad for animals and fetuses, in just the same way it is bad for adult humans; things that happen after someone has died cannot harm that person; the only sensible way to make death less bad is to live so long that no more good life is possible.
This article identifies an argument in Hobbes’s writings often overlooked but relevant to current philosophical debates. Political philosophers tend to categorize his thought as representing consent or rescue theories of political authority. Though these interpretations have textual support and are understandable, they leave out one of his most compelling arguments—what we call the lesser evil argument for political authority, expressed most explicitly in Chapter 20 of Leviathan. Hobbes frankly admits the state’s evils but appeals to the significant disparity between those evils and the greater evils outside the state as a basis for political authority. More than a passing observation, aspects of the lesser evil argument appear in each of his three major political works. In addition to outlining this argument, the article examines its significance both for Hobbes scholarship and recent philosophical debates on political authority.
This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational thinking is closed under entailment; that thinking does not supervene on credence; and that in many cases what one thinks on certain matters is, in a very literal sense, a choice. Finally, since there are strong reasons to believe that thinking just is believing, there are strong reasons to think that all this goes for belief as well.
To what extent do we know our own minds when making decisions? Variants of this question have preoccupied researchers in a wide range of domains, from mainstream experimental psychology to cognitive neuroscience and behavioral economics. A pervasive view places a heavy explanatory burden on an intelligent cognitive unconscious, with many theories assigning causally effective roles to unconscious influences. This article presents a novel framework for evaluating these claims and reviews evidence from three major bodies of research in which unconscious factors have been studied: multiple-cue judgment, deliberation without attention, and decisions under uncertainty. Studies of priming and the role of awareness in movement and perception are also given brief consideration. The review highlights that inadequate procedures for assessing awareness, failures to consider artifactual explanations of “landmark” results, and a tendency to uncritically accept conclusions that fit with our intuitions have all contributed to unconscious influences being ascribed inflated and erroneous explanatory power in theories of decision making. The review concludes by recommending that future research should focus on tasks in which participants' attention is diverted away from the experimenter's hypothesis, rather than the highly reflective tasks that are currently often employed.
A puzzling feature of paradigmatic cases of dehumanization is that the perpetrators often attribute uniquely human traits to their victims. This has become known as the “paradox of dehumanization.” We address the paradox by arguing that the perpetrators think of their victims as human in one sense, while denying that they are human in another sense. We do so by providing evidence that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is descriptive and one of which is normative. Across four experiments, we found evidence that people deploy a descriptive criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the descriptive sense, while denying that they are human in the normative sense, and vice versa. In addition to providing a solution to the paradox of dehumanization, these findings suggest that perceptions of moral character have a central role to play in driving dehumanization.
The distinction between perception and cognition has always had a firm footing in both cognitive science and folk psychology. However, there is little agreement as to how the distinction should be drawn. In fact, a number of theorists have recently argued that, given the ubiquity of top-down influences, we should jettison the distinction altogether. I reject this approach, and defend a pluralist account of the distinction. At the heart of my account is the claim that each legitimate way of marking a border between perception and cognition deploys a notion I call ‘stimulus-control.’ Thus, rather than being a grab bag of unrelated kinds, the various categories of the perceptual are unified into a superordinate natural kind.
Ben Fine traces the origins of social capital through the work of Becker, Bourdieu and Coleman and comprehensively reviews the literature across the social sciences. The text is uniquely critical of social capital, explaining how it avoids a proper confrontation with political economy and has become chaotic. This highly topical text addresses some major themes, including the shifting relationship between economics and other social sciences, the 'publish or perish' concept currently burdening scholarly integrity, and how a social science interdisciplinarity requires a place for political economy together with cultural and social theory.
The issue of whether emotions are rational is at the centre of philosophical and psychological discussions. I believe that emotions are rational, but that they follow different principles to those of intellectual reasoning. The purpose of this paper is to reveal the unique logic of emotions. I begin by suggesting that we should conceive of emotions as a general mode of the mental system; other modes are the perceptual and intellectual modes. One feature distinguishing one mode from another is the logical principles underlying its information processing mechanism. Before describing these principles, I clarify the notion of ‘rationality,’ arguing that in an important sense emotions can be rational.
This paper considers some puzzling knowledge ascriptions and argues that they present prima facie counterexamples to credence, belief, and justification conditions on knowledge, as well as to many of the standard meta-semantic assumptions about the context-sensitivity of ‘know’. It argues that these ascriptions provide new evidence in favor of contextualist theories of knowledge—in particular those that take the interpretation of ‘know’ to be sensitive to the mechanisms of constraint.
Though the publication of Kuhn's Structure of Scientific Revolutions seemed to herald the advent of a unified study of the history and philosophy of science, it is a hard fact that history of science and philosophy of science have increasingly grown apart. Recently, however, there has been a series of workshops on both sides of the Atlantic intended to bring historians and philosophers of science together to discuss new integrative approaches. This is therefore an especially appropriate time to explore the problems with and prospects for integrating history and philosophy of science. The original essays in this volume, all from specialists in the history of science or philosophy of science, offer such an exploration from a wide variety of perspectives. The volume combines general reflections on the current state of history and philosophy of science with studies of the relation between the two disciplines in specific historical and scientific cases.
This paper defends the simple view that in asserting that p, one lies iff one knows that p is false. Along the way it draws some morals about deception, knowledge, Gettier cases, belief, assertion, and the relationship between first- and higher-order norms.
A plausible principle about the felicitous use of indicative conditionals says that there is something strange about asserting an indicative conditional when you know whether its antecedent is true. But in most contexts there is nothing strange at all about asserting indicative conditionals like ‘If Oswald didn’t shoot Kennedy, then someone else did’. This paper argues that the only compelling explanation of these facts requires the resources of contextualism about knowledge.
Three plausible views—Presentism, Truthmaking, and Independence—form an inconsistent triad. By Presentism, all being is present being. By Truthmaking, all truth supervenes on, and is explained in terms of, being. By Independence, some past truths do not supervene on, or are not explained in terms of, present being. We survey and assess some responses to this.
Recent debate in metaethics over evolutionary debunking arguments against morality has shown a tendency to abstract away from relevant empirical detail. Here, I engage the debate about Darwinian debunking of morality with relevant empirical issues. I present four conditions that must be met in order for it to be reasonable to expect an evolved cognitive faculty to be reliable: the environment, information, error, and tracking conditions. I then argue that these conditions are not met in the case of our evolved faculty for moral judgement.
Some incompatibilists about free will or moral responsibility and determinism would abandon their incompatibilism were they to learn that determinism is true. But is it reasonable to flip-flop in this way? In this article, we contend that it is and show what follows. The result is both a defense of a particular incompatibilist strategy and a general framework for assessing other cases of flip-flopping.
The evil God challenge is an argumentative strategy that has been pursued by a number of philosophers in recent years. It is apt to be understood as a parody argument: a wholly evil, omnipotent and omniscient God is absurd, as both theists and atheists will agree. But according to the challenge, belief in evil God is about as reasonable as belief in a wholly good, omnipotent and omniscient God; the two hypotheses are roughly epistemically symmetrical. Given this symmetry, belief in an evil God and belief in a good God are taken to be similarly preposterous. In this paper, we argue that the challenge can be met, suggesting why the three symmetries that need to hold between evil God and good God – intrinsic, natural theology and theodicy symmetries – can all be broken.
I examine the origins of ordinary racial thinking. In doing so, I argue against the thesis that it is the byproduct of a unique module. Instead, I defend a pluralistic thesis according to which different forms of racial thinking are driven by distinct mechanisms, each with their own etiology. I begin with the belief that visible features are diagnostic of race. I argue that the mechanisms responsible for face recognition have an important, albeit delimited, role to play in sustaining this belief. I then argue that essentialist beliefs about race are driven by some of the mechanisms responsible for “entitativity perception”: the tendency to perceive some aggregates of people as more genuine groups than others. Finally, I argue that coalitional thinking about race is driven by a distinctive form of entitativity perception. However, I suggest that more data is needed to determine the prevalence of this form of racial thinking.
In _Parts of Classes_ and "Mathematics is Megethology" David Lewis shows how the ideology of set membership can be dispensed with in favor of parthood and plural quantification. Lewis's theory has it that singletons are mereologically simple and leaves the relationship between a thing and its singleton unexplained. We show how, by exploiting Kit Fine's mereology, we can resolve Lewis's mysteries about the singleton relation and vindicate the claim that a thing is a part of its singleton.
_Foucault’s Law_ is the first book in almost fifteen years to address the question of Foucault’s position on law. Many readings of Foucault’s conception of law start from the proposition that he failed to consider the role of law in modernity, or indeed that he deliberately marginalized it. In canvassing a wealth of primary and secondary sources, Ben Golder and Peter Fitzpatrick rebut this argument. They argue that rather than marginalize law, Foucault develops a much more radical, nuanced and coherent theory of law than his critics have acknowledged. For Golder and Fitzpatrick, Foucault’s law is not the contained creature of conventional accounts, but is uncontainable and illimitable. In their radical re-reading of Foucault, they show how Foucault outlines a concept of law which is not tied to any given form or subordinated to a particular source of power, but is critically oriented towards alterity, new possibilities and different ways of being. _Foucault’s Law_ is an important and original contribution to the ongoing debate on Foucault and law, engaging not only with Foucault’s diverse writings on law and legal theory, but also with the extensive interpretive literature on the topic. It will thus be of interest to students and scholars working in the fields of law and social theory, legal theory and law and philosophy, as well as to students of Foucault’s work generally.
Purposeful infection of healthy volunteers with a microbial pathogen seems at odds with acceptable ethical standards, but is an important contemporary research avenue used to study infectious diseases and their treatments. Generally termed ‘controlled human infection studies’, this research is particularly useful for fast tracking the development of candidate vaccines and may provide unique insight into disease pathogenesis otherwise unavailable. However, scarce bioethical literature is currently available to assist researchers and research ethics committees in negotiating the distinct issues raised by research involving purposefully infecting healthy volunteers. In this article, we present two separate challenge studies and highlight the ethical issues of human challenge studies as seen through a well-constructed framework. Beyond the same stringent ethical standards seen in other areas of medical research, we conclude that human challenge studies should also include: independent expert reviews, including systematic reviews; a publicly available rationale for the research; implementation of measures to protect the public from spread of infection beyond the research setting; and a new system for compensation for harm. We hope these additions may encourage safer and more ethical research practice and help to safeguard public confidence in this vital research alternative in years to come.
If musical works are abstract objects, which cannot enter into causal relations, then how can we refer to musical works or know anything about them? Worse, how can any of our musical experiences be experiences of musical works? It would be nice to be able to sidestep these questions altogether. One way to do that would be to take musical works to be concrete objects. In this paper, we defend a theory according to which musical works are concrete objects. In particular, the theory that we defend takes musical works to be fusions of performances. We defend this view from a series of objections, the first two of which are raised by Julian Dodd in a recent paper and the last of which is suggested by some comments of his in an earlier paper.
Strategies to increase influenza vaccination rates have typically targeted healthcare professionals (HCPs) and individuals in various high-risk groups such as the elderly. We argue that they should focus on increasing vaccination rates in children. Because children suffer higher influenza incidence rates than any other demographic group, and are major drivers of seasonal influenza epidemics, we argue that influenza vaccination strategies that serve to increase uptake rates in children are likely to be more effective in reducing influenza-related morbidity and mortality than those targeting HCPs or the elderly. This is true even though influenza-related morbidity and mortality amongst children are low, except in the very young. Further, we argue that there are no decisive reasons to suppose that children-focused strategies are less ethically acceptable than elderly or HCP-focused strategies.
Can a musical work be created? Some say ‘no’. But, we argue, there is no handbook of universally accepted metaphysical truths that they can use to justify their answer. Others say ‘yes’. They have to find abstract objects that can plausibly be identified with musical works, show that abstract objects of this sort can be created, and show that such abstract objects can persist. But, we argue, none of the standard views about what a musical work is allows musical works both to be created and to persist.