In _Medieval Muslim Historians and the Franks in the Levant_ seven leading scholars examine the lives and historical writings of seven medieval Muslim historians whose works are relevant to the history of the crusading period in the Levant. Contributors include: Frédéric Bauden, Niall Christie, Anne-Marie Eddé, Konrad Hirschler, Alex Mallett, Françoise Micheau, and Lutz Richter-Bernburg.
Origins: Alex Atala, Fernando and Humberto Campana -- Present: Fernando and Humberto Campana and Jum Nakao -- Intermezzo: Conviviality: Jum Nakao and collaborators -- Destinations: Alex Atala and Jum Nakao -- Interviews -- A bit of history.
Philosophy of Science is a mid-level text for students with some grounding in philosophy. It introduces the questions that drive enquiry in the philosophy of science, and aims to educate readers in the main positions, problems and arguments in the field today. Alex Rosenberg is certainly well qualified to write such an introduction. His works cover a large area of the philosophy of natural and social sciences. In addition, the author of the argument that the ‘queen of the social sciences’, economics, is not a science at all, can be counted on to show how the philosophy of science can be relevant to the understanding of the status of scientific knowledge and can provide a critical assessment of practitioners’ view of their field.
Using an economic bargaining game, we tested for the existence of two phenomena related to social norms, namely norm manipulation – the selection of an interpretation of the norm that best suits an individual – and norm evasion – the deliberate, private violation of a social norm. We found that the manipulation of a norm of fairness was characterized by a self-serving bias in beliefs about what constituted normatively acceptable behaviour, so that an individual who made an uneven bargaining offer not only genuinely believed it was fair, but also believed that recipients found it fair, even though recipients of the offer considered it to be unfair. In contrast, norm evasion operated as a highly explicit process. When they could do so without the recipient's knowledge, individuals made uneven offers despite knowing that their behaviour was unfair.
Alex Oliver and Timothy Smiley provide a new account of plural logic. They argue that there is such a thing as genuinely plural denotation in logic, and expound a framework of ideas that includes the distinction between distributive and collective predicates, the theory of plural descriptions, multivalued functions, and lists.
For many epistemologists, and for many philosophers more broadly, it is axiomatic that rationality requires you to take the doxastic attitudes that your evidence supports. Yet there is also another current in our talk about rationality. On this usage, rationality is a matter of the right kind of coherence between one's mental attitudes. Surprisingly little work in epistemology is explicitly devoted to answering the question of how these two currents of talk are related. But many implicitly assume that evidence-responsiveness guarantees coherence, so that the rational impermissibility of incoherence will just fall out of the putative requirement to take the attitudes that one's evidence supports, and so that coherence requirements do not need to be theorized in their own right, apart from evidential reasons. In this paper, I argue that this is a mistake, since coherence and evidence-responsiveness can in fact come into conflict. More specifically, I argue that in cases of misleading higher-order evidence, there can be a conflict between believing what one's evidence supports and satisfying a requirement that I call “inter-level coherence”. This illustrates why coherence requirements and evidential reasons must be separated and theorized separately.
This book investigates context-sensitivity in natural language by examining the meaning and use of a target class of theoretically recalcitrant expressions. These expressions, including epistemic vocabulary, normative and evaluative vocabulary, and vague language, exhibit systematic differences from paradigm context-sensitive expressions in their discourse dynamics and embedding properties. Many researchers have responded by rethinking the nature of linguistic meaning and communication. Drawing on general insights about the role of context in interpretation and collaborative action, Silk develops an improved contextualist theory of CR-expressions within the classical truth-conditional paradigm: Discourse Contextualism. The aim of Discourse Contextualism is to derive the distinctive linguistic behavior of a CR-expression from a particular contextualist interpretation of an independently motivated formal semantics, along with general principles of interpretation and conversation. It is shown how in using CR-expressions, speakers can exploit their mutual grammatical and world knowledge, and general pragmatic reasoning skills, to coordinate their attitudes and negotiate about how the context should evolve. The book focuses primarily on developing a Discourse Contextualist semantics and pragmatics for epistemic modals. The Discourse Contextualist framework is also applied to other categories of epistemic vocabulary, normative and evaluative vocabulary, and vague adjectives. The similarities/differences among these expressions, and among context-sensitive expressions more generally, have been underexplored. The development of Discourse Contextualism in this book sheds light on general features of meaning and communication, and the variety of ways in which context affects and is affected by uses of language. Discourse Contextualism provides a fruitful framework for theorizing about various broader issues in philosophy, linguistics, and cognitive science.
In discussions of whether and how pragmatic considerations can make a difference to what one ought to believe, two sets of cases feature. The first set, which dominates the debate about pragmatic reasons for belief, is exemplified by cases of being financially bribed to believe (or withhold from believing) something. The second set, which dominates the debate about pragmatic encroachment on epistemic justification, is exemplified by cases where acting on a belief rashly risks some disastrous outcome if the belief turns out to be false. Call those who think that pragmatic considerations make a difference to what one ought to believe in the second kind of case, but not in the first, ‘moderate pragmatists’. Many philosophers – in particular, most advocates of pragmatic and moral encroachment – are moderate pragmatists. But moderate pragmatists owe us an explanation of exactly why the second kind of pragmatic consideration makes a difference, but the first kind doesn’t. I argue that the most promising of these explanations all fail: they are either theoretically undermotivated, or get key cases wrong, or both. Moderate pragmatism may be an unstable stopping point between a more extreme pragmatism, on one hand, and an uncompromising anti-pragmatism on the other.
In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (2016: 559–582) and Gładziejewski and Miłkowski, to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that it is consistent with approaches implicit in theories of unsupervised learning in neural networks. In the course of this discussion, we argue that the structural representation proposal, properly understood, has more in common with functional-role than with causal/informational or teleosemantic theories. In the remainder of the paper, we describe the PEM framework for approximate Bayesian inference in some detail, and discuss how structural representations might arise within the proposed Bayesian hierarchies. After explicating the notion of variational inference, we define a subjectively accessible measure of misrepresentation for hierarchical Bayesian networks by appeal to the Kullback–Leibler divergence between posterior generative and approximate recognition densities, and discuss a related measure of objective misrepresentation in terms of correspondence with the facts.
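The Kullback–Leibler divergence invoked in the abstract above has a simple closed form in the discrete case. As a purely illustrative sketch (not drawn from the paper, and using made-up toy densities), the following computes D_KL between a hypothetical posterior generative density p and an approximate recognition density q:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback–Leibler divergence D_KL(p || q).

    Measures how far the approximating density q diverges from p;
    it is zero exactly when p == q and strictly positive otherwise.
    Terms with p_i == 0 contribute nothing, by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical densities over three discrete states (illustrative values only):
p = [0.7, 0.2, 0.1]  # "posterior generative" density
q = [0.5, 0.3, 0.2]  # "approximate recognition" density
print(round(kl_divergence(p, q), 4))  # prints 0.0851
```

Note that D_KL is asymmetric: kl_divergence(p, q) generally differs from kl_divergence(q, p), which is why variational inference must fix a direction for the divergence it minimizes.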
You know what someone else is thinking and feeling by observing them. But how do you know what you are thinking and feeling? This is the problem of self-knowledge, which Alex Byrne tries to solve. The idea is that you know this not by taking a special kind of look at your own mind, but by an inference from a premise about your environment.
Priority monism is the view that the cosmos is the only independent concrete object. The paper argues that, pace its proponents, priority monism is in conflict with the dependence of any whole on any of its parts: if the cosmos does not depend on its parts, neither does any smaller composite.
Philosophy of Medicine provides a fresh and comprehensive treatment of the topic. It offers a novel theory of the nature of medicine, and proposes a new attitude to medicine, aimed at improving the quality of debates between medical traditions and facilitating medicine's decolonization.
Is life a purely physical process? What is human nature? Which of our traits is essential to us? In this volume, Daniel McShea and Alex Rosenberg – a biologist and a philosopher, respectively – join forces to create a new gateway to the philosophy of biology, making the major issues accessible and relevant to biologists and philosophers alike. Exploring concepts such as supervenience, the controversies about genocentrism and genetic determinism, and the debate about major transitions central to contemporary thinking about macroevolution, the authors lay out the broad terms in which we should assess the impact of biology on human capacities, social institutions and ethical values.
Many discussions of the ‘preface paradox’ assume that it is more troubling for deductive closure constraints on rational belief if outright belief is reducible to credence. I show that this is an error: we can generate the problem without assuming such reducibility. All that we need are some very weak normative assumptions about rational relationships between belief and credence. The only view that escapes my way of formulating the problem for the deductive closure constraint is in fact itself a reductive view: namely, the view that outright belief is credence 1. However, I argue that this view is unsustainable. Moreover, my version of the problem turns on no particular theory of evidence or evidential probability, and so cannot be avoided by adopting some such revisionary theory. In sum, deductive closure is in more serious, and more general, trouble than some have thought.
Evidence from functional neuroimaging of the human brain indicates that information about salient properties of an object (such as what it looks like, how it moves, and how it is used) is stored in sensory and motor systems active when that information was acquired. As a result, object concepts belonging to different categories like animals and tools are represented in partially distinct, sensory- and motor-property-based neural networks. This suggests that object concepts are not explicitly represented, but rather emerge from weighted activity within property-based brain regions. However, some property-based regions seem to show a categorical organization, thus providing evidence consistent with category-based, domain-specific formulations as well.
Acronyms and definitions:
Biological motion: motion of animate agents characterized by highly flexible, fully articulated motion vectors, in contrast to the rigid, unarticulated motion vectors associated with most tools.
Category-specific disorder: a relatively greater impairment in retrieving information about members of one superordinate object category (e.g., animals) as compared with other categories following brain injury or disease.
IPS: intraparietal sulcus.
LO: lateral occipital cortex.
Object concept: memory representations of a class or category of objects; necessary for numerous cognitive functions, including identifying an object as a member of a specific category and drawing inferences about object properties.
pMTG: posterior middle temporal gyrus.
pSTS: posterior superior temporal sulcus.
Repetition suppression: decreased neural response associated with repeated presentation of an identical, or a semantically/conceptually related, stimulus.
SD: semantic dementia.
Semantic memory: a large division of long-term memory containing knowledge about the world, including facts, ideas, beliefs, and concepts.
Semantic priming: a short-lasting facilitation in processing a stimulus due to the prior presentation of a semantically related stimulus.
TMS: transcranial magnetic stimulation.
VPMC: ventral premotor cortex.
Peak human performance—whether of Olympic athletes, Nobel prize winners, or you cooking the best dish you’ve ever made—depends on skill. Skill is at the heart of what it means to excel. Yet, the fixity of skilled behavior can sometimes make it seem a lower-level activity, more akin to the movements of an invertebrate or a machine. Peak performance in elite athletes is often described, for example, as “automatic” by those athletes: “The most frequent response from participants when describing the execution of a peak performance was the automatic execution of performance”. While the automaticity of skilled behavior is widely acknowledged, some worry that too much automaticity in skill would challenge its ability to exhibit human excellence. And so two camps have developed: those who focus on the automaticity of skilled behavior, the “habitualists,” and those who focus on the higher-level cognition behind peak performance, the “intellectualists.” We take a different tack. We argue that skilled behavior weaves together automaticity and higher-level cognition, which we call “pluralism.” That is, we argue that automaticity and higher-level cognition are both normal features of skilled behavior that benefit skilled behavior. This view is hinted at in other quotes about automaticity in skill—while expert gamers describe themselves as “playing with” automaticity, expert musicians are said to balance automaticity with creativity through performance cues: “Performance cues allow the musician to attend to some aspects of the performance while allowing others to be executed automatically”. We describe in this paper three ways that higher-level cognition and automaticity are woven together. The first two, level pluralism and synchronic pluralism, are described in other papers, albeit under different cover. We take our contribution to be both distinguishing the three forms and contributing the third, diachronic pluralism.
In fact, we find that diachronic pluralism presents the strongest case against habitualism and intellectualism, especially when considered through the example of strategic automaticity. In each case of pluralism, we use research on the presence or absence of attention to explore the presence or absence of higher-level cognition in skilled behavior.
Recent work on rationality has been increasingly attentive to “coherence requirements”, with heated debates about both the content of such requirements and their normative status (e.g., whether there is necessarily reason to comply with them). Yet there is little to no work on the metanormative status of coherence requirements. Metaphysically: what is it for two or more mental states to be jointly incoherent, such that they are banned by a coherence requirement? In virtue of what are some putative requirements genuine and others not? Epistemologically: how are we to know which of the requirements are genuine and which aren’t? This paper tries to offer an account that answers these questions. On my account, the incoherence of a set of attitudinal mental states is a matter of its being (partially) constitutive of the mental states in question that, for any agent that holds these attitudes jointly, the agent is disposed, when conditions of full transparency are met, to give up at least one of the attitudes.
The difference between the unity of the individual and the separateness of persons requires that there be a shift in the moral weight that we accord to changes in utility when we move from making intrapersonal tradeoffs to making interpersonal tradeoffs. We examine which forms of egalitarianism can, and which cannot, account for this shift. We argue that a form of egalitarianism which is concerned only with the extent of outcome inequality cannot account for this shift. We also argue that a view which is concerned with both outcome inequality and with the unfairness of inequality in individuals’ expected utilities can account for this shift. Finally, we limn an alternative view, on...
This user-friendly text covers key issues in the philosophy of science in an accessible and philosophically serious way. It will prove valuable to students studying philosophy of science as well as science students. Prize-winning author Alex Rosenberg explores the philosophical problems that science raises by its very nature and method. He skilfully demonstrates that scientific explanation, laws, causation, theory, models, evidence, reductionism, probability, teleology, realism and instrumentalism actually pose the same questions that Plato, Aristotle, Descartes, Hume, Kant and their successors have grappled with for centuries.
In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
Expressivism promises an illuminating account of the nature of normative judgment. But worries about the details of expressivist semantics have led many to doubt whether expressivism's putative advantages can be secured. Drawing on insights from linguistic semantics and decision theory, I develop a novel framework for implementing an expressivist semantics that I call ordering expressivism. I argue that by systematically interpreting the orderings that figure in analyses of normative terms in terms of the basic practical attitude of conditional weak preference, the expressivist can explain the semantic properties of normative sentences in terms of the logical properties of that attitude. Expressivism's problems with capturing the logical relations among normative sentences can be reduced to the familiar, more tractable problem of explaining certain coherence constraints on preferences. Particular attention is given to the interpretation of wide-scope negation. The proposed solution is also extended to other types of embedded contexts—most notably, disjunctions.
This is the first book to systematically examine the underlying theory of evidence in Anglo-American legal systems. Stein develops a detailed and innovative theory which sets aside the traditional vision of evidence law as facilitating the discovery of the truth. Combining probability theory, epistemology, economic analysis, and moral philosophy, he argues instead that the fundamental purpose of evidence law is to apportion the risk of error in conditions of uncertainty.
This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie their hypotheses. I argue that the mechanistic stance is indeed a bad methodology for causal inference. However, I detach and defend a mechanistic interpretation of causal generalisations in epidemiology as existence claims about underlying mechanisms.
Explaining moral intuitions is one of the hot topics of recent cognitive science. In the present article we focus on a factor that has attracted surprisingly little attention so far, namely the temporal order in which moral scenarios are presented. We argue that previous research points to a systematic pattern of order effects that has been overlooked until now: only judgments of actions that are normally regarded as morally acceptable are liable to be affected by the order of presentation, and this in turn is only the case if the dilemma is immediately preceded by a dilemma in which the proposed action was considered not morally acceptable. We conducted an experiment that largely confirmed this pattern and allowed us to analyze the individual-level responses by which it was generated. We argue that investigating order effects is necessary for approaching a complete descriptive moral theory. Furthermore, we discuss the implications of these findings for moral philosophy.
Most contemporary philosophical discussions of intentionality start and end with a treatment of the propositional attitudes. In fact, many theorists hold that all attitudes are propositional attitudes. Our folk-psychological ascriptions suggest, however, that there are non-propositional attitudes: I like Sally, my brother fears snakes, everyone loves my grandmother, and Rush Limbaugh hates Obama. I argue that things are as they appear: there are non-propositional attitudes. More specifically, I argue that there are attitudes that relate individuals to non-propositional objects and do so not in virtue of relating them to propositions. I reach this conclusion by not only showing that attempted analyses of apparently non-propositional attitudes in terms of the propositional fail, but that some non-propositional attitudes don’t even supervene on propositional attitudes. If this is correct, then the common discussions of intentionality that address only propositional attitudes are incomplete and those who hold that all intentional states are propositional are mistaken.
Disagreement is a hot topic in epistemology. A fast-growing literature centers around a dispute between the ‘steadfast’ view, on which one may maintain one’s beliefs even in the light of disagreement with epistemic peers who have all the same evidence, and the ‘conciliationist’ view, on which such disagreement requires a revision of attitudes. In this paper, however, I argue that there is less separating the main rivals in the debate about peer disagreement than is commonly thought. The extreme versions of both views are clearly indefensible, while more moderate versions of the views converge on the idea that how much revision of belief is called for by an instance of peer disagreement varies from case to case. Those tempted by this diagnosis are sometimes pessimistic about the prospects for giving a unified account which clearly predicts when more or less extensive revisions will be called for. By contrast, in this paper I give an account that aspires to such unity and predictive power, centering on the notion of the net resilience of your estimate of your own reliability against your estimate of your interlocutor’s reliability. The view I present thus amounts to a new, moderate theory of how one should respond to disagreement. I argue that ultimately, when we weaken conciliationism and the steadfast view to account for exception cases and to make them adequately plausible, they end up converging on the moderate view I present. Much of the seeming disagreement about disagreement is, then, illusory.
Book Review of Being for Beauty: Aesthetic Agency and Value, by Dominic McIver Lopes. This review summarizes the book's main thread of argument and Lopes' positive view, which he dubs the "network theory". It ends by reflecting on whether Lopes' account of aesthetic normativity is ultimately satisfactory.
The lack of access to gender-affirming surgery represents a significant unmet health care need within the transgender community, frequently resulting in depression and self-destructive behavior. While some transgender people may have access to gender reassignment surgery (GRS), an overwhelming majority cannot afford facial feminization surgery (FFS). The former may be covered as a “medical necessity,” but FFS is considered “cosmetic” and excluded from insurance coverage. This demarcation between “necessity” and “cosmetic” in transgender health care based on specific body parts is in direct opposition to the scientific community’s understanding of gender dysphoria and professional guidelines for transgender health. GRS affects one’s ability to function in an intimate relationship, while FFS has the same impact on social interactions and, therefore, may have far greater implications for one’s quality of life. FFS is a cost-effective intervention that needs to be covered by insurance policies. The benefits of such coverage far exceed the insignificant costs.
This paper presents a formal account of how to determine the discourse relations between propositions introduced in a text, and the relations between the events they describe. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules. These characterise the effects of causal knowledge and knowledge of language use on interpretation. Patterns of defeasible entailment that are supported by the logic in which the theory is expressed are shown to underlie temporal interpretation.
The thin red line (TRL) is a theory about the semantics of future contingents. The central idea is that there is such a thing as the ‘actual future’, even in the presence of indeterminism. It is inspired by a famous solution to the problem of divine foreknowledge associated with William of Ockham, in which the freedom of agents is argued to be compatible with God’s omniscience. In the modern branching time setting, the theory of the TRL is widely regarded as suffering from several fundamental problems. In this paper we propose several new TRL semantics, each with differing degrees of success. This leads up to our final semantics, which is a cross between the TRL and supervaluationism. We discuss the notions of truth, validity and semantic consequence which result from our final semantics, and demonstrate some of its pleasing results. This account, we believe, answers the main objection in the literature, and thus places the TRL on the same level as any other competing semantics for future contingents.
Many philosophers believe that truth is grounded: True propositions depend for their truth on the world. Some philosophers believe that truth’s grounding has implications for our ontology of time. If truth is grounded, then truth supervenes on being. But if truth supervenes on being, then presentism is false since, on presentism, e.g., that there were dinosaurs fails to supervene on the whole of being plus the instantiation pattern of properties and relations. Call this the grounding argument against presentism. Many presentists claim that the grounding argument fails because, despite appearances, supervenience is compatible with presentism. In this paper, I claim that the grounding argument fails because, despite appearances, truth’s grounding gives the presentist no compelling reason to adopt the sort of supervenience principle at work in the grounding argument. I begin by giving two precisifications of the grounding principle: truthmaking and supervenience. In Sect. 2, I give the grounding argument against presentism. In Sect. 3, I argue that we should distinguish between eternalist and presentist notions of grounding; once this distinction is in hand, the grounding argument is undercut. In Sect. 4, I show how the presentist’s notion of grounding leads to presentist-friendly truthmaking and supervenience principles. In Sect. 5, I address some potential objections.
This paper demarcates a theoretically interesting class of "evaluational adjectives." This class includes predicates expressing various kinds of normative and epistemic evaluation, such as predicates of personal taste, aesthetic adjectives, moral adjectives, and epistemic adjectives, among others. Evaluational adjectives are distinguished, empirically, in exhibiting phenomena such as discourse-oriented use, felicitous embedding under the attitude verb ‘find’, and sorites-susceptibility in the comparative form. A unified degree-based semantics is developed: What distinguishes evaluational adjectives, semantically, is that they denote context-dependent measure functions ("evaluational perspectives")—context-dependent mappings to degrees of taste, beauty, probability, etc., depending on the adjective. This perspective-sensitivity characterizing the class of evaluational adjectives cannot be assimilated to vagueness, sensitivity to an experiencer argument, or multidimensionality; and it cannot be demarcated in terms of pretheoretic notions of subjectivity, common in the literature. I propose that certain diagnostics for "subjective" expressions be analyzed instead in terms of a precisely specified kind of discourse-oriented use of context-sensitive language. I close by applying the account to ‘find x PRED’ ascriptions.
This article puts pressure on moral motivational internalism and rejects normative motivational internalism by arguing that we should be aesthetic motivational externalists. Parallels between aesthetic and moral normativity give us new reason to doubt moral internalism. I address possible disanalogies, arguing that either they fail, or they succeed but aren’t strong enough to underwrite a motivational difference between the domains. Furthermore, aesthetic externalism entails normative externalism, providing further presumptive evidence against moral internalism. I also make the case that, regardless of these particular conclusions, examining different normative domains alongside each other is a fruitful way to move debates forward.
This paper develops an account of the meaning of ‘ought’, and the distinction between weak necessity modals (‘ought’, ‘should’) and strong necessity modals (‘must’, ‘have to’). I argue that there is nothing specially “strong” about strong necessity modals per se: uses of ‘Must p’ predicate the (deontic/epistemic/etc.) necessity of the prejacent p of the actual world (evaluation world). The apparent “weakness” of weak necessity modals derives from their bracketing whether the necessity of the prejacent is verified in the actual world. ‘Ought p’ can be accepted without needing to settle that the relevant considerations (norms, expectations, etc.) that actually apply verify the necessity of p. I call the basic account a modal-past approach to the weak/strong necessity modal distinction (for reasons that become evident). Several ways of implementing the approach in the formal semantics/pragmatics are critically examined. The account systematizes a wide range of linguistic phenomena: it generalizes across flavors of modality; it elucidates a special role that weak necessity modals play in discourse and planning; it captures contrasting logical, expressive, and illocutionary properties of weak and strong necessity modals; and it sheds light on how a notion of ‘ought’ is often expressed in other languages. These phenomena have resisted systematic explanation. In closing I briefly consider how linguistic inquiry into differences among necessity modals may improve theorizing on broader philosophical issues.
This paper offers a new account of how structural rationality, or coherence, is normative. The central challenge to the normativity of coherence – which I term the problem of “making space” for the normativity of coherence – is this: if considerations of coherence matter normatively, it is not clear how we ought to take account of them in our deliberation. Coherence considerations don’t seem to show up in reasoning about what to believe, intend, desire, hope, fear, and so on; moreover, they seem awkward to take account of alongside more “substantive” considerations about the merits of such attitudes. I aim here to solve this problem, and in so doing to offer the aforementioned new account of how coherence is normative. On the view I defend, which I call the Reasons-to-Structure-Deliberation model, considerations of coherence constitute reasons for structuring deliberation in certain ways: more particularly, to treat incoherent combinations of attitudes as off-limits, and so to focus one’s deliberation on choosing between the coherent combinations.
Despite increasing prominence, ‘ought’-contextualism is regarded with suspicion by most metaethicists. As I’ll argue, however, contextualism is a very weak claim that every metaethicist can sign up to. The real controversy concerns how contextualism is developed. I then draw an oft-overlooked distinction between “parochial” contextualism—on which the contextually-relevant standards are those that the speaker, or others in her environment, subscribe to—and “aspirational” contextualism—on which the contextually-relevant standards are the objective standards for the relevant domain. However, I argue that neither view is acceptable. I suggest an original compromise: “ecumenical contextualism”, on which some uses of ‘ought’ are parochial, others aspirational. Ecumenical contextualism is compatible with realism or antirealism, but either combination yields interesting results. And though it’s a cognitivist view, it is strengthened by incorporating an expressivist insight: for robustly normative usages of ‘ought’, the contextually-relevant standards must be endorsed by the speaker.
It is standard, both in the philosophical literature and in ordinary parlance, to assume that one can fall short of responding to all one’s moral reasons without being irrational. Yet when we turn to epistemic reasons, the situation could not be more different. Most epistemologists take it as axiomatic that for a belief to be rational is for it to be well-supported by epistemic reasons. We find ourselves with a striking asymmetry, then, between the moral and epistemic domains concerning what is taken for granted about whether failures to respond to reasons are failures of rationality. My aim in this paper is to interrogate this asymmetry, and ultimately to argue that the asymmetry is groundless. Instead, I will offer an error theory to explain the asymmetry in intuitions. This error theory suggests that we should amend the conventional wisdom about the relationship between epistemic reasons and rationality.
I distinguish two different kinds of practical stakes associated with propositions. The W-stakes track what is at stake with respect to whether the proposition is true or false. The A-stakes track what is at stake with respect to whether an agent believes the proposition. This poses a dilemma for those who claim that whether a proposition is known can depend on the stakes associated with it. Only the W-stakes reading of this view preserves intuitions about knowledge-attributions, but only the A-stakes reading preserves the putative link between knowledge and practical reasoning that has motivated it.