The article argues that theorists who try to justify 'ought'-claims, i.e., who try to show that a standard of behavior has normative authority, will run into a regress problem. The problem is similar in structure to the familiar regress in the justification of belief. The point of the paper is not skeptical. Rather, the aim is to help theorists better understand the challenges associated with formulating a theory of normative authority.
It is commonly thought that logic, whatever it may be, is normative. While accounting for the normativity of logic is a challenge for any view of logic, in this paper I argue that it is particularly problematic for certain types of logical pluralists, due to what I call the normative problem for logical pluralism (NPLP). I introduce the NPLP, distinguish it from other problems that logical pluralists may face, and show that it is unsolvable for one prominent type of logical pluralism.
I. The Problem of Normative Foundations: Habermas's Original Criticism of Adorno and Horkheimer In The Theory of Communicative Action, Jürgen Habermas writes: "From the beginning, critical theory labored over the difficulty of giving an account of its own normative foundations …"¹ Call this Habermas's original objection to the problem of normative foundations. It has been hugely influential both in the interpretation and assessment of Frankfurt School critical theory and in the development of later variants of it. Nowadays it is a truth almost universally acknowledged that any critical social theory in possession of normative …
David Enoch, in his paper “Why Idealize?”, argues that theories of normative reasons that hold that normative facts are subject- or response-dependent and include an idealization condition might have a problem in justifying the need for idealization. I argue that at least some response-dependence conceptions of normative reasons can justify idealization. I explore two ways of responding to Enoch’s challenge. One way involves a revisionary stance on the ontological commitments of the normative discourse about reasons. To establish this point, I argue by analogy with the case of color perception. To make the analogy, it suffices to show that even if colors are response-dependent properties, it does not follow that some kind of idealization cannot be introduced to specify the truth conditions of color ascriptions. The second route involves the denial of Enoch’s contention that our normative discourse is implicitly committed to a realist ontology. I adduce reasons for thinking that our normative discourse only presupposes a possibility of misrepresentation. However, this feature of the normative discourse does not favor robustly objectivist as opposed to response-dependence accounts of normative reasons. Thus, I argue that proponents of response-dependence accounts can use this feature to answer the question of why to idealize.
Some philosophers have recently argued that decision-makers ought to take normative uncertainty into account in their decision-making. These philosophers argue that, just as it is plausible that we should maximize expected value under empirical uncertainty, it is plausible that we should maximize expected choice-worthiness under normative uncertainty. However, such an approach faces two serious problems: how to deal with merely ordinal theories, which do not give sense to the idea of magnitudes of choice-worthiness; and how, even when theories do give sense to magnitudes of choice-worthiness, to compare magnitudes of choice-worthiness across different theories. Some critics have suggested that these problems are fatal to the project of developing a normative account of decision-making under normative uncertainty. The primary purpose of this article is to show that this is not the case. To this end, I develop an analogy between decision-making under normative uncertainty and the problem of social choice, and then argue that the Borda Rule provides the best way of making decisions in the face of merely ordinal theories and intertheoretic incomparability.
The key problem for normative (or moral) cultural relativism arises as soon as we try to formulate it. It resists formulations that are (1) clear, precise, and intelligible; (2) plausible enough to warrant serious attention; and (3) faithful to the aims of leading cultural relativists, one such aim being to produce an important alternative to moral universalism. Meeting one or two of these conditions is easy; meeting all three is not. I discuss twenty-four candidates for the label "cultural relativism," showing that not one meets all three conditions. In the end I conclude that cultural relativists have produced nothing that threatens universalism.
It is wrong for John to kick my cat because it will cause the cat serious pain, but also because it is wrong for people to cause serious pain in certain circumstances. This suggests the following structure: some normative facts hold in virtue of both non-normative facts and normative principles. As I will construe this, it is a claim about the metaphysical grounds of normative facts. Many non-naturalists about the normative want to endorse this view generally—that particular normative facts are often partially grounded in normative principles. In this paper, I argue that non-naturalism is inconsistent with this thesis about partial grounding in principles, due to the nature of normative principles and their grounds. I then consider two ways in which the non-naturalist position could be modified or expanded to solve this problem. No solution, it turns out, is without its problems. I propose, then, that non-naturalists should abandon the view that principles partially ground particular normative facts, in favor of a different role for principles.
Frege seems committed to the thesis that the senses of the fundamental notions of arithmetic remain stable and are stably grasped by thinkers throughout history. Fully competent practitioners grasp those senses clearly and distinctly, while uncertain practitioners see them, the very same senses, “as if through a mist”. There is thus a common object of the understanding apprehended to a greater or lesser degree by thinkers of diverging conceptual competence. Frege takes the thesis to be a condition for the possibility of the rational intelligibility of mathematical practice. I argue however that the idea that senses could be grasped as a matter of degree is in tension with the constitutive theses that Frege held with regard to sense. Given those theses, there can in fact be no such thing as misty grasp of sense, since any uncertainty as to the logical features of a given sense will entail that one is getting hold of a different sense or of no sense at all. I consider various ways of resolving the tension and conclude that Frege’s thesis cannot be defended if we take it to be a thesis about our competence with concepts. This leaves unresolved what I call the problem of normative guidance, that is, the problem of explaining how the fundamental notions of logic and arithmetic can provide inferential guidance to thinkers.
This article argues that a successful answer to Hume's problem of induction can be developed from a sub-genre of philosophy of science known as formal learning theory. One of the central concepts of formal learning theory is logical reliability: roughly, a method is logically reliable when it is assured of eventually settling on the truth for every sequence of data that is possible given what we know. I show that the principle of induction (PI) is necessary and sufficient for logical reliability in what I call simple enumerative induction. This answer to Hume's problem rests on interpreting PI as a normative claim justified by a non-empirical epistemic means-ends argument. In such an argument, a rule of inference is shown by mathematical or logical proof to promote a specified epistemic end. Since the proof concerning PI and logical reliability is not based on inductive reasoning, this argument avoids the circularity that Hume argued was inherent in any attempt to justify PI.
When we are faced with a choice among acts, but are uncertain about the true state of the world, we may be uncertain about the acts’ “choiceworthiness”. Decision theories guide our choice by making normative claims about how we should respond to this uncertainty. If we are unsure which decision theory is correct, however, we may remain unsure of what we ought to do. Given this decision-theoretic uncertainty, meta-theories attempt to resolve the conflicts between our decision theories, but we may be unsure which meta-theory is correct as well. This reasoning can launch a regress of ever-higher-order uncertainty, which may leave one forever uncertain about what one ought to do. There is, fortunately, a class of circumstances under which this regress is not a problem. If one holds a cardinal understanding of subjective choiceworthiness, and accepts certain other criteria, one’s hierarchy of metanormative uncertainty ultimately converges to precise definitions of “subjective choiceworthiness” for any finite set of acts. If one allows the metanormative regress to extend to the transfinite ordinals, the convergence criteria can be weakened further. Finally, the structure of these results applies straightforwardly not just to decision-theoretic uncertainty, but also to other varieties of normative uncertainty, such as moral uncertainty.
I develop a critique of Hume’s infamous problem of induction based upon the idea that the principle of induction (PI) is a normative rather than descriptive claim. I argue that Hume’s problem is a false dilemma, since the PI might be neither a “relation of ideas” nor a “matter of fact” but rather what I call a contingent normative statement. In this case, the PI could be justified by a means-ends argument in which the link between means and end is established solely by deductive reasoning. The means-ends argument is an elementary result from formal learning theory: you must be willing to make inductive generalizations if you want to be logically reliable in the types of examples Hume described. This justification of the PI avoids both horns of Hume’s dilemma. Since no contradiction ensues from rejecting logical reliability as an aim, the PI is contingent. Yet since the proof concerning the PI and logical reliability is not based on inductive reasoning, there is no threat of circularity.
I describe a new problem for metaethical constructivism. The problem arises when agents make conflicting judgments, so that the constructivist is implausibly committed to denying they have any reason for any of the available options. The problem is illustrated primarily with reference to Sharon Street’s version of constructivism. Several possible solutions to the problem are explained and rejected.
Kant and Hegel share a common foundational idea: they believe that the authority of normative claims can be justified only by showing that these norms are self-imposed or autonomous. Yet they develop this idea in strikingly different ways: Kant argues that we can derive specific normative claims from the formal idea of autonomy, whereas Hegel contends that we use the idea of freedom not to derive, but to assess, the specific normative claims ensconced in our social institutions and practices. Exploring these claims, I argue that each approach encounters certain difficulties. I then argue that Nietzsche develops a theory of normative authority that avoids these potential difficulties. Nietzsche’s theory proceeds, in part, by reconciling the most compelling aspects of the Kantian and Hegelian accounts—aspects that have seemed, to many interpreters, to be incompatible. The resultant theory generates a unique and fruitful account of normative authority.
To date, the wealth of literature on abortion has been dedicated to resolving the question of its legal and moral permissibility in relation to the fetus and pregnant woman as subjects of moral standing. This has created a dichotomised way of talking about abortion chiefly in terms of conflicting rights; as a ‘wrongful’ versus ‘legitimate’ form of killing. The tension between this individualistic rights-based discourse and the ‘ethic of care’ to which women are often expected to conform in their moral deliberations gives rise to a stigmatising picture of a woman who aborts as ‘callous’ or ‘selfish’. Males who share in abortion decisions are rarely subject to the same type of criticism. In this paper I aim to conceptualise the impact of normative femininity and social judgement on women’s capacity for moral self-determination in abortion contexts within the framework of an injustice. I will do so by examining women’s discursive participation with respect to abortion, and by analysing the impact that abortion stigma has on women’s moral agency and lived experience. This will enable me to demonstrate how women may be uniquely subject to a hermeneutical injustice, which in abortion contexts gives way to a phenomenological injustice.
In his popular film An Inconvenient Truth (Guggenheim 2006), Al Gore identifies anthropogenic climate change as the most menacing threat to the future of life on Earth, and he describes that threat specifically as a moral problem: an uninhabitable planetary environment would be an immoral outcome of human behavior. That outcome must be avoided, which means, he argues, that a low-carbon trajectory for future human development must be charted without delay. His call-to-action then advocates, among many other things, fast-tracking clean energy technologies and galvanizing the necessary international political will to get climate change under control. Gore is quite correct to identify climate change as an …
Laudan's 'normative naturalism' claims to account for the success of science by construing theories and other claims as methodological rules interpreted as defeasible hypothetical imperatives for securing cognitive ends. We ask two questions regarding the adequacy for medicine of Laudan's meta-methodology. First, although Laudan denies that general aims can be assigned to a science, we show that this is not the case for medicine. Second, we argue that Laudan's account yields mixed results as a tool for evaluating methodological rules in medicine. These shortcomings call into question the adequacy of normative naturalism as a meta-methodology for science.
In this essay we argue that if the covering-law model of moral justification is correct, Hume's "is"-"ought" paragraph calls the possibility of a justifiable theory of moral obligation into doubt. In the first section we delineate Hume's doubts. In the second section we develop a skeptical solution to those doubts.
Normative judgments involve two gradable features. First, the judgments themselves can come in degrees; second, the strength of reasons represented in the judgments can come in degrees. Michael Smith has argued that non-cognitivism cannot accommodate both of these gradable dimensions. The degrees of a non-cognitive state can stand in for degrees of judgment, or degrees of reason strength represented in judgment, but not both. I argue that (a) there are brands of non-cognitivism that can surmount Smith’s challenge, and (b) any brand of non-cognitivism that has even a chance of solving the Frege–Geach Problem and some related problems involving probabilistic consistency can also thereby solve Smith’s problem. Because only versions of non-cognitivism that can solve the Frege–Geach Problem are otherwise plausible, all otherwise plausible versions of non-cognitivism can meet Smith’s challenge.
I will argue that Raz’s defense of the doctrine of the guise of the good rests on an over-intellectualized account of action. Raz holds that attributing evaluative beliefs to agents is justified on explanatory grounds. I argue that this account fails to do justice to the first-personal character of action explanation. Moreover, I will argue that Raz’s account of action has its roots in his restrictive and over-intellectualized understanding of normative explanation. I will suggest that we can have a more plausible understanding of the guise of the good that is not over-intellectualized, if we adopt a broader understanding of normative explanation.
I describe a number of views in which metaphysical fundamentality is accounted for in normative terms. After describing many different ways this key idea could be developed, I turn to developing the idea in one specific way. After all, the more detailed the proposal, the easier it is to assess whether it works. The rough idea is that what it is for a property to be fundamental is for it to be prima facie obligatory to theorize in terms of that property.
In this article we distinguish between philosophical bioethics (PB), descriptive policy-oriented bioethics (DPOB) and normative policy-oriented bioethics (NPOB). We argue that finding an appropriate methodology for combining empirical data and moral theory depends on what the aims of the research endeavour are, and that, for the most part, this combination is only required for NPOB. After briefly discussing the debate around the is/ought problem, and suggesting that both sides of this debate are misunderstanding one another (i.e. one side treats it as a conceptual problem, whilst the other treats it as an empirical claim), we outline and defend a methodological approach to NPOB based on work we have carried out on a project exploring the normative foundations of paternal rights and responsibilities. We suggest that given the prominent role already played by moral intuition in moral theory, one appropriate way to integrate empirical data and philosophical bioethics is to utilize empirically gathered lay intuition as the foundation for ethical reasoning in NPOB. The method we propose involves a modification of a long-established tradition of non-intervention in qualitative data gathering, combined with a form of reflective equilibrium where the demands of theory and data are given equal weight and a pragmatic compromise reached.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
In this paper I focus on the question of whether nanotechnology is giving rise to new ethical problems rather than merely to new instances of old ethical problems. Firstly, I demonstrate how important it is to make a general distinction between new ethical problems and new instances of old problems. Secondly, I propose one possible way of interpreting the distinction and offer a definition of a “new ethical problem”. Thirdly, I examine whether there is good reason to claim that nanotechnology is giving or will give rise to new ethical problems. My conclusion is that there are no new ethical problems in nanotechnology but merely new occurrences of certain well-known types of ethical problems. Fourthly, I consider three arguments by van de Poel (NanoEthics 2:25–28, 2008) which contradict my conclusion. I argue that my negative conclusion is consistent with the claim that certain ethical issues arising in nanotechnology may require new normative standards or new analytical tools. I conclude that it is likely that a number of ethical issues arising in nanotechnology will have a considerable impact on our ethical theories and values – and that ethical reflection on nanotechnology will be one of the mother lodes of future ethical research – in spite of the fact that no ethical problem in nanoethics will actually be “new”.
My dissertation is a systematic defense of the claim that what it is to be rational is to correctly respond to the reasons you possess. The dissertation is split into two parts, each consisting of three chapters. In Part I--Coherence, Possession, and Correctly Responding--I argue that my view has important advantages over popular views in metaethics that tie rationality to coherence (ch. 2), defend a novel view of what it is to possess a reason (ch. 3), and defend a novel view about what it is to act and hold attitudes for normative reasons (ch. 4). In Part II--Foundationalism, Deception, and The Importance of Being Rational--I argue that foundationalists about epistemic rationality should think that the foundational beliefs are held for sufficient reasons (ch. 5), argue that my view solves the New Evil Demon problem for externalism (and solves a related and underappreciated problem) (ch. 6), and argue that my view can vindicate the thought that we ought to be rational (ch. 7).
According to fitting-attitude (FA) accounts of value, X is of final value if and only if there are reasons for us to have a certain pro-attitude towards it. FA accounts supposedly face the wrong kind of reason (WKR) problem. The WKR problem is the problem of revising FA accounts to exclude so-called wrong kind of reasons. And wrong kind of reasons are reasons for us to have certain pro-attitudes towards things that are not of value. I argue that the WKR problem can be dissolved. I argue that (A) the view that there are wrong kind of reasons for the pro-attitudes that figure in FA accounts conflicts with the conjunction of (B) an extremely plausible and extremely weak connection between normative and motivating reasons and (C) an extremely plausible generality constraint on the reasons for pro-attitudes that figure in FA accounts. I argue that when confronted with this trilemma we should give up (A) rather than (B) or (C), because there is a good explanation of why (A) seems so plausible but is in fact false, but there is no good explanation of why (B) and (C) seem so plausible but are in fact false.
A driving force behind much of the literature on the non-identity problem is the widely shared intuition that actions or policies that change who comes into existence don't, as a result, lose their morally problematic features. We hypothesize that this intuition isn’t entirely shared by the general public, which might have widespread implications concerning how to best motivate public support for large-scale, identity-affecting policies like those involved in climate change mitigation. To test our hypothesis, we ran a behavioural economic experiment, a version of the well-known dictator game, designed to mimic the public's morally loaded behaviour in identity-affecting choice problems. As predicted, we found that the public does seem to behave more selfishly when making identity-affecting choices. We further hypothesize that one possible mechanism involved in this change is the notion of harm that plays a role in the public’s normatively loaded decision making. So, during our study, we also solicited subjects’ attitudes about harm, in particular about whether the “dictators” had done harm through their choices. The data suggest that substantial portions of the population each employ distinct notions of harm in their normative thinking, which raises some puzzling features about the public’s normative thinking that call out for further empirical examination.
In the 1960s, Lars Bergström and Hector-Neri Castañeda noticed a problem with alternative acts and consequentialism. The source of the problem is that some performable acts are versions of other performable acts and the versions need not have the same consequences as the originals. Therefore, if all performable acts are among the agent’s alternatives, act consequentialism yields deontic paradoxes. A standard response is to restrict the application of act consequentialism to certain relevant alternative sets. Many proposals are based on some variation of maximalism, that is, the view that act consequentialism should only be applied to maximally specific acts. In this paper, I argue that maximalism cannot yield the right prescriptions in some cases where one can either (i) form at once the intention to do an immediate act and form at a later time the intention to do a succeeding act or (ii) form at once the intention to do both acts and where the consequences of (i) and (ii) differ in value. Maximalism also violates normative invariance, that is, the condition that if an act is performable in a situation, then the normative status of the act does not depend on what acts are performed in the situation. Instead of maximalism, I propose that the relevant alternatives should be the exhaustive combinations of acts the agent can jointly perform without performing any other act in the situation. In this way, one avoids the problem of act versions without violating normative invariance. Another advantage is that one can adequately differentiate between possibilities like (i) and (ii).
A difficulty is exposed in Allan Gibbard's solution to the embedding/Frege-Geach problem, namely that the difference between refusing to accept a normative judgement and accepting its negation is ignored. This is shown to undermine the whole solution.
The Negation Problem states that expressivism has insufficient structure to account for the various ways in which a moral sentence can be negated. We argue that the Negation Problem does not arise for expressivist accounts of all normative language but arises only for the specific examples on which expressivists usually focus. In support of this claim, we argue for the following three theses: 1) a problem that is structurally identical to the Negation Problem arises in non-normative cases, and this problem is solved once the hidden quantificational structure involved in such cases is uncovered; 2) the terms ‘required’, ‘permissible’, and ‘forbidden’ can also be analyzed in terms of hidden quantificational structure, and the Negation Problem disappears once this hidden structure is uncovered; 3) the Negation Problem does not arise for normative language that has no hidden quantificational structure. We conclude that the Negation Problem is not really a problem about expressivism at all but is rather a feature of the quantificational structure of the required, permitted, and forbidden.
The purpose of this paper is to contribute to the ongoing analyses that aim to confront the problem of marked variation. Negatively marked differences are those natural variations that are used to cleave human beings into different categories (e.g., of disablement, of medicalized pathology, of subnormalcy, or of deviance). The problem of marked variation is: why are some rather than other variations marked as epistemically or culturally significant or as diagnostic of pathology, and what is the epistemic background that makes these—rather than other variations—marked as subnormal? For Wilson (2018a), critical examination of the problem of marked variation is central to understanding the epistemology of medicalized pathology that made the history of eugenics possible. My aim is to explore the role marked variation plays in eugenic and other problematic classifications and the inferences they appear to license. I pay particular attention to the normative valuations of marked variations, how these valuations affect the inferences that are made by others about those possessing the variation, and how those possessing the variation perceive themselves. In the final sections, I illustrate this by critically discussing three putative kinship conceptions of race. I rely on these to extend the scope of the puzzle of marked variation from the context of historic and current markings of an individual’s variation as disability in the eugenics movement to historic and current markings for assigning putative racial ascriptions to individuals and groups. Lastly, I suggest that the problem of marked variation is a problem that looms over any epistemic account that is dependent upon sorting or classifying.
Machery & Mallon [The moral psychology handbook (pp. 3–47). New York, NY: Oxford University Press, 2010] argue that existing evidence does not support the claim that moral cognition, understood as a specific form of normative cognition, is a product of evolution. Instead, they suggest that the evidence only supports the more modest claim that a general capacity for normative cognition evolved. They argue that if this is the case then the prospects for evolutionary debunking arguments are bleak. A debunking argument which relied on the fact that normative cognition in general evolved might threaten all areas of normative belief, including the epistemic norms upon which the argument relies. For the sake of argument, we accept their claim that specifically moral cognition did not evolve. However, we reject their contention that this critically undermines evolutionary debunking arguments of morality. A number of strategies are available to solve what we call the “containment problem,” or how to effectively debunk morality without thereby debunking normative cognition tout court. Furthermore, debunking arguments need not rely on the claim that normative cognition in general evolved. So long as at least some aspects of moral cognition have evolved, this may be sufficient to support an evolutionary debunking argument against many of our moral beliefs. Thus, even if Machery & Mallon are right that specifically moral cognition did not evolve, research in evolutionary psychology may have radical implications for moral philosophy.
Ethical questions have traditionally been approached through conceptual analysis. Inspired by the rapid advance of modern brain imaging techniques, however, some ethical questions appear in a new light. For example, hotly debated trolley dilemmas have recently been studied by psychologists and neuroscientists alike, arguing that their findings can support or debunk moral intuitions that underlie those dilemmas. Resulting from the wedding of philosophy and neuroscience, neuroethics has emerged as a novel interdisciplinary field that aims at drawing conclusive relationships between neuroscientific observations and normative ethics. A major goal of neuroethics is to derive normative ethical conclusions from the investigation of neural and psychological mechanisms underlying ethical theories, as well as moral judgments and intuitions. The focus of this article is to shed light on the structure and functioning of neuroethical arguments of this sort, and to reveal particular methodological challenges that lie concealed therein. We discuss the methodological problem of how one can—or, as the case may be, cannot—validly infer normative conclusions from neuroscientific observations. Moreover, we raise the issue of how preexisting normative ethical convictions threaten to invalidate the interpretation of neuroscientific data, and thus arrive at question-begging conclusions. Nonetheless, this is not to deny that current neuroethics rightly presumes that moral considerations about actual human lives demand empirically substantiated answers. Therefore, in conclusion, we offer some preliminary reflections on how the discussed methodological challenges can be met.
A theory of normative reasons for action faces the fundamental challenge of accounting for the dual nature of reasons. On the one hand, some reasons appear to depend on, and vary with, desires. On the other hand, some reasons appear categorical in the sense of being desire‐independent. However, it has turned out to be difficult to provide a theory that accommodates both these aspects. Internalism is able to account for the former aspect but has difficulty accounting for the latter, whereas externalism is vulnerable to the reverse problem. In this paper, I outline an ecumenical view that consists of two parts: First, I defend a distinction between requiring reasons and justifying reasons in terms of their different connections to rationality. Second, I put forward a subjectivist, procedural, view of rationality. The ecumenical alternative, I argue, is able to accommodate the mentioned duality within a unified theory. In outlining this view, I also suggest that it has a number of other significant advantages.