This paper does four things. First, it lays out an orthodox position on reasons and defeaters. Second, it argues that this orthodox position is mistaken about “undercutting” defeaters. Third, it presents an unpublished thought experiment due to Dorothy Edgington. Finally, it uses that thought experiment to motivate a new approach to undercutting defeaters.
According to Duncan Pritchard, there are two kinds of radical sceptical problem: the closure-based problem and the underdetermination-based problem. He argues that distinguishing these two problems leads to a set of desiderata for an anti-sceptical response, and that the way to meet all of these desiderata is by supplementing a form of Wittgensteinian contextualism with disjunctivist views about factivity. I agree that an adequate response should meet most of the initial desiderata Pritchard puts forward, and that some version of Wittgensteinian contextualism shows the most promise as a starting point for this, but I argue, contra Pritchard, that the addition of disjunctivism is unnecessary and potentially counter-productive. If we draw on lessons from Michael Williams's inferential contextualism then it is both possible, and preferable, to meet the most important of Pritchard's desiderata, undercutting both closure-based and underdetermination-based sceptical problems in a unified way, without the need to resort to disjunctivism.
In my () I argued that a central component of mathematical practice is that published proofs must be “transferable” — that is, they must be such that the author's reasons for believing the conclusion are shared directly with the reader, rather than requiring the reader to essentially rely on testimony. The goal of this paper is to explain this requirement of transferability in terms of a more general norm on defeat in mathematical reasoning that I will call “convertibility”. I begin by discussing two types of epistemic defeat: “rebutting” and “undercutting”. I give examples of both of these kinds of defeat from the history of mathematics. I then argue that an important requirement in mathematics is that published proofs be detailed enough to allow the conversion of rebutting defeat into undercutting defeat. Finally, I show how this sort of convertibility explains the requirement of transferability, and contributes to the way mathematics develops via the pattern Lakatos () referred to as “lemma incorporation”.
This essay discusses a prominent definition of universal concomitance in the Nyāya School of Classical Indian Philosophy. This definition holds that universal concomitance is equivalent to the absence of undercutting conditions. It will be shown that though this definition seems to be inadequate, there is an auxiliary condition that may be added which makes the equivalence between universal concomitance and the absence of undercutting conditions deductively correct. It will then be shown that this auxiliary condition fits well into the Nyāya foundations of logic and that, furthermore, this auxiliary condition does not unreasonably restrict the applicability of the definition of universal concomitance as the absence of undercutting conditions. Hence, the conclusion is that this interpretation is a good candidate for how the definition of universal concomitance as the absence of undercutting conditions should be understood.
Particularists in ethics emphasize that the normative is holistic, and invite us to infer with them that it therefore defies generalization. This has been supposed to present an obstacle to traditional moral theorizing, to have striking implications for moral epistemology and moral deliberation, and to rule out reductive theories of the normative, making it a bold and important thesis across the areas of normative theory, moral epistemology, moral psychology, and normative metaphysics. Though particularists emphasize the importance of the holism of the normative, it is not something that they have been able to explain. In this paper I’ll show how to use a small number of simple and, I’ll argue, independently compelling assumptions in order to both predict and explain the holistic features of the normative with respect to the non-normative. The basic idea of the paper is simple. It is that normative claims are holistic because they are general, rather than because they defy generalization.
In a recent article, Joel Pust argued that direct inferences based on reference properties of differing arity are incommensurable, and so direct inference cannot be used to resolve the Sleeping Beauty problem. After discussing the defects of Pust's argument, I offer reasons for thinking that direct inferences based on reference properties of differing arity are commensurable, and that we should prefer direct inferences based on logically stronger reference properties, regardless of arity.
It is widely supposed that, in Hilary Putnam’s phrase, there are no “ready-made objects” (Putnam 1982; cf. Putnam 1981, Ch. 3). Instead the objects we consider real are partly of our own making: we carve them out of the world (or out of experience). The usual reason for supposing this lies in the claim that there are available to us alternative ways of “dividing reality” into objects (to quote the title of Hirsch 1993), ways which would afford us every bit as much practical and cognitive mastery as we now possess. Hence there is no warrant for supposing that the objects recognized by such alternative schemes are any less real than the objects we actually consider real—unless we are to appeal to a “God’s-eye perspective”, which virtually no one wants to do. The reasonable conclusion, many philosophers suppose, is that any system of objects exists only relative to a particular conceptual scheme or system of predicates. Our system of objects is, in this sense, partly of our own making. But this claim should not be heard as a reaffirmation of the idealism of Fichte or Husserl. It is more accurate to take it merely as a claim about sameness and difference among the objects of the world. It is the claim that sameness and difference, whether at the level of kinds or of individual objects, do not obtain mind-independently, but only relative to a conceptual scheme or predicate system (see, e.g., Putnam 1981, pp. 53-54). That is, whether two objects are of the same kind, and whether one and the same individual object now exists at such-and-such...
There is an important class of conditionals whose assertibility conditions are not given by the Ramsey test but by an inductive extension of that test. Such inductive Ramsey conditionals fail to satisfy some of the core properties of plain conditionals. Associated principles of nonmonotonic inference should not be assumed to hold generally if interpretations in terms of induction or appeals to total evidence are not to be ruled out.
Modal Arguments in the philosophy of mind purport to show that the body is not necessary for a human person’s existence. The key premise in these arguments is generally supported with thought experiments. I argue that Christians endorsing the Doctrine of the Resurrection have good reason to deny this key premise. Traditional Christianity affirms that eschatological human existence is an embodied existence in the very bodies we inhabited while alive. This raises the Resurrection Question: why would God go through the trouble of resurrecting those bodies? I argue that adequately answering this question requires giving up on Modal Arguments within the philosophy of mind.
I make a case for distinguishing clearly between subjective and objective accounts of undercutting defeat and for rejecting a hybrid view that takes both subjective and objective elements to be relevant to whether or not a belief is defeated. Moderate subjectivists claim that taking a belief to be defeated is sufficient for the belief to be defeated; subjectivist idealists add that if an idealised agent takes a belief to be defeated then the belief is defeated. Subjectivist idealism evades some of the objections levelled against moderate subjectivism but can be shown to yield inconsistent results in some cases. Both subjectivisms should be rejected. We should be objectivists regarding undercutting defeat. This requirement, however, is likely to be problematic for a popular interpretation of evolutionary debunking arguments in metaethics, as it can be shown that existing objectivist accounts of defeat do not support such arguments. I end by discussing the constraints on developing such an account.
Scott Sturgeon has recently challenged Pollock’s account of undercutting defeaters. The challenge involves three primary contentions: that the account is both too strong and too weak; that undercutting defeaters exercise their power to defeat only in conjunction with higher-order beliefs about the basis of the lower-order beliefs whose justification they target; and that, since rebutting defeaters exercise their power to defeat in isolation, rebutting and undercutting defeaters work in fundamentally different ways. My goal is to reject each of these contentions. I maintain that Sturgeon fails to show that Pollock’s account of undercutting defeaters is either too strong or too weak, his own account of how undercutting defeaters exercise their power to defeat is both too strong and too weak, and his claim that rebutting and undercutting defeaters work in fundamentally different ways is mistaken.
In this paper, I argue that the two most common views of how to respond rationally to peer disagreement, the Total Evidence View (TEV) and the Equal Weight View (EWV), are both inadequate for substantial reasons. TEV does not issue the correct intuitive verdicts about a number of hypothetical cases of peer disagreement. The same is true for EWV. In addition, EWV does not give any explanation of what is rationally required of agents on the basis of sufficiently general epistemic principles. I will then argue that there is a genuine alternative to both views, the Preemption View (PV), that fares substantially better in both respects. I will give an outline and a detailed defense of PV in the paper.
Typically, expert judgments are regarded by laypeople as highly trustworthy. However, expert assertions that strike the layperson as obviously false or outrageous seem to give one a perfect reason to dispute that the judgment manifests expertise. In this paper, I will defend four claims. First, I will deliver an argument in support of the preemption view of expert judgments, according to which we should not rationally use our own domain-specific reasons in the face of expert testimony. Secondly, I will argue that the preemption view does not leave room for rejecting an expert judgment simply because it is outrageous. Thirdly, I will argue that outrageous expert judgments are ambiguous. Whereas some of them should be rationally rejected by laypeople, others are true and rationally acceptable. So, being outrageous is not, in and of itself, a reason to reject the judgment. Finally, I will argue that there are resources available to the preemption view that enable the layperson to reject some but not all outrageous expert judgments. This is sufficient to overcome the challenge from outrageous expert judgments to the preemption view.
Several philosophers have recently argued that evolutionary considerations undermine the justification of all objectivist moral beliefs by implying a hypothetical disagreement: had our evolutionary history been different, our moral beliefs would conflict with the moral beliefs of our counterfactual selves. This paper aims at showing that evolutionary considerations do not imply epistemically relevant moral disagreement. In nearby scenarios, evolutionary considerations imply tremendous moral agreement. In remote scenarios, evolutionary considerations do not entail relevant disagreement with our epistemic peers, on either a narrow or a broad conception of peerhood. In conclusion, evolutionary considerations do not reveal epistemically troubling kinds of disagreement. Anti-objectivists need to look elsewhere to fuel their sceptical argument.
A familiar criticism of religious belief starts from the claim that a typical religious believer holds the particular religious beliefs she does just because she happened to be raised in a certain cultural setting rather than some other. This claim is commonly thought to have damaging epistemological consequences for religious beliefs, and one can find statements of an argument in this vicinity in the writings of John Stuart Mill and more recently Philip Kitcher, although the argument is seldom spelled out very precisely. This paper begins by offering a reconstruction of an argument against religious beliefs from cultural contingency, which proceeds by way of an initial argument to the unreliability of the processes by which religious beliefs are formed, whose conclusion is then used to derive two further conclusions, one which targets knowledge and the other, rationality. Drawing upon recent work in analytic epistemology, I explore a number of possible ways of spelling out the closely related notions of accidental truth, epistemic luck, and reliability upon which the argument turns. I try to show that the renderings of the argument that succeed in securing the sceptical conclusion against religious beliefs also threaten scepticism about various sorts of beliefs besides religious beliefs.
How should we evaluate an argument in which two witnesses independently testify to some claim? In practice, the testimony of the second witness would be taken to corroborate that of the first to some extent, thereby boosting the plausibility of the first argument from testimony. But does that commit the fallacy of double counting, because the second testimony is already taken as independent evidence supporting the claim? Perhaps the corroboration effect should be considered illogical, since each premise should be seen as representing a separate reason in a convergent argument for accepting the claim as plausible. In this paper, we tackle the problem using argumentation schemes and argument diagramming. We examine a number of examples, and come up with two hypotheses that offer methods of analyzing and evaluating this kind of evidence.
Support is canvassed for a novel solution to the sceptical problem regarding our knowledge of the external world. Key to this solution is the claim that what initially looks like a single problem is in fact two logically distinct problems. In particular, there are two putative sceptical paradoxes in play here, which each trade on distinctive epistemological theses. It is argued that the ideal solution to radical scepticism would thus be a biscopic proposal—viz., a two-pronged, integrated, undercutting treatment of both putative sceptical paradoxes. A particular biscopic proposal is then explored which brings together two apparently opposing anti-sceptical theses: the Wittgensteinian account of the structure of rational evaluation and epistemological disjunctivism. It is argued that each proposal enables us to gain a purchase on one, but only one, aspect of the two-sided sceptical problem. Furthermore, it is argued that these proposals are not only compatible positions, but also mutually supporting and advanced in the same undercutting spirit. A potential cure is thus offered for epistemic angst.
Exclusionary defeat is Joseph Raz’s proposal for understanding the more complex, layered structure of practical reasoning. Exclusionary reasons are widely appealed to in legal theory and consistently arise in many other areas of philosophy. They have also been subject to a variety of challenges. I propose a new account of exclusionary reasons based on their justificatory role, rejecting Raz’s motivational account and especially contrasting exclusion with undercutting defeat. I explain the appeal and coherence of exclusionary reasons by appeal to commonsense value pluralism and the intermediate space of public policies, social roles, and organizations. We often want our choices to have a certain character or instantiate a certain value, and for a choice to do so, it can be based only on a restricted set of reasons. Exclusion explains how pro tanto practical reasons can be disqualified from counting towards a choice of a particular kind without being outweighed or undercut.
An abstract framework for structured arguments is presented, which instantiates Dung's ('On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games', Artificial Intelligence, 77, 321–357) abstract argumentation frameworks. Arguments are defined as inference trees formed by applying two kinds of inference rules: strict and defeasible rules. This naturally leads to three ways of attacking an argument: attacking a premise, attacking a conclusion and attacking an inference. To resolve such attacks, preferences may be used, which leads to three corresponding kinds of defeat: undermining, rebutting and undercutting defeats. The nature of the inference rules, the structure of the logical language on which they operate and the origin of the preferences are, apart from some basic assumptions, left unspecified. The resulting framework integrates work of Pollock, Vreeswijk and others on the structure of arguments and the nature of defeat and extends it in several respects. Various rationality postulates are proved to be satisfied by the framework, and several existing approaches are proved to be a special case of the framework, including assumption-based argumentation and DefLog.
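The Dung-style machinery that such structured frameworks instantiate can be sketched compactly. The following is a minimal illustration only, not the paper's framework: given a set of abstract arguments and a binary defeat relation, it computes the grounded extension by iterating the characteristic function, i.e. by repeatedly collecting the arguments defended by the current set. All names here are my own.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: a set of argument labels.
    attacks: a set of (attacker, target) pairs.
    """
    def acceptable(arg, defenders):
        # arg is acceptable w.r.t. a set S iff every attacker of arg
        # is itself attacked by some member of S.
        return all(any((d, attacker) in attacks for d in defenders)
                   for (attacker, target) in attacks if target == arg)

    extension = set()
    while True:
        # One application of the characteristic function.
        new = {a for a in arguments if acceptable(a, extension)}
        if new == extension:
            return extension
        extension = new


# Example: A attacks B, B attacks C. A is unattacked, so it is in;
# A defends C against B, so C is in; B is out.
args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```

In a framework of the kind the abstract describes, the attack pairs would be generated from the structure of arguments (premise, conclusion, or inference attacks filtered by preferences) rather than given directly, but the fixpoint computation is the same.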
Given plausible assumptions about the nature of evidence and undercutting defeat, many believe that the force of the evidential problem of evil depends on sceptical theism being false: if evil is evidence against God, then seeing no justifying reason for some particular instance of evil must be evidence for it truly being pointless. I think this dialectic is mistaken. In this paper, after drawing a lesson about fallibility and induction from the preface paradox, I argue that the force of the evidential problem of evil is compatible with sceptical theism being true. More exactly, I argue that the collection of apparently pointless evil in the world provides strong evidence for there being truly pointless evil, despite the fact that seeing no justifying reason for some particular instance of evil is no evidence whatsoever for it truly being pointless. I call this result the paradox of evil.
The goal of this paper is to frame a theory of reasons (what they are, how they support actions or conclusions) using the tools of default logic. After sketching the basic account of reasons as provided by defaults, I show how it can be elaborated to deal with two more complicated issues: first, situations in which the priority relation among defaults, and so reasons as well, is itself established through default reasoning; second, the treatment of undercutting defeat and exclusionary reasons. Finally, and by way of application, I show how the resulting account can shed some light on Jonathan Dancy's argument from reason holism to a form of extreme particularism in moral theory.
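The contrast between undercutting and ordinary rebutting that this account treats can be made concrete with a toy weighing model. This sketch is entirely my own illustration, not the paper's default-logic machinery: rebutting reasons are weighed against one another, while an undercutter removes a reason from the weighing altogether rather than outweighing it.

```python
def verdict(reasons, undercut):
    """Toy verdict from weighed reasons.

    reasons: dict mapping a reason's name to a (polarity, weight) pair,
             where polarity is "pro" or "con".
    undercut: set of reason names whose force has been undercut.
    """
    # Undercut reasons contribute nothing; the rest are weighed.
    total = sum(weight if polarity == "pro" else -weight
                for name, (polarity, weight) in reasons.items()
                if name not in undercut)
    return "act" if total > 0 else "refrain"


reasons = {"promise": ("pro", 2), "cost": ("con", 1)}
print(verdict(reasons, set()))        # promise outweighs cost: "act"
print(verdict(reasons, {"promise"}))  # promise undercut, cost stands: "refrain"
```

The point of the toy model is the structural one: undercutting "promise" does not add any weight to the "con" side, it simply disqualifies "promise" from counting, which is also the shape of the exclusionary behaviour the paper discusses.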
Once symbolized by a burning armchair, experimental philosophy has in recent years shifted away from its original hostility to traditional methods. Starting with a brief historical review of the experimentalist challenge to traditional philosophical practice, this chapter looks at research undercutting that challenge, and at ways in which experimental work has evolved to complement and strengthen traditional approaches to philosophical questions.
Imagine that you are a farmer living in Kenya. Though you work hard to sell your produce to foreign markets, you find yourself unable to do so because affluent countries subsidize their own farmers and erect barriers to trade, like tariffs, thereby undercutting you in the marketplace. As a consequence of their actions you languish in poverty despite your very best efforts. Or, imagine that you are a peasant whose livelihood depends on working in the fields in Indonesia and you are forcibly displaced from your land by a biofuels company because corrupt government officials have stolen the land and sold it to the company. Or, suppose that you work on the coast of Bangladesh but find that increasingly you are unable to cope with salination resulting from sea-level rise, a product of anthropogenic climate change. These, I believe, are cases of global injustice. My question is: What are those who bear the brunt of global injustice entitled to do to secure their, and other people’s, entitlements? Often people focus on the duties of the affluent to respect and uphold the rights of the disadvantaged. This is understandable. But there is a striking omission. Rarely do people analyze, or even mention, what those who lack their entitlements are entitled to do to secure their own rights. This is my focus in this paper. More specifically, I examine what agents are entitled to do to change the underlying social, economic and political practices and structures in a more just direction.
Justification depends on context: even if E on its own justifies H, still it might fail to justify in the context of D. This sort of effect, epistemologists think, is due to defeaters, which undermine or rebut a would-be justifier. I argue that there is another fundamental sort of contextual feature, disqualification, which doesn't involve rebuttal or undercutting, and which cannot be reduced to any notion of screening-off. A disqualifier makes some would-be justifier otiose, as direct testimony sometimes does to distal testimony, and as manifestly decisive evidence might do to gratuitous evidence on the same team. Basing a belief on disqualified evidence, moreover, is distinctively irrational. One is not necessarily irresponsible. Instead one is turning down a free upgrade to a sleeker, stabler basis for one's beliefs. Such an upgrade would prevent wastes of epistemic effort, since someone who bases her belief on a disqualified proposition E will need to remember E and rethink her belief should E ever be defeated. The upgrade might also reduce reliance on unwieldy evidence, if E is relevant only thanks to some labyrinthine argument; and if even ideal agents should doubt their ability to follow such arguments, even they should care about disqualifiers.
According to the standard account of actions and their explanations, intentional actions are actions done because the agent has a certain desire/belief pair that explains the action by rationalizing it. Any explanation of intentional action in terms of an appetite or occurrent emotion is hence assumed to be elliptical, implicitly appealing to some appropriate belief. In this paper, I challenge this assumption with respect to the "arational" actions of my title: a significant subset of the set of intentional actions explained by occurrent emotion. These actions threaten the standard account, not only by forming a recalcitrant set of counterexamples to it, but also, as we shall see, by undercutting the false semantic theory that holds that account in place.
The “Embryo Rescue Case” (ERC) refers to a thought experiment that is used to argue against the view that embryos have a right to life (i.e. are persons). I will argue that cognitive science undermines the intuition elicited by the ERC; I will show that whether or not embryos have a right to life, our mental tools will make it very difficult to believe that embryos have said right. This suggests that the intuition elicited by the ERC is not truth indicative. The upshot of this is that we have an undercutting defeater for our intuition that embryos do not have a right to life.
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy.
Evolutionary Debunking Arguments purport to show that our moral beliefs do not amount to knowledge because these beliefs are “debunked” by the fact that our moral beliefs are, in some way, the product of evolutionary forces. But there is a substantial gap in this argument between its main evolutionary premise and the skeptical conclusion. What is it, exactly, about the evolutionary origins of moral beliefs that would create problems for realist views in metaethics? I argue that evolutionary debunking arguments are best understood as offering up defeaters for our moral beliefs. Moreover, the defeater in question is a paradigmatic instance of undercutting defeat. If anything is an undercutting defeater, then learning about the evolutionary origins of our moral beliefs is a defeater for those beliefs.
Weisberg () provides an argument that neither conditionalization nor Jeffrey conditionalization is capable of accommodating the holist’s claim that beliefs acquired directly from experience can suffer undercutting defeat. I diagnose this failure as stemming from the fact that neither conditionalization nor Jeffrey conditionalization gives any advice about how to rationally respond to theory-dependent evidence, and I propose a novel updating procedure that does tell us how to respond to evidence like this. This holistic updating rule yields conditionalization as a special case in which our evidence is entirely theory independent. Contents: 1 Introduction; 2 Conditionalization; 3 Holism and Conditionalization; 4 A Holistic Update; 5 HCondi and Dutch Books; 6 Commutativity and Learning about Background Theories; 6.1 Commutativity; 6.2 Learning about Background Theories; 7 In Summation.
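For contrast with the holistic rule the abstract describes, plain conditionalization itself can be stated in a few lines. The sketch below is illustrative only (the function name and the finite-worlds setting are mine, not the paper's): on learning evidence E with certainty, each world's new credence is its old credence conditional on E, which is the rule that allegedly cannot accommodate undercutting defeat of experiential beliefs.

```python
def conditionalize(prior, evidence):
    """Bayesian conditionalization over a finite set of worlds.

    prior: dict mapping each world to its probability (summing to 1).
    evidence: the set of worlds compatible with what was learned.
    """
    p_e = sum(p for world, p in prior.items() if world in evidence)
    if p_e == 0:
        raise ValueError("cannot conditionalize on zero-probability evidence")
    # Worlds ruled out by the evidence go to 0; the rest are renormalized.
    return {world: (p / p_e if world in evidence else 0.0)
            for world, p in prior.items()}


# Two coin tosses, uniform prior; learn that the first toss landed heads.
prior = {"HH": 0.25, "HT": 0.25, "TH": 0.25, "TT": 0.25}
post = conditionalize(prior, {"HH", "HT"})
print(post["HH"])  # 0.5
```

The rigidity visible here, that credences conditional on the evidence never change, is exactly what makes it hard for the rule to model evidence that undermines the evidential force of an earlier experience, which is the gap the paper's holistic update is designed to fill.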
When prices for basic commodities increase following a disaster, these price increases are often condemned as ‘price gouging.’ In this paper, I discuss what moral wrongs, if any, are most reasonably ascribed to accusations of price gouging. This discussion keeps in mind both practical and moral defenses of price increases following disasters. I first examine existing anti-gouging legislation for commonalities in their definitions of gouging and then present arguments in favor of the permissibility of gouging, focusing on the economic benefits of price increases following disasters. I argue that gouging takes the form of a specific failure of respect for persons by undercutting equitable access to essential goods. While I discuss anti-gouging legislation throughout this paper, my aim is to give an account of the moral wrongs associated with gouging rather than guidance for developing morally defensible anti-gouging legislation.
The goal of this feature is to demonstrate that distributive justice is a flawed theory of self-defense and must be rejected, thus undercutting the argument that torture can be justified as self-defense.
Debunking arguments are arguments that seek to undermine a belief or doctrine by exposing its causal origins. Two prominent proponents of such arguments are the utilitarians Joshua Greene and Peter Singer. They draw on evidence from moral psychology, neuroscience, and evolutionary theory in an effort to show that there is something wrong with how deontological judgments are typically formed and with where our deontological intuitions come from. They offer debunking explanations of our emotion-driven deontological intuitions and dismiss complex deontological theories as products of confabulatory post hoc rationalization. Through my discussion of Greene and Singer’s empirically informed debunking of deontology, I introduce the distinction between two different types of debunking arguments. The first type of debunking argument operates through regular undercutting defeat, whereas the second type relies on higher-order evidence. I argue that the latter type of debunking argument, of which the argument from confabulation is an example, is objectionably sloppy and therefore inadmissible in academic discussion.
In philosophy, as in many other disciplines and domains, stable disagreement among peers is a widespread and well-known phenomenon. Our intuitions about paradigm cases, e.g. Christensen's Restaurant Case, suggest that in such controversies suspension of judgment is rationally required. This would prima facie suggest a robust suspension of judgment in philosophy. But we are still lacking a deeper theoretical explanation of why and under what conditions suspension is rationally mandatory. In the first part of this paper I will focus on this question. After a critical survey of some recent alternative approaches (diversity as a threat to reliability, decision problems, acquisition of undercutting defeaters), I will argue that in fact discovering disagreement with an opponent provides me with a rebutting defeater, but only if some further non-trivial conditions are satisfied, among them my acknowledging her as my reliability peer. In the second part of the paper I will explore in more detail the skeptical implications this account has for philosophy. Here, I will defend two claims. First, skepticism about philosophy is mandatory only if the relevant peerness assumption can be justified. Second, in philosophy there is no basis available that would support the relevant peerness assumption. If this is correct, we are not forced into skepticism about philosophy and may rationally retain our philosophical beliefs even in the face of controversy.
This paper aims to provide a starting point for a non-representational approach to language. It will do so by undoing some of the reifying tendencies that are at the heart of the ontology of scientific psychology. Although non-representational theories are beginning to emerge, they remain committed to giving explanations in terms of ontological structures that are independent of human activity. If they maintain this commitment it is unlikely that they will displace representationalism in domains such as language. By following some of Wittgenstein’s remarks on language, I explain the phenomenon of reification by carefully considering the formative, situational flow of language—thus without invoking representations. In this way, the paper sketches a direction of approach for a non-representational theory of language, undercutting the most important assumptions that justify an explanatory ontology devoid of human activity.
According to operator theories, "if" denotes a two-place operator. According to restrictor theories, "if" doesn't contribute an operator of its own but instead merely restricts the domain of some co-occurring quantifier. The standard arguments (Lewis 1975, Kratzer 1986) for restrictor theories have it that operator theories (but not restrictor theories) struggle to predict the truth conditions of quantified conditionals like:
(1) a. If John didn't work at home, he usually worked in his office.
    b. If John didn't work at home, he must have worked in his office.
Gillies (2010) offers a context-shifty conditional operator theory that predicts the right truth conditions for epistemically modalized conditionals like (1b), thus undercutting one standard argument for restrictor theories. I explore how we might generalize Gillies' theory to adverbially quantified conditionals like (1a) and deontic conditionals, and argue that a natural generalization of Gillies' theory -- following his strategy for handling epistemically modalized conditionals -- won't work for these other conditionals because a crucial assumption that epistemic modal bases are closed (used to neutralize the epistemic quantification contributed by "if") doesn't have plausible analogs in these other domains.
In the standard thought experiments, dualism strikes many philosophers as true, including many non-dualists. This ‘striking’ generates prima facie justification: in the absence of defeaters, we ought to believe that things are as they seem to be, i.e. we ought to be dualists. In this paper, I examine several proposed undercutting defeaters for our dualist intuitions. I argue that each proposal fails, since each rests on a false assumption, or requires empirical evidence that it lacks, or overgenerates defeaters. By the end, our prima facie justification for dualism remains undefeated. I close with one objection concerning the dialectical role of rebutting defeaters, and I argue that the prospects for a successful rebutting defeater for our dualist intuitions are dim. Since dualism emerges undefeated, we ought to believe it.
Call “epistocracy” a political regime in which the experts, those who know best, rule; and call “the epistocratic claim” the assertion that the experts’ superior knowledge or reliability is “a warrant for their having political authority over others.” Most of us oppose epistocracy and think the epistocratic claim is false. But why is it mistaken? Contemporary discussions of this question focus on two answers. According to the first, expertise could, in principle, be a warrant for authority. What bars the successful justification of epistocracy is that the relevant kind of expertise does not exist in politics (either because there are no procedure-independent standards of right or wrong in politics, or because, though such standards exist, no one knows better than anyone else what they require). This skeptical position comes, however, at a significant cost: Without the assumption that some political decisions are better than others, and that some people know better than others what these decisions are, it is difficult to make sense of much of our political practice, including how we criticize politicians and choose among candidates for office. The second answer accepts that there is expertise of the relevant sort in politics. It argues, however, that such expertise does not justify political authority because political justifications are subject to special “acceptability requirements.” Since claims to expertise are normally not acceptable to all qualified (reasonable etc.) points of view, they cannot function as premises in the justification of political authority, and the epistocratic claim fails.
Yet as a number of critics have pointed out, this (broadly Rawlsian) strategy faces significant problems: it is at least unclear whether the strategy in fact bars all epistocratic conclusions; whether there is any principled way to draw the distinction between qualified and non-qualified points of view on which it depends; and whether principled defenses for it are available and internally consistent. This article outlines a third and previously largely overlooked answer, which resists the epistocratic claim without either denying the existence of expertise in politics or invoking special acceptability requirements for political justifications. The only plausible argument for the epistocratic claim, this article argues, focuses on the compensatory role that the expert’s authority plays in correcting the subject’s relative unreliability or other agential shortcomings. The expert’s authority is thus justified only if the subject, by adopting a policy of obeying the expert’s directives, does not face problems that are very similar to the ones that the expert’s authority was meant to solve in the first place. If, for instance, the subject finds it no easier to reliably identify what the expert’s directives require of him than to reliably assess and act on the reasons with which the expert is meant to help him, then the expert’s directives lack the compensatory value that would justify her authority. But if some widely accepted empirical conjectures about politics in a pluralistic political community are correct, then citizens normally either have no reason to adopt a policy of obeying experts, or the experts with regard to whom they have reason to adopt such a policy differ, so that no expert has the kind of general authority over the polity that we associate with political rule. (We may call this the “non-compensation argument” against epistocracy.)
The argument is important both because it helps shed light on the proper relation between authority and expertise in general, and because it shows that we can normally reject the epistocratic claim without adopting either the skeptical or the Rawlsian strategy, thus undercutting whatever support these views derive from the mistaken perception that they are necessary for resisting the threat of epistocracy. Finally, because the anti-epistocratic constraints it introduces apply only to justifications of the subjects’ duty to obey, but not to the existence or activities of political institutions as such, the non-compensation argument can accommodate our anti-epistocratic intuitions without excluding epistemic considerations from the design of political institutions more generally.
The externalist says that your evidence could fail to tell you what evidence you do or do not have. In that case, it could be rational for you to be uncertain about what your evidence is. This is a kind of uncertainty which orthodox Bayesian epistemology has difficulty modeling. For, if externalism is correct, then the orthodox Bayesian learning norms of conditionalization and reflection are inconsistent with each other. I recommend that an externalist Bayesian reject conditionalization. In its stead, I provide a new theory of rational learning for the externalist. I defend this theory by arguing that its advice will be followed by anyone whose learning dispositions maximize expected accuracy. I then explore some of this theory’s consequences for the rationality of epistemic akrasia, peer disagreement, undercutting defeat, and uncertain evidence.
Radical skepticism relies on the hypothesis that one could be completely cut off from the external world. In this paper, I argue that this hypothesis can be rationally motivated by means of a conceivability argument. Subsequently, I submit that this conceivability argument does not furnish a good reason to believe that one could be completely cut off from the external world. To this end, I show that we cannot adequately conceive scenarios that verify the radical skeptical hypothesis. Attempts to do so fall prey to one or another of three pitfalls: they end up incomplete, reveal a deep contradiction or recreate a non-skeptical hypothesis. I use these results to improve upon Pritchard’s recent attempt at undercutting radical skepticism.
Central to Bataille’s critique of Hegel is his reading in ‘Hegel, Death, and Sacrifice’ of ‘negation’ and of ‘lordship and bondage’ in the Phenomenology of Spirit. Whereas Hegel invokes negation as inclusive of death, Bataille points out that negation in the dynamic of lordship and bondage must of necessity be representational rather than actual. Derrida, in ‘From Restricted to General Economy’ sees in Bataille’s perspective an undercutting of the overall Hegelian project consonant with his own ongoing deconstruction of Hegelian sublation. I argue that not only does Hegel fail to adequately pursue his own best advice to ‘tarry with the negative,’ but Bataille and Derrida’s critique misconstrues the relation between sublation and dialectic in Hegel’s work. I explicate Adorno’s ‘negative dialectic’ by way of alternative both to Hegelian speculative dialectic and to its Bataillean–Derridean deconstruction.
Uncivil behavior by leaders may be viewed as an effective way to motivate employees. However, supervisor incivility, as a form of unethical supervision, may be undercutting employees’ ability to do their jobs. We investigate linkages between workplace incivility and perceived work ability (PWA), a variable that captures employees’ appraisals of their ability to continue working in their jobs. We draw upon the appraisal theory of stress and social identity theory to examine incivility from supervisors as an antecedent to PWA, and to investigate job involvement and grit as joint moderators of this association. Results from data collected in two samples of working adults provided evidence for three-way interactions in relation to PWA. Among employees with high levels of grit, there was no significant relation between supervisor incivility and PWA, regardless of employee job involvement. However, we found some evidence that for those low in grit, having high job involvement was associated with a stronger relationship between supervisor incivility and PWA. Findings attest to the importance of unethical supervisor behavior, showing the potential for supervisor incivility to erode PWA, as well as the importance of grit as a potential buffer.
Although the study of reasons plays an important role in both epistemology and moral philosophy, little attention has been devoted to the question of how, exactly, reasons interact to support the actions or conclusions they do. In this book, John F. Horty attempts to answer this question by providing a precise, concrete account of reasons and their interaction, based on the logic of default reasoning. The book begins with an intuitive, accessible introduction to default logic itself, and then argues that this logic can be adapted to serve as a foundation for a concrete theory of reasons. Horty then shows that the resulting theory helps to explain how the interplay among reasons can determine what we ought to do by developing two different deontic logics, capturing two different intuitions about moral conflicts. In the central part of the book, Horty elaborates the basic theory to account for reasoning about the strength of our own reasons, and also about the related concepts of undercutting defeaters and exclusionary reasons. The theory is illustrated with an application to particularist arguments concerning the role of principles in moral theory. The book concludes by introducing a pair of issues new to the philosophical literature: the problem of determining the epistemic status of conclusions supported by separate but conflicting reasons, and the problem of drawing conclusions from sets of reasons that can vary arbitrarily in strength, or importance.
This is a critical commentary on Pritchard's book Epistemic Angst. In Section 2, I present the closure-based radical skeptical paradox. Then in Section 3, I sketch Pritchard’s undercutting response to this paradox. Finally, in Section 4, I put forward two concerns about Pritchard’s response and I also propose a reading of hinge commitments, the ability reading, that might put some pressure on Pritchard’s own reading of these commitments.
This article presents a formal dialogue game for adjudication dialogues. Existing AI & law models of legal dialogues and argumentation-theoretic models of persuasion are extended with a neutral third party, to give a more realistic account of the adjudicator’s role in legal procedures. The main feature of the model is a division into an argumentation phase, where the adversaries plead their case and the adjudicator has a largely mediating role, and a decision phase, where the adjudicator decides the dispute on the basis of the claims, arguments and evidence put forward in the argumentation phase. The model allows for explicit decisions on admissibility of evidence and burden of proof by the adjudicator in the argumentation phase. Adjudication is modelled as putting forward arguments, in particular undercutting and priority arguments, in the decision phase. The model reconciles logical aspects of burden of proof induced by the defeasible nature of arguments with dialogical aspects of burden of proof as something that can be allocated by explicit decisions on legal grounds.
In Naked, Krista K. Thomason offers a multi-faceted account of shame, covering its nature as an emotion, its positive and negative roles in moral life, its association with violence, and its provocation through invitations to shame, public shaming, and stigmatization. Along the way, she reflects on a range of examples drawn from literature, memoirs, journalism, and her own imagination. She also considers alternative views at length, draws a wealth of important distinctions, and articulates many of the most intuitive objections to her own view in order to defend it more thoroughly. As such, the book’s subtitle, The Dark Side of Shame and Moral Life, undersells its scope and ambition. This is an exploration not just of shame’s dark side but also a kaleidoscopic appreciation of both the nature and the (dis)value of shame and shaming. Somewhat undercutting this breadth, Thomason relies heavily on Kantian intuitions about equal respect and recognition for persons and their dignity; in several key arguments, she tells us to disregard predictable and systematic consequences of emotions, practices, and institutions so that we can better focus on their constitutive or internal aspects. Of course, every philosopher inevitably brings theoretical commitments to bear when writing about moral psychology, but non-Kantian readers should be forewarned that — despite the fact that Thomason says that she does “not assume any particular moral theory” — her ethical conclusions about shaming and stigmatizing are likely to be plausible only to those who are already snugly tied into a web of “Kantian commitments” (p. 9).
Skeptical theism is the view that human knowledge and understanding are severely limited, compared to that of the divine. The view is deployed as an undercutting defeater for evidential arguments from evil. However, skeptical theism has broader skeptical consequences than those for the argument from evil. The epistemic principles of this skeptical creep are identified and shown to be on the road to global skepticism.