Phenomenal Conservatism (the view that an appearance that p gives one prima facie justification for believing that p) is a promising and popular internalist theory of epistemic justification. Despite its popularity, it faces numerous objections and challenges. For instance, epistemologists have argued that Phenomenal Conservatism is incompatible with Bayesianism, is afflicted by bootstrapping and cognitive penetration problems, does not guarantee that epistemic justification is a stable property, does not provide an account of defeat, and is not a complete theory of epistemic justification. This book shows that Phenomenal Conservatism is actually immune to some of these problems, though not all of them. Accordingly, it explores the prospects of integrating Phenomenal Conservatism with Explanationism (the view that epistemic justification is a matter of explanatory relations between one’s evidence and the propositions supported by that evidence). The resulting theory, Phenomenal Explanationism, has advantages over Phenomenal Conservatism and Explanationism taken on their own. Phenomenal Explanationism is a highly unified, comprehensive internalist theory of epistemic justification that delivers on the promises of Phenomenal Conservatism while avoiding its pitfalls.
I review recent work on Phenomenal Conservatism, the position introduced by Michael Huemer according to which, if it seems to a subject S that P, then in the absence of defeaters S thereby has some degree of justification for believing P.
According to Jim Pryor’s dogmatism, when you have an experience with content p, you often have prima facie justification for believing p that doesn’t rest on your independent justification for believing any proposition. Although dogmatism has intuitive appeal and seems to have antisceptical bite, it has been targeted by various objections. This paper principally aims to answer Roger White’s objections, according to which dogmatism is inconsistent with the Bayesian account of how evidence affects our rational credences. If this were true, the rational acceptability of dogmatism would be seriously questionable. I respond that these objections don’t get off the ground because they assume that our experiences and our introspective beliefs that we have those experiences have the same evidential force, whereas the dogmatist is uncommitted to this assumption. I also consider the question of whether dogmatism has antisceptical bite. I suggest that the answer turns on whether or not the Bayesian can determine the priors of hypotheses and conjectures on the grounds of their extra-empirical virtues. If the Bayesian can do so, the thesis that dogmatism has antisceptical bite is probably false.
Philosophers have claimed that education aims at fostering disparate epistemic goals. In this paper we focus on an important segment of this debate: the exchange between Alvin Goldman and Harvey Siegel. Goldman claims that education is essentially aimed at producing true beliefs. Siegel contends that education is essentially aimed at fostering both true beliefs and, independently, critical thinking and rational belief. Although we find Siegel’s position intuitively more plausible than Goldman’s, we also find Siegel’s defence of it wanting. We suggest novel argumentative strategies that draw on Siegel’s own arguments but look to us more promising.
This book examines phenomenal conservatism, one of the most influential and promising internalist conceptions of non-inferential justification debated in current epistemology and philosophy of mind. It also explores the significance of the findings of this examination for the general debate on epistemic justification. According to phenomenal conservatism, non-inferential justification rests on seemings or appearances, conceived of as experiences endowed with propositional content. Phenomenal conservatism states that if it appears to S that P, then in the absence of defeaters S thereby has some justification for believing that P. This view provides the basis for foundationalism and many ordinary epistemic practices. This book sheds new light on phenomenal conservatism by assessing objections to it and examining the epistemological merits and advantages attributed to it. In a nutshell, phenomenal conservatism is actually compatible with Bayesian reasoning, and it is unaffected by bootstrapping problems and by challenges that appeal to the cognitive penetrability of perception. Nevertheless, appearance-based justification proves unstable or elusive, and its antisceptical bite is more limited than expected. These difficulties could be surmounted if phenomenal conservatism were integrated with a theory of inferential justification. The book will appeal to scholars and postgraduates in epistemology and philosophy of mind who are interested in the rational roles of appearances.
Crispin Wright maintains that the architecture of perceptual justification is such that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this principle doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without weakening the support available for his conception of the architecture of perceptual justification.
Transmission of justification across inference is a valuable and indeed ubiquitous epistemic phenomenon in everyday life and science. It is thanks to the phenomenon of epistemic transmission that inferential reasoning is a means for substantiating predictions of future events and, more generally, for expanding the sphere of our justified beliefs or reinforcing the justification of beliefs that we already entertain. However, transmission of justification is not without exceptions. As a few epistemologists have come to realise, more or less trivial forms of circularity can prevent justification from transmitting from p to q even if one has justification for p and one is aware of the inferential link from p to q. In interesting cases this happens because one can acquire justification for p only if one has independent justification for q. In such cases the justification for q cannot depend on the justification for p and the inferential link from p to q, as genuine transmission would require. The phenomenon of transmission failure seems to shed light on philosophical puzzles, such as Moore's proof of a material world and McKinsey's paradox, and it plays a central role in various philosophical debates. For this reason it continues to receive increasing attention.
In this paper we focus on transmission and failure of transmission of warrant. We identify three individually necessary and jointly sufficient conditions for transmission of warrant, and we show that their satisfaction grounds a number of interesting epistemic phenomena that have not been sufficiently appreciated in the literature. We then scrutinise Wright’s analysis of transmission failure and improve on extant readings of it. Nonetheless, we present a Bayesian counterexample showing that Wright’s analysis is partly inconsistent with our analysis of warrant transmission and prima facie defective. We conclude by exploring three alternative lines of reply: developing a more satisfactory account of transmission failure, which we outline; dismissing the Bayesian counterexample by rejecting some of its assumptions; and reinterpreting Wright’s analysis to make it immune to the counterexample.
Crispin Wright has given an explanation of how a first-time warrant can fall short of transmitting across a known entailment. Formal epistemologists have struggled to turn Wright’s informal explanation into cogent Bayesian reasoning. In this paper, I analyse two Bayesian models of Wright’s account proposed, respectively, by Samir Okasha and Jake Chandler. I argue that both formalizations are unsatisfactory for different reasons, and I lay down a third Bayesian model that appears to me to capture the valid kernel of Wright’s explanation. After this, I consider a recent development in Wright’s account of transmission failure. Wright suggests that his condition sufficient for transmission failure of first-time warrant also suffices for transmission failure of supplementary warrant. I propose an interpretation of Wright’s suggestion that shields it from objections. I then lay down a fourth Bayesian framework that provides a simplified model of the unified explanation of transmission failure envisaged by Wright.
Perceptual experience is one of our fundamental sources of epistemic justification—roughly, justification for believing that a proposition is true. The ability of perceptual experience to justify beliefs can nevertheless be questioned. This article focuses on an important challenge that arises from countenancing that perceptual experience is cognitively penetrable. The thesis of the cognitive penetrability of perception states that the content of perceptual experience can be influenced by prior or concurrent psychological factors, such as beliefs, fears and desires. Advocates of this thesis could, for instance, claim that your desire to have a tall daughter might influence your perception, so that she appears to you to be taller than she is. Although the cognitive penetrability of perception is a controversial empirical hypothesis, it does not appear implausible. The possibility that it is true has been adduced to challenge positions that maintain that perceptual experience has inherent justifying power. This article presents some of the most influential positions in the contemporary literature about whether cognitive penetration would undermine perceptual justification and why it would or would not do so. Some sections of this article focus on phenomenal conservatism, a popular conception of epistemic justification that more than any other has been targeted with objections that adduce the cognitive penetrability of experience.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
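The kind of confirmation transmission at issue here can be illustrated with a toy probability model. The sketch below is my own construction, not taken from the paper: it assumes Shogenji’s coherence measure C(H1, H2) = P(H1 ∧ H2) / (P(H1) · P(H2)), with coherence understood as C > 1 and confirmation understood in the standard Bayesian incremental sense, P(H | E) > P(H).

```python
# Toy model (illustrative only): evidence E confirms H1; H1 and H2 are
# coherent by Shogenji's measure; and E's confirmation is "transmitted" to H2.

# Joint distribution over worlds (E, H1, H2) -> probability (sums to 1).
joint = {
    (1, 1, 1): 0.30, (1, 1, 0): 0.05, (1, 0, 1): 0.05, (1, 0, 0): 0.10,
    (0, 1, 1): 0.10, (0, 1, 0): 0.05, (0, 0, 1): 0.05, (0, 0, 0): 0.30,
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(w))

p_e = prob(lambda w: w[0] == 1)        # P(E)  = 0.50
p_h1 = prob(lambda w: w[1] == 1)       # P(H1) = 0.50
p_h2 = prob(lambda w: w[2] == 1)       # P(H2) = 0.50

p_h1_given_e = prob(lambda w: w[0] == 1 and w[1] == 1) / p_e   # 0.70
p_h2_given_e = prob(lambda w: w[0] == 1 and w[2] == 1) / p_e   # 0.70

# Shogenji's coherence measure: C(H1, H2) = P(H1 & H2) / (P(H1) * P(H2)).
shogenji = prob(lambda w: w[1] == 1 and w[2] == 1) / (p_h1 * p_h2)  # 1.60

print(p_h1_given_e > p_h1)   # True: E confirms H1
print(shogenji > 1)          # True: H1 and H2 are coherent
print(p_h2_given_e > p_h2)   # True: confirmation is transmitted to H2
```

In this distribution E raises the probability of H1 from 0.50 to 0.70, H1 and H2 cohere (C = 1.6 > 1), and E also raises the probability of H2; the actual theorem, of course, specifies general conditions under which this pattern must hold.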
This paper criticizes phenomenal conservatism––the influential view according to which a subject S’s seeming that P provides S with defeasible justification for believing P. I argue that phenomenal conservatism, if true at all, has a significant limitation: seeming-based justification is elusive because S can easily lose it by just reflecting on her seemings and speculating about their causes––I call this the problem of reflective awareness. Because of this limitation, phenomenal conservatism doesn’t have all the epistemic merits attributed to it by its advocates. If true, phenomenal conservatism would constitute a unified theory of epistemic justification capable of giving everyday epistemic practices a rationale, but it wouldn’t afford us the means of an effective response to the sceptic. Furthermore, phenomenal conservatism couldn’t form the general basis for foundationalism.
Coherentism in epistemology has long suffered from a lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how the beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
R. Feldman defends a general principle about evidence the slogan form of which says that ‘evidence of evidence is evidence’. B. Fitelson considers three renditions of this principle and contends they are all falsified by counterexamples. Against both Feldman and Fitelson, J. Comesaña and E. Tal show that the third rendition––the one actually endorsed by Feldman––isn’t affected by Fitelson’s counterexamples, but only because it is trivially true and thus uninteresting. Tal and Comesaña defend a fourth version of Feldman’s principle, which––they claim––has not yet been shown false. Against Tal and Comesaña, I show that this new version of Feldman’s principle is false.
This paper considers two novel Bayesian responses to a well-known skeptical paradox. The paradox consists of three intuitions: first, given appropriate sense experience, we have justification for accepting the relevant proposition about the external world; second, we have justification for expanding the body of accepted propositions through known entailment; third, we do not have justification for accepting that we are not disembodied souls in an immaterial world deceived by an evil demon. The first response we consider rejects the third intuition and proposes an explanation of why we have a faulty intuition. The second response, which we favor, accommodates all three intuitions; it reconciles the first and the third intuition by the dual component model of justification, and defends the second intuition by distinguishing two principles of epistemic closure.
John Hardwig has championed the thesis (NE) that evidence that an expert EXP has evidence for a proposition P, constituted by EXP’s testimony that P, is not evidence for P itself, where evidence for P is generally characterized as anything that counts towards establishing the truth of P. In this paper, I first show that (NE) yields tensions within Hardwig’s overall view of epistemic reliance on experts and makes it imply unpalatable consequences. Then, I use the Shogenji–Roche theorem of the transitivity of incremental confirmation to show that (NE) is false if a natural Bayesian formalization of the above notion of evidence is implemented. I concede that Hardwig could resist my Bayesian objection if he re-interpreted (NE) as a more precise thesis that only applies to community-focused evidence. I argue, however, that this precisification, while diminishing the philosophical relevance of (NE), wouldn’t settle the tensions internal to Hardwig’s views.
Recent work in epistemology shows that the claim that coherence is truth-conducive – in the sense that, given suitable ceteris paribus conditions, more coherent sets of statements are always more probable – is dubious and possibly false. From this it does not follow that coherence is a useless notion in epistemology and philosophy of science. Dietrich and Moretti (Philosophy of Science 72(3): 403–424, 2005) have proposed a formal account of how coherence is confirmation-conducive—that is, of how the coherence of a set of statements facilitates the confirmation of those statements. This account is grounded in two confirmation-transmission properties that are satisfied by some of the measures of coherence recently proposed in the literature. These properties explicate everyday and scientific uses of coherence. In this paper, I review the main findings of Dietrich and Moretti (2005) and define two evidence-gathering properties that are satisfied by the same measures of coherence and constitute further ways in which coherence is confirmation-conducive. At least one of these properties vindicates important applications of the notion of coherence in everyday life and in science.
In this paper we argue that Michael Huemer’s phenomenal conservatism—the internalist view according to which our beliefs are prima facie justified if based on how things seem or appear to us to be—doesn’t fall afoul of Michael Bergmann’s dilemma for epistemological internalism. We start by showing that the thought experiment that Bergmann adduces to conclude that phenomenal conservatism is vulnerable to his dilemma misses its target. After that, we distinguish between two ways in which a mental state can contribute to the justification of a belief: the direct way and the indirect way. We identify a straightforward reason for claiming that justification contributed indirectly is subject to Bergmann’s dilemma. Then we show that the same reason doesn’t extend to the claim that justification contributed directly is subject to Bergmann’s dilemma. As phenomenal conservatism is the view that seemings or appearances contribute justification directly, we infer that Bergmann’s contention that his dilemma applies to it is unmotivated. In the final part, we suggest that our line of response to Bergmann can be used to shield other types of internalist justification from Bergmann’s objection. We also propose that seeming-grounded justification can be combined with justification of one of these types to form the basis of a promising version of internalist foundationalism.
According to Jim Pryor’s dogmatism, if you have an experience as if P, you acquire immediate prima facie justification for believing P. Pryor contends that dogmatism validates Moore’s infamous proof of a material world. Against Pryor, I argue that if dogmatism is true, Moore’s proof turns out to be non-transmissive of justification according to one of the senses of non-transmissivity defined by Crispin Wright. This type of non-transmissivity doesn’t deprive dogmatism of its apparent antisceptical bite.
I focus on a key argument for global external world scepticism resting on the underdetermination thesis: the argument according to which we cannot know any proposition about our physical environment because sense evidence for it equally justifies some sceptical alternative (e.g. the Cartesian demon conjecture). I contend that the underdetermination argument can go through only if the controversial thesis that conceivability is per se a source of evidence for metaphysical possibility is true. I also suggest a reason to doubt that conceivability is per se a source of evidence for metaphysical possibility, and thus to doubt the underdetermination argument.
We focus on issues of learning assessment from the point of view of an investigation of philosophical elements in teaching. We contend that assessment of concept possession at school based on ordinary multiple-choice tests might be ineffective because it overlooks aspects of human rationality illuminated by Robert Brandom’s inferentialism––the view that conceptual content largely coincides with the inferential role of linguistic expressions used in public discourse. More particularly, we argue that multiple-choice tests at school might fail to accurately assess the possession of a concept or the lack of it, for they only check the written outputs of the pupils who take them, without detecting the inferences those pupils actually endorse or use. We suggest that school tests would gain reliability if they enabled pupils to make explicit the reasons for their answers or the inferences they use, so as to contribute to what Brandom calls the game of giving and asking for reasons. We explore the possibility of putting this suggestion into practice by deploying two-tier multiple-choice tests.
Phenomenal conservatism (PC) is the internalist view that non-inferential justification rests on appearances. PC’s advocates have recently argued that seemings are also required to explain inferential justification. The most general and developed view to this effect is Huemer’s (2016) theory of inferential seemings (ToIS). Moretti (2018) has shown that PC is affected by the problem of reflective awareness, which makes PC open to sceptical challenges. In this paper I argue that ToIS is afflicted by a version of the same problem and is thus hostage to inferential scepticism. I also suggest a possible response on behalf of ToIS’s advocates.
Matthew McGrath has recently challenged all theories that allow for immediate perceptual justification. This challenge comes by way of arguing for what he calls the “Looks View” of visual justification, which entails that our visual beliefs that are allegedly immediately justified are in fact mediately justified based on our independent beliefs about the looks of things. This paper shows that McGrath’s arguments are unsound or, at the very least, that they do not cause genuine concern for the species of dogmatism called “Phenomenal Explanationism”, recently introduced and defended by Kevin McCain and Luca Moretti.
This is the introduction to Moretti, Luca and Nikolaj Pedersen (eds), Non-Evidentialist Epistemology. Brill. Contributors: N. Ashton, A. Coliva, J. Kim, K. McCain, A. Meylan, L. Moretti, S. Moruzzi, J. Ohlorst, N. Pedersen, T. Piazza, L. Zanetti.
Three confirmation principles discussed by Hempel are the Converse Consequence Condition, the Special Consequence Condition and the Entailment Condition. Le Morvan (1999) has argued that, when the choice is among just these three principles, it is the Converse Consequence Condition that must be rejected. In this paper, I make this argument definitive. In doing so, I provide an indisputable proof that the simple conjunction of the Converse Consequence Condition and the Entailment Condition yields a disastrous consequence.
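For readers unfamiliar with why this conjunction is regarded as disastrous, a standard textbook-style reconstruction (not necessarily the proof given in the paper) can be set out as follows, where EC is the Entailment Condition (if E entails H, then E confirms H) and CCC is the Converse Consequence Condition (if E confirms H and H' entails H, then E confirms H'):

```latex
\begin{align*}
&\text{(1) } E \vDash E \lor X
  && \text{(propositional logic)}\\
&\text{(2) } E \text{ confirms } E \lor X
  && \text{(1, Entailment Condition)}\\
&\text{(3) } X \vDash E \lor X
  && \text{(propositional logic)}\\
&\text{(4) } E \text{ confirms } X
  && \text{(2, 3, Converse Consequence Condition)}
\end{align*}
```

Since E and X were arbitrary, any evidence would confirm any statement whatsoever.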
Hypothetico-deductivists have struggled to develop qualitative confirmation theories that do not raise the so-called tacking by disjunction paradox. In this paper, I analyze the difficulties yielded by the paradox, and I argue that the hypothetico-deductivist solutions given by Gemes (1998) and Kuipers (2000) are questionable because they do not fit this analysis. I then show that the paradox yields no difficulty for the Bayesian who appeals to the Total Evidence Condition. I finally argue that the same strategy is unavailable to the hypothetico-deductivist.
Brogaard and Salerno (2005, Nous, 39, 123–139) have argued that antirealism resting on a counterfactual analysis of truth is flawed because it commits a conditional fallacy by entailing the absurdity that there is necessarily an epistemic agent. Brogaard and Salerno's argument relies on a formal proof built upon the criticism of two parallel proofs given by Plantinga (1982, "Proceedings and Addresses of the American Philosophical Association", 56, 47–70) and Rea (2000, "Nous," 34, 291–301). If this argument were conclusive, antirealism resting on a counterfactual analysis of truth should probably be abandoned. I argue however that the antirealist is not committed to a controversial reading of counterfactuals presupposed in Brogaard and Salerno's proof, and that the antirealist can in principle adopt an alternative reading that makes this proof invalid. My conclusion is that no reductio of antirealism resting on a counterfactual analysis of truth has yet been provided.
Dummett has recently presented his most mature and sophisticated version of justificationism, i.e. the view that meaning and truth are to be analysed in terms of justifiability. In this paper, I argue that this conception does not resolve a difficulty that also affected Dummett’s earlier version of justificationism: the problem that large tracts of the past continuously vanish as their traces in the present dissipate. Since Dummett’s justificationism is essentially based on the assumption that the speaker has limited (i.e. non-idealized) cognitive powers, no further refinement of this position is likely to settle the problem of the vanishing past.
Crispin Wright’s entitlement theory holds that we have non-evidential justification for accepting propositions of a general type––which Wright calls “cornerstones”––that enables us to acquire justification for believing other propositions––those that we take to be true on the grounds of ordinary evidence. Entitlement theory is meant by Wright to deliver a forceful response to the sceptic who argues that we cannot justify ordinary beliefs. I initially focus on strategic entitlement, which is one of the types of entitlement that Wright has described in more detail. I suggest that it is dubious that we are strategically entitled to accept cornerstones. After this, I focus on entitlement in general. I contend that, in important cases, non-evidential justification for accepting cornerstones cannot secure evidential justification for believing ordinary propositions. My argument rests on a probabilistic regimentation of the so-called “leaching problem”.
One type of argument for sceptical paradox proceeds by making a case that a certain kind of metaphysically “heavyweight” or “cornerstone” proposition is beyond all possible evidence and hence may not be known or justifiably believed. Crispin Wright has argued that we can concede that our acceptance of these propositions is evidentially risky and still remain rationally entitled to those of our ordinary knowledge claims that are seemingly threatened by that concession. A problem for Wright’s proposal is the so-called Leaching worry: if we are merely rationally entitled to accept the cornerstones without evidence, how can we achieve evidence-based knowledge of the multitude of quotidian propositions that we think we know, which require the cornerstones to be true? This paper presents a rigorous, novel explication of this worry within a Bayesian framework, and offers the Entitlement theorist two distinct responses.
Laudan and Leplin have argued that empirically equivalent theories can elude underdetermination by resorting to indirect confirmation. Moreover, they have provided a qualitative account of indirect confirmation that Okasha has shown to be incoherent. In this paper, I develop Kukla's recent contention that indirect confirmation is grounded in the probability calculus. I provide a Bayesian rule to calculate the probability of a hypothesis given indirect evidence. I also suggest that the application of the rule presupposes the methodological relevance of non‐empirical virtues of theories. If this is true, Laudan and Leplin's strategy will not work in many cases. Moreover, without an independent way of justifying the role of non‐empirical virtues in methodology, scientific realists cannot use indirect evidence to defeat underdetermination.
In this paper, I focus on the so-called "tacking by disjunction problem": the problem that, if a hypothesis H is confirmed by a statement E, then H is confirmed by the disjunction E v F, for any statement F whatsoever. I show that the attempt to settle this difficulty made by Grimes (1990), in a paper apparently forgotten by today's methodologists, is irremediably faulty.
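The tacking-by-disjunction inference described here fails on the standard Bayesian incremental notion of confirmation (P(H | E) > P(H)), which is part of why the paradox is usually pressed against hypothetico-deductivism rather than Bayesianism. The toy distribution below is my own illustration, not taken from the paper: in it E confirms H, yet the disjunction E v F does not.

```python
# Toy model (illustrative only): E confirms H, but E v F fails to confirm H,
# so Bayesian incremental confirmation does not license tacking by disjunction.

# Joint distribution over worlds (H, E, F) -> probability (sums to 1).
joint = {
    (1, 1, 0): 0.20, (1, 0, 0): 0.25, (1, 0, 1): 0.05, (1, 1, 1): 0.00,
    (0, 1, 0): 0.05, (0, 0, 0): 0.10, (0, 0, 1): 0.30, (0, 1, 1): 0.05,
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(w))

p_h = prob(lambda w: w[0] == 1)                                # P(H) = 0.50

# P(H | E) = P(H & E) / P(E) = 0.20 / 0.30 ~ 0.667
p_h_given_e = (prob(lambda w: w[0] == 1 and w[1] == 1)
               / prob(lambda w: w[1] == 1))

# P(H | E v F) = P(H & (E v F)) / P(E v F) = 0.25 / 0.65 ~ 0.385
p_h_given_e_or_f = (prob(lambda w: w[0] == 1 and (w[1] == 1 or w[2] == 1))
                    / prob(lambda w: w[1] == 1 or w[2] == 1))

print(p_h_given_e > p_h)         # True: E confirms H
print(p_h_given_e_or_f > p_h)    # False: E v F does not confirm H
```

Here F is strongly negatively relevant to H, so disjoining it with E dilutes, and in fact reverses, E's confirmatory effect on H.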
In this chapter I introduce and analyse the tenets of phenomenal conservatism, and discuss the problem of the nature of appearances. After that, I review the asserted epistemic merits of phenomenal conservatism and the principal arguments adduced in support of it. Finally, I survey objections to phenomenal conservatism and responses by its advocates. Some of these objections will be scrutinised and appraised in the next chapters.
In this chapter I introduce the thesis that perceptual appearances are cognitively penetrable and analyse cases made against phenomenal conservatism hinging on this thesis. In particular, I focus on objections coming from the externalist reliabilist camp and the internalist inferentialist camp. I conclude that cognitive penetrability doesn’t yield lethal or substantive difficulties for phenomenal conservatism.
A subject S's belief that Q is well-grounded if and only if it is based on a reason of S that gives S propositional justification for Q. Depending on the nature of S's reason, the process whereby S bases her belief that Q on it can vary. If S's reason is non-doxastic––like an experience that Q or a testimony that Q––S will need to form the belief that Q as a spontaneous and immediate response to that reason. If S's reason is doxastic––like a belief that P––S will need to infer her belief that Q from it. The distinction between these two ways in which S's beliefs can be based on S's reasons is widely presupposed in current epistemology but––we argue in this paper––is not exhaustive. We give examples of quite ordinary situations in which a well-grounded belief of S appears to be based on S's reasons in neither of the ways described above. To accommodate these recalcitrant cases, we introduce the notion of enthymematic inference and defend the thesis that S can base a belief that Q on doxastic reasons P1, P2, …, Pn via inferring enthymematically Q from P1, P2, …, Pn.
Beall and Restall (2000, 2001, 2006) advocate a comprehensive pluralist approach to logic, which they call Logical Pluralism, according to which there is not one true logic but many equally acceptable logical systems. They maintain that Logical Pluralism is compatible with monism about metaphysical modality, according to which there is just one correct logic of metaphysical modality. Wyatt (2004) contends that Logical Pluralism is incompatible with monism about metaphysical modality. We first suggest that if Wyatt were right, Logical Pluralism would be strongly implausible, because it would invert a dependence relation that holds between the metaphysics and the logic of modality. We then argue that Logical Pluralism is prima facie compatible with monism about metaphysical modality.
In this chapter I analyse an objection to phenomenal conservatism to the effect that phenomenal conservatism is unacceptable because it is incompatible with Bayesianism. I consider a few responses to it and dismiss them as misguided or problematic. Then, I argue that this objection doesn’t go through because it rests on an implausible formalization of the notion of seeming-based justification. In the final part of the chapter, I investigate how seeming-based justification and justification based on one’s reflective belief that one has a seeming interact with one another.
In this chapter I draw the conclusions of my investigation into phenomenal conservatism. I argue that phenomenal conservatism isn’t actually plagued by the serious problems attributed to it by its opponents, but also that it doesn’t possess all the epistemic merits that its advocates think it has. I suggest that phenomenal conservatism could provide a more satisfactory account of everyday epistemic practices and a more robust response to the sceptic if it were integrated with a theory of inferential justification. I also identify questions and issues relevant to the assessment of phenomenal conservatism to be investigated in further research.
Within his overarching program aiming to defend an epistemic conception of analyticity, Boghossian (1996 and 1997) has offered a clear-cut explanation of how we can acquire a priori knowledge of logical truths and logical rules through implicit definition. The explanation is based on a special template or general form of argument. Ebert (2005) has argued that an enhanced version of this template is flawed because a segment of it is unable to transmit warrant from its premises to its conclusion. This article aims to defend the template from this objection. We provide an accurate description of the type of non-transmissivity that Ebert attributes to the template and clarify why this is a novel type of non-transmissivity. Then, we argue that Jenkins’s (2008) response to Ebert fails because it focuses on doxastic rather than propositional warrant. Finally, we rebut Ebert’s objection on Boghossian’s behalf by showing that it rests on an unwarranted assumption and is internally incoherent.
In this introduction I present the topic of the investigation carried out in this book and the central theses defended in it. I also clarify some assumptions of my research, specify the intended audience of this book, and summarize its structure.