I review recent work on Phenomenal Conservatism, the position introduced by Michael Huemer according to which, if it seems to a subject S that P, then, in the absence of defeaters, S thereby has some degree of justification for believing P.
Philosophers have claimed that education aims at fostering disparate epistemic goals. In this paper we focus on an important segment of this debate: an exchange between Alvin Goldman and Harvey Siegel. Goldman claims that education is essentially aimed at producing true beliefs. Siegel contends that education is essentially aimed at fostering both true beliefs and, independently, critical thinking and rational belief. Although we find Siegel’s position intuitively more plausible than Goldman’s, we also find Siegel’s defence of it wanting. We suggest novel argumentative strategies that draw on Siegel’s own arguments but look to us more promising.
A subject S's belief that Q is well-grounded if and only if it is based on a reason of S that gives S propositional justification for Q. Depending on the nature of S's reason, the process whereby S bases her belief that Q on it can vary. If S's reason is non-doxastic––like an experience that Q or a testimony that Q––S will need to form the belief that Q as a spontaneous and immediate response to that reason. If S's reason is doxastic––like a belief that P––S will need to infer her belief that Q from it. The distinction between these two ways in which S's beliefs can be based on S's reasons is widely presupposed in current epistemology but––we argue in this paper––is not exhaustive. We give examples of quite ordinary situations in which a well-grounded belief of S appears to be based on S's reasons in neither of the ways described above. To accommodate these recalcitrant cases, we introduce the notion of enthymematic inference and defend the thesis that S can base a belief that Q on doxastic reasons P1, P2, …, Pn by enthymematically inferring Q from P1, P2, …, Pn.
According to Jim Pryor’s dogmatism, when you have an experience with content p, you often have prima facie justification for believing p that doesn’t rest on your independent justification for believing any proposition. Although dogmatism has an intuitive appeal and seems to have an antisceptical bite, it has been targeted by various objections. This paper principally aims to answer the objections by Roger White according to which dogmatism is inconsistent with the Bayesian account of how evidence affects our rational credences. If this were true, the rational acceptability of dogmatism would be seriously questionable. I respond that these objections don’t get off the ground because they assume that our experiences and our introspective beliefs that we have experiences have the same evidential force, whereas the dogmatist is uncommitted to this assumption. I also consider the question whether dogmatism has an antisceptical bite. I suggest that the answer turns on whether or not the Bayesian can determine the priors of hypotheses and conjectures on the grounds of their extra-empirical virtues. If the Bayesian can do so, the thesis that dogmatism has an antisceptical bite is probably false.
Crispin Wright maintains that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this fact doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without endangering his epistemology of perception.
This paper considers two novel Bayesian responses to a well-known skeptical paradox. The paradox consists of three intuitions: first, given appropriate sense experience, we have justification for accepting the relevant proposition about the external world; second, we have justification for expanding the body of accepted propositions through known entailment; third, we do not have justification for accepting that we are not disembodied souls in an immaterial world deceived by an evil demon. The first response we consider rejects the third intuition and proposes an explanation of why we have a faulty intuition. The second response, which we favor, accommodates all three intuitions; it reconciles the first and the third intuition by the dual component model of justification, and defends the second intuition by distinguishing two principles of epistemic closure.
Crispin Wright has given an explanation of how a first time warrant can fall short of transmitting across a known entailment. Formal epistemologists have struggled to turn Wright’s informal explanation into cogent Bayesian reasoning. In this paper, I analyse two Bayesian models of Wright’s account respectively proposed by Samir Okasha and Jake Chandler. I argue that both formalizations are unsatisfactory for different reasons, and I lay down a third Bayesian model that appears to me to capture the valid kernel of Wright’s explanation. After this, I consider a recent development in Wright’s account of transmission failure. Wright suggests that his condition sufficient for transmission failure of first time warrant also suffices for transmission failure of supplementary warrant. I propose an interpretation of Wright’s suggestion that shields it from objections. I then lay down a fourth Bayesian framework that provides a simplified model of the unified explanation of transmission failure envisaged by Wright.
In this paper we focus on transmission and failure of transmission of warrant. We identify three individually necessary and jointly sufficient conditions for transmission of warrant, and we show that their satisfaction grounds a number of interesting epistemic phenomena that have not been sufficiently appreciated in the literature. We then scrutinise Wright’s analysis of transmission failure and improve on extant readings of it. Nonetheless, we present a Bayesian counterexample that shows that Wright’s analysis is partially incoherent with our analysis of warrant transmission and prima facie defective. We conclude by exploring three alternative lines of reply: developing a more satisfactory account of transmission failure, which we outline; dismissing the Bayesian counterexample by rejecting some of its assumptions; reinterpreting Wright’s analysis to make it immune to the counterexample.
Phenomenal conservatism (PC) is the internalist view that non-inferential justification rests on appearances. PC’s advocates have recently argued that seemings are also required to explain inferential justification. The most general and developed view to this effect is Huemer’s (2016) theory of inferential seemings (ToIS). Moretti (2018) has shown that PC is affected by the problem of reflective awareness, which makes PC open to sceptical challenges. In this paper I argue that ToIS is afflicted by a version of the same problem and is thus hostage to inferential scepticism. I also suggest a possible response on behalf of ToIS’s advocates.
I focus on a key argument for global external world scepticism resting on the underdetermination thesis: the argument according to which we cannot know any proposition about our physical environment because sense evidence for it equally justifies some sceptical alternative (e.g. the Cartesian demon conjecture). I contend that the underdetermination argument can go through only if the controversial thesis that conceivability is per se a source of evidence for metaphysical possibility is true. I also suggest a reason to doubt that conceivability is per se a source of evidence for metaphysical possibility, and thus to doubt the underdetermination argument.
R. Feldman defends a general principle about evidence the slogan form of which says that ‘evidence of evidence is evidence’. B. Fitelson considers three renditions of this principle and contends they are all falsified by counterexamples. Against both Feldman and Fitelson, J. Comesaña and E. Tal show that the third rendition––the one actually endorsed by Feldman––isn’t affected by Fitelson’s counterexamples, but only because it is trivially true and thus uninteresting. Tal and Comesaña defend a fourth version of Feldman’s principle, which––they claim––has not yet been shown false. Against Tal and Comesaña I show that this new version of Feldman’s principle is false.
Coherentism in epistemology has long suffered from lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
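The belief-individuation problem can be made vivid with a small computation. The following sketch is my own illustration, not code or figures from the paper: it applies Shogenji's coherence measure (probability of the conjunction divided by the product of the marginals) to a stipulated toy probability space, and shows that the logically equivalent sets {A, B} and {A & B, A v B} carry the same information yet receive different coherence degrees.

```python
# Toy probability space over four possible worlds (truth values of A, B).
# The numbers are stipulated for illustration only.
worlds = {(True, True): 0.3, (True, False): 0.2,
          (False, True): 0.2, (False, False): 0.3}

def prob(prop):
    """Probability of a proposition, given as a predicate on worlds."""
    return sum(p for w, p in worlds.items() if prop(w))

def shogenji(props):
    """Shogenji's measure: P(conjunction) / product of the marginals."""
    conjunction = prob(lambda w: all(f(w) for f in props))
    marginals = 1.0
    for f in props:
        marginals *= prob(f)
    return conjunction / marginals

A = lambda w: w[0]
B = lambda w: w[1]

set1 = [A, B]                            # {A, B}
set2 = [lambda w: A(w) and B(w),         # {A & B, A v B}: logically
        lambda w: A(w) or B(w)]          # equivalent to set1 as a whole

print(shogenji(set1))  # ≈ 1.2
print(shogenji(set2))  # ≈ 1.43: same information, different coherence
```

Purely logical repackaging of the same information thus moves the measured coherence, which is exactly the instability the abstract describes.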
In this paper we argue that Michael Huemer’s phenomenal conservatism—the internalist view according to which our beliefs are prima facie justified if based on how things seem or appear to us to be—doesn’t fall afoul of Michael Bergmann’s dilemma for epistemological internalism. We start by showing that the thought experiment that Bergmann adduces to conclude that phenomenal conservatism is vulnerable to his dilemma misses its target. After that, we distinguish between two ways in which a mental state can contribute to the justification of a belief: the direct way and the indirect way. We identify a straightforward reason for claiming that the justification contributed indirectly is subject to Bergmann’s dilemma. Then we show that the same reason doesn’t extend to the claim that the justification contributed directly is subject to Bergmann’s dilemma. As phenomenal conservatism is the view that seemings or appearances contribute justification directly, we infer that Bergmann’s contention that his dilemma applies to it is unmotivated. In the final part, we suggest that our line of response to Bergmann can be used to shield other types of internalist justification from Bergmann’s objection. We also propose that seeming-grounded justification can be combined with justification of one of these types to form the basis of a promising version of internalist foundationalism.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usually, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is "transmitted" to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
Transmission of justification across inference is a valuable and indeed ubiquitous epistemic phenomenon in everyday life and science. It is thanks to the phenomenon of epistemic transmission that inferential reasoning is a means for substantiating predictions of future events and, more generally, for expanding the sphere of our justified beliefs or reinforcing the justification of beliefs that we already entertain. However, transmission of justification is not without exceptions. As a few epistemologists have come to realise, more or less trivial forms of circularity can prevent justification from transmitting from p to q even if one has justification for p and one is aware of the inferential link from p to q. In interesting cases this happens because one can acquire justification for p only if one has independent justification for q. In this case the justification for q cannot depend on the justification for p and the inferential link from p to q, as genuine transmission would require. The phenomenon of transmission failure seems to shed light on philosophical puzzles, such as Moore's proof of a material world and McKinsey's paradox, and it plays a central role in various philosophical debates. For this reason it has been receiving continued and increasing attention.
According to Jim Pryor’s dogmatism, if you have an experience as if P, you acquire immediate prima facie justification for believing P. Pryor contends that dogmatism validates Moore’s infamous proof of a material world. Against Pryor, I argue that if dogmatism is true, Moore’s proof turns out to be non-transmissive of justification according to one of the senses of non-transmissivity defined by Crispin Wright. This type of non-transmissivity doesn’t deprive dogmatism of its apparent antisceptical bite.
This paper criticizes phenomenal conservatism––the influential view according to which a subject S’s seeming that P provides S with defeasible justification for believing P. I argue that phenomenal conservatism, if true at all, has a significant limitation: seeming-based justification is elusive because S can easily lose it by just reflecting on her seemings and speculating about their causes––I call this the problem of reflective awareness. Because of this limitation, phenomenal conservatism doesn’t have all the epistemic merits attributed to it by its advocates. If true, phenomenal conservatism would constitute a unified theory of epistemic justification capable of giving everyday epistemic practices a rationale, but it wouldn’t afford us the means of an effective response to the sceptic. Furthermore, phenomenal conservatism couldn’t form the general basis for foundationalism.
We focus on issues of learning assessment from the point of view of an investigation of philosophical elements in teaching. We contend that assessment of concept possession at school based on ordinary multiple-choice tests might be ineffective because it overlooks aspects of human rationality illuminated by Robert Brandom’s inferentialism––the view that conceptual content largely coincides with the inferential role of linguistic expressions used in public discourse. More particularly, we argue that multiple-choice tests at school might fail to accurately assess the possession of a concept or the lack of it, for they only check the written outputs of the pupils who take them, without detecting the inferences actually endorsed or used by them. We suggest that school tests would acquire reliability if they enabled pupils to make explicit the reasons for their answers or the inferences they use, so as to contribute to what Brandom calls the game of giving and asking for reasons. We explore the possibility of putting this suggestion into practice by deploying two-tier multiple-choice tests.
John Hardwig has championed the thesis (NE) that evidence that an expert EXP has evidence for a proposition P, constituted by EXP’s testimony that P, is not evidence for P itself, where evidence for P is generally characterized as anything that counts towards establishing the truth of P. In this paper, I first show that (NE) yields tensions within Hardwig’s overall view of epistemic reliance on experts and makes it imply unpalatable consequences. Then, I use Shogenji-Roche’s theorem of transitivity of incremental confirmation to show that (NE) is false if a natural Bayesian formalization of the above notion of evidence is implemented. I concede that Hardwig could resist my Bayesian objection if he re-interpreted (NE) as a more precise thesis that only applies to community-focused evidence. I argue, however, that this precisification, while diminishing the philosophical relevance of (NE), wouldn’t settle the tensions internal to Hardwig’s views.
I am concerned with Crispin Wright’s (2004, 2007, 2012, 2014) entitlement theory, according to which (1) we have non-evidential justification for accepting propositions of a general type, which Wright calls cornerstones, and (2) this non-evidential justification for cornerstones can secure evidential justification for believing many other propositions––those we take to be true on the grounds of ordinary evidence. I initially focus on strategic entitlement, which is one of the types of entitlement that Wright has described in more detail. Wright (2014) argues that strategic entitlement is a form of epistemic justification rather than a pragmatic one, as some critics have contended. I respond that whether or not strategic entitlement is epistemic, it is implausible that there are cornerstones we are strategically entitled to accept. Thus, it is implausible that (1) could be successfully defended by appealing to strategic entitlement. After this, I argue that even if (1) were true, (2) would be false, for in many cases non-evidential justification for accepting cornerstones couldn’t secure evidential justification for believing ordinary propositions. This criticism is more ambitious than the previous one because it aims to strike all forms of epistemic entitlement introduced by Wright at once. My argument relies on an elementary probabilistic regimentation of the so-called leaching problem.
Recent works in epistemology show that the claim that coherence is truth conducive – in the sense that, given suitable ceteris paribus conditions, more coherent sets of statements are always more probable – is dubious and possibly false. From this, it does not follow that coherence is a useless notion in epistemology and philosophy of science. Dietrich and Moretti (Philosophy of Science 72(3): 403–424, 2005) have proposed a formal account of how coherence is confirmation conducive—that is, of how the coherence of a set of statements facilitates the confirmation of such statements. This account is grounded in two confirmation transmission properties that are satisfied by some of the measures of coherence recently proposed in the literature. These properties explicate everyday and scientific uses of coherence. In this paper, I review the main findings of Dietrich and Moretti (2005) and define two evidence-gathering properties that are satisfied by the same measures of coherence and constitute further ways in which coherence is confirmation conducive. At least one of these properties vindicates important applications of the notion of coherence in everyday life and in science.
Beall and Restall (2000, 2001, 2006) advocate a comprehensive pluralist approach to logic, which they call Logical Pluralism, according to which there is not one true logic but many equally acceptable logical systems. They maintain that Logical Pluralism is compatible with monism about metaphysical modality, according to which there is just one correct logic of metaphysical modality. Wyatt (2004) contends that Logical Pluralism is incompatible with monism about metaphysical modality. We first suggest that if Wyatt were right, Logical Pluralism would be strongly implausible because it would invert a dependence relation that holds between the metaphysics and the logic of modality. We then argue that Logical Pluralism is prima facie compatible with monism about metaphysical modality.
Dummett has recently presented his most mature and sophisticated version of justificationism, i.e. the view that meaning and truth are to be analysed in terms of justifiability. In this paper, I argue that this conception does not resolve a difficulty that also affected Dummett’s earlier version of justificationism: the problem that large tracts of the past continuously vanish as their traces in the present dissipate. Since Dummett’s justificationism is essentially based on the assumption that the speaker has limited (i.e. non-idealized) cognitive powers, no further refinement of this position is likely to settle the problem of the vanishing past.
Brogaard and Salerno (2005, Nous, 39, 123–139) have argued that antirealism resting on a counterfactual analysis of truth is flawed because it commits a conditional fallacy by entailing the absurdity that there is necessarily an epistemic agent. Brogaard and Salerno's argument relies on a formal proof built upon the criticism of two parallel proofs given by Plantinga (1982, Proceedings and Addresses of the American Philosophical Association, 56, 47–70) and Rea (2000, Nous, 34, 291–301). If this argument were conclusive, antirealism resting on a counterfactual analysis of truth should probably be abandoned. I argue however that the antirealist is not committed to a controversial reading of counterfactuals presupposed in Brogaard and Salerno's proof, and that the antirealist can in principle adopt an alternative reading that makes this proof invalid. My conclusion is that no reductio of antirealism resting on a counterfactual analysis of truth has yet been provided.
The expression conditional fallacy identifies a family of arguments deemed to entail odd and false consequences for notions defined in terms of counterfactuals. The antirealist notion of truth is typically defined in terms of what a rational enquirer or a community of rational enquirers would believe if they were suitably informed. This notion is deemed to entail, via the conditional fallacy, odd and false propositions, for example that there necessarily exists a rational enquirer. If these consequences do indeed follow from the antirealist notion of truth, alethic antirealism should probably be rejected. In this paper we analyse the conditional fallacy from a semantic (i.e. model-theoretic) point of view. This allows us to identify with precision the philosophical commitments that ground the validity of this type of argument. We show that the conditional fallacy arguments against alethic antirealism are valid only if controversial metaphysical assumptions are accepted. We suggest that the antirealist is not committed to the conditional fallacy because she is not committed to some of these assumptions.
Within his overarching program aiming to defend an epistemic conception of analyticity, Boghossian (1996, 1997) has offered a clear-cut explanation of how we can acquire a priori knowledge of logical truths and logical rules through implicit definition. The explanation is based on a special template or general form of argument. Ebert (2005) has argued that an enhanced version of this template is flawed because a segment of it is unable to transmit warrant from its premises to the conclusion. This article aims to defend the template from this objection. We provide an accurate description of the type of non-transmissivity that Ebert attributes to the template and clarify why this is a novel type of non-transmissivity. Then, we argue that Jenkins’s (2008) response to Ebert fails because it focuses on doxastic rather than propositional warrant. Finally, we rebut Ebert’s objection on Boghossian’s behalf by showing that it rests on an unwarranted assumption and is internally incoherent.
Boghossian (1996) has put forward an interesting explanation of how we can acquire logical knowledge via implicit definitions that makes use of a special template. Ebert (2005) has argued that the template is unserviceable, as it doesn't transmit warrant. In this paper, we defend the template. We first suggest that Jenkins’s (2008) response to Ebert fails because it focuses on doxastic rather than propositional warrant. We then reject Ebert’s objection by showing that it depends on an implausible and incoherent assumption.
Three confirmation principles discussed by Hempel are the Converse Consequence Condition, the Special Consequence Condition and the Entailment Condition. Le Morvan (1999) has argued that, when the choice is restricted to just these principles, it is the Converse Consequence Condition that must be rejected. In this paper, I make this argument definitive. In doing so, I provide an indisputable proof that the simple conjunction of the Converse Consequence Condition and the Entailment Condition yields a disastrous consequence.
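The disastrous consequence alluded to here is standardly derived in two steps. The following sketch is my reconstruction of the classical Hempelian argument, not code from the paper; it verifies, with a brute-force truth-table check, the two entailments that do the work: the Entailment Condition makes E confirm E v H, and, since H entails E v H, the Converse Consequence Condition then makes E confirm the arbitrary H.

```python
from itertools import product

# Propositions as predicates on valuations (vE, vH) of two atoms.
def entails(p, q):
    """p |= q : q is true in every valuation in which p is true."""
    return all(q(v) for v in product([True, False], repeat=2) if p(v))

E = lambda v: v[0]             # an arbitrary evidence statement
H = lambda v: v[1]             # an arbitrary hypothesis
EorH = lambda v: E(v) or H(v)  # their disjunction

# Entailment Condition: E |= E v H, hence E confirms E v H.
assert entails(E, EorH)
# Converse Consequence Condition: H |= E v H, so whatever confirms
# E v H also confirms H; hence E confirms H.
assert entails(H, EorH)
# Since E and H were arbitrary, every statement confirms every statement.
```

Both entailments are trivial, which is precisely why the conjunction of the two conditions trivializes confirmation.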
Hypothetico-deductivists have struggled to develop qualitative confirmation theories that do not give rise to the so-called tacking by disjunction paradox. In this paper, I analyze the difficulties yielded by the paradox and argue that the hypothetico-deductivist solutions given by Gemes (1998) and Kuipers (2000) are questionable because they do not fit this analysis. I then show that the paradox yields no difficulty for the Bayesian who appeals to the Total Evidence Condition. I finally argue that the same strategy is unavailable to the hypothetico-deductivist.
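The Bayesian point gestured at in the abstract can be given a minimal numerical sketch; the priors below are my own stipulations, not figures from the paper. Assume H entails E, so that P(E|H) = P(E v F|H) = 1 and Bayes' theorem reduces the posteriors to P(H)/P(E) and P(H)/P(E v F). The tacked disjunction then still confirms H, but strictly more weakly than E itself, which is the kind of asymmetry an appeal to the Total Evidence Condition can exploit: a subject who actually possesses E should conditionalize on E, not on the weaker E v F.

```python
# Toy Bayesian illustration; the priors are stipulated, and H is
# assumed to entail E, so that P(E|H) = P(E v F|H) = 1.
pH, pE, pEvF = 0.1, 0.4, 0.7   # with P(E v F) >= P(E), as logic requires

post_E = pH / pE       # P(H|E)     = 0.25
post_EvF = pH / pEvF   # P(H|E v F) ≈ 0.143

# E v F confirms H (raises its probability), but more weakly than E does.
assert pH < post_EvF < post_E
print(post_E, post_EvF)
```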
Minimal entities are, roughly, those that fall under notions defined by only deflationary principles. In this paper I provide an accurate characterization of two types of minimal entities: minimal properties and minimal facts. This characterization is inspired by both Schiffer's notion of a pleonastic entity and Horwich's notion of minimal truth. I argue that we are committed to the existence of minimal properties and minimal facts according to a deflationary notion of existence, and that the appeal to the inferential role reading of the quantifiers does not dismiss this commitment. I also argue that deflationary existence is language-dependent existence—this clarifies why minimalists about properties and facts are not realists about these entities though their language may appear indistinguishable from the language of realists.
In this paper, I focus on the so-called "tacking by disjunction problem": the problem to the effect that, if a hypothesis H is confirmed by a statement E, H is also confirmed by the disjunction E v F, for any statement F. I show that the attempt to settle this difficulty made by Grimes (1990), in a paper apparently forgotten by today's methodologists, is irremediably faulty.
Theory unification is a central aim of scientific investigation. In this paper, I lay down the sketch of a Bayesian analysis of the virtue of unification that entails that the unification of a theory has direct implications for the confirmation of the theory’s logical consequences and for its prior probability. This shows that scientists do have epistemic, and not just pragmatic, reasons to prefer unified theories to non-unified ones.
According to Wright's minimalism, a notion of truth neutral with respect to realism and antirealism can be built out of the notion of warranted assertibility and a set of a priori platitudes among which the Equivalence Schema has a prominent role. Wright believes that the debate about realism and antirealism will be properly and fruitfully developed if both parties accept the conceptual framework of minimalism. In this paper, I show that this conceptual framework commits the minimalist to the realist thesis that there are mind-independent propositions, with the consequence that minimalism is not neutral between realism and antirealism. I suggest that Wright could avert this conclusion if he rejected the customary interpretation of the Equivalence Schema according to which this Schema applies to propositions. This would however render minimalism unpalatable to philosophers who welcome the traditional reading of the Equivalence Schema and believe that propositions are bearers of truth.
The general tendency or attitude that Dreier 2004 calls creeping minimalism is ramping up in contemporary analytic philosophy. Those who entertain this attitude will take for granted a framework of deflationary or minimal notions – principally semantical and ontological – by means of which to analyse problems in different philosophical fields – e.g. theory of truth, metaethics, philosophy of language, the debate on realism and antirealism, etc. Let us call sweeping minimalist the philosopher affected by creeping minimalism. The framework of minimal notions that the sweeping minimalist takes for granted encompasses, for instance, the concepts of truth, reference, proposition, fact, individual, and property. Minimal notions are characterized in terms of general platitudinous principles expressed by schemata like the following (cf.: 26): ‘S’ is true if and only if S; ‘S’ is true if and only if ‘S’ corresponds to the facts; a has the property of being P if and only if a is P. Where ‘S’ and ‘a is P’ stand for sentences satisfying superficial constraints of truth-aptitude (i.e. sentences in declarative form subject to communally acknowledged standards of proper use), and…
Laudan and Leplin have argued that empirically equivalent theories can elude underdetermination by resorting to indirect confirmation. Moreover, they have provided a qualitative account of indirect confirmation that Okasha has shown to be incoherent. In this paper, I develop Kukla's recent contention that indirect confirmation is grounded in the probability calculus. I provide a Bayesian rule to calculate the probability of a hypothesis given indirect evidence. I also suggest that the application of the rule presupposes the methodological relevance of non-empirical virtues of theories. If this is true, Laudan and Leplin's strategy will not work in many cases. Moreover, without an independent way of justifying the role of non-empirical virtues in methodology, the scientific realists cannot use indirect evidence to defeat underdetermination.
In this paper, I show that Lewis' definition of coherence and Fitelson's and Shogenji's measures of coherence are unacceptable because they entail the absurdity that any set of beliefs whatsoever is both coherent and not coherent at the same time. This devastating result is obtained if a simple and plausible principle of stability for coherence is accepted.