According to Jim Pryor’s dogmatism, when you have an experience with content p, you often have prima facie justification to believe p that does not rest on your independent justification to believe any proposition. Although dogmatism has an intuitive appeal and seems to have an antisceptical bite, it has been targeted by different objections. This paper principally aims to answer the objections by Roger White according to which dogmatism is inconsistent with the Bayesian account of how evidence affects our credences. If this were true, the rational acceptability of dogmatism would be seriously questionable. I respond that these objections don’t get off the ground because they assume that our experiences and our introspective beliefs that we have experiences have the same evidential force, whereas the dogmatist is uncommitted to this assumption. I also consider the question whether dogmatism has an antisceptical bite. I suggest that the answer turns on whether or not the Bayesian can determine the priors of hypotheses and conjectures on the grounds of their extra-empirical virtues. If the Bayesian can do so, the thesis that dogmatism has an antisceptical bite is probably false.
In this paper, I show that Lewis' definition of coherence and Fitelson's and Shogenji's measures of coherence are unacceptable because they entail the absurdity that any set of beliefs whatsoever is both coherent and not coherent at the same time. This devastating result is obtained if a simple and plausible principle of stability for coherence is accepted.
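For readers unfamiliar with the measures at issue, Shogenji's coherence measure can be sketched as follows; this is a minimal illustration with invented probability values, not an example from the paper:

```python
# Shogenji's coherence measure for a set of beliefs:
#   C(A1, ..., An) = P(A1 & ... & An) / (P(A1) * ... * P(An))
# Values above 1 indicate positive relevance (coherence, by Shogenji's
# lights); 1 indicates probabilistic independence; below 1, incoherence.

def shogenji(p_joint, p_marginals):
    """Joint probability divided by the product of the marginals."""
    product = 1.0
    for p in p_marginals:
        product *= p
    return p_joint / product

# Positively relevant beliefs: P(A) = P(B) = 0.5, P(A & B) = 0.4
print(shogenji(0.4, [0.5, 0.5]))   # 1.6: coherent

# Independent beliefs: P(A & B) = P(A) * P(B) = 0.25
print(shogenji(0.25, [0.5, 0.5]))  # 1.0: neutral
```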
The expression ‘conditional fallacy’ identifies a family of arguments deemed to entail odd and false consequences for notions defined in terms of counterfactuals. The antirealist notion of truth is typically defined in terms of what a rational enquirer or a community of rational enquirers would believe if they were suitably informed. This notion is deemed to entail, via the conditional fallacy, odd and false propositions, for example that the Peircean end of inquiry has been reached or that there is necessarily a rational enquirer. If these consequences followed from the antirealist notion of truth, alethic antirealism should probably be rejected. In this paper we analyse the conditional fallacy from a semantic (i.e. model-theoretic) point of view. This allows us to identify with precision the philosophical commitments that ground the validity of this type of argument. We show that the conditional fallacy arguments against alethic antirealism are valid only if controversial metaphysical assumptions are accepted. We suggest that the antirealist is not committed to the conditional fallacy because she is not committed to some of these assumptions.
I focus on a key argument for global external world scepticism resting on the underdetermination thesis: the argument according to which we cannot know any proposition about our physical environment because sense evidence for it equally justifies some sceptical alternative (e.g. the Cartesian demon conjecture). I contend that the underdetermination argument can go through only if the controversial thesis that conceivability is per se a source of evidence for metaphysical possibility is true. I also suggest a reason to doubt that conceivability is per se a source of evidence for metaphysical possibility, and thus to doubt the underdetermination argument.
Transmission of justification across inference is a valuable and indeed ubiquitous epistemic phenomenon in everyday life and science. It is thanks to the phenomenon of epistemic transmission that inferential reasoning is a means for substantiating predictions of future events and, more generally, for expanding the sphere of our justified beliefs or reinforcing the justification of beliefs that we already entertain. However, transmission of justification is not without exceptions. As a few epistemologists have come to realise, more or less trivial forms of circularity can prevent justification from transmitting from p to q even if one has justification for p and one is aware of the inferential link from p to q. In interesting cases this happens because one can acquire justification for p only if one has independent justification for q. In this case the justification for q cannot depend on the justification for p and the inferential link from p to q, as genuine transmission would require. The phenomenon of transmission failure seems to shed light on philosophical puzzles, such as Moore's proof of a material world and McKinsey's paradox, and it plays a central role in various philosophical debates. For this reason it has been receiving continued and increasing attention.
In this paper we focus on transmission and failure of transmission of warrant. We identify three individually necessary and jointly sufficient conditions for transmission of warrant, and we show that their satisfaction grounds a number of interesting epistemic phenomena that have not been sufficiently appreciated in the literature. We then scrutinise Wright’s analysis of transmission failure and improve on extant readings of it. Nonetheless, we present a Bayesian counterexample that shows that Wright’s analysis is partially incoherent with our analysis of warrant transmission and prima facie defective. We conclude by exploring three alternative lines of reply: developing a more satisfactory account of transmission failure, which we outline; dismissing the Bayesian counterexample by rejecting some of its assumptions; and reinterpreting Wright’s analysis to make it immune to the counterexample.
Crispin Wright has given an explanation of how a first-time warrant can fall short of transmitting across a known entailment. Formal epistemologists have struggled to turn Wright’s informal explanation into cogent Bayesian reasoning. In this paper, I analyse two Bayesian models of Wright’s account respectively proposed by Samir Okasha and Jake Chandler. I argue that both formalizations are unsatisfactory for different reasons, and I lay down a third Bayesian model that appears to me to capture the valid kernel of Wright’s explanation. After this, I consider a recent development in Wright’s account of transmission failure. Wright suggests that his condition sufficient for transmission failure of first-time warrant also suffices for transmission failure of supplementary warrant. I propose an interpretation of Wright’s suggestion that shields it from objections. I then lay down a fourth Bayesian framework that provides a simplified model of the unified explanation of transmission failure envisaged by Wright.
Beall and Restall 2000; 2001; 2006 advocate a comprehensive pluralist approach to logic, which they call Logical Pluralism, according to which there is not one true logic but many equally acceptable logical systems. They maintain that Logical Pluralism is compatible with monism about metaphysical modality, according to which there is just one correct logic of metaphysical modality. Wyatt 2004 contends that Logical Pluralism is incompatible with monism about metaphysical modality. We first suggest that if Wyatt were right, Logical Pluralism would be strongly implausible because it would turn upside down a dependence relation that holds between the metaphysics and the logic of modality. We then argue that Logical Pluralism is prima facie compatible with monism about metaphysical modality.
The general tendency or attitude that Dreier 2004 calls creeping minimalism is ramping up in contemporary analytic philosophy. Those who entertain this attitude will take for granted a framework of deflationary or minimal notions – principally semantical and ontological – by means of which to analyse problems in different philosophical fields – e.g. theory of truth, metaethics, philosophy of language, the debate on realism and antirealism, etc. Let us call the philosopher affected by creeping minimalism a sweeping minimalist. The framework of minimal notions that the sweeping minimalist takes for granted encompasses, for instance, the concepts of truth, reference, proposition, fact, individual, and property. Minimal notions are characterized in terms of general platitudinous principles expressed by schemata like the following (cf.: 26): ‘S’ is true if and only if S; ‘S’ is true if and only if ‘S’ corresponds to the facts; a has the property of being P if and only if a is P; where ‘S’ and ‘a is P’ stand for sentences satisfying superficial constraints of truth-aptitude (i.e. sentences in declarative form subject to communally acknowledged standards of proper use), and...
Brogaard and Salerno (2005, Nous, 39, 123–139) have argued that antirealism resting on a counterfactual analysis of truth is flawed because it commits a conditional fallacy by entailing the absurdity that there is necessarily an epistemic agent. Brogaard and Salerno's argument relies on a formal proof built upon the criticism of two parallel proofs given by Plantinga (1982, Proceedings and Addresses of the American Philosophical Association, 56, 47–70) and Rea (2000, Nous, 34, 291–301). If this argument were conclusive, antirealism resting on a counterfactual analysis of truth should probably be abandoned. I argue however that the antirealist is not committed to a controversial reading of counterfactuals presupposed in Brogaard and Salerno's proof, and that the antirealist can in principle adopt an alternative reading that makes this proof invalid. My conclusion is that no reductio of antirealism resting on a counterfactual analysis of truth has yet been provided.
Dummett has recently presented his most mature and sophisticated version of justificationism, i.e. the view that meaning and truth are to be analysed in terms of justifiability. In this paper, I argue that this conception does not resolve a difficulty that also affected Dummett’s earlier version of justificationism: the problem that large tracts of the past continuously vanish as their traces in the present dissipate. Since Dummett’s justificationism is essentially based on the assumption that the speaker has limited (i.e. non-idealized) cognitive powers, no further refinement of this position is likely to settle the problem of the vanishing past.
Minimal entities are, roughly, those that fall under notions defined by only deflationary principles. In this paper I provide an accurate characterization of two types of minimal entities: minimal properties and minimal facts. This characterization is inspired by both Schiffer's notion of a pleonastic entity and Horwich's notion of minimal truth. I argue that we are committed to the existence of minimal properties and minimal facts according to a deflationary notion of existence, and that the appeal to the inferential role reading of the quantifiers does not dismiss this commitment. I also argue that deflationary existence is language-dependent existence—this clarifies why minimalists about properties and facts are not realists about these entities though their language may appear indistinguishable from the language of realists.
According to Wright’s minimalism, a notion of truth neutral with respect to realism and antirealism can be built out of the notion of warranted assertibility and a set of a priori platitudes among which the Equivalence Schema has a prominent role. Wright believes that the debate about realism and antirealism will be properly and fruitfully developed if both parties accept the conceptual framework of minimalism. In this paper, I show that this conceptual framework commits the minimalist to the realist thesis that there are mind-independent propositions; with the consequence that minimalism is not neutral with respect to realism and antirealism. I suggest that Wright could avert this conclusion if he rejected the customary interpretation of the Equivalence Schema according to which this Schema applies to propositions. This would however render minimalism unpalatable to philosophers who welcome the traditional reading of the Equivalence Schema and believe that propositions are bearers of truth.
Recent work in epistemology shows that the claim that coherence is truth conducive – in the sense that, given suitable ceteris paribus conditions, more coherent sets of statements are always more probable – is dubious and possibly false. From this, it does not follow that coherence is a useless notion in epistemology and philosophy of science. Dietrich and Moretti (Philosophy of Science 72(3): 403–424, 2005) have proposed a formal account of how coherence is confirmation conducive—that is, of how the coherence of a set of statements facilitates the confirmation of such statements. This account is grounded in two confirmation transmission properties that are satisfied by some of the measures of coherence recently proposed in the literature. These properties explicate everyday and scientific uses of coherence. In this paper, I review the main findings of Dietrich and Moretti (2005) and define two evidence-gathering properties that are satisfied by the same measures of coherence and constitute further ways in which coherence is confirmation conducive. At least one of these properties vindicates important applications of the notion of coherence in everyday life and in science.
Coherentism in epistemology has long suffered from lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
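The individuation problem can be made vivid with Shogenji's measure; the following is a toy sketch with invented numbers, not an example from the paper. The pair {A, B} and the pair {A & B, A v B} carry the same total information (their conjunctions are logically equivalent), yet they receive different coherence degrees:

```python
# Suppose A and B are probabilistically independent, with
# P(A) = P(B) = 0.5, so P(A & B) = 0.25 and P(A v B) = 0.75.
# Shogenji coherence of a pair: C(X, Y) = P(X & Y) / (P(X) * P(Y)).

def shogenji_pair(p_joint, p_x, p_y):
    return p_joint / (p_x * p_y)

# Representation 1: the set {A, B}; the joint probability is 0.25.
c1 = shogenji_pair(0.25, 0.5, 0.5)

# Representation 2: the set {A & B, A v B}, equivalent in total content;
# the conjunction of (A & B) and (A v B) is just A & B, probability 0.25.
c2 = shogenji_pair(0.25, 0.25, 0.75)

print(c1, c2)  # 1.0 for the first representation, ~1.33 for the second
```

The same information set thus comes out neutral on one representation and coherent on the other, which is the clash with stability under purely logical reasoning that the paper describes.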
Hypothetico-deductivists have struggled to develop qualitative confirmation theories that do not raise the so-called tacking by disjunction paradox. In this paper, I analyze the difficulties yielded by the paradox and argue that the hypothetico-deductivist solutions given by Gemes (1998) and Kuipers (2000) are questionable because they do not fit this analysis. I then show that the paradox yields no difficulty for the Bayesian who appeals to the Total Evidence Condition. I finally argue that the same strategy is unavailable to the hypothetico-deductivist.
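The Bayesian notions at issue can be made concrete with a toy model; the joint distribution below is invented purely for illustration. In Bayesian terms, E confirms H iff P(H|E) > P(H), and one can compare conditionalizing on the weaker disjunction E v F with conditionalizing on the total evidence E:

```python
# Toy joint distribution over three propositions H, E, F.
# A "world" is a triple (h, e, f) of truth values; the numbers are invented.
P = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.30, (1, 0, 1): 0.05, (1, 0, 0): 0.05,
    (0, 1, 1): 0.05, (0, 1, 0): 0.05, (0, 0, 1): 0.20, (0, 0, 0): 0.20,
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for world, p in P.items() if pred(world))

p_h = prob(lambda w: w[0])                                  # P(H)
p_h_given_e = prob(lambda w: w[0] and w[1]) / prob(lambda w: w[1])
p_h_given_e_or_f = (prob(lambda w: w[0] and (w[1] or w[2]))
                    / prob(lambda w: w[1] or w[2]))

# E confirms H strongly; the weaker disjunction E v F confirms H less.
print(round(p_h, 3), round(p_h_given_e, 3), round(p_h_given_e_or_f, 3))
# 0.5 0.8 0.6
```

In this model the disjunction does confirm H, but conditionalizing on it when the stronger evidence E is available violates the Total Evidence Condition, which requires updating on all the evidence one possesses.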
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as is usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is "transmitted" to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
In this paper, I focus on the so-called "tacking by disjunction problem": namely, the problem that, if a hypothesis H is confirmed by a statement E, H is also confirmed by the disjunction E v F, for any statement F. I show that the attempt to settle this difficulty made by Grimes 1990, in a paper apparently forgotten by today's methodologists, is irremediably faulty.
Three confirmation principles discussed by Hempel are the Converse Consequence Condition, the Special Consequence Condition and the Entailment Condition. Le Morvan (1999) has argued that, when the choice among confirmation principles is restricted to just these, it is the Converse Consequence Condition that must be rejected. In this paper, I make this argument definitive. In doing so, I provide an indisputable proof that the simple conjunction of the Converse Consequence Condition and the Entailment Condition yields a disastrous consequence.
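The standard route from these two principles to disaster (a textbook reconstruction, not necessarily the proof given in the paper) takes two steps. For arbitrary statements $E$ and $H$:

\[
\begin{array}{lll}
1. & E \models E \lor H, \text{ so } E \text{ confirms } E \lor H & \text{(Entailment Condition)}\\
2. & H \models E \lor H \text{ and } E \text{ confirms } E \lor H, \text{ so } E \text{ confirms } H & \text{(Converse Consequence Condition)}
\end{array}
\]

Hence any evidence $E$ confirms any hypothesis $H$ whatsoever, which trivializes confirmation.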