Proponents of the Fine-Tuning Argument frequently assume that the narrowness of the life-friendly range of fundamental physical constants implies a low probability for the origin of the universe ‘by chance’. We cast this argument in a more rigorous form than is customary and conclude that the narrow intervals do not yield a probability at all because the resulting measure function is non-normalizable. We then consider various attempts to circumvent this problem and argue that they fail.
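A minimal sketch of the non-normalizability problem, in my own notation rather than the paper's: suppose a constant $c$ can in principle take any value in $(0, \infty)$, and suppose indifference over that range dictates a flat density $f(c) = k$. Then
$$\int_0^\infty k \, dc = \begin{cases} \infty & \text{if } k > 0, \\ 0 & \text{if } k = 0, \end{cases}$$
so no choice of $k$ normalizes the measure to 1, and no probability $P(a \le c \le b)$ for the life-permitting interval $[a, b]$ can be read off from the narrowness of that interval alone.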
Internalism and Epistemology is a powerful articulation and defense of a classical answer to an enduring question: What is the nature of rational belief? In opposition to prevailing philosophical fashion, the book argues that epistemic externalism leads, not just to skepticism, but to epistemic nihilism - the denial of the very possibility of justification. And it defends a subtle and sophisticated internalism against criticisms that have widely but mistakenly been thought to be decisive. Beginning with an internalist response to the Gettier problem, the authors deal with the problem of the connection to truth, stressing the distinction between success and rationality as critical to its resolution. They develop a metaregress argument against externalism that has devastating consequences for any view according to which epistemic principles are contingent. The same argument does not, they argue, affect the version of internalism they espouse, since its epistemic principles are analytic and knowable a priori. The final chapter addresses the problem of induction and shows that its solution turns critically on the distinction between success and rationality - the very distinction that lies at the heart of the dispute between internalists and externalists. Provocative, probing, and deliberately unfashionable, Internalism and Epistemology is a ringing defense of internalism that will interest specialists and students alike. It is essential reading for anyone who suspects that rumors of the death of traditional epistemology have been greatly exaggerated.
A focus on the conjunction of the contents of witness reports and on the coherence of their contents has had negative effects on the epistemic clarity of the Bayesian coherence literature. Whether or not increased coherence of witness reports is correlated with higher confirmation for some H depends upon the hypothesis in question and upon factors concerning the confirmation and independence of the reports, not directly on the positive relevance of the contents to each other. I suggest that Bayesians should shift focus to “coherence for” an hypothesis – that is, to the definition and analysis of cumulative case arguments in which a body of evidence supports some hypothesis that is not restricted to the conjunction of the contents of reports. Such a shift of focus will be valuable for approaching issues such as the problem of the external world which have interested Bayesian coherentists all along.
This book is a sustained defence of traditional internalist epistemology. The aim is threefold: to address some key criticisms of internalism and show that they do not hit their mark, to articulate a detailed version of a central objection to externalism, and to illustrate how a consistent internalism can meet the charge that it fares no better in the face of this objection than does externalism itself. This original work will be recommended reading for scholars with an interest in epistemology.
Testimonial evidence that is particularly helpful to confirmation combines agreement on some content with variation of detail. I examine the phenomenon of “undesigned coincidences” from a probabilistic point of view to explain how varied reports, including those that dovetail in detail, assist confirmation of an hypothesis. The formal analysis uses recent work in probability theory surrounding the concepts of dependence, independence, and varied evidence. I also discuss the connection between these types of report connections and an hypothesis about the reliability of the sources involved.
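A schematic rendering of why dovetailing assists confirmation, using only the chain rule (my notation, not the paper's formalism): for two reports $R_1$ and $R_2$ bearing on a hypothesis $H$,
$$\frac{P(R_1 \wedge R_2 \mid H)}{P(R_1 \wedge R_2 \mid \neg H)} = \frac{P(R_1 \mid H)}{P(R_1 \mid \neg H)} \cdot \frac{P(R_2 \mid R_1 \wedge H)}{P(R_2 \mid R_1 \wedge \neg H)}.$$
When $R_2$ dovetails with a detail in $R_1$, the second factor can be large: given $H$, the fit is just what truthful, independent reporting would produce, while given $\neg H$ the coincidence of detail is hard to account for.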
The value of varied evidence, I propose, lies in the fact that more varied evidence is less coherent on the assumption of the negation of the hypothesis under consideration than less varied evidence. I contrast my own analysis with several other Bayesian analyses of the value of evidential diversity and show how my account explains cases where it seems intuitively that evidential variety is valuable for confirmation.
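One way to make the proposal precise, borrowing Shogenji's ratio measure of coherence and conditioning it on $\neg H$ (a gloss of mine, not necessarily the paper's exact measure): for two items of evidence $E_1$ and $E_2$,
$$C_{\neg H}(E_1, E_2) = \frac{P(E_1 \wedge E_2 \mid \neg H)}{P(E_1 \mid \neg H)\, P(E_2 \mid \neg H)}.$$
The claim is then that more varied evidence has a lower $C_{\neg H}$ than narrower evidence, and, holding the individual likelihoods fixed, a lower $C_{\neg H}$ means a higher joint likelihood ratio in favor of $H$.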
I offer an account of ad hocness that explains why the adoption of an ad hoc auxiliary is accompanied by the disconfirmation of a hypothesis H. H must be conjoined with an auxiliary a′, which is improbable antecedently given H, while ~H does not have this disability. This account renders it unnecessary to require, for identifying ad hocness, that either a′ or H have a posterior probability less than or equal to 0.5; there are also other reasons for abandoning that condition. I distinguish between formal ad hocness, which is bad in the probabilistic sense that it results in disconfirmation of H, and argumentative ad hocness, which actually involves bad reasoning on the part of a subject. The latter is what I call “not counting the cost.” This distinction allows us to see why the 0.5 condition appeared attractive in the first place. The concept of not counting the cost also has implications for other areas of research, including both a Bayesian concept of unfalsifiability and the classic epistemological question of the problem of the external world.
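A schematic reconstruction of the disconfirmation mechanism, in my notation: suppose anomalous evidence $e$ would be expected given $H \wedge a'$ but is ruled out given $H \wedge \neg a'$, while $\neg H$ accommodates $e$ without any improbable auxiliary. By total probability,
$$P(e \mid H) = P(e \mid H \wedge a')\,P(a' \mid H) + P(e \mid H \wedge \neg a')\,P(\neg a' \mid H) \approx P(a' \mid H),$$
which is small by hypothesis. If $P(e \mid \neg H)$ is not correspondingly small, the Bayes factor $P(e \mid H)/P(e \mid \neg H)$ falls below 1, and $e$ disconfirms $H$ despite the rescue by $a'$.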
I propose a measure of dependence that relates a set of items of evidence to an hypothesis H and to H's negation. I dub this measure relative consilience and propose a method for using it as a correction factor for dependence among items of evidence. Using RC, I examine collusion and testimonial independence, the value of diverse evidence, and the strengthening of otherwise weak or non-existent cases. RC provides a valuable tool for formal epistemologists interested in analyzing cumulative case arguments.
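The paper's own definition of RC should be consulted for the details; purely as a rough indication of the kind of quantity at issue, one ratio-style candidate compares the dependence of the evidence under $H$ with its dependence under $\neg H$:
$$RC(E_1, E_2; H) = \frac{P(E_1 \wedge E_2 \mid H)\,/\,\big(P(E_1 \mid H)\,P(E_2 \mid H)\big)}{P(E_1 \wedge E_2 \mid \neg H)\,/\,\big(P(E_1 \mid \neg H)\,P(E_2 \mid \neg H)\big)}.$$
On such a measure, evidence that hangs together better given $H$ than given $\neg H$ yields $RC > 1$.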
Richard Jeffrey developed the formula for probability kinematics with the intent that it would show that strong foundations are epistemologically unnecessary. But the reasons that support strong foundationalism are considerations of dynamics rather than kinematics. The strong foundationalist is concerned with the origin of epistemic force; showing how epistemic force is propagated therefore cannot undermine his position. The weakness of personalism is evident in the difficulty the personalist has in giving a principled answer to the question of when the conditions for the application of the kinematic formula—the rigidity of the posteriors—are fulfilled, a problem made intractable by the personalist commitment to treating changes in intermediate probability as unexplained surds. Because the strong foundationalist admits changes in the intermediate probability of propositions only when there is some change in the foundations, he can avail himself of an answer to the problem of the rigidity of the posteriors which the personalist cannot regard as complete. While probability kinematics does not make certain foundations unnecessary, the possession of certain foundations also does not make the probability kinematics formula superfluous. The formula allows us to model the indirect routes by which the foundations influence various non-foundational propositions in the probability distribution.
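For reference, the kinematic formula at issue, in standard notation: where $\{E_i\}$ is the partition across which experience shifts the probabilities,
$$P_{\text{new}}(A) = \sum_i P_{\text{old}}(A \mid E_i)\, P_{\text{new}}(E_i),$$
which is valid just in case the posteriors are rigid, i.e. $P_{\text{new}}(A \mid E_i) = P_{\text{old}}(A \mid E_i)$ for every cell $E_i$.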
The phenomenon of mutual support presents a specific challenge to the foundationalist epistemologist: Is it possible to model mutual support accurately without using circles of evidential support? We argue that the appearance of loops of support arises from a failure to distinguish different synchronic lines of evidential force. The ban on loops should be clarified to exclude loops within any such line, and basing should be understood as taking place within lines of evidence. Uncertain propositions involved in mutual support relations are conduits to each other of independent evidence originating ultimately in the foundations. We examine several putative examples of benign loops of support and show that, given the distinctions noted, they can be accurately modeled in a foundationalist fashion. We define an evidential "tangle," a relation among three propositions that appears to require a loop for modeling, and prove that all such tangles are trivial in a sense that precludes modeling them with an evidential circle.
Both proponents and opponents of the argument for the deliberate fine-tuning, by an intelligent agent, of the fundamental constants of the universe have accepted certain assumptions about how the argument will go. These include both treating the fine-tuning of the constants as constitutive of the nature of the universe itself and conditioning on the fact that the constants actually do fall into the life-permitting range, rather than on the narrowness of the range. It is also generally assumed that the fine-tuning argument should precede biological arguments for design from, e.g., the origin of life. I suggest four new arguments, two of which are different orderings of the same data. Each of these abandons one or more of the common assumptions about how the fine-tuning argument should go, and they provide new possibilities for answering or avoiding objections to the fine-tuning argument.
Both advocates and opponents of the fine-tuning argument treat multiple universes with a selection effect as a legitimate hypothesis to explain the life-permitting values of the constants in our universe. I argue that, except where there is specific relevant prior information, the occurrence of multiple instances of a low-likelihood causal process should not be treated as an alternative hypothesis to a higher-likelihood causal process. Since an ‘ad hoc’ hypothesis can be invented to give high likelihood to any evidence, we must provide some epistemic rationale other than similar likelihood for comparing two hypotheses.
It is often assumed by friends and foes alike of intelligent design that a likelihood approach to design inferences will require evidence regarding the specific motives and abilities of any hypothetical designer. Elliott Sober, like Venn before him, indicates that this information is unavailable when the designer is not human and concludes that there is no good argument for design in biology. I argue that a knowledge of motives and abilities is not always necessary for obtaining a likelihood on design. In many cases, including the case of irreducibly complex objects, frequencies from known agents can supply the likelihood. I argue against the claim that data gathered from humans is inapplicable to non-human agents. Finally, I point out that a broadly Bayesian approach to design inferences, such as that advocated by Sober, is actually advantageous to design advocates in that it frees them from the Popperian requirement that they construct an overarching science which makes high-likelihood predictions.
Jonathan Weisberg has argued that Jeffrey Conditioning is inherently “anti-holistic.” By this he means, inter alia, that JC does not allow us to take proper account of after-the-fact defeaters for our beliefs. His central example concerns the discovery that the lighting in a room is red-tinted and the relationship of that discovery to the belief that a jelly bean in the room is red. Weisberg’s argument that the rigidity required for JC blocks the defeating role of the red-tinted light rests on the strong assumption that all posteriors within the distribution in this example are rigid on a partition over the proposition that the jelly bean is actually red. But individual JC updates of propositions do not require such a broad rigidity assumption. Jeffrey conditionalizers should consider the advantages of a modest project of targeted updating focused on particular propositions rather than seeking to update the entire distribution using one obvious partition. Although Weisberg’s example fails to show JC to be irrelevant or useless, other problems he raises for JC (the commutativity and inputs problems) remain and actually become more pressing when we recognize the important role of background information.
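To illustrate a targeted update of the kind suggested, with numbers chosen purely for illustration: let $R$ be the proposition that the jelly bean is red and $L$ the proposition that the lighting is red-tinted. Suppose $P_{\text{old}}(R \mid L) = 0.3$, $P_{\text{old}}(R \mid \neg L) = 0.95$, and the discovery about the room raises the probability of $L$ to $P_{\text{new}}(L) = 0.9$. Applying Jeffrey Conditioning to $R$ over the partition $\{L, \neg L\}$ alone gives
$$P_{\text{new}}(R) = 0.3 \times 0.9 + 0.95 \times 0.1 = 0.365.$$
The defeater does its work: $P(R)$ drops, without any assumption that every posterior in the distribution is rigid over one global partition.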
On the “Russellian” solution to the Gettier problem, every Gettier case involves the implicit or explicit use of a false premise on the part of the subject. We distinguish between two senses of “justification” ---“legitimation” and “justification proper.” The former does not require true premises, but the latter does. We then argue that in Gettier cases the subject possesses “legitimation” but not “justification proper,” and we respond to many attempted counterexamples, including several variants of the Nogot scenario, a case involving induction, and the case of the sight-seer and the barn. Finally, we show that, given our analysis, any challenge to a belief’s justification on the grounds that it might be “Gettierized” only requires an argument that one’s premises are themselves likely to be true, moving backwards along the object-level regress. Hence, a move to externalism is neither useful nor necessary in response to the Gettier problem.
The formal representation of the strength of witness testimony has been historically tied to a formula — proposed by Condorcet — that uses a factor representing the reliability of an individual witness. This approach encourages a false dilemma between hyper-scepticism about testimony, especially to extraordinary events such as miracles, and an overly sanguine estimate of reliability based on insufficiently detailed evidence. Because Condorcet’s formula does not have the resources for representing numerous epistemically relevant details in the unique situation in which testimony is given, many late 19th century thinkers like Venn turned away from the probabilistic analysis of testimony altogether. But a more nuanced approach using Bayes factors provides a better, more flexible, formalism for representing the evidential force of testimony.
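The contrast, in standard forms (my notation): a Condorcet-style treatment compresses the witness's performance into a single reliability parameter $p$, on the simple model where the witness asserts $H$ with probability $p$ when it is true and with probability $1 - p$ when it is false, giving, for testimony $T$,
$$P(H \mid T) = \frac{p\,P(H)}{p\,P(H) + (1 - p)\,P(\neg H)},$$
whereas the Bayes-factor formulation works in odds form,
$$\frac{P(H \mid T)}{P(\neg H \mid T)} = \frac{P(T \mid H)}{P(T \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},$$
where the likelihoods $P(T \mid H)$ and $P(T \mid \neg H)$ can be assessed on the full detail of the situation in which the testimony was given rather than on one generic reliability number.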
While it is natural to assume that contradiction between alleged witness testimonies to some event disconfirms the event, this generalization is subject to important qualifications. I consider a series of increasingly complex probabilistic cases that help us to understand the effect of contradictions more precisely. Due to the possibility of honest error on a difficult detail even on the part of highly reliable witnesses, agreement on such a detail can confirm H much more than contradiction disconfirms H. It is also possible to model scenarios where we strongly suspect ahead of time that one source has copied another. In these cases, contradiction on a detail due to witness error can even confirm H by disconfirming collusion or copying. Finally, still more complex scenarios show that indirect confirmation, as opposed to exact agreement, provides the “best of both worlds,” simultaneously disconfirming suspected copying while permitting the statements of both sources to be true.
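A toy model in the spirit of the cases discussed, with numbers chosen only for illustration: two witnesses independently report on a difficult detail $D$ with many possible alternative values. Given $H$, each reports the true value with probability 0.8, so they agree on it with probability 0.64 and contradict each other with probability about 0.35 (allowing a small chance of agreeing on the same wrong value). Given $\neg H$, the sources fabricate, agreement on any one value has probability about 0.05, and contradiction has probability about 0.95. Then
$$\frac{P(\text{agreement} \mid H)}{P(\text{agreement} \mid \neg H)} \approx \frac{0.64}{0.05} = 12.8, \qquad \frac{P(\text{contradiction} \mid H)}{P(\text{contradiction} \mid \neg H)} \approx \frac{0.35}{0.95} \approx 0.37.$$
Agreement on the difficult detail confirms $H$ by a factor of about 13, while contradiction disconfirms it only by a factor of under 3.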
John Post has argued that the traditional regress argument against nonfoundational justificatory structures does not go through because it depends on the false assumption that “justifies” is in general transitive. But, says Post, many significant justificatory relations are not transitive. The authors counter that there is an evidential relation essential to all inferential justification, regardless of specific inference form or degree of carried-over justificatory force, which is in general transitive. They respond to attempted counterexamples to transitivity brought by Watkins and Salmon as well as to Post’s, arguing that none of these counterexamples apply to the relation they are describing. Given the revived transitivity assumption using this relation, the regress argument does indeed demonstrate the need for foundational stopping points in inferential justification.
While one strand of ramified natural theology focuses on direct evidence for miracles, another avenue to investigate is the argument from prophecy. Events that appear to fulfill prophecy may not be miraculous in themselves, but they can provide confirmation, even substantial confirmation, for a supernatural hypothesis. I examine the details of a small set of passages from the Old Testament and evaluate the probabilistic impact of the occurrence of events surrounding the death of Jesus of Nazareth that appear to fulfill these prophecies. The hypothesis under consideration is M—that Jesus of Nazareth was the prophesied Messiah. Using Psalm 22 and Isaiah 53, historical evidence concerning the death of Jesus, and background evidence concerning Roman and Jewish history and culture, I estimate a cumulative Bayes factor of 2.5 × 10^7 in favor of M from the fact of Jesus’s crucifixion and four further details concerning his death. Independent confirmation of M is pertinent to the prior probability of miraculous claims such as the claim that Jesus rose from the dead. The examination of Jesus’s putative fulfillment of prophecy thus is an example of an objective treatment of the religious context of a miracle which makes a given putative miracle something other than an isolated and arbitrary wonder.
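The cumulative figure arises from multiplying the Bayes factors for the separate items, on the assumption that they are suitably independent given the background evidence:
$$B_{\text{total}} = \prod_{i=1}^{5} B_i.$$
For instance, per-item factors of 100, 10, 10, 50, and 50 (illustrative numbers only, not the paper's actual estimates) would multiply to $2.5 \times 10^7$.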
Thomas Crisp has attempted to revive something akin to Alvin Plantinga’s Principle of Dwindling Probabilities to argue that the historical case for the resurrection of Jesus does not make the posterior probability of the resurrection very high. I argue that Crisp’s argument fails because he is attempting to evaluate a concrete argument in an a priori manner. I show that the same moves he uses would be absurd in other contexts, as applied both to our acquaintance with human beings and to evidence for divine intervention. Crisp’s attempt to relate the evidence for a specific act of God such as the resurrection to generic theism, thereby creating skepticism about the power of the evidence, is symptomatic of a larger problem in the philosophy of religion which I dub “separationism” and which has characterized the work of both advocates of classical apologetics and philosophers of science.
It is sometimes assumed in the Bayesian coherentist literature that the project of finding a truth-conducive measure of coherence of testimonial contents will, if successful, be helpful to the coherentist theory of justification. Various impossibility results in the Bayesian coherentist literature are consequently taken to be prima facie detrimental to the coherentist theory of justification. These attempts to connect Bayesian coherentism to the coherentist/foundationalist debate in classical epistemology rest upon a confusion between the justification of a proposition and the credibility that a proposition has for some other proposition. Foundationalism requires a class of beliefs that have non-inferential justification, not beliefs that have credibility by themselves for others. Coherentists insist that beliefs can be justified only via inferential relations with others, but this does not mean that coherentists must deny that individual propositions can have credibility for other propositions. I analyze and respond to both Erik Olsson's and Michael Huemer's arguments concerning the alleged connection between the Bayesian coherentist project and the coherentist theory of justification. Finally, I argue that Bayesian coherentism as represented in the literature, so far from being a version of coherentism, is implicitly foundationalist because of its treatment of “witness reports”, especially the reports of memory and sensation, as given evidence. The impossibility results, based on the assumption of given reports, are therefore not targeted at classical coherentism in epistemology at all.
In the debate over testimony to miracles, a common Humean move is to emphasize the prior improbability of miracles as the most important epistemic factor. Robert Fogelin uses the example of Henry, who tells multiple tall tales about meeting celebrities, to argue that low prior probabilities alone can render testimony unbelievable, with obvious implications for testimony to miracles. A detailed Bayesian analysis of Henry’s stories shows instead that the fact that Henry tells multiple stories about events that occurred independently if they occurred at all is crucial to his loss of credibility. The epistemic structure is similar to that of a case of multiple lottery wins by the same person. Each of Henry’s stories can confirm only one event, but all the stories confirm the hypothesis that Henry is a liar. This structure does not apply to testimony to just one event, however antecedently improbable. Such examples therefore do nothing to undermine a standard Bayesian analysis involving both priors and likelihoods in evaluating testimony to an improbable event.
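The structure, sketched in my notation: let $L$ be the hypothesis that Henry is a liar, and let $S_1, \dots, S_n$ be his stories about $n$ independent improbable events $E_1, \dots, E_n$. If Henry reports only what happened when he is honest, then $P(S_i \mid \neg L) \approx P(E_i)$, which is small, while $P(S_i \mid L)$ can be moderately high. Treating the stories as independent within each hypothesis,
$$\frac{P(S_1 \wedge \dots \wedge S_n \mid L)}{P(S_1 \wedge \dots \wedge S_n \mid \neg L)} \approx \prod_{i=1}^{n} \frac{P(S_i \mid L)}{P(E_i)},$$
a product that grows rapidly with $n$, just as repeated lottery wins by one person confirm fraud even though any single win is merely improbable. With $n = 1$ there is no such accumulation, which is why the example cannot be turned against testimony to a single improbable event.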
Two different types of objections to the historical investigation of miracles imply that such investigation is inappropriate or can never lead to rational belief that a historical miracle has occurred. The first objection concerns the alleged chasm between the rational realm of history and the realm of faith. The second objection alleges that God is, or would be if he existed, too much unlike ourselves for us reasonably to use Divine action as an explanatory hypothesis. Both objections involve a tacit question-begging move against the traditional theistic hypothesis that a God exists who is capable of revealing himself to man by public signs. The theist should be free to test the hypothesis of a God who speaks rather than a God who is necessarily separated from his creatures.