Analyticity, or the 'analytic/synthetic' distinction, is one of the most important and controversial problems in contemporary philosophy. It is also essential to understanding many developments in logic, philosophy of language, epistemology and metaphysics. In this outstanding introduction to analyticity Cory Juhl and Eric Loomis cover the following key topics: the origins of analyticity in the philosophy of Hume and Kant; Carnap's arguments concerning analyticity in the early twentieth century; Quine's famous objections to analyticity in his classic essay 'Two Dogmas of Empiricism'; the relationship between analyticity and central issues in metaphysics, such as ontology; the relationship between analyticity and epistemology; and analyticity in the context of current debates in philosophy, including mathematics and ontology. Throughout the book the authors show how many philosophical controversies hinge on the problem of analyticity. Additional features include chapter summaries, annotated further reading, and a glossary of technical terms, making the book ideal for those coming to the problem for the first time.
A number of authors have claimed that the fact that our universe seems ’fine-tuned’ is evidence that there are many universes. Ian Hacking (1987) raised doubts about inferences to many sequential universes. More recently, Roger White has argued that it is a fallacy to infer that there are many universes, whether existing all at once or sequentially, from the fact that ours is fine-tuned. The upshot of our discussion will be that Hacking is right about the existence of certain fallacious inferences, but that both he and White are incorrect in their assimilation of arguments for many universes to these fallacious cases.
This paper places formal learning theory in a broader philosophical context and provides a glimpse of what the philosophy of induction looks like from a learning-theoretic point of view. Formal learning theory is compared with other standard approaches to the philosophy of induction. Thereafter, we present some results and examples indicating its unique character and philosophical interest, with special attention to its unified perspective on inductive uncertainty and uncomputability.
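To make the learning-theoretic framework concrete for readers meeting it here for the first time, the following sketch (our example, not one drawn from the paper) illustrates the success criterion known as identification in the limit: the hypothesis "a 1 eventually appears in this data stream" is decided in the limit by a learner that conjectures "no" until it sees a 1 and "yes" thereafter, stabilizing to the correct answer on every infinite stream even though no finite amount of data verifies the "no" answer with certainty.

```python
# Illustrative sketch only: identification in the limit for the hypothesis
# "a 1 eventually appears in this stream". The learner conjectures "no"
# until it sees a 1, then "yes" forever; on every infinite stream it
# stabilizes to the correct answer.

from itertools import islice

def learner(stream):
    """Yield a conjecture ('yes'/'no') after each observation."""
    seen_one = False
    for datum in stream:
        seen_one = seen_one or (datum == 1)
        yield "yes" if seen_one else "no"

def all_zeros():
    while True:
        yield 0

def one_at(k):
    i = 0
    while True:
        yield 1 if i == k else 0
        i += 1

print(list(islice(learner(all_zeros()), 5)))  # ['no', 'no', 'no', 'no', 'no']
print(list(islice(learner(one_at(2)), 5)))    # ['no', 'no', 'yes', 'yes', 'yes']
```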
The inductive reliability of Bayesian methods is explored. The first result presented shows that for any solvable inductive problem of a general type, there exists a subjective prior which yields a Bayesian inductive method that solves the problem, although not all subjective priors give rise to a successful inductive method for the problem. The second result shows that the same does not hold for computationally bounded agents, so that Bayesianism is "inductively incomplete" for such agents. Finally, a consistency proof shows that inductive agents do not need to disregard inductive failure on sets of subjective probability 0 in order to be ideally rational. Together the results reveal the inadequacy of the subjective Bayesian norms for scientific methodology.
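A toy illustration of the gap between priors (our construction, not the paper's proof): with two candidate coin biases, a prior that gives the true hypothesis positive weight converges to it under conditionalization, while a prior assigning it probability 0 can never recover, however much data arrives.

```python
# Illustrative sketch only. Two hypotheses about a coin's bias; the data
# are generated by h_true = 0.9. An open-minded prior converges to the
# truth; a dogmatic prior with zero weight on the truth never moves.

import random

random.seed(0)
biases = [0.5, 0.9]          # the two candidate hypotheses
h_true = 0.9

def update(prior, outcome):
    """One step of Bayesian conditionalization on a coin flip (1 = heads)."""
    likes = [b if outcome == 1 else 1 - b for b in biases]
    joint = [p * l for p, l in zip(prior, likes)]
    total = sum(joint)
    return [j / total for j in joint]

open_minded = [0.5, 0.5]     # positive prior probability on the truth
dogmatic    = [1.0, 0.0]     # zero prior probability on the truth

for _ in range(500):
    flip = 1 if random.random() < h_true else 0
    open_minded = update(open_minded, flip)
    dogmatic    = update(dogmatic, flip)

print(open_minded)  # close to [0.0, 1.0]: converged to the truth
print(dogmatic)     # exactly [1.0, 0.0]: unchanged forever
```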
This paper is a response to Stephen Leeds’s "Juhl on Many Worlds". Contrary to what Leeds claims, we can legitimately argue for nontrivial conclusions by appeal to our existence. The ’problem of old evidence’, applied to the ’old evidence’ that we exist, seems to be a red herring in the context of determining whether there is a rationally convincing argument for the existence of many universes. A genuinely salient worry is whether multiversers can avoid illicit reuse of empirical evidence in their arguments.
Subjective Bayesians typically find the following objection difficult to answer: some joint probability measures lead to intuitively irrational inductive behavior, even in the long run. Yet well-motivated ways to restrict the set of reasonable prior joint measures have not been forthcoming. In this paper I propose a way to restrict the set of prior joint probability measures in particular inductive settings. My proposal is the following: where there exists some successful inductive method for getting to the truth in some situation, we ought to employ a (joint) probability measure that is inductively successful in that situation, if such a measure exists. In order to show that the restriction is possible to meet in a broad class of cases, I prove a Bayesian Completeness Theorem, which says that for any solvable inductive problem of a certain broad type, there exist probability measures that a Bayesian could use to solve the problem. I then briefly compare the merits of my proposal with two other well-known proposals for constraining the class of admissible subjective probability measures, the 'leave the door ajar' condition and the 'maximize entropy' condition.
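Stated schematically in our own notation (the paper's exact formulation may differ), the proposed restriction says that solvability obligates inductive success:

```latex
% Our notation, not the paper's. \Pi is an inductive problem, P a prior,
% and Bayes(P) the inductive method induced by conditionalizing on P.
\[
  \exists M \,\bigl(M \text{ solves } \Pi\bigr)
  \;\Longrightarrow\;
  \bigl(P \text{ is admissible for } \Pi \text{ only if } \mathrm{Bayes}(P) \text{ solves } \Pi\bigr).
\]
```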
Hans Reichenbach made a bold and original attempt to ‘vindicate’ induction. He proposed a rule, the ‘straight rule’ of induction, which would guarantee inductive success if any rule of induction would. A central problem facing his attempt to vindicate the straight rule is that too many other rules are just as good as the straight rule if our only constraint on what counts as ‘success’ for an inductive rule is that it is ‘asymptotic’, i.e. that it converges in the limit to the true limiting frequency (of some type of outcome O in a sequence of events) whenever such a limiting frequency exists. In this paper I consider the consequences of requiring speed-optimality of asymptotic methods, that is, requiring that inductive methods must get to the truth as quickly as possible. Two main results are proved: (1) the straight rule is speed-optimal; (2) there are (uncountably) many non-speed-optimal asymptotic methods. A further result gives a sufficient but not necessary condition for speed-optimality among asymptotic methods. Some consequences and open questions are then discussed.
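In our notation (not necessarily Reichenbach's or the paper's), the rivalry that the asymptotic requirement leaves open can be stated in two lines:

```latex
% k_n = number of outcomes of type O among the first n observed events.
\[
  s_n = \frac{k_n}{n} \qquad \text{(the straight rule)}
\]
\[
  s'_n = \frac{k_n}{n} + f(n), \qquad f(n) \to 0 .
\]
% If the limiting frequency p = lim k_n / n exists, then s_n -> p and
% s'_n -> p alike, so asymptotic success alone cannot privilege the
% straight rule; speed-optimality is the further constraint meant to
% break the tie.
```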
From one perspective, the fundamental notions of point-set topology have to do with sequences and their limits. A broad class of epistemological questions also appear to be concerned with sequences and their limits. For example, problems of empirical underdetermination—which of a collection of alternative theories is true—have to do with logical properties of sequences of evidence. Underdetermination by evidence is the central problem of Plato’s Meno, of one of Sextus Empiricus’ many skeptical doubts, and arguably it is the idea at work in Kant’s antinomies, for example in his account of the infinite divisibility of matter. Many questions of methodology, or of the logic of discovery, have to do with sequences and their limits. For example, under what conditions do Bayesian procedures, which put a prior probability distribution over alternative hypotheses and possible evidence and form conditional probabilities as new evidence is obtained, converge to the truth? Some analyses of “S knows that p” seem to appeal to properties of actual and possible sequences of something—for example Nozick’s proposal that knowledge of p is belief in p produced by a method that would not produce belief in p if p were false and would produce belief in p if p were true. Even questions about finding the truth under a quite radical relativism, in which truth depends on conceptual scheme and conceptual schemes can be altered, have been analyzed as a kind of limiting property of sequences.
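For reference, the tracking analysis alluded to above is standardly formalized as follows (our rendering, using a box-arrow for the subjunctive conditional "if ... were the case, ... would be the case"):

```latex
% S knows that p, via method M, iff:
\begin{align*}
  &(1)\ p \text{ is true} \\
  &(2)\ S \text{ believes } p \text{ via } M \\
  &(3)\ \neg p \mathrel{\Box\!\!\to} \neg B_M(p)
       \quad \text{(S would not believe } p \text{ if } p \text{ were false)} \\
  &(4)\ p \mathrel{\Box\!\!\to} B_M(p)
       \quad \text{(S would believe } p \text{ if } p \text{ were true)}
\end{align*}
```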
In this paper, we argue for the centrality of countable additivity to realist claims about the convergence of science to the truth. In particular, we show how classical sceptical arguments can be revived when countable additivity is dropped.
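For readers who want the axiom on the table, countable additivity is the requirement below; a stock example of what dropping it permits (our illustration, not necessarily the paper's) is a merely finitely additive 'uniform' prior over the natural numbers.

```latex
% Countable additivity: for pairwise disjoint events A_1, A_2, ...
\[
  P\Bigl(\,\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} P(A_i).
\]
% Merely finite additivity permits a "uniform" prior on the naturals with
% P(\{n\}) = 0 for every n yet P(\mathbb{N}) = 1; priors of this kind are
% what reopen the classical sceptical scenarios.
```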
Diagonalization is a proof technique that formal learning theorists use to show that inductive problems are unsolvable. The technique intuitively requires the construction of the mathematical equivalent of a "Cartesian demon" that fools the scientist no matter how he proceeds. A natural question that arises is whether diagonalization is complete. That is, given an arbitrary unsolvable inductive problem, does an invincible demon exist? The answer to that question turns out to depend upon what axioms of set theory we adopt. The two main results of the paper show that if we assume Zermelo-Fraenkel set theory plus AC and CH, there exist undetermined inductive games. The existence of such games entails that diagonalization is incomplete. On the other hand, if we assume the Axiom of Determinacy, or even a weaker axiom known as Wadge Determinacy, then diagonalization is complete. In order to prove the results, inductive inquiry is viewed as an infinitary game played between the scientist and nature. Such games have been studied extensively by descriptive set theorists. Analogues to the results above are mentioned, in which both the scientist and the demon are restricted to computable strategies. The results exhibit a surprising connection between inductive methodology and the foundations of set theory.
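The diagonal construction can be made vivid with a toy instance (our construction, not one from the paper): for the problem "does this stream contain only finitely many 1s?", a demon that always emits the datum falsifying the learner's current conjecture defeats every learner. If the learner stabilizes on "finitely many", the demon keeps emitting 1s; if it stabilizes on "infinitely many", the demon emits only 0s from then on; if it never stabilizes, it fails by definition.

```python
# Illustrative sketch only: the diagonal "demon" for the problem
# "does this stream contain only finitely many 1s?". The demon emits
# whichever datum makes the learner's current conjecture false.

def demon(learner_step, rounds=20):
    history = []
    for _ in range(rounds):
        conjecture = learner_step(history)   # learner sees the data so far
        datum = 1 if conjecture == "finitely many" else 0
        history.append(datum)
    return history

# A sample learner: guesses "finitely many" until it has seen three 1s.
def naive_learner(history):
    return "finitely many" if history.count(1) < 3 else "infinitely many"

print(demon(naive_learner))
# [1, 1, 1, 0, 0, ...]: the learner settles on "infinitely many",
# but the demon then emits only 0s, so the stream has just three 1s.
```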
In this paper we propose that cosmological fine-tuning arguments, when levied in support of the existence of Intelligent Designers or Multiverses, are much less interesting than they are thought to be. Our skepticism results from tracking the distinction between merely epistemic or logical possibilities on one hand and nonepistemic possibilities, such as either nomological or metaphysical possibilities, on the other. We find that fine-tuning arguments readily conflate epistemic or logical possibilities with nonepistemic possibilities and we think that this leads to treating the search for an explanation of fine-tuning as analogous to standard empirical theorizing about first-order nomological matters, when in fact the two investigational enterprises are profoundly different. Similar conflation occurs when fine-tuning arguments do not carefully distinguish between different interpretations of probabilities within the arguments. Finally, these arguments often rely on spatial analogies, which are often misleading precisely in that they encourage the conflation of epistemic and nonepistemic possibility. When we pay attention to the distinctions between merely epistemic versus nonepistemic modalities and probabilities, the extant arguments in favor of intelligent designers or multiverses, or even for the nonepistemic improbability of fine-tuning, consist of empirically unconstrained speculation concerning relevant nonepistemic modal facts.
Almeder begins by distinguishing between two senses of "knows." What he calls "weak knowledge," although nominally defined in the classical way as justified true belief, does not require truth in the correspondence sense. This follows from the fact that weak knowledge of a proposition p does not require evidence that entails p, whereas knowledge of p in the strong sense requires evidence that entails the truth of p. Further, Almeder argues that any interesting definition of knowledge or truth must allow us to determine which things have these properties, and that a correspondence sense of truth would not allow such determination.
Statistical tests of the primality of some numbers look similar to statistical tests of many nonmathematical, clearly empirical propositions. Yet interpretations of probability prima facie appear to preclude the possibility of statistical tests of mathematical propositions. For example, it is hard to understand how the statement that n is prime could have a frequentist probability other than 0 or 1. On the other hand, subjectivist approaches appear to be saddled with ‘coherence’ constraints on rational probabilities that require rational agents to assign extremal probabilities to logical and mathematical propositions. In the light of these problems, many philosophers have come to think that there must be some way to generalize a Bayesian statistical account. In this article I propose that a classical frequentist approach should be reconsidered. I conclude that we can give a conditional justification of statistical testing of at least some mathematical hypotheses: if statistical tests provide us with reasons to believe or bet on empirical hypotheses in the standard situations, then they also provide us with reasons to believe or bet on mathematical hypotheses in the structurally similar mathematical cases.
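A minimal sketch of one statistical primality test of the kind at issue, the Miller-Rabin test (our choice of example; the article may use another): for composite n, a random witness exposes compositeness with probability at least 3/4, so k passed trials leave at most a (1/4)^k chance of a mistaken "probably prime" verdict.

```python
# Miller-Rabin probabilistic primality test. Each random base a that fails
# to expose n as composite is one more piece of statistical "evidence"
# that n is prime.

import random

def miller_rabin(n, trials=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is certainly composite
    return True            # n passed every trial: "probably prime"

print(miller_rabin(2**61 - 1))  # True: a Mersenne prime passes every trial
print(miller_rabin(2**61 + 1))  # False (with overwhelming probability):
                                # composite, exposed by a random witness
```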
Glock’s most recent book is a critical examination of the views of Quine and Davidson. One of the novel features of the book that will prove helpful to most readers is Glock’s comparative treatment of the two. Glock not only thoroughly articulates their views, he also points out significant differences between their basic assumptions and between the goals driving their various projects. For example, Glock compares Quine’s ’radical translation’ project with Davidson’s ’radical interpretation’ project, pointing out interesting differences in assumptions and purposes. Another unusual feature of the book is that Glock is himself fundamentally at odds with both Quine and Davidson, and holds views that are broadly Wittgensteinian. Thus, unlike most extant books on Quine and Davidson, Glock’s book strives to make manifest various weaknesses of their arguments and views, rather than to show how they can be salvaged from what would appear to be devastating criticisms. However, while fundamentally critical, Glock’s book is not particularly polemical. He clearly and forcefully presents the views that he criticizes, and he defends the positions of his protagonists from criticisms that he takes to be off-target or unfair.