then essentially characterized the hypotheses that mechanical scientists can successfully decide in the limit in terms of arithmetic complexity. These ideas were developed still further by Peter Kugel. In this paper, I extend this approach to obtain characterizations of identification in the limit, identification with bounded mind-changes, and identification in the short run, both for computers and for ideal agents with unbounded computational abilities. The characterization of identification with n mind-changes entails, as a corollary, an exact arithmetic characterization of Putnam's n-trial predicates, which closes a gap of a factor of two in Putnam's original characterization.
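For orientation, the classical benchmark that these characterizations refine can be stated in standard notation (a sketch only; the first equivalence is the textbook Gold-Putnam result, while the mind-change correspondence is given roughly, since its exact form depends on the paper's definitions):

\[ H \text{ is decidable in the limit} \;\Longleftrightarrow\; H \in \Delta^0_2 \]

and, roughly, decidability with at most n mind changes corresponds to the n-th level of the Ershov difference hierarchy over the recursively enumerable sets; the paper's corollary makes such a correspondence exact for Putnam's n-trial predicates.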
This thesis examines the prospects for mechanical procedures that can identify true, complete, universal, first-order logical theories on the basis of a complete enumeration of true atomic sentences. A sense of identification is defined that is...
Worst case complexity analyses of algorithms are sometimes held to be less informative about the real difficulty of computation than are expected complexity analyses. We show that the two most common representations of problem solving in cognitive science each admit algorithms that have constant expected complexity, and for one of these representations we obtain constant expected complexity bounds under a variety of probability measures.
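To illustrate the flavor of such a result with a deliberately simple stand-in (my own toy example, not one of the paper's two representations): a linear scan for a witness has worst-case cost proportional to n, yet if each candidate independently qualifies with fixed probability p, the expected number of probes is bounded by the constant 1/p, independent of input size.

    import random

    def first_match(items):
        """Scan until the first True entry; worst case O(n) probes."""
        probes = 0
        for x in items:
            probes += 1
            if x:
                return probes
        return probes

    # Empirically, expected probes stay near 1/p = 4 however large n gets.
    random.seed(0)
    p, trials = 0.25, 2000
    for n in (10, 100, 10000):
        avg = sum(first_match([random.random() < p for _ in range(n)])
                  for _ in range(trials)) / trials
        print(n, round(avg, 2))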
Many distinct theories are compatible with current experience. Scientific realists recommend that we choose the simplest. Anti-realists object that such appeals to “Ockham’s razor” cannot be truth-conducive, since they lead us astray in complex worlds. I argue, on behalf of the realist, that always preferring the simplest theory compatible with experience is necessary for efficient convergence to the truth in the long run, even though it may point in the wrong direction in the short run. Efficiency is a matter of minimizing errors or retractions prior to convergence to the truth.
A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Anti-realists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution biased toward simplicity, for such a distribution merely embodies the bias at issue without explaining its efficacy. In this note, I argue, on the basis of computational learning theory, that a fixed simplicity bias is necessary if inquiry is to converge to the right answer efficiently, whatever the right answer might be. Efficiency is understood in the sense of minimizing the least fixed bound on retractions or errors prior to convergence.
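A toy version of the retraction argument may help (my own illustrative setup, not the formal framework of these papers): nature emits a stream in which new "effects" can appear at any time, the question is how many effects there are in total, and a retraction is any change of conjectured answer. The Ockham method conjectures exactly the count observed so far; a rival that speculates one effect ahead can be forced by a patient nature into two retractions per effect instead of one.

    def retractions(answers):
        """Count reversals of opinion in a sequence of conjectures."""
        return sum(1 for a, b in zip(answers, answers[1:]) if a != b)

    def ockham(stream):
        """Conjecture exactly the number of effects seen so far."""
        seen, out = 0, []
        for obs in stream:
            seen += obs
            out.append(seen)
        return out

    def eager(stream, patience=2):
        """Violate Ockham: after each effect, bet one more is coming;
        fall back to the observed count after `patience` quiet steps."""
        seen, quiet, out = 0, patience, []
        for obs in stream:
            if obs:
                seen, quiet = seen + 1, 0
            else:
                quiet += 1
            out.append(seen if quiet >= patience else seen + 1)
        return out

    # Nature waits out the eager method before each new effect.
    stream = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
    print(retractions(ockham(stream)))  # 3: one retraction per effect
    print(retractions(eager(stream)))   # 6: two retractions per effect

Both methods converge to the true answer 3 on this stream; the difference lies entirely in the cost paid on the way.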
Scientific methods may be viewed as procedures for converging to the true answer to a given empirical question. Typically, such methods converge to the truth only if certain empirical presuppositions are satisfied, which raises the question whether the presuppositions are satisfied. Another scientific method can be applied to this empirical question, and so forth, occasioning an empirical regress. So there is an obvious question about the point of such a regress. This paper explains how to assess the methodological worth of a methodological regress by solving for the strongest sense of single-method performance that can be achieved given that such a regress exists. Several types of regresses are...
Philosophical logicians proposing theories of rational belief revision have had little to say about whether their proposals assist or impede the agent's ability to reliably arrive at the truth as his beliefs change through time. On the other hand, reliability is the central concern of formal learning theory. In this paper we investigate the belief revision theory of Alchourrón, Gärdenfors, and Makinson from a learning theoretic point of view.
We argue that uncomputability and classical scepticism are both reflections of inductive underdetermination, so that Church's thesis and Hume's problem ought to receive equal emphasis in a balanced approach to the philosophy of induction. As an illustration of such an approach, we investigate how uncomputable the predictions of a hypothesis can be if the hypothesis is to be reliably investigated by a computable scientific method.
Learning is the acquisition of new knowledge and skills. It spans a range of processes from practice and rote memorization to the invention of entirely novel abilities and scientific theories that extend past experience. Learning is not restricted to humans: machines and animals can learn, social organizations can learn, and a genetic population can learn through natural selection. In this broad sense, learning is adaptive change, whether in behavior or in belief.
The approach to scientific methodology developed in my recent book The Logic of Reliable Inquiry (LRI) shares many general features with that summarized in Larry Laudan’s concurrently published collection of papers Beyond Positivism and Relativism (BPR). Nonetheless, this fact might not be apparent, as my own work emphasizes mathematical theorems, whereas Laudan’s draws primarily upon historiography. It is, therefore, of some interest to discuss the extent of the agreement and the significance of the differences. More generally, the discussion will (I) provide a logical analysis of the instrumental significance of empirical meta-methodology and (II) redefine the role of logic in a post-positivistic, naturalized approach to epistemology and scientific method.
This paper presents a new explanation of how preferring the simplest theory compatible with experience assists one in finding the true answer to a scientific question when the answers are theories or models. Inquiry is portrayed as an unending game between science and nature in which the scientist aims to converge to the true theory on the basis of accumulating information. Simplicity is a topological invariant reflecting sequences of theory choices that nature can force an arbitrary, convergent scientist to produce. It is demonstrated that among the methods that converge to the truth in an empirical problem, the ones that do so with a minimum number of reversals of opinion prior to convergence are exactly the ones that prefer simple theories. The approach explains not only simplicity tastes in model selection, but aspects of theory testing and the unwillingness of natural science to break symmetries without a reason.
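In rough symbols (my own paraphrase, not the paper's official topological definition), the complexity rank of an answer A given evidence e can be thought of as

\[ c(A \mid e) \;=\; \sup \{\, k : \text{nature can force any convergent method, after } e, \text{ through } k \text{ successive answer changes beginning with } A \,\} \]

so that preferring the simplest theory amounts to choosing an answer that begins a forcible sequence of minimal length.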
Belief revision theory aims to describe how one should change one’s beliefs when they are contradicted by newly input information. The guiding principle of belief revision theory is to change one’s prior beliefs as little as possible in order to maintain consistency with the new information. Learning theory focuses, instead, on learning power: the ability to arrive at true beliefs in a wide range of possible environments. The goal of this paper is to bridge the two approaches by providing a learning theoretic analysis of the learning power of belief revision methods proposed by Spohn, Boutilier, Darwiche and Pearl, and others. The results indicate that learning power depends sharply on details of the methods. Hence, learning power can provide a well-motivated constraint on the design and implementation of concrete belief revision methods.
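A minimal sketch of one such family of methods may fix intuitions (a simplified, Spohn-flavored rendering under my own assumptions, not the paper's formal apparatus): an epistemic state is a map from possible worlds to ordinal ranks, belief is truth in all minimal-rank worlds, and revision by evidence E makes the best E-worlds rank zero.

    def believe(kappa):
        """The belief set: the worlds of minimal rank."""
        m = min(kappa.values())
        return {w for w, r in kappa.items() if r == m}

    def condition(kappa, E):
        """Stringent revision: discard all non-E worlds outright."""
        m = min(kappa[w] for w in kappa if w in E)
        return {w: kappa[w] - m for w in kappa if w in E}

    def soft_revision(kappa, E, alpha=1):
        """Softer, Spohn-flavored update (simplified): promote E-worlds
        so their best member has rank 0, push non-E worlds up by alpha,
        but retain them in the state."""
        m = min(kappa[w] for w in kappa if w in E)
        return {w: kappa[w] - m if w in E else kappa[w] + alpha
                for w in kappa}

    # Lower rank = more plausible; the evidence rules out sun.
    kappa = {"sun": 0, "rain": 1, "snow": 2}
    print(believe(condition(kappa, {"rain", "snow"})))      # {'rain'}
    print(believe(soft_revision(kappa, {"rain", "snow"})))  # {'rain'}

The learning-theoretic question is then which such update policies, iterated along a data stream, converge to true beliefs; whether refuted worlds are discarded or merely demoted is exactly the kind of detail on which the paper's results show learning power to depend.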
Here is the usual way philosophers think about science and induction. Scientists do many things— aspire, probe, theorize, conclude, retract, and refine— but successful research culminates in a published research report that presents an argument for some empirical conclusion. In mathematics and logic there are sound deductive arguments that fully justify their conclusions, but such proofs are unavailable in the empirical domain because empirical hypotheses outrun the evidence adduced for them. Inductive skeptics insist that such conclusions cannot be justified. But “justification” is a vague term— if empirical conclusions cannot be established fully, as mathematical conclusions are, perhaps they are justified in the sense that they are partially supported or confirmed by the available evidence. To respond to the skeptic, one merely has to explicate the concept of confirmation or partial justification in a systematic manner that agrees, more or less, with common usage and to observe that our scientific conclusions are confirmed in the explicated sense. This process of explication is widely thought to culminate in some version of Bayesian confirmation theory.
Philosophy of science, statistics, and machine learning all recommend the selection of simple theories or models on the basis of empirical data, where simplicity has something to do with minimizing independent entities, principles, causes, or equational coefficients. This intuitive preference for simplicity is called Ockham's razor, after the fourteenth century theologian and logician William of Ockham. But in spite of its intuitive appeal, how could Ockham's razor help one find the true theory? For, in an updated version of Plato's Meno paradox, if we already know that the truth is simple, we don't need Ockham's help. And if we don't already know that the truth is simple, what entitles us to assume that it is?
We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams’ conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geo-logic, which fills the entire unit cube, and then we project the upper surfaces of the geo-logical cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical principle of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
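The paradox the rules are built to avoid is easy to exhibit (the standard toy case, not the paper's probalogic construction): under the naive rule "accept whatever has probability above a threshold t", each proposition "ticket i loses" in a fair n-ticket lottery is accepted, yet their conjunction, which closure under entailment also demands, has probability zero.

    from fractions import Fraction

    n = 1000                       # fair lottery, exactly one winner
    t = Fraction(95, 100)          # naive acceptance threshold

    p_i_loses = Fraction(n - 1, n)   # P(ticket i loses) = 999/1000
    p_all_lose = Fraction(0)         # some ticket must win

    print(p_i_loses > t)    # True: each 'ticket i loses' is accepted
    print(p_all_lose > t)   # False: the conjunction is rejected, so
                            # threshold acceptance is not closed under
                            # classical entailment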
Explaining the connection, if any, between simplicity and truth is among the deepest problems facing the philosophy of science, statistics, and machine learning. Say that an efficient truth finding method minimizes worst case costs en route to converging to the true answer to a theory choice problem. Let the costs considered include the number of times a false answer is selected, the number of times opinion is reversed, and the times at which the reversals occur. It is demonstrated that (1) always choosing the simplest theory compatible with experience, and (2) hanging onto it while it remains simplest, is both necessary and sufficient for efficiency.
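In rough notation (my paraphrase; the official definitions also compare the times at which reversals occur), write M(e↾n) for the answer M gives on the first n data and count retractions as

\[ \mathrm{ret}(M, e) \;=\; \bigl|\{\, n : M(e \upharpoonright (n+1)) \neq M(e \upharpoonright n) \,\}\bigr| ; \]

M is then efficient just in case M converges to the true answer and no convergent rival achieves a lower worst-case value of ret, or earlier reversal times, over the environments compatible with the evidence received so far.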
I propose that empirical procedures, like computational procedures, are justified in terms of truth-finding efficiency. I contrast the idea with more standard philosophies of science and illustrate it by deriving Ockham's razor from the aim of minimizing dramatic changes of opinion en route to the truth.
The problem of induction reminds us that science cannot wait for empirical hypotheses to be verified and Duhem’s problem reminds us that we cannot expect full refutations either. We must settle for something less. The shape of this something less depends on which features of full verification and refutation we choose to emphasize. If we conceive of verification and refutation as arguments in which evidence entails the hypothesis or its negation, then the central problem of the philosophy of science is to explicate a relation of confirmation or support that is weaker than full entailment but which serves, nonetheless, to justify empirical conclusions.
Belief revision theory concerns methods for reformulating an agent's epistemic state when the agent's beliefs are refuted by new information. The usual guiding principle in the design of such methods is to preserve as much of the agent's epistemic state as possible when the state is revised. Learning theoretic research focuses, instead, on a learning method's reliability or ability to converge to true, informative beliefs over a wide range of possible environments. This paper bridges the two perspectives by assessing the reliability of several proposed belief revision operators. Stringent conceptions of minimal change are shown to occasion a limitation called inductive amnesia: they can predict the future only if they cannot remember the past. Avoidance of inductive amnesia can therefore function as a plausible and hitherto unrecognized constraint on the design of belief revision operators.
This paper places formal learning theory in a broader philosophical context and provides a glimpse of what the philosophy of induction looks like from a learning-theoretic point of view. Formal learning theory is compared with other standard approaches to the philosophy of induction. Thereafter, we present some results and examples indicating its unique character and philosophical interest, with special attention to its unified perspective on inductive uncertainty and uncomputability.
In this paper, we argue for the centrality of countable additivity to realist claims about the convergence of science to the truth. In particular, we show how classical sceptical arguments can be revived when countable additivity is dropped.
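The flavor of such a revival can be conveyed by the classic de Finetti lottery (a standard example offered here for orientation, not the paper's own construction): let P be finitely but not countably additive on the natural numbers with

\[ P(\{n\}) = 0 \ \text{for every } n, \qquad P(\mathbb{N}) = 1 . \]

Then, roughly, no finite amount of evidence eliminating rival hypotheses ever raises the probability of the true hypothesis above zero, so posterior probability cannot converge to certainty in the truth; the countably additive convergence theorems that realists invoke have no finitely additive analogue here.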
There is a popular view that the alleged meaning shifts resulting from scientific revolutions are somehow incompatible with the formulation of general norms for scientific inquiry. We construct methods that can be shown to be maximally reliable at getting to the truth when the truth changes in response to the state of the scientist or his society.
Convergent realists desire scientific methods that converge reliably to informative, true theories over a wide range of theoretical possibilities. Much attention has been paid to the problem of induction from quantifier-free data. In this paper, we employ the techniques of formal learning theory and model theory to explore the reliable inference of theories from data containing alternating quantifiers. We obtain a hierarchy of inductive problems depending on the quantifier prefix complexity of the formulas that constitute the data, and we provide bounds relating the quantifier prefix complexity of the data to the quantifier prefix complexity of the theories that can be reliably inferred from such data without background knowledge. We also examine the question whether there are theories with mixed quantifiers that can be reliably inferred with closed, universal formulas in the data, but not without.
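The prefix classes in question are the standard ones from logic (ordinary definitions, not special to this paper): a formula in prenex form is classified by the alternation pattern of its quantifier blocks,

\[ \Sigma_0 = \Pi_0 = \text{the quantifier-free formulas}, \qquad \Sigma_{n+1} = \exists \bar{x}\, \Pi_n, \qquad \Pi_{n+1} = \forall \bar{x}\, \Sigma_n . \]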
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly every choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning theoretic setting can be generalized or made more realistic.
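In symbols (my transcription of the two criteria, writing T_m for the theory conjectured at stage m and Th(𝔄) for the complete true theory of the structure 𝔄):

\[ \text{EA convergence:} \qquad \exists n\, \forall m \geq n \;\; T_m = \mathrm{Th}(\mathfrak{A}) \]
\[ \text{AE convergence:} \qquad \forall \varphi\, \exists n\, \forall m \geq n \;\; \bigl( \varphi \in T_m \iff \mathfrak{A} \models \varphi \bigr) \]

The names record the order of the leading quantifiers, and EA convergence entails AE convergence but not conversely.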
Formal learning theory is an approach to the study of inductive inference that has been developed by computer scientists. In this paper, I discuss the relevance of formal learning theory to such standard topics in the philosophy of science as underdetermination, realism, scientific progress, methodology, bounded rationality, the problem of induction, the logic of discovery, the theory of knowledge, the philosophy of artificial intelligence, and the philosophy of psychology.
There is renewed interest in the logic of discovery as well as in the position that there is no reason for philosophers to bother with it. This essay shows that the traditional, philosophical arguments for the latter position are bankrupt. Moreover, no interesting defense of the philosophical irrelevance or impossibility of the logic of discovery can be formulated or defended in isolation from computation-theoretic considerations.