then essentially characterized the hypotheses that mechanical scientists can successfully decide in the limit in terms of arithmetic complexity. These ideas were developed still further by Peter Kugel. In this paper, I extend this approach to obtain characterizations of identification in the limit, identification with bounded mind-changes, and identification in the short run, both for computers and for ideal agents with unbounded computational abilities. The characterization of identification with n mind-changes entails, as a corollary, an exact arithmetic characterization of Putnam's n-trial predicates, which closes a gap of a factor of two in Putnam's original characterization.
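To make the limiting-style criteria concrete, here is a minimal sketch (my own illustration, not a construction from the paper) of a decider for the hypothesis "every datum in the stream is even" that succeeds in the limit using at most one mind change:

    # Hypothetical sketch: a limiting decider with at most one mind change.
    # After each datum it outputs a conjecture; identification in the limit
    # only requires that the conjectures eventually stabilize on the truth.
    def limiting_decider(stream):
        verdict = True                      # conjecture the hypothesis at first
        for datum in stream:
            if verdict and datum % 2 != 0:
                verdict = False             # the single permitted mind change
            yield verdict

    print(list(limiting_decider([2, 4, 5, 8])))  # [True, True, False, False]

On an all-even stream the conjectures stabilize on True immediately; a single counterexample forces exactly one retraction, after which the verdict is stable. Mind-change-bounded identification generalizes exactly this pattern.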
This thesis examines the prospects for mechanical procedures that can identify true, complete, universal, first-order logical theories on the basis of a complete enumeration of true atomic sentences. A sense of identification is defined that is…
Philosophy of science, statistics, and machine learning all recommend the selection of simple theories or models on the basis of empirical data, where simplicity has something to do with minimizing independent entities, principles, causes, or equational coefficients. This intuitive preference for simplicity is called Ockham's razor, after the fourteenth-century theologian and logician William of Ockham. But in spite of its intuitive appeal, how could Ockham's razor help one find the true theory? For, in an updated version of Plato's Meno paradox, if we already know that the truth is simple, we don't need Ockham's help. And if we don't already know that the truth is simple, what entitles us to assume that it is?
We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams' conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geo-logic, which fills the entire unit cube, and then we project the upper surfaces of the geo-logical cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical principle of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
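For readers who want the lottery paradox itself in miniature, here is a toy numeric illustration (mine, not the acceptance rules defended in the paper): under threshold-based acceptance, each proposition "ticket i loses" clears the threshold, yet their conjunction is certainly false.

    # Toy illustration of the lottery paradox under threshold acceptance.
    n = 1000                               # fair lottery, exactly one winner
    threshold = 0.99                       # accept anything with P > 0.99
    p_ticket_i_loses = 1 - 1 / n           # 0.999 for each individual ticket
    p_all_tickets_lose = 0.0               # some ticket must win
    print(p_ticket_i_loses > threshold)    # True: accept each "ticket i loses"
    print(p_all_tickets_lose > threshold)  # False: the conjunction is rejected

Closure under classical entailment would force accepting the conjunction of all the accepted propositions, which here has probability zero; avoiding exactly this clash without ad hoc restrictions is what the paper's rules aim to do.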
I propose that empirical procedures, like computational procedures, are justified in terms of truth-finding efficiency. I contrast the idea with more standard philosophies of science and illustrate it by deriving Ockham's razor from the aim of minimizing dramatic changes of opinion en route to the truth.
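As a toy rendering of that idea (my construction; the paper's derivation is far more general): let hypotheses say how many distinct effects will ever appear, and score a learner by how often it changes its conjecture. The Ockham learner, which posits only the effects seen so far, retracts exactly once per genuinely new effect.

    # Toy sketch: Ockham's razor as retraction (mind-change) minimization.
    def ockham_learner(stream):
        seen = set()
        for datum in stream:
            seen.add(datum)
            yield len(seen)      # simplest hypothesis consistent with the data

    def retractions(conjectures):
        cs = list(conjectures)
        return sum(a != b for a, b in zip(cs, cs[1:]))

    stream = ["red", "red", "green", "green", "blue"]
    print(retractions(ockham_learner(stream)))  # 2: one per newly seen effect

A learner that leaps ahead to a more complex conjecture can, on suitably chosen data streams, be forced into strictly more retractions; that is the sense in which the razor is truth-finding efficient.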
Belief revision theory concerns methods for reformulating an agent's epistemic state when the agent's beliefs are refuted by new information. The usual guiding principle in the design of such methods is to preserve as much of the agent's epistemic state as possible when the state is revised. Learning theoretic research focuses, instead, on a learning method's reliability or ability to converge to true, informative beliefs over a wide range of possible environments. This paper bridges the two perspectives by assessing the reliability of several proposed belief revision operators. Stringent conceptions of minimal change are shown to occasion a limitation called inductive amnesia: they can predict the future only if they cannot remember the past. Avoidance of inductive amnesia can therefore function as a plausible and hitherto unrecognized constraint on the design of belief revision operators.
This paper places formal learning theory in a broader philosophical context and provides a glimpse of what the philosophy of induction looks like from a learning-theoretic point of view. Formal learning theory is compared with other standard approaches to the philosophy of induction. Thereafter, we present some results and examples indicating its unique character and philosophical interest, with special attention to its unified perspective on inductive uncertainty and uncomputability.
In this paper, we argue for the centrality of countable additivity to realist claims about the convergence of science to the truth. In particular, we show how classical sceptical arguments can be revived when countable additivity is dropped.
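The formal point at stake is a textbook one (not a result specific to this paper): no countably additive probability measure can be uniform over a countably infinite partition of hypotheses, so "unbiased" sceptical priors require dropping countable additivity. In LaTeX:

    % If P(H_n) = c for every cell of a countable partition {H_1, H_2, ...},
    % countable additivity yields a contradiction:
    1 \;=\; P\Big(\bigcup_{n=1}^{\infty} H_n\Big)
      \;=\; \sum_{n=1}^{\infty} P(H_n)
      \;=\; \sum_{n=1}^{\infty} c \;\in\; \{0, \infty\}.

Merely finitely additive priors escape the contradiction, and it is with such priors that the classical sceptical scenarios can be rebuilt.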
There is a popular view that the alleged meaning shifts resulting from scientific revolutions are somehow incompatible with the formulation of general norms for scientific inquiry. We construct methods that can be shown to be maximally reliable at getting to the truth when the truth changes in response to the state of the scientist or his society.
Convergent realists desire scientific methods that converge reliably to informative, true theories over a wide range of theoretical possibilities. Much attention has been paid to the problem of induction from quantifier-free data. In this paper, we employ the techniques of formal learning theory and model theory to explore the reliable inference of theories from data containing alternating quantifiers. We obtain a hierarchy of inductive problems depending on the quantifier prefix complexity of the formulas that constitute the data, and we provide bounds relating the quantifier prefix complexity of the data to the quantifier prefix complexity of the theories that can be reliably inferred from such data without background knowledge. We also examine whether there are theories with mixed quantifiers that can be reliably inferred with closed, universal formulas in the data, but not without.
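For orientation, the prefix classes involved are the standard ones (standard notation, not the paper's own definitions): formulas are graded by blocks of alternating quantifiers in front of a quantifier-free matrix \varphi.

    % Standard quantifier-prefix classes over a quantifier-free matrix \varphi:
    \Sigma_1:\ \exists x_1 \cdots \exists x_k\, \varphi     % purely existential
    \Pi_1:\ \forall x_1 \cdots \forall x_k\, \varphi        % purely universal
    \Sigma_2:\ \exists \bar{x}\,\forall \bar{y}\, \varphi   % one alternation
    \Pi_2:\ \forall \bar{x}\,\exists \bar{y}\, \varphi

The hierarchy described in the abstract relates the prefix class of the data formulas to the prefix class of the theories reliably inferable from them.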
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly every choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning theoretic setting can be generalized or made more realistic.
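The quantifier order behind the two names can be spelled out (my paraphrase of the abstract's informal definitions): writing C_m for the theorist's conjecture at stage m and T for the complete true theory,

    % EA ("exists-all") convergence: some one time after which the whole
    % conjectured theory is correct:
    \exists n\, \forall m \ge n:\ C_m = T.
    % AE ("all-exists") convergence: each sentence is settled at some time,
    % possibly a different time for each sentence:
    \forall s\, \exists n\, \forall m \ge n:\ C_m \text{ settles } s \text{ as } T \text{ does}.

EA convergence implies AE convergence but not conversely, which is why the two criteria can come apart in difficulty.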
Formal learning theory is an approach to the study of inductive inference that has been developed by computer scientists. In this paper, I discuss the relevance of formal learning theory to such standard topics in the philosophy of science as underdetermination, realism, scientific progress, methodology, bounded rationality, the problem of induction, the logic of discovery, the theory of knowledge, the philosophy of artificial intelligence, and the philosophy of psychology.
There is renewed interest in the logic of discovery as well as in the position that there is no reason for philosophers to bother with it. This essay shows that the traditional philosophical arguments for the latter position are bankrupt. Moreover, no interesting defense of the philosophical irrelevance or impossibility of the logic of discovery can be formulated or defended in isolation from computation-theoretic considerations.