This book is devoted to a different proposal: that the logical structure of the scientist's method should guarantee eventual arrival at the truth, given the scientist's background assumptions.
Clark Glymour, Richard Scheines, Peter Spirtes and Kevin Kelly. Discovering Causal Structure: Artificial Intelligence, Philosophy of Science, and Statistical Modeling.
This paper concerns the extent to which uncertain propositional reasoning can track probabilistic reasoning, and addresses kinematic problems that extend the familiar lottery paradox. An acceptance rule assigns to each Bayesian credal state p a propositional belief revision method B_p, which specifies an initial belief state B_p(⊤) that is revised to the new propositional belief state B_p(E) upon receipt of information E. An acceptance rule tracks Bayesian conditioning when B_p(E) = B_{p|E}(⊤) for every E such that p(E) > 0; namely, when acceptance followed by propositional belief revision equals Bayesian conditioning followed by acceptance. Standard proposals for uncertain acceptance and belief revision do not track Bayesian conditioning. The "Lockean" rule that accepts propositions above a probability threshold is subject to the familiar lottery paradox (Kyburg 1961), and we show that it is also subject to new and more stubborn paradoxes when the tracking property is taken into account. Moreover, we show that the familiar AGM approach to belief revision (Harper, Synthese 30(1-2):221-262, 1975; Alchourrón et al., J Symb Log 50:510-530, 1985) cannot be realized in a sensible way by any uncertain acceptance rule that tracks Bayesian conditioning. Finally, we present a plausible, alternative approach that tracks Bayesian conditioning and avoids all of the paradoxes. It combines an odds-based acceptance rule proposed originally by Levi (1996) with a non-AGM belief revision method proposed originally by Shoham (1987).
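To make the lottery paradox concrete, here is a minimal sketch (our illustration, not code from the paper) that applies a Lockean threshold rule to a hypothetical fair 100-ticket lottery with a threshold of 0.95: every proposition of the form "ticket i loses" is accepted, and so is "some ticket wins", yet the accepted propositions are jointly inconsistent.

```python
N = 100      # fair lottery with exactly one winning ticket (assumed setup)
t = 0.95     # Lockean acceptance threshold (assumed value)

# Each proposition "ticket i loses" has probability (N - 1) / N = 0.99 > t,
# so the Lockean rule accepts all N of them.
accepted_losses = [(N - 1) / N > t for _ in range(N)]

# "Some ticket wins" has probability 1, so it is accepted too.
accepted_some_winner = 1.0 > t

# But the conjunction of the accepted propositions ("every ticket loses")
# has probability 0: the accepted belief set is jointly inconsistent.
p_every_ticket_loses = 0.0

print(all(accepted_losses), accepted_some_winner, p_every_ticket_loses)
# True True 0.0  -- the lottery paradox for threshold-based acceptance
```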
We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams’ conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geo-logic, which fills the entire unit cube, and then we project the upper surfaces of the geo-logical cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical principle of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
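For orientation, the validity criterion standardly associated with Adams' conditional logic can be stated as follows; this is a textbook-style gloss rather than the paper's own probalogical semantics, and the notation is ours.

```latex
% Adams-style probabilistic validity, stated for orientation only; the paper
% derives this logic from its acceptance rules rather than assuming it.
\[
  A_1, \ldots, A_n \models B
  \iff
  \forall \epsilon > 0\; \exists \delta > 0\; \forall p \,
  \Bigl( \max_i \bigl(1 - p(A_i)\bigr) < \delta
         \;\Rightarrow\; 1 - p(B) < \epsilon \Bigr),
\]
where a conditional premise or conclusion $A \Rightarrow C$ is evaluated by the
conditional probability $p(C \mid A)$ rather than by the material conditional.
```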
Ockham’s razor is the characteristic scientific penchant for simpler, more testable, and more unified theories. Glymour’s early work on confirmation theory eloquently stressed the rhetorical plausibility of Ockham’s razor in scientific arguments. His subsequent, seminal research on causal discovery still concerns methods with a strong bias toward simpler causal models, and it also comes with a story about reliability—the methods are guaranteed to converge to true causal structure in the limit. However, there is a familiar gap between convergent reliability and scientific rhetoric: convergence in the long run is compatible with any conclusion in the short run. For that reason, Carnap suggested that the proper sense of reliability for scientific inference should lie somewhere between short-run reliability and mere convergence in the limit. One natural such concept is straightest possible convergence to the truth, where straightness is explicated in terms of minimizing reversals of opinion and cycles of opinion prior to convergence. We close the gap between scientific rhetoric and scientific reliability by showing that Ockham’s razor is necessary for cycle-optimal convergence to the truth, and that patiently waiting for information to resolve conflicts among simplest hypotheses is necessary for reversal-optimal convergence to the truth.
Explaining the connection, if any, between simplicity and truth is among the deepest problems facing the philosophy of science, statistics, and machine learning. Say that an efficient truth-finding method minimizes worst-case costs en route to converging to the true answer to a theory choice problem. Let the costs considered include the number of times a false answer is selected, the number of times opinion is reversed, and the times at which the reversals occur. It is demonstrated that (1) always choosing the simplest theory compatible with experience, and (2) hanging onto it while it remains simplest, is both necessary and sufficient for efficiency.
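The retraction bookkeeping behind the efficiency claim can be conveyed with a toy counting problem (a hypothetical setting of our own, not one of the paper's results): the question is how many effects will ever appear, the simplest answer compatible with experience is the number of effects observed so far, and each change of announced answer counts as a retraction.

```python
def ockham(history):
    """Conjecture the simplest answer compatible with the data seen so far:
    the number of effects observed to date."""
    return sum(history)

def eager(history):
    """A non-Ockham competitor (hypothetical) that guesses one effect beyond
    the evidence until the data have been quiet for a while."""
    seen = sum(history)
    recently_quiet = len(history) >= 3 and not any(history[-3:])
    return seen if recently_quiet else seen + 1

def retractions(method, data_stream):
    """Count how often the method's announced answer changes."""
    guesses = [method(data_stream[:i + 1]) for i in range(len(data_stream))]
    return sum(1 for a, b in zip(guesses, guesses[1:]) if a != b)

# A world in which two effects appear, at stages 4 and 9, and then nothing more.
world = [0, 0, 0, 1, 0, 0, 0, 0, 1] + [0] * 20

print(retractions(ockham, world))  # 2: one retraction per newly observed effect
print(retractions(eager, world))   # 5: the extra guesses must also be taken back
```

In this toy run the Ockham method retracts exactly once per newly observed effect, while the competitor that guesses ahead of the data must additionally retract each of its anticipatory guesses.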
Synchronic norms of theory choice, a traditional concern in scientific methodology, restrict the theories one can choose in light of given information. Diachronic norms of theory change, as studied in belief revision, restrict how one should change one’s current beliefs in light of new information. Learning norms concern how best to arrive at true beliefs. In this paper, we undertake to forge some rigorous logical relations between the three topics. Concerning learning norms, we explicate inductive truth conduciveness in terms of optimally direct convergence to the truth, where optimal directness is explicated in terms of reversals and cycles of opinion prior to convergence. Concerning norms of theory choice, we explicate Ockham’s razor and related principles of choice in terms of the information topology of the empirical problem context and show that the principles are necessary for reversal or cycle optimal convergence to the truth. Concerning norms of theory change, we weaken the standard principles of AGM belief revision theory in intuitive ways that are also necessary for reversal or cycle optimal convergence. Then we show that some of our weakened principles of change entail corresponding principles of choice, completing the triangle of relations between choice, change, and learning.
I propose that empirical procedures, like computational procedures, are justified in terms of truth-finding efficiency. I contrast the idea with more standard philosophies of science and illustrate it by deriving Ockham's razor from the aim of minimizing dramatic changes of opinion en route to the truth.
Simplicity has long been recognized as an apparent mark of truth in science, but it is difficult to explain why simplicity should be accorded such weight. This chapter examines some standard, statistical explanations of the role of simplicity in scientific method and argues that none of them explains, without circularity, how a reliance on simplicity could be conducive to finding true models or theories. The discussion then turns to a less familiar approach that does explain, in a sense, the elusive connection between simplicity and truth. The idea is that simplicity does not point at or reliably indicate the truth but, rather, keeps inquiry on the cognitively most direct path to the truth.
Belief revision theory concerns methods for reformulating an agent's epistemic state when the agent's beliefs are refuted by new information. The usual guiding principle in the design of such methods is to preserve as much of the agent's epistemic state as possible when the state is revised. Learning theoretic research focuses, instead, on a learning method's reliability or ability to converge to true, informative beliefs over a wide range of possible environments. This paper bridges the two perspectives by assessing the reliability of several proposed belief revision operators. Stringent conceptions of minimal change are shown to occasion a limitation called inductive amnesia: they can predict the future only if they cannot remember the past. Avoidance of inductive amnesia can therefore function as a plausible and hitherto unrecognized constraint on the design of belief revision operators.
Here is the usual way philosophers think about science and induction. Scientists do many things—aspire, probe, theorize, conclude, retract, and refine—but successful research culminates in a published research report that presents an argument for some empirical conclusion. In mathematics and logic there are sound deductive arguments that fully justify their conclusions, but such proofs are unavailable in the empirical domain because empirical hypotheses outrun the evidence adduced for them. Inductive skeptics insist that such conclusions cannot be justified. But “justification” is a vague term—if empirical conclusions cannot be established fully, as mathematical conclusions are, perhaps they are justified in the sense that they are partially supported or confirmed by the available evidence. To respond to the skeptic, one merely has to explicate the concept of confirmation or partial justification in a systematic manner that agrees, more or less, with common usage and to observe that our scientific conclusions are confirmed in the explicated sense. This process of explication is widely thought to culminate in some version of Bayesian confirmation theory.
This paper places formal learning theory in a broader philosophical context and provides a glimpse of what the philosophy of induction looks like from a learning-theoretic point of view. Formal learning theory is compared with other standard approaches to the philosophy of induction. Thereafter, we present some results and examples indicating its unique character and philosophical interest, with special attention to its unified perspective on inductive uncertainty and uncomputability.
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly every choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning theoretic setting can be generalized or made more realistic.
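Schematically, and with notation chosen here rather than taken from the paper, the two criteria can be written as follows, where T_m is the theory conjectured after the m-th datum and Th(M) is the complete true theory of the structure M under investigation.

```latex
% Schematic statement of the two convergence criteria; notation is ours.
\begin{align*}
  \text{EA convergence:}\quad & \exists n\, \forall m \geq n \quad
      T_m = \mathrm{Th}(\mathcal{M}),\\
  \text{AE convergence:}\quad & \forall \varphi\, \exists n\, \forall m \geq n \quad
      \bigl( \varphi \in T_m \iff \mathcal{M} \models \varphi \bigr).
\end{align*}
```

The names record the quantifier order: EA requires a single time after which the whole conjectured theory is correct, while AE allows the settling time to vary from sentence to sentence.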
Philosophical logicians proposing theories of rational belief revision have had little to say about whether their proposals assist or impede the agent's ability to reliably arrive at the truth as his beliefs change through time. On the other hand, reliability is the central concern of formal learning theory. In this paper we investigate the belief revision theory of Alchourrón, Gärdenfors, and Makinson from a learning theoretic point of view.
then essentially characterized the hypotheses that mechanical scientists can successfully decide in the limit in terms of arithmetic complexity. These ideas were developed still further by Peter Kugel [4]. In this paper, I extend this approach to obtain characterizations of identification in the limit, identification with bounded mind-changes, and identification in the short run, both for computers and for ideal agents with unbounded computational abilities. The characterization of identification with n mind-changes entails, as a corollary, an exact arithmetic characterization of Putnam's n-trial predicates, which closes a gap of a factor of two in Putnam's original characterization [12].
I show that a version of Ockham’s razor (a preference for simple answers) is advantageous in both domains when infallible inference is infeasible. A familiar response to the empirical problem..
Belief revision theory aims to describe how one should change one’s beliefs when they are contradicted by newly input information. The guiding principle of belief revision theory is to change one’s prior beliefs as little as possible in order to maintain consistency with the new information. Learning theory focuses, instead, on learning power: the ability to arrive at true beliefs in a wide range of possible environments. The goal of this paper is to bridge the two approaches by providing a learning theoretic analysis of the learning power of belief revision methods proposed by Spohn, Boutilier, Darwiche and Pearl, and others. The results indicate that learning power depends sharply on details of the methods. Hence, learning power can provide a well-motivated constraint on the design and implementation of concrete belief revision methods.
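As a sketch of the kind of method at issue, the following implements a Spohn-style ranking (ordinal conditional function) revision on a toy two-variable example; the worlds, initial ranks, and boost parameter are hypothetical and are not drawn from the paper.

```python
# Worlds are assignments to two binary variables (rain, wet); ranks encode
# graded disbelief: rank-0 worlds are the doxastically possible ones.
kappa = {
    ("rain", "wet"): 2,
    ("rain", "dry"): 3,
    ("no-rain", "wet"): 1,
    ("no-rain", "dry"): 0,
}

def rank(kappa, prop):
    """Rank of a proposition = minimum rank of the worlds where it holds."""
    worlds = [w for w in kappa if prop(w)]
    return min(kappa[w] for w in worlds) if worlds else float("inf")

def spohn_revise(kappa, prop, boost=2):
    """Spohn-style conditionalization: shift the prop-worlds down so their
    best rank is 0, and push the complement up by `boost` (assumed value)."""
    r_in, r_out = rank(kappa, prop), rank(kappa, lambda w: not prop(w))
    return {
        w: (kappa[w] - r_in) if prop(w) else (kappa[w] - r_out + boost)
        for w in kappa
    }

def beliefs(kappa):
    """The belief state: the set of worlds of rank 0."""
    return {w for w, r in kappa.items() if r == 0}

raining = lambda w: w[0] == "rain"
print(beliefs(kappa))                         # {('no-rain', 'dry')}
print(beliefs(spohn_revise(kappa, raining)))  # {('rain', 'wet')}
```

The point of the paper's analysis is that whether such operators can reliably learn in the limit depends on exactly these implementation details, for instance on how the ranks of refuted worlds are shifted.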
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal as the sample size increases any finite number of times. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
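The flavor of such flipping can be seen even in the simplest dependence question, as in the following toy simulation (our own sketch, using a plain Fisher z correlation test rather than any of the causal discovery procedures studied in the paper): a weak linear Gaussian dependence is typically judged absent at small sample sizes and present at larger ones, so the conclusion announced on the data changes as the sample grows.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def correlated_sample(n, beta=0.08):
    """Linear Gaussian data with a weak dependence of y on x (assumed strength)."""
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_normal(n)
    return x, y

def dependence_accepted(x, y, alpha=0.05):
    """Fisher z test for zero correlation: accept 'dependent' iff p < alpha."""
    r = np.corrcoef(x, y)[0, 1]
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(len(x) - 3)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < alpha

x, y = correlated_sample(20000)
for n in (50, 200, 1000, 5000, 20000):
    print(n, dependence_accepted(x[:n], y[:n]))
# Small prefixes of the sample typically yield False (independence retained),
# large prefixes yield True, so the announced conclusion tends to flip as the
# sample size grows.
```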
There is renewed interest in the logic of discovery as well as in the position that there is no reason for philosophers to bother with it. This essay shows that the traditional, philosophical arguments for the latter position are bankrupt. Moreover, no interesting defense of the philosophical irrelevance or impossibility of the logic of discovery can be formulated or defended in isolation from computation-theoretic considerations.
In this paper, we argue for the centrality of countable additivity to realist claims about the convergence of science to the truth. In particular, we show how classical sceptical arguments can be revived when countable additivity is dropped.
This paper presents a new explanation of how preferring the simplest theory compatible with experience assists one in finding the true answer to a scientific question when the answers are theories or models. Inquiry is portrayed as an unending game between science and nature in which the scientist aims to converge to the true theory on the basis of accumulating information. Simplicity is a topological invariant reflecting sequences of theory choices that nature can force an arbitrary, convergent scientist to produce. It is demonstrated that among the methods that converge to the truth in an empirical problem, the ones that do so with a minimum number of reversals of opinion prior to convergence are exactly the ones that prefer simple theories. The approach explains not only simplicity tastes in model selection, but aspects of theory testing and the unwillingness of natural science to break symmetries without a reason.
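The forcing idea behind this game-theoretic and topological account can be sketched with the usual toy counting problem (our gloss, not the paper's formulation), as in the following schematic argument.

```latex
% A schematic forcing argument in a toy "counting" problem: the question is
% how many discrete effects will ever be observed, and M is any method that
% converges to the true answer in every possible world.
\begin{enumerate}
  \item Nature presents effect-free data; since $M$ converges in the world
        with no effects, $M$ eventually commits to the answer ``$0$''.
  \item Nature then reveals a new effect and again presents effect-free data;
        since $M$ converges in the world with exactly one effect, $M$
        eventually commits to ``$1$'', and so on.
  \item After $n$ rounds $M$ has been forced through the answers
        $0, 1, \ldots, n$, that is, through $n$ reversals of opinion, while
        the data presented remain consistent with some possible world.
\end{enumerate}
% The Ockham method that always answers with the number of effects seen so far
% incurs exactly these forced reversals and no others.
```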
I have applied a fairly general, learning theoretic perspective to some questions raised by Reichenbach's positions on induction and discovery. This is appropriate in an examination of the significance of Reichenbach's work, since the learning-theoretic perspective is to some degree part of Reichenbach's reliabilist legacy. I have argued that Reichenbach's positivism and his infatuation with probabilities are both irrelevant to his views on induction, which are principally grounded in the notion of limiting reliability. I have suggested that limiting reliability is still a formidable basis for the formulation of methodological norms, particularly when reliability cannot possibly be had in the short run, so that refined judgments about evidential support must depend upon measure-theoretic choices having nothing to do in the short run with the truth of the hypothesis under investigation. To illustrate the generality of Reichenbach's program, I showed how it can be applied to methods that aim to solve arbitrary assessment and discovery problems in various senses. In this generalized Reichenbachian setting, we can characterize the intrinsic complexity of reliable inductive inference in terms of topological complexity. Finally, I let Reichenbach's theory of induction have the last say about hypothetico-deductive method.
The problem of induction reminds us that science cannot wait for empirical hypotheses to be verified, and Duhem’s problem reminds us that we cannot expect full refutations either. We must settle for something less. The shape of this something less depends on which features of full verification and refutation we choose to emphasize. If we conceive of verification and refutation as arguments in which evidence entails the hypothesis or its negation, then the central problem of the philosophy of science is to explicate a relation of confirmation or support that is weaker than full entailment but which serves, nonetheless, to justify empirical conclusions.
Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem, which states that scientists who heed Ockham’s razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random strategies, and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors, and the second extension concerns retractions in chance, times of retractions in chance, and chances of errors.
This chapter presents a new semantics for inductive empirical knowledge. The epistemic agent is represented concretely as a learner who processes new inputs through time and who forms new beliefs from those inputs by means of a concrete, computable learning program. The agent’s belief state is represented hyper-intensionally as a set of time-indexed sentences. Knowledge is interpreted as avoidance of error in the limit and as having converged to true belief from the present time onward. Familiar topics are re-examined within the semantics, such as inductive skepticism, the logic of discovery, Duhem’s problem, the articulation of theories by auxiliary hypotheses, the role of serendipity in scientific knowledge, Fitch’s paradox, deductive closure of knowability, whether one can know inductively that one knows inductively, whether one can know inductively that one does not know inductively, and whether expert instruction can spread common inductive knowledge—as opposed to mere, true belief—through a community of gullible pupils.
We argue that uncomputability and classical scepticism are both reflections of inductive underdetermination, so that Church's thesis and Hume's problem ought to receive equal emphasis in a balanced approach to the philosophy of induction. As an illustration of such an approach, we investigate how uncomputable the predictions of a hypothesis can be if the hypothesis is to be reliably investigated by a computable scientific method.
Convergent realists desire scientific methods that converge reliably to informative, true theories over a wide range of theoretical possibilities. Much attention has been paid to the problem of induction from quantifier-free data. In this paper, we employ the techniques of formal learning theory and model theory to explore the reliable inference of theories from data containing alternating quantifiers. We obtain a hierarchy of inductive problems depending on the quantifier prefix complexity of the formulas that constitute the data, and we provide bounds relating the quantifier prefix complexity of the data to the quantifier prefix complexity of the theories that can be reliably inferred from such data without background knowledge. We also examine the question whether there are theories with mixed quantifiers that can be reliably inferred with closed, universal formulas in the data, but not without.
A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Anti-realists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution biased toward simplicity, for such a distribution merely embodies the bias at issue without explaining its efficacy. In this note, I argue, on the basis of computational learning theory, that a fixed simplicity bias is necessary if inquiry is to converge to the right answer efficiently, whatever the right answer might be. Efficiency is understood in the sense of minimizing the least fixed bound on retractions or errors prior to convergence.
Many distinct theories are compatible with current experience. Scientific realists recommend that we choose the simplest. Anti-realists object that such appeals to “Ockham’s razor” cannot be truth-conducive, since they lead us astray in complex worlds. I argue, on behalf of the realist, that always preferring the simplest theory compatible with experience is necessary for efficient convergence to the truth in the long run, even though it may point in the wrong direction in the short run. Efficiency is a matter of minimizing errors or retractions prior to convergence to the truth.