Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is successful in this respect. I expose the general strategy to evade Putnam’s argument, leading to a broader discussion of the outer limits of mechanized induction. I argue that this strategy ultimately still succumbs to diagonalization, reinforcing Putnam’s impossibility claim.
In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff and Levin. This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can.
Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
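For orientation, the Solomonoff-Levin definition discussed in this abstract is standardly written as an a priori semimeasure; the following is the usual textbook formulation, with notation assumed here rather than drawn from the thesis itself:

```latex
% Solomonoff's a priori semimeasure, for a universal monotone machine U:
\[
  \mathbf{M}(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},
\]
% the sum ranging over the minimal programs p whose output begins with
% the finite string x. Prediction then proceeds by conditionalization:
\[
  \mathbf{M}(b \mid x) \;=\; \frac{\mathbf{M}(xb)}{\mathbf{M}(x)}.
\]
```

Up to a multiplicative constant, M is equivalent to a Bayesian mixture over all lower semicomputable semimeasures, which is the sense in which the definition "aims at a mixture of all possible prediction algorithms."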
My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I first specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design problems from current work in history of engineering and technology studies. I then model Carnap’s practice based on those lessons and uncover ways in which Carnap’s technical work in inductive logic takes some of these lessons on board. This shows ways in which Carnap’s philosophical project subtly changes right through his late work on induction, providing an important corrective to interpretations that ignore the work on inductive logic. Specifically, I show that paying attention to the historical details of Carnap’s attempt to apply his work in inductive logic to decision theory and theoretical statistics in the 1950s and 1960s helps us understand how Carnap develops and rearticulates the philosophical point of the practical/theoretical distinction in his late work, thus offering a new interpretation of Carnap’s technical work within the broader context of philosophy of science and analytical philosophy in general.
One way of explaining Rudolf Carnap’s mature philosophical view is by drawing an analogy between his technical projects—like his work on inductive logic—and a certain kind of conceptual engineering. After all, there are many mathematical similarities between Carnap’s work in inductive logic and a number of results from contemporary confirmation theory, statistics and mathematical probability theory. However, in stressing these similarities, the conceptual dependence of Carnap’s inductive logic on his work on semantics is downplayed. Yet it is precisely the conceptual resources made available to Carnap from his work on semantics which allow him to understand his work on inductive logic as a kind of conceptual engineering project. The aim of this paper is to elucidate this engineering analogy in light of Carnap’s mature views through the lens of both inductive logic and semantics.
We characterize those identities and independencies which hold for all probability functions on a unary language satisfying the Principle of Atom Exchangeability. We then show that if this is strengthened to the requirement that Johnson's Sufficientness Principle holds, thus giving Carnap's Continuum of inductive methods for languages with at least two predicates, then new and somewhat inexplicable identities and independencies emerge, the latter even in the case of Carnap's Continuum for the language with just a single predicate.
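The two principles named in this abstract have standard formal statements; the following sketch uses the usual notation (atoms \(\alpha_i\), sample counts \(n_i\)), which is assumed rather than quoted from the paper:

```latex
% Johnson's Sufficientness Principle: the probability assigned to atom
% \alpha_i for the next individual depends only on the number n_i of
% individuals so far observed to satisfy \alpha_i and the sample size n:
\[
  w\bigl(\alpha_i(a_{n+1}) \mid e\bigr) \;=\; f(n_i, n).
\]
% For a language whose predicates generate k \ge 2 atoms, this principle
% forces Carnap's Continuum of inductive methods:
\[
  c_\lambda\bigl(\alpha_i(a_{n+1}) \mid e\bigr)
  \;=\; \frac{n_i + \lambda/k}{n + \lambda},
  \qquad 0 < \lambda \le \infty.
\]
```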
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
This paper starts by summarizing work that philosophers have done in the fields of inductive logic since the 1950s and truth approximation since the 1970s. It then proceeds to interpret and critically evaluate the studies on machine learning within artificial intelligence since the 1980s. Parallels are drawn between identifiability results within formal learning theory and convergence results within Hintikka’s inductive logic. Another comparison is made between the PAC-learning of concepts and the notion of probable approximate truth.
In 1959 Carnap published a probability model that was meant to allow for reasoning by analogy involving two independent properties. Maher (2000) derived a generalized version of this model axiomatically and defended the model's adequacy. It is thus natural to now consider how the model might be extended to the case of more than two properties. A simple extension was published by Hess (1964); this paper argues that it is inadequate. A more sophisticated one was developed jointly by Carnap and Kemeny in the early 1950s but never published; this paper gives the first published description of Carnap and Kemeny's model and argues that it too is inadequate. Since no other way of extending the two-property model is currently known, the conclusion of this paper is that a satisfactory extension to multiple properties requires some new approach.
In this thesis I defend and pursue that line about the foundations of probability theory which has come to be known as "the logicist view about probability", and, in particular, the shape which it took in Carnap's Inductive Logic. Most philosophers who now deal with probability theory claim that Carnap's program of Inductive Logic has failed. The main aim of my thesis is to show that this judgment is based on a fundamental misunderstanding about the nature and the aim of inductive logic. To that end, I follow, in chapter 1, the events in the history of probability, logic and mathematics which led to choosing certain requirements as the requirements which any theory about the foundations of probability must fulfill in order to be acceptable. In chapter 2 I explain how Carnap's inductive logic fulfills those requirements and how the method it uses to give foundations to probability theory can also be used to solve another major problem about probability, namely, the problem of statistical inference. Chapter 3 shows how the ideas explained in chapter 2 were developed, formally, by Carnap, and how Carnap's program can be continued in a particular case, that of reasoning by analogy. Finally, in chapter 4, I attempt to show that subjectivism about probability theory, the rival theory to Carnap's logicism, does not succeed in meeting the most basic requirement for the acceptability of any theory about the foundations of probability.
A basic system of inductive logic; An axiomatic foundation for the logic of inductive generalization; A survey of inductive systems; On the condition of partial exchangeability; Representation theorems of the de Finetti type; De Finetti's generalizations of exchangeability; The structure of probabilities defined on first-order languages; A subjectivist's guide to objective chance.
This paper is a sequel to the joint publication of Scott and Krauss in which the first aspects of a mathematical theory are developed which might be called "First Order Probability Logic". No attempt will be made to present this additional material in a self-contained form. We will use the same notation and terminology as introduced and explained in Scott and Krauss, and we will frequently refer to the theorems stated and proved in the preceding paper. The main objective of this study is to show that the probability of symmetric probability systems may be represented as a "weighted average" of what might be called "product probabilities". We then discuss some applications of our results to Carnap's "Principle of instantial relevance", which plays an important role in his system of inductive logic.
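The "weighted average of product probabilities" mentioned here is of the familiar de Finetti form; for the binary case the standard statement (given for orientation, not quoted from the paper) reads:

```latex
% de Finetti representation: if X_1, X_2, \dots is an exchangeable sequence
% of \{0,1\}-valued random variables, there is a unique mixing measure \mu
% on [0,1] such that
\[
  P(X_1 = x_1, \dots, X_n = x_n)
  \;=\; \int_0^1 \theta^{\,k} (1 - \theta)^{\,n-k} \, d\mu(\theta),
  \qquad k = x_1 + \cdots + x_n.
\]
% That is, every symmetric (exchangeable) probability assignment is a
% weighted average of i.i.d. ("product") probabilities.
```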
The central concept of Carnap's probabilistic theory of induction is a triadic relation, c, the probability or degree of confirmation of the hypothesis, h, on evidence, e. The relation is a purely logical one. The value of c can be computed from a knowledge of h, of e, of the structure of the language, and of the inductive rule to be employed.
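As a concrete illustration of how such a value of c can be computed from the evidence and a choice of inductive rule, here is a minimal sketch of one such rule, Carnap's λ-continuum; the function name and the numerical example are illustrative, not taken from the text:

```python
from fractions import Fraction

def carnap_c(n_i, n, k, lam):
    """Degree of confirmation under Carnap's lambda-continuum:
    c(h, e) = (n_i + lam/k) / (n + lam), where the evidence e reports
    n observed individuals of which n_i bear the predicate occurring
    in h, out of k equally wide predicates in the language."""
    lam = Fraction(lam)
    return (n_i + lam / k) / (n + lam)

# lam = k recovers Carnap's c* (a Laplace-style rule). Example:
# 3 of 4 observed individuals are P, in a language with k = 2 predicates:
print(carnap_c(3, 4, 2, 2))  # (3 + 1)/(4 + 2) = 2/3
```

With no evidence at all (n = 0), the rule returns the symmetric prior 1/k, showing how the language's structure alone fixes the initial value of c.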
Most handbooks on statistics and the theory of probability leave the reader in a mysterious tangle of mathematical rules for computing apparently arbitrarily chosen numerical functions. At first sight, then, a treatise on the Logical Foundations of Probability raises hopes that it will be a guide to clarity in these matters. These hopes are strengthened if the reader remembers that the author, Professor Rudolf Carnap of the University of Chicago, is noted for his thesis that philosophy is the study of the logic of science. On the other hand, a first glance into this book may discourage a reader who, though familiar with statistics, is not familiar with modern logic and its notation. For this reason particularly I shall try to give an account of this book that will enable the interested student to form some opinion of its usefulness to him.
APA PsycNET abstract: This is the first volume of a two-volume work on Probability and Induction. Because the writer holds that probability logic is identical with inductive logic, this work is devoted to philosophical problems concerning the nature of probability and inductive reasoning. The author rejects a statistical frequency basis for probability in favor of a logical relation between two statements or propositions. Probability "is the degree of confirmation of a hypothesis (or conclusion) on the basis of some given evidence (or premises)." Furthermore, all principles and theorems of inductive logic are analytic, and the entire system is to be constructed by means of symbolic logic and semantic methods. This means that the author confines himself to the formalistic procedures of word and symbol systems. The resulting sentence or language structures are presumed to separate off logic from all subjectivist or psychological elements. Despite the abstractionism, the claim is made that if an inductive probability system of logic can be constructed it will have its practical application in mathematical statistics, and in various sciences. 16-page bibliography.
Among the various meanings in which the word ‘probability’ is used in everyday language, in the discussion of scientists, and in the theories of probability, there are especially two which must be clearly distinguished. We shall use for them the terms ‘probability1’ and ‘probability2’. Probability1 is a logical concept, a certain logical relation between two sentences; it is the same as the concept of degree of confirmation. I shall write briefly “c” for “degree of confirmation,” and “c(h, e)” for “the degree of confirmation of the hypothesis h on the evidence e”; the evidence is usually a report on the results of our observations. On the other hand, probability2 is an empirical concept; it is the relative frequency in the long run of one property with respect to another. The controversy between the so-called logical conception of probability, as represented e.g. by Keynes, Jeffreys, and others, and the frequency conception, maintained e.g. by v. Mises and Reichenbach, seems to me futile. These two theories deal with two different probability concepts which are both of great importance for science. Therefore, the theories are not incompatible, but rather supplement each other.