In ‘Induction and Natural Kinds’, I proposed a solution to the problem of induction according to which our use of inductive inference is reliable because it is grounded in the natural kind structure of the world. When we infer that unobserved members of a kind will have the same properties as observed members of the kind, we are right because all members of the kind possess the same essential properties. The claim that the existence of natural kinds is what grounds reliable use of induction is based on an inference to the best explanation of the success of our inductive practices. As such, the argument for the existence of natural kinds employs a form of ampliative inference. But induction is likewise a form of ampliative inference. Given both of these facts, my account of the reliability of induction is subject to the objection that it provides a circular justification of induction, since it employs an ampliative inference to justify an ampliative inference. In this paper, I respond to the objection of circularity by arguing that what justifies induction is not the inference to the best explanation of its reliability. The ground of induction is the natural kinds themselves.
Pure Inductive Logic is the study of rational probability treated as a branch of mathematical logic. This monograph, the first devoted to this approach, brings together the key results from the past seventy years, plus the main contributions of the authors and their collaborators over the last decade, to present a comprehensive account of the discipline within a single unified context.
This book brings together eleven case studies of inductive risk (the chance that scientific inference is incorrect) that range over a wide variety of scientific contexts and fields. The chapters are designed to illustrate the pervasiveness of inductive risk, assist scientists and policymakers in responding to it, and productively move theoretical discussions of the topic forward.
Although epistemic values have become widely accepted as part of scientific reasoning, non-epistemic values have been largely relegated to the "external" parts of science (the selection of hypotheses, restrictions on methodologies, and the use of scientific technologies). I argue that because of inductive risk, or the risk of error, non-epistemic values are required in science wherever non-epistemic consequences of error should be considered. I use examples from dioxin studies to illustrate how non-epistemic consequences of error can and should be considered in the internal stages of science: choice of methodology, characterization of data, and interpretation of results.
In a recent work, Popper claims to have solved the problem of induction. In this paper I argue that Popper fails both to solve the problem, and to formulate the problem properly. I argue, however, that there are aspects of Popper's approach which, when strengthened and developed, do provide a solution to at least an important part of the problem of induction, along somewhat Popperian lines. This proposed solution requires, and leads to, a new theory of the role of simplicity in science, which may have helpful implications for science itself, thus actually stimulating scientific progress.
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
Originally published in 1969. This book explains what is wrong with the traditional methodology of "inductive" reasoning and shows that the alternative scheme of reasoning associated with Whewell, Peirce and Popper can give the scientist a useful insight into the way he thinks.
The paper sketches an ontological solution to an epistemological problem in the philosophy of science. Taking the work of Hilary Kornblith and Brian Ellis as a point of departure, it presents a realist solution to the Humean problem of induction, which is based on a scientific essentialist interpretation of the principle of the uniformity of nature. More specifically, it is argued that use of inductive inference in science is rationally justified because of the existence of real, natural kinds of things, which are characterized as such by the essential properties which all members of a kind necessarily possess in common. The proposed response to inductive scepticism combines the insights of epistemic naturalism with a metaphysical outlook that is due to scientific realism.
How induction was understood took a substantial turn during the Renaissance. At the beginning, induction was understood as it had been throughout the medieval period, as a kind of propositional inference that is stronger the more it approximates deduction. During the Renaissance, an older understanding, one prevalent in antiquity, was rediscovered and adopted. By this understanding, induction identifies defining characteristics using a process of comparing and contrasting. Important participants in the change were Jean Buridan, humanists such as Lorenzo Valla and Rudolph Agricola, Paduan Aristotelians such as Agostino Nifo, Jacopo Zabarella, and members of the medical faculty, writers on philosophy of mind such as the Englishman John Case, writers of reasoning handbooks, and Francis Bacon.
From what norms does the ethics of belief derive its oughts, its attributions of virtues and vices, responsibilities and irresponsibilities, its permissioning and censuring? Since my inductive risk account is inspired by pragmatism, and this method understands epistemology as the theory of inquiry, the paper will try to explain what the aims and tasks are for an ethics of belief, or project of guidance, which best fits with this understanding of epistemology. More specifically, this chapter approaches the ethics of belief from a focus on responsible risk management, where doxastic responsibility is understood in terms of the degree of riskiness of agents’ doxastic strategies, a riskiness which is in turn most objectively measured through accordance with or violation of inductive norms. Doxastic responsibility is attributable to agents on the basis of the epistemic riskiness of the process or strategy of inquiry salient in the etiology of their belief or in their maintenance of what they believe or accept.
I would like to assume that Reichenbach's distinction between Justification and Discovery lives on, and to seek arguments in his texts that would justify its relevance in this field. The persuasive force of these arguments transcends the contingent circumstances apart from which their genesis and local transmission cannot be made understandable. I shall begin by characterizing the context distinction as employed by Reichenbach in "Experience and Prediction" to differentiate between epistemology and science (1). Following Thomas Nickles and Kevin T. Kelly, one can distinguish two meanings of the context distinction in Reichenbach's work. One meaning, which is primarily to be found in the earlier writings, conceives of scientific discoveries as potential objects of epistemological justification. The other meaning, typical of the later writings, removes scientific discoveries from the possible domain of epistemology. The genesis of both meanings, which demonstrates the complexity of the relationships obtaining between epistemology and science, can be made understandable by appealing to the historical context (2). Both meanings present Reichenbach with the task of establishing the autonomy of epistemology through the justification of induction. Finally, I shall expound this justification and address some of its elements of rationality characterizing philosophy of science (3).
In recent years, the argument from inductive risk against value-free science has enjoyed a revival. This paper investigates and clarifies this argument by means of a case study: neonicotinoid research. Sect. 1 argues that the argument from inductive risk is best conceptualised as a claim about scientists’ communicative obligations. Sect. 2 then shows why this argument is inapplicable to “public communication”. Sect. 3 outlines non-epistemic reasons why non-epistemic values should not play a role in public communicative contexts. Sect. 4 analyses the implications of these arguments both for the specific case of neonicotinoid research and for understanding the limits of the argument from inductive risk. Sect. 5 sketches the broader implications of my claims for understanding the “Value Free Ideal” for science.
Inductive methods can be used to estimate the accuracies of inductive methods. Call a method immodest if it estimates that it is at least as accurate as any of its rivals. It would be unreasonable to adopt any but an immodest method. Under certain assumptions, exactly one of Carnap's lambda-methods is immodest. This may seem to solve the problem of choosing among the lambda-methods; but sometimes the immodest lambda-method is λ = 0, which it would not be reasonable to adopt. We should therefore reconsider the assumptions that led to this conclusion: for instance, the measure of accuracy.
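For orientation, Carnap's λ-methods form a one-parameter family of inductive rules; in the standard textbook formulation (not quoted from the paper), the probability that the next individual observed is of type i, given n observations of which n_i were of type i out of k possible types, is

\[
c_\lambda(\text{type } i \mid n_1, \ldots, n_k) \;=\; \frac{n_i + \lambda/k}{n + \lambda}, \qquad 0 \le \lambda < \infty,
\]

so λ = 0 is the straight rule that simply projects the observed relative frequency n_i/n, while larger values of λ give more weight to the a priori value 1/k.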
Views which deny that there are necessary connections between distinct existences have often been criticized for leading to inductive skepticism. If there is no glue holding the world together then there seems to be no basis on which to infer from past to future. However, deniers of necessary connections have typically been unconcerned. After all, they say, everyone has a problem with induction. But, if we look at the connection between induction and explanation, we can develop the problem of induction in a way that hits deniers of necessary connections, but not their opponents. The denier of necessary connections faces an `internal' problem with induction -- skepticism about important inductive inferences naturally flows from their position in a way that it doesn't for those who accept necessary connections. This is a major problem, perhaps a fatal one, for the denial of necessary connections.
Safety accounts of knowledge claim, roughly, that knowledge that p requires that one's belief that p could not have easily been false. Such accounts have been very popular in recent epistemology. However, one serious problem safety accounts have to confront is to explain why certain lottery‐related beliefs are not knowledge, without excluding obvious instances of inductive knowledge. We argue that the significance of this objection has hitherto been underappreciated by proponents of safety. We discuss Duncan Pritchard's recent solution to the problem and argue that it fails. More importantly, the problem reaches deeper and poses a threat to any current safety accounts that require a belief's modal stability in close possibilities (as well as safety accounts that appeal to ‘normality’). We end by arguing that ways out of the problem require substantial reconstruction for a safety‐based account of knowledge.
This paper formulates some paradoxes of inductive knowledge. Two responses in particular are explored: According to the first sort of theory, one is able to know in advance that certain observations will not be made unless a law exists. According to the other, this sort of knowledge is not available until after the observations have been made. Certain natural assumptions, such as the idea that the observations are just as informative as each other, the idea that they are independent, and that they increase your knowledge monotonically (among others) are given precise formulations. Some surprising consequences of these assumptions are drawn, and their ramifications for the two theories examined. Finally, a simple model of inductive knowledge is offered, and independently derived from other principles concerning the interaction of knowledge and counterfactuals.
Eliminative induction is a method for finding the truth by using evidence to eliminate false competitors. It is often characterized as "induction by means of deduction"; the accumulating evidence eliminates false hypotheses by logically contradicting them, while the true hypothesis logically entails the evidence, or at least remains logically consistent with it. If enough evidence is available to eliminate all but the most implausible competitors of a hypothesis, then (and only then) will the hypothesis become highly confirmed. I will argue that, with regard to the evaluation of hypotheses, Bayesian inductive inference is essentially a probabilistic form of induction by elimination. Bayesian induction is an extension of eliminativism to cases where, rather than contradict the evidence, false hypotheses imply that the evidence is very unlikely, much less likely than the evidence would be if some competing hypothesis were true. This is not, I think, how Bayesian induction is usually understood. The recent book by Howson and Urbach, for example, provides an excellent, comprehensive explanation and defense of the Bayesian approach; but this book scarcely remarks on Bayesian induction's eliminative nature. Nevertheless, the very essence of Bayesian induction is the refutation of false competitors of a true hypothesis, or so I will argue.
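The probabilistic form of elimination described here can be pictured with Bayes' theorem (a standard formulation, not taken from the book under review): for competing hypotheses h_1, …, h_k,

\[
p(h_j \mid e) \;=\; \frac{p(e \mid h_j)\, p(h_j)}{\sum_{i} p(e \mid h_i)\, p(h_i)},
\]

so a false competitor that makes the evidence very unlikely (p(e | h_j) close to 0) has its posterior driven toward zero even though it is not strictly contradicted; outright logical elimination is the limiting case p(e | h_j) = 0.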
The pessimistic induction (PI) plays an important role in the contemporary realism/anti-realism debate in philosophy of science. But there is some disagreement about the structure and aim of the argument. And a number of scholars have noted that there is more than one type of PI in the philosophical literature. I review four different versions of the PI. I aim to show that PIs have been appealed to by philosophers of science for a variety of reasons. Even some realists have appealed to a PI. My goal is to advance our understanding of what the various PIs can teach us about science and the threat posed by PIs to scientific realism.
I set up two axiomatic theories of inductive support within the framework of Kolmogorovian probability theory. I call these theories ‘Popperian theories of inductive support’ because I think that their specific axioms express the core meaning of the word ‘inductive support’ as used by Popper (and, presumably, by many others, including some inductivists). As is to be expected from Popperian theories of inductive support, the main theorem of each of them is an anti-induction theorem, the stronger one of them saying, in fact, that the relation of inductive support is identical with the empty relation. It seems to me that an axiomatic treatment of the idea(s) of inductive support within orthodox probability theory could be worthwhile for at least three reasons. Firstly, an axiomatic treatment demands of the builder of a theory of inductive support that he state clearly in the form of specific axioms what he means by ‘inductive support’. Perhaps the discussion of the new anti-induction proofs of Karl Popper and David Miller would have been more fruitful if they had given an explicit definition of what inductive support is or should be. Secondly, an axiomatic treatment of the idea(s) of inductive support within Kolmogorovian probability theory might be accommodating to those philosophers who do not completely trust Popperian probability theory for having theorems which orthodox Kolmogorovian probability theory lacks; a transparent derivation of anti-induction theorems within a Kolmogorovian frame might bring additional persuasive power to the original anti-induction proofs of Popper and Miller, developed within the framework of Popperian probability theory. Thirdly, one of the main advantages of the axiomatic method is that it facilitates criticism of its products: the axiomatic theories. On the one hand, it is much easier than usual to check whether those statements which have been distinguished as theorems really are theorems of the theory under examination. On the other hand, after we have convinced ourselves that these statements are indeed theorems, we can take a critical look at the axioms—especially if we have a negative attitude towards one of the theorems. Since anti-induction theorems are not popular at all, the adequacy of some of the axioms they are derived from will certainly be doubted. If doubt should lead to a search for alternative axioms, sheer negative attitudes might develop into constructive criticism and even lead to new discoveries. I proceed as follows. In section 1, I start with a small but sufficiently strong axiomatic theory of deductive dependence, closely following Popper and Miller (1987). In section 2, I extend that starting theory to an elementary Kolmogorovian theory of unconditional probability, which I extend, in section 3, to an elementary Kolmogorovian theory of conditional probability, which in its turn gets extended, in section 4, to a standard theory of probabilistic dependence, which also gets extended, in section 5, to a standard theory of probabilistic support, the main theorem of which will be a theorem about the incompatibility of probabilistic support and deductive independence. In section 6, I extend the theory of probabilistic support to a weak Popperian theory of inductive support, which I extend, in section 7, to a strong Popperian theory of inductive support. In section 8, I reconsider Popper's anti-inductivist theses in the light of the anti-induction theorems.
I conclude the paper with a short discussion of possible objections to our anti-induction theorems, paying special attention to the topic of deductive relevance, which has so far been neglected in the discussion of the anti-induction proofs of Popper and Miller.
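For orientation, the flavour of such anti-induction theorems can be conveyed by the original Popper–Miller observation, stated here in its familiar textbook form rather than in the axiomatic setting of the paper: any hypothesis h factors as h ≡ (h ∨ e) ∧ (h ∨ ¬e), where the first conjunct is deductively entailed by the evidence e and the second conjunct is the part of h that goes beyond e; for that latter part one has

\[
p(h \vee \neg e \mid e) - p(h \vee \neg e) \;\le\; 0 \quad \text{whenever } 0 < p(e) < 1,
\]

so whatever probabilistic support e gives to h attaches entirely to the deductive component, never to the properly inductive remainder.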
In the mid-eighteenth century David Hume argued that successful prediction tells us nothing about the truth of the predicting theory. But physical theory routinely predicts the values of observable magnitudes within very small ranges of error. The chance of this sort of predictive success without a true theory suggests that Hume's argument is flawed. However, Colin Howson argues that there is no flaw and examines the implications of this disturbing conclusion; he also offers a solution to one of the central problems of Western philosophy, the problem of induction.
In this article, I argue that arguments from the history of science against scientific realism, like the arguments advanced by P. Kyle Stanford and Peter Vickers, are fallacious. The so-called Old Induction, like Vickers's, and New Induction, like Stanford's, are both guilty of confirmation bias—specifically, of cherry-picking evidence that allegedly challenges scientific realism while ignoring evidence to the contrary. I also show that the historical episodes that Stanford adduces in support of his New Induction are indeterminate between a pessimistic and an optimistic interpretation. For these reasons, these arguments are fallacious, and thus do not pose a serious challenge to scientific realism.
Without inductive reasoning, we couldn't generalize from one instance to another, derive scientific hypotheses, or predict that the sun will rise again tomorrow morning. Despite the widespread nature of inductive reasoning, books on this topic are rare. Indeed, this is the first book on the psychology of inductive reasoning in twenty years. The chapters survey recent advances in the study of inductive reasoning and address questions about how it develops, the role of knowledge in induction, how best to model people's reasoning, and how induction relates to other forms of thinking. Written by experts in philosophy, developmental science, cognitive psychology, and computational modeling, the contributions here will be of interest to a general cognitive science audience as well as to those with a more specialized interest in the study of thinking.
According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important problems arising within the Bayesian approach to scientific methodology is the choice of prior probabilities. Here this problem is considered in detail with respect to two applications of the Bayesian approach: (1) the theory of inductive probabilities (TIP) developed by Rudolf Carnap and other epistemologists and (2) the analysis of the multinomial inferences provided by Bayesian statistics (BS).
I review prominent historical arguments against scientific realism to indicate how they display a systematic overshooting in the conclusions drawn from the historical evidence. The root of the overshooting can be located in some critical, undue presuppositions regarding realism. I will highlight these presuppositions in connection with both Laudan’s ‘Old induction’ and Stanford’s New induction, and then delineate a minimal realist view that does without the problematic presuppositions.
In this paper I adduce a new argument in support of the claim that IBE is an autonomous form of inference, based on a familiar yet surprisingly under-discussed problem for Hume’s theory of induction. I then use some insights thereby gleaned to argue for the claim that induction is really IBE, and draw some normative conclusions.
I want to examine a possible solution to the problem of induction, one which, as far as I know, has not been discussed elsewhere. The solution makes crucial use of the notion of objective natural necessity. For the purposes of this discussion, I shall assume that this notion is coherent. I am aware that this assumption is controversial, but I do not have space to examine the issue here.
An account of inductive inference is presented which addresses both its epistemological and metaphysical dimensions. It is argued that inductive knowledge is possible by virtue of the fit between our innate psychological capacities and the causal structure of the world.
John Foster; VI*—Induction, Explanation and Natural Necessity, Proceedings of the Aristotelian Society, Volume 83, Issue 1, 1 June 1983, Pages 87–102.
Aristotle said that induction (epagōgē) is a proceeding from particulars to a universal, and the definition has been conventional ever since. But there is an ambiguity here. Induction in the Scholastic and the (so-called) Humean tradition has presumed that Aristotle meant going from particular statements to universal statements. But the alternate view, namely that Aristotle meant going from particular things to universal ideas, prevailed all through antiquity and then again from the time of Francis Bacon until the mid-nineteenth century. Recent scholarship is so steeped in the first-mentioned tradition that we have virtually forgotten the other. In this essay McCaskey seeks to recover that alternate tradition, a tradition whose leading theoreticians were William Whewell, Francis Bacon, Socrates, and in fact Aristotle himself. The examination is both historical and philosophical. The first part of the essay fills out the history. The latter part examines the most mature of the philosophies in the Socratic tradition, specifically Bacon’s and Whewell’s. After tracing out this tradition, McCaskey shows how this alternate view of induction is indeed employed in science, as exemplified by several instances taken from actual scientific practice. In this manner, McCaskey proposes to us that the Humean problem of induction is merely an artifact of a bad conception of induction and that a return to the Socratic conception might be warranted.
Inductive Logic is a ‘thematic compilation’ by Avi Sion. It collects in one volume many (though not all) of the essays that he has written on this subject over a period of some 23 years, which all demonstrate the possibility and conditions of validity of human knowledge, the utility and reliability of human cognitive means when properly used, contrary to the skeptical assumptions that are nowadays fashionable. A new essay, The Logic of Analogy, was added in 2022.
I present a new thermodynamic argument for the existence of God. Naturalistic physics provides evidence for the failure of induction, because it provides evidence that the past is not at all what you think it is, and your existence is just a momentary fluctuation. The fact that you are not a momentary fluctuation thus provides evidence for the existence of God – God would ensure that the past is roughly what we think it is, and you have been in existence for roughly the amount of time you think you have. I don’t have a definitive way for the atheist to refute this argument, but I give one suggestion that relies on physics-based simplicity considerations. I close with an epistemological discussion of self-undermining arguments.
According to David Hume, the concepts of causation and probability are to be understood in terms of the concepts of similarity and repetition. In this book, it is shown that they are to be understood in terms of the concept of continuity. One corollary is that there is no legitimate basis for skepticism concerning the legitimacy of inductive inference. Another is that anti-realism about theoretical entities is misconceived.
CL diagrams (CL abbreviating Cubus Logicus) are inspired by J.C. Lange’s logic machine from 1714. In recent times, Lange’s diagrams have been used for extended syllogistics, bitstring semantics, analogical reasoning and much else. The paper presents a method for testing statistical syllogisms (also called proportional syllogisms or inductive syllogisms) by using CL diagrams.
As our epistemic ambitions grow, both everyday and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The paper asks how we justify the contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach ‘elsewheres’ in space and time or deploy ML models in non-benign environments. The paper argues that the only viable version of the contract can be based on optimality (instead of on reliability, which cannot be justified without circularity) and aligns this position with Schurz’s optimality justification. It is shown that when dealing with inaccessible/unstable ground-truths (‘elsewheres’ and non-benign targets), the optimality justification undergoes a slight change, which should reflect critically on our epistemic ambitions. Therefore, the study of ML robustness should involve not only heuristics that lead to acceptable accuracies on testing sets. The justification of human inductive predictions or generalisations about the uniformity between ML models and targets should be included as well. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
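The "single experimental paradigm" referred to here is the familiar train/test split; a minimal sketch of it, using scikit-learn and a stock toy dataset purely for illustration, looks roughly as follows.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Split the available data into a training set and a held-out testing set.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Fit the model on the training data only.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Measure how well the trained model generalises to unseen samples; if the
    # accuracy is deemed acceptable, the "a posteriori contract" is taken to
    # license deployment to the target environment.
    print(accuracy_score(y_test, model.predict(X_test)))

The inductive step the paper is concerned with, inferring a uniformity between performance on the testing set and performance in the target environment, lies entirely outside this code.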
This paper starts by summarizing work that philosophers have done in the fields of inductive logic since the 1950s and truth approximation since the 1970s. It then proceeds to interpret and critically evaluate the studies on machine learning within artificial intelligence since the 1980s. Parallels are drawn between identifiability results within formal learning theory and convergence results within Hintikka’s inductive logic. Another comparison is made between the PAC-learning of concepts and the notion of probable approximate truth.
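The PAC comparison alluded to here can be anchored by the standard guarantee for a finite hypothesis class H (a textbook bound in the realizable case, not taken from the paper): a consistent learner that sees

\[
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
\]

independent examples outputs, with probability at least 1 − δ, a hypothesis whose error is at most ε, i.e. a hypothesis that is probably approximately correct.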
In this paper, I outline a reductio against Stanford’s “New Induction” on the History of Science, which is an inductive argument against scientific realism that is based on what Stanford (2006) calls “the Problem of Unconceived Alternatives” (PUA). From the supposition that Stanford’s New Induction on the History of Science is cogent, and the parallel New Induction on the History of Philosophy (Mizrahi 2014), it follows that scientific antirealism is not worthy of belief. I also show that denying a key premise in the reductio only forces antirealists who endorse Stanford’s New Induction on the History of Science into a dilemma: either antirealism falls under the axe of Stanford’s New Induction on the History of Science or it falls under the axe of the New Induction on the History of Philosophy.
The standard backward-induction reasoning in a game like the centipede assumes that the players maintain a common belief in rationality throughout the game. But that is a dubious assumption. Suppose the first player X didn't terminate the game in the first round; what would the second player Y think then? Since the backward-induction argument says X should terminate the game, and it is supposed to be a sound argument, Y might be entitled to doubt X's rationality. Alternatively, Y might doubt that X believes Y is rational, or that X believes Y believes X is rational, or Y might have some higher-order doubt. X’s deviant first move might therefore cause a breakdown in common belief in rationality. Once that goes, the entire argument fails. The argument also assumes that the players act rationally at each stage of the game, even if this stage could not be reached by rational play. But it is also dubious to assume that past irrationality never exerts a corrupting influence on present play. However, the backward-induction argument can be reconstructed for the centipede game on a more secure basis. It may be implausible to assume a common belief in rationality throughout the game, however the game might go, but the argument requires less than this. The standard idealisations in game theory certainly allow us to assume a common belief in rationality at the beginning of the game. They also allow us to assume this common belief persists so long as no one makes an irrational move. That is enough for the argument to go through.
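The backward-induction computation the argument relies on is itself mechanical; a minimal sketch with hypothetical payoffs (not taken from the paper) shows how "take" gets selected at every node of a centipede game.

    # Backward induction in a toy centipede game (hypothetical payoffs).
    # At node t, player t % 2 moves: "take" ends the game with payoffs[t],
    # "pass" hands the move on; passing at the final node yields final_payoffs.
    payoffs = [(1, 0), (0, 2), (3, 1), (2, 4), (5, 3), (4, 6)]  # (player 0, player 1)
    final_payoffs = (7, 5)

    continuation = final_payoffs
    plan = []
    for t in reversed(range(len(payoffs))):
        mover = t % 2
        # A rational mover takes whenever taking is at least as good as passing.
        if payoffs[t][mover] >= continuation[mover]:
            continuation, choice = payoffs[t], "take"
        else:
            choice = "pass"
        plan.append((t, choice))

    print(list(reversed(plan)))  # 'take' at every node under these payoffs
    print(continuation)          # predicted outcome (1, 0): the game ends at the first node

The philosophical issue in the abstract concerns what entitles players to apply this reasoning at nodes that rational play would never reach, not the computation itself.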
Arguably, Hume's greatest single contribution to contemporary philosophy of science has been the problem of induction (1739). Before attempting its statement, we need to spend a few words identifying the subject matter of this corner of epistemology. At a first pass, induction concerns ampliative inferences drawn on the basis of evidence (presumably, evidence acquired more or less directly from experience)—that is, inferences whose conclusions are not (validly) entailed by the premises. Philosophers have historically drawn further distinctions, often appropriating the term “induction” to mark them; since we will not be concerned with the philosophical issues for which these distinctions are relevant, we will use the word “inductive” in a catch-all sense synonymous with “ampliative”. But we will follow the usual practice of choosing, as our paradigm example of inductive inferences, inferences about the future based on evidence drawn from the past and present. A further refinement is more important. Opinion typically comes in degrees, and this fact makes a great deal of difference to how we understand inductive inferences. For while it is often harmless to talk about the conclusions that can be rationally believed on the basis of some…
The analogical position, as traditionally understood, is the claim that a person can inductively infer the existence of other minds from what he knows about his own mind and about physical objects. Of course this body of knowledge must not include such propositions about physical objects as "that human body over there is animated by a human mind," or "this automobile was designed by a human mind"; nor could my evidence for the existence of other minds be that I have it on the authority of some of the best minds in the country. The body of knowledge in question must not entail that there are any other minds. In "Induction and Other Minds" I used the term "total evidence" to refer to this body of knowledge, defining that term as follows.
Hailed by the Bulletin of the American Mathematical Society as "easy to use and a pleasure to read," this research monograph is recommended for students and professionals interested in model theory and definability theory. The sole prerequisite is a familiarity with the basics of logic, model theory, and set theory. 1974 edition.
A notion of finitary inductively presented (f.i.p.) logic is proposed here, which includes all syntactically described logics (formal systems) met in practice. A f.i.p. theory FS0 is set up which is universal for all f.i.p. logics; though formulated as a theory of functions and classes of expressions, FS0 is a conservative extension of PRA. The aims of this work are (i) conceptual, (ii) pedagogical and (iii) practical. The system FS0 serves under (i) and (ii) as a theoretical framework for the formalization of metamathematics. The general approach may be used under (iii) for the computer implementation of logics. In all cases, the work aims to make the details manageable in a natural and direct way.
The current state of inductive logic is puzzling. Survey presentations are recurrently offered and a very rich and extensive handbook was entirely dedicated to the topic just a few years ago [23]. Among the contributions to this very volume, however, one finds forceful arguments to the effect that inductive logic is not needed and that the belief in its existence is itself a misguided illusion, while other distinguished observers have eventually come to see at least the label as “slightly antiquated”. What seems not to have lost any of its currency is the problem which inductive logic is meant to address. Inference from limited ascertained information to uncertain hypotheses is ubiquitous in learning, prediction and discovery. The logical insight that such kind of inference is fallible m…