Coping with uncertainty is a necessary part of ordinary life and is crucial to an understanding of how the mind works. For example, it is a vital element in developing artificial intelligence that will not be undermined by its own rigidities. There have been many approaches to the problem of uncertain inference, ranging from probability to inductive logic to nonmonotonic logic. This book seeks to provide a clear exposition of these approaches within a unified framework. The principal market for the book will be students and professionals in philosophy, computer science, and AI. Among the special features of the book are a chapter on evidential probability, which has not received a basic exposition before; chapters on nonmonotonic reasoning and theory replacement, matters rarely addressed in standard philosophical texts; and chapters on Mill's methods and statistical inference that cover material sorely lacking in the usual treatments of AI and computer science.
Measurement is fundamental to all the sciences, the behavioural and social as well as the physical, and in the latter its results provide our paradigms of 'objective fact'. But the basis and justification of measurement is not well understood and is often simply taken for granted. Henry Kyburg Jr proposes here an original, carefully worked out theory of the foundations of measurement, to show how quantities can be defined, why certain mathematical structures are appropriate to them, and what meaning attaches to the results generated. Crucial to his approach is the notion of error: it cannot be eliminated entirely, and from its introduction and control, he argues, arises the very possibility of measurement. Professor Kyburg's approach emphasises the empirical process of making measurements. In developing it he discusses vital questions concerning the general connection between a scientific theory and the results which support it (or fail to).
We examine the notion of conditionals and their role in inductive logics and arguments. We identify three mistakes commonly made in the study of, or motivation for, non-classical logics. A nonmonotonic consequence relation based on evidential probability is formulated. With respect to this acceptance relation some rules of inference of System P are unsound, and we propose refinements that hold in our framework.
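To see how a probabilistic acceptance relation can invalidate a System P rule, here is a small numerical sketch. It is our own illustration, not the paper's: it uses a bare high-probability threshold ("accept B given A when P(B|A) >= 0.9") in place of evidential probability, and the distribution is hypothetical. It shows Cautious Monotonicity failing.

```python
# Worlds are (B, C) truth-value pairs with probabilities conditional on A.
# Hypothetical distribution chosen to break Cautious Monotonicity at threshold 0.9.
dist = {
    (True,  True):  0.80,   # B and C
    (True,  False): 0.10,   # B, not C
    (False, True):  0.10,   # C, not B
    (False, False): 0.00,
}

def prob(pred):
    """Probability of the event picked out by pred, given A."""
    return sum(p for w, p in dist.items() if pred(w))

threshold = 0.9
p_B = prob(lambda w: w[0])                           # P(B | A) = 0.9, so A |~ B
p_C = prob(lambda w: w[1])                           # P(C | A) = 0.9, so A |~ C
p_C_given_B = prob(lambda w: w[0] and w[1]) / p_B    # P(C | A and B) is roughly 0.889

print(p_B >= threshold, p_C >= threshold)            # True True: both premises accepted
print(p_C_given_B >= threshold)                      # False: "A and B |~ C" is not accepted
```

The same style of counterexample motivates the refinements mentioned above: rules that compound accepted conclusions can push the relevant conditional probability below the acceptance threshold.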
Quantities are naturally viewed as functions, whose arguments may be construed as situations, events, objects, etc. We explore the question of the range of these functions: should it be construed as the real numbers (or some subset thereof)? This is Carnap's view. It has attractive features, specifically what Carnap views as ontological economy. Or should the range of a quantity be a set of magnitudes? This may have been Helmholtz's view, and it, too, has attractive features. It reveals the close connection between measurement and natural law, it makes dimensional analysis intelligible, and it explains the concern of scientists and engineers with units in equations. It leaves open the philosophical problem of the relation between the structure of magnitudes and the structure of the reals. What explains it? And is it always the same? We will argue that on the whole, construing the values of quantities as magnitudes has some advantages, and that (as Helmholtz seems to suggest in "Numbering and Measuring from an Epistemological Viewpoint") the relation between magnitudes and real numbers can be based on foundational similarities of structure.
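The point about dimensional analysis can be made concrete with a small sketch of our own, not drawn from Kyburg or Helmholtz: if the value of a quantity is a magnitude, a number paired with a dimension, then keeping track of units in equations is just arithmetic on the dimensional part, and adding incommensurable magnitudes is blocked outright.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Magnitude:
    value: float
    dims: tuple  # exponents for (length, mass, time)

    def __mul__(self, other):
        return Magnitude(self.value * other.value,
                         tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Magnitude(self.value / other.value,
                         tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError("cannot add magnitudes of different dimensions")
        return Magnitude(self.value + other.value, self.dims)

distance = Magnitude(10.0, (1, 0, 0))   # ten length-units
duration = Magnitude(2.0, (0, 0, 1))    # two time-units

speed = distance / duration             # Magnitude(5.0, (1, 0, -1)): a velocity, not a bare 5.0
# distance + duration                   # would raise ValueError: the dimensions differ
print(speed)
```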
The dominant argument for the introduction of propensities or chances as an interpretation of probability depends on the difficulty of accounting for single case probabilities. We argue that in almost all cases, the ``single case'' application of probability can be accounted for otherwise. ``Propensities'' are needed only in theoretical contexts, and even there applications of probability need only depend on propensities indirectly.
There are a number of reasons for being interested in uncertainty, and there are also a number of uncertainty formalisms. These formalisms are not unrelated. It is argued that they can all be reflected as special cases of the approach of taking probabilities to be determined by sets of probability functions defined on an algebra of statements. Thus, interval probabilities should be construed as maximum and minimum probabilities within a set of distributions, Glenn Shafer's belief functions should be construed as lower probabilities, etc. Updating probabilities introduces new considerations, and it is shown that the representation of belief as a set of probabilities conflicts in this regard with the updating procedures advocated by Shafer. The attempt to make subjectivistic probability plausible as a doctrine of rational belief by making it more flowery (i.e., by adding new dimensions) does not succeed. But if one is going to represent beliefs by sets of distributions, those sets of distributions might as well be based in statistical knowledge, as they are in epistemological or evidential probability.
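The reading of interval probabilities as maxima and minima over a set of probability functions can be illustrated with a short sketch. The three distributions below are hypothetical, and the lower envelope computed here is only an illustration of that reading, not a general recipe for recovering Shafer's belief functions.

```python
# Three probability functions over the outcomes {a, b, c} (invented numbers).
credal_set = [
    {"a": 0.5, "b": 0.3, "c": 0.2},
    {"a": 0.4, "b": 0.4, "c": 0.2},
    {"a": 0.6, "b": 0.1, "c": 0.3},
]

def interval(event):
    """Lower and upper probability of an event (a set of outcomes)."""
    values = [sum(p[o] for o in event) for p in credal_set]
    return min(values), max(values)

print(interval({"a"}))        # (0.4, 0.6)
print(interval({"a", "b"}))   # (0.7, 0.8)
```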
Kyburg proposes the following test for the simplicity of a theory: the complexity of a theory is measured by the number of quantifiers that occur in the set of statements comprising the theory.
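Taken at face value, the measure is easy to compute. The toy sketch below is our own, with invented example statements; it simply counts quantifier occurrences across the statements of a theory.

```python
# Hypothetical theory, with formulas given as plain strings using ∀ and ∃.
theory = [
    "∀x (Raven(x) → Black(x))",
    "∀x ∃y Orbits(x, y)",
    "∃x Unobserved(x)",
]

# Complexity = total number of quantifier occurrences in the theory's statements.
complexity = sum(s.count("∀") + s.count("∃") for s in theory)
print(complexity)   # 4
```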
Charles Morgan has argued that nonmonotonic logic is ``impossible''. We show here that those arguments are mistaken, and that Morgan's preferred alternative, the representation of nonmonotonic reasoning by ``presuppositions'', fails to provide a framework in which nonmonotonic reasoning can be constructively criticised. We argue that an inductive logic based on probabilistic acceptance offers more than Morgan's approach through presuppositions.