Measurement is fundamental to all the sciences, the behavioural and social as well as the physical, and in the latter its results provide our paradigms of 'objective fact'. But the basis and justification of measurement is not well understood and is often simply taken for granted. Henry Kyburg Jr proposes here an original, carefully worked-out theory of the foundations of measurement, showing how quantities can be defined, why certain mathematical structures are appropriate to them, and what meaning attaches to the results generated. Crucial to his approach is the notion of error: it cannot be eliminated entirely, he argues, and from its introduction and control arises the very possibility of measurement. Professor Kyburg's approach emphasises the empirical process of making measurements. In developing it he discusses vital questions concerning the general connection between a scientific theory and the results which support it (or fail to).
There are a number of reasons for being interested in uncertainty, and there are also a number of uncertainty formalisms. These formalisms are not unrelated. It is argued that they can all be reflected as special cases of the approach of taking probabilities to be determined by sets of probability functions defined on an algebra of statements. Thus, interval probabilities should be construed as maximum and minimum probabilities within a set of distributions, Glenn Shafer's belief functions should be construed as lower probabilities, etc. Updating probabilities introduces new considerations, and it is shown that the representation of belief as a set of probabilities conflicts in this regard with the updating procedures advocated by Shafer. The attempt to make subjectivistic probability plausible as a doctrine of rational belief by making it more flowery -- i.e., by adding new dimensions -- does not succeed. But, if one is going to represent beliefs by sets of distributions, those sets of distributions might as well be based in statistical knowledge, as they are in epistemological or evidential probability.
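The core construal in this abstract, that interval probabilities are just the minimum and maximum values an event receives across a set of probability functions, can be sketched in a few lines. The particular distributions and atoms below are my own illustrative assumptions, not from the text:

```python
# Toy sketch: uncertainty represented by a SET of probability functions
# over three atomic statements {a, b, c}; the "interval probability" of
# an event is read off as the min (lower) and max (upper) probability
# the event receives across the set.

distributions = [
    {"a": 0.2, "b": 0.5, "c": 0.3},
    {"a": 0.3, "b": 0.3, "c": 0.4},
    {"a": 0.1, "b": 0.6, "c": 0.3},
]

def lower_prob(event, dists):
    """Lower probability: minimum of the event's probability over the set."""
    return min(sum(d[atom] for atom in event) for d in dists)

def upper_prob(event, dists):
    """Upper probability: maximum of the event's probability over the set."""
    return max(sum(d[atom] for atom in event) for d in dists)

# The atomic statement a gets the interval [0.1, 0.3];
# the disjunction a-or-b gets an interval of roughly [0.6, 0.7].
interval_a = (lower_prob({"a"}, distributions), upper_prob({"a"}, distributions))
interval_ab = (lower_prob({"a", "b"}, distributions), upper_prob({"a", "b"}, distributions))
```

On this reading a point-valued (subjective) probability is simply the special case where the set contains a single distribution, so the interval collapses.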
The dominant argument for the introduction of propensities or chances as an interpretation of probability depends on the difficulty of accounting for single case probabilities. We argue that in almost all cases, the "single case" application of probability can be accounted for otherwise. "Propensities" are needed only in theoretical contexts, and even there applications of probability need only depend on propensities indirectly.
Bishop Butler [Butler, 1736] said that probability was the very guide of life. But what interpretations of probability can serve this function? It isn't hard to see that empirical (frequency) views won't do, and many recent writers—for example John Earman, who has said that Bayesianism is "the only game in town"—have been persuaded by various Dutch book arguments that only subjective probability will perform the function required. We will defend the thesis that probability construed in this way offers very little guidance, Dutch book arguments notwithstanding. We will sketch a way out of the impasse.
In this work Henry Kyburg presents his views on a wide range of philosophical problems associated with the study and practice of science and mathematics. The main structure of the book consists of a presentation of Kyburg's notions of epistemic probability and its use in the scientific enterprise, i.e., the effort to modify previously adopted beliefs in the light of experience. Intended for cognitive scientists and people in artificial intelligence as well as for technically oriented philosophers, the book also provides a general overview of the philosophy of science for the non-philosopher by one of the leading authorities in the field.
In two studies, we used the Ethics Position Questionnaire (EPQ) to investigate the relationship between individual differences in moral philosophy, involvement in the animal rights movement, and attitudes toward the treatment of animals. In the first, 600 animal rights activists attending a national demonstration and 266 nonactivist college students were given the EPQ. Analysis of the returns from 157 activists and 198 students indicated that the activists were more likely than the students to hold an "absolutist" moral orientation (high idealism, low relativism). In the second study, 169 students were given the EPQ with a scale designed to measure attitudes toward the treatment of animals. Multiple regression showed that gender and the EPQ dimension of idealism were related to attitudes toward animal use.
We examine the notion of conditionals and the role of conditionals in inductive logics and arguments. We identify three mistakes commonly made in the study of, or motivation for, non-classical logics. A nonmonotonic consequence relation based on evidential probability is formulated. With respect to this acceptance relation some rules of inference of System P are unsound, and we propose refinements that hold in our framework.
The system presented by the author in The Logical Foundations of Statistical Inference (Kyburg 1974) suffered from certain technical difficulties, and from a major practical difficulty; it was hard to be sure, in discussing examples and applications, when you had got hold of the right reference class. The present paper, concerned mainly with the characterization of randomness, resolves the technical difficulties and provides a well structured framework for the choice of a reference class. The definition of randomness that leads to this framework is simplified and clarified in a number of respects. It resolves certain puzzles raised by S. Spielman and W. Harper in their contributions to Profiles: Henry E. Kyburg, Jr. and Isaac Levi (R. Bogdan (ed.) 1982).
A speaker often decides whether or not to say something based on his assessment of the impact it would have on his hearer's beliefs. If he thinks it would bring them more in line with the truth, he says it; otherwise he does not. In this paper, I develop a model of these judgments, focusing specifically on those of vague sentences. Under the simplifying assumption that an utterance only conveys a speaker's applicability judgments, I present a Bayesian model of an utterance's impact on a hearer's beliefs. From this model I derive a model of a speaker's judgment of whether or not an utterance would be informative. I illustrate it with several examples of judgments of vague and non-vague sentences. For instance, I show that it models the common judgment that asserting either ``George is tall'' or ``George is not tall'' would be misleading if George were borderline tall, but asserting ``George is tall and he isn't tall'' would not be.
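The kind of Bayesian update this abstract describes can be illustrated with a toy model. Everything below (the discretized heights, the uniform prior, the logistic applicability curve and its parameters) is my own assumed construction for illustration, not the paper's actual model:

```python
# Toy sketch: a hearer's prior over George's height is conditioned on the
# speaker's applicability judgment that "tall" applies. Applicability is
# modeled as a soft (logistic) threshold, so the utterance shifts belief
# toward greater heights; near the midpoint the shift is weak, which is
# where the "borderline tall" judgments become misleading.
import math

heights = [170, 175, 180, 185, 190]   # cm, discretized (assumed grid)
prior = {h: 0.2 for h in heights}     # uniform prior over the grid

def p_tall(h, mid=180.0, slope=0.5):
    """Chance the speaker judges 'tall' applicable at height h (logistic)."""
    return 1.0 / (1.0 + math.exp(-slope * (h - mid)))

def posterior_given_tall(prior):
    """Bayes: condition the hearer's belief on hearing 'George is tall'."""
    unnorm = {h: p * p_tall(h) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

post = posterior_given_tall(prior)
mean_prior = sum(h * p for h, p in prior.items())   # 180.0 on this grid
mean_post = sum(h * p for h, p in post.items())     # strictly larger
```

The speaker's informativeness judgment would then compare the posterior to where the speaker takes the truth to lie; here the sketch only shows the update step.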
To understand better why evidence of student cheating is often ignored, a national sample of psychology instructors was surveyed for their opinions. The 127 respondents overwhelmingly agreed that dealing with instances of academic dishonesty was among the most onerous aspects of their profession. Respondents cited insufficient evidence that cheating has occurred as the most frequent reason for overlooking student behavior or writing that might be dishonest. A factor analysis revealed 4 other clusters of reasons as to why cheating may be ignored. Emotional reasons included stress and lack of courage. Difficulty reasons included the extensive time and effort required to deal with cheating students. Fear reasons included concern about retaliation or a legal challenge. Denial reasons included beliefs that cheating students would fail anyway and that the worst offenders do not get caught. The reasons why instances of academic dishonesty should be proactively confronted are presented.
The evidence of your own eyes has often been regarded as unproblematic. But we know that people make mistaken observations. This can be looked on as unimportant if there is some class of statements that can serve as evidence for others, or if every statement in our corpus of knowledge is allowed to be no more than probable. Neither of these alternatives is plausible when it comes to machine or robotic observation. Then we must take the possibility of error seriously, and we must be prepared to deal with error quantitatively. The problem of using internal evidence to arrive at error distributions is the main focus of the paper.
The rapprochement between methodology and statistics suggested by Chow's book is a much needed one. His examples suggest that the situation is even worse in psychology than in some other disciplines. It is suggested that both historical accuracy and attention to recent work on the foundations of statistics would be beneficial in achieving the goals that Chow seeks.
Charles Morgan has argued that nonmonotonic logic is ``impossible''. We show here that those arguments are mistaken, and that Morgan's preferred alternative, the representation of nonmonotonic reasoning by ``presuppositions'' fails to provide a framework in which nonmonotonic reasoning can be constructively criticised. We argue that an inductive logic, based on probabilistic acceptance, offers more than Morgan's approach through presuppositions.