The limits of probability modelling: A serendipitous tale of goldfish, transfinite numbers, and pieces of string
Mind and Society 1 (2):17-38 (2000)
Abstract: This paper is about the differences between probabilities and beliefs, and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed in rational numbers, that they suppose the existence of veridical representations, and that, when viewed as parts of a probability model, they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events that have been deemed (by someone) to be equivalent, rather than to unique events. Beliefs, on the other hand, are multifaceted, interconnected with all other beliefs, and inexpressible in their entirety. It will be argued that there are not sufficient rational numbers to characterise beliefs by probabilities, and that the idea of a veridical set of beliefs is questionable. The concept of a complete probability model, based on Fisher's notion of identifiable subsets, is outlined. It is argued that to be complete a model must be known to be true. This can never be the case, because whatever a person supposes to be true must be potentially modifiable in the light of new information. Thus, to infer that an individual's probability estimate is biased, it is necessary not only to show that the estimate differs from that given by a probability model, but also to assume that this model is complete, and completeness is not empirically verifiable. It follows that probability models and Bayes' theorem are not necessarily appropriate standards for people's probability judgements. The quality of a probability model depends on how reasonable it is to treat some existing uncertainty as if it were equivalent to that in a particular urn model, and this cannot be determined empirically. Bias can be demonstrated in estimates of proportions of finite populations, such as in the false consensus effect.
However, the modification of beliefs by ad hoc methods such as Tversky and Kahneman's heuristics can be justified, even though this results in biased judgements. This is because of pragmatic factors, such as the cost of obtaining and taking account of additional information, which are not included even in a complete probability model. Finally, an analogy is drawn between probability models and geometric figures. Both are idealisations: useful but qualitatively inadequate characterisations of nature. A difference between the two is that the size of any error can be limited in the case of the geometric figure in a way that is not possible in a probability model.
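The abstract's opening claim — that in an urn model every probability is a ratio of counts, and so is always a rational number — can be sketched as follows. This is a minimal illustration only; the urn contents and the function name are hypothetical, not drawn from the paper.

```python
from collections import Counter
from fractions import Fraction

def urn_probabilities(urn):
    """Probability of drawing each colour from an urn, as exact rationals.

    In an urn model a probability is just (favourable balls) / (total balls),
    so every value returned here is a rational number. This is the feature
    the paper contrasts with beliefs, which it argues cannot all be
    characterised by rationals.
    """
    counts = Counter(urn)
    total = len(urn)
    return {colour: Fraction(n, total) for colour, n in counts.items()}

# A hypothetical urn with 3 red and 7 blue balls.
urn = ["red"] * 3 + ["blue"] * 7
probs = urn_probabilities(urn)
print(probs["red"])          # 3/10
print(sum(probs.values()))   # 1
```

Representing the probabilities with `Fraction` rather than `float` makes the point explicit: nothing an urn model can generate ever leaves the rationals.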
Similar books and articles
James Logue (1995). Projective Probability. Oxford University Press.
Peter Milne (1987). Physical Probabilities. Synthese 73 (2):329-359.
Henry E. Kyburg Jr (1992). Getting Fancy with Probability. Synthese 90 (2):189-203.
Aidan Lyon (2010). Deterministic Probability: Neither Chance nor Credence. Synthese 182 (3):413-432.
Chunlai Zhou (2010). Probability Logic of Finitely Additive Beliefs. Journal of Logic, Language and Information 19 (3).
Added to index: 2010-08-10