The Empire of Chance tells how quantitative ideas of chance transformed the natural and social sciences, as well as daily life, over the last three centuries. A continuous narrative connects the earliest applications of probability and statistics in gambling and insurance to the most recent forays into law, medicine, polling and baseball. Separate chapters explore the theoretical and methodological impact in biology, physics and psychology. Themes recur - determinism, inference, causality, free will, evidence, the shifting meaning of probability - but in dramatically different disciplinary and historical contexts. In contrast to the literature on the mathematical development of probability and statistics, this book centres on how these technical innovations remade our conceptions of nature, mind and society. Written by an interdisciplinary team of historians and philosophers, this readable, lucid account keeps technical material to an absolute minimum. It is aimed not only at specialists in the history and philosophy of science, but also at the general reader and at scholars in other disciplines.
Electron microscopy, and in particular low dose electron microscopy, offers interesting cases of experimental techniques in which the theory of the phenomena studied and the theory of the apparatus used are intertwined. A single primary exposure usually does not give an interpretable image, so computerized image enhancement techniques are used to create a single, visually meaningful image from multiple exposures. Some of the enhancement programs start from informed guesses at the structure of the specimen and use the primary exposures in a series of corrections to arrive at an image that can be read by trained observers. In this paper I describe, for the general deterministic case, the possible relations between phenomena theory and instrument theory. I give a Bayesian criterion for when an experiment is a test of the theory of the apparatus rather than a test of the theory of the phenomena, and describe strategies used to ensure that tests of the theory of the phenomena are possible.
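The enhancement scheme the abstract describes - start from an informed guess at the specimen's structure, then use the noisy primary exposures in a series of corrections - can be illustrated with a toy one-dimensional sketch. Everything here is an assumption for illustration: the "specimen" signal, the noise model, the flat initial guess, and the simple relaxation-style update rule stand in for the actual (unspecified) enhancement algorithms.

```python
import random

random.seed(0)

# Hypothetical 1-D "specimen": the true structure we want to recover.
true_signal = [0.0, 1.0, 3.0, 1.0, 0.0, 2.0, 4.0, 2.0]

# Simulate several noisy low-dose primary exposures of the same specimen.
exposures = [[v + random.gauss(0, 1.0) for v in true_signal]
             for _ in range(50)]

def refine(initial_guess, exposures, rate=0.2, iterations=100):
    """Apply a series of small corrections, pulling the current estimate
    toward agreement with each primary exposure in turn."""
    estimate = list(initial_guess)
    for _ in range(iterations):
        for exp in exposures:
            estimate = [e + rate * (x - e) / len(exposures)
                        for e, x in zip(estimate, exp)]
    return estimate

def rms_error(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# A crude "informed guess": a flat profile at the exposures' overall mean.
flat = sum(sum(e) for e in exposures) / (len(exposures) * len(true_signal))
initial = [flat] * len(true_signal)

estimate = refine(initial, exposures)
# Repeated small corrections average the noise out across exposures,
# so the refined estimate tracks the specimen better than the guess did.
improved = rms_error(estimate, true_signal) < rms_error(initial, true_signal)
```

No single noisy exposure here is interpretable on its own; only the iterated corrections across all fifty exposures recover the structure, which is the point the abstract makes about low dose microscopy.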
Even in a theory-corroboration context, attention to effect size is called for if significance testing is to be of any value. I sketch a Popperian construal of significance tests that fits better into scientific inference as a whole. Because of its many errors, Chow's book cannot be recommended to the novice.
Randomization is a generally accepted principle of sound experimental design and common practice among working scientists. But Bayesian statisticians reject it, most often on the strength of a decision-theoretic argument against randomization. I trace this argument back to Abraham Wald's Theory of Inductive Behavior and argue that Bayesians should concur with Ronald Fisher's criticism of Wald's analysis of randomization. The paper ends with a Bayesian argument in favor of randomization: randomization can lead to an increase in expected utility.
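The decision-theoretic argument against randomization that the abstract refers to can be sketched numerically: for a Bayesian, the expected utility of randomizing between two designs is a convex combination of the pure designs' expected utilities, so it can never exceed the better pure design. The states, priors and utility values below are made-up numbers chosen only to make the arithmetic visible; the paper's own counter-argument in favor of randomization is not reproduced here.

```python
# Two states of nature with prior probabilities (hypothetical).
prior = {"s1": 0.6, "s2": 0.4}

# Utilities of two experimental designs in each state (made-up values).
utility = {
    "design_A": {"s1": 10.0, "s2": 2.0},
    "design_B": {"s1": 4.0, "s2": 8.0},
}

def expected_utility(design):
    return sum(prior[s] * utility[design][s] for s in prior)

eu_A = expected_utility("design_A")   # 0.6*10 + 0.4*2 = 6.8
eu_B = expected_utility("design_B")   # 0.6*4  + 0.4*8 = 5.6

# Randomize: flip a fair coin between the two designs.
eu_mixed = 0.5 * eu_A + 0.5 * eu_B   # 6.2, strictly between 5.6 and 6.8

# The mixed strategy cannot beat the best pure strategy.
assert eu_mixed <= max(eu_A, eu_B)
```

This is the Wald-style point that Bayesian critics of randomization invoke; the abstract's claim is that, contra this argument, there are circumstances in which randomization does raise expected utility.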
In this paper I give a Bayesian criterion for when an experiment is a test of the theory of the apparatus rather than a test of the theory of the phenomena, and describe strategies used to ensure that tests of the theory of the phenomena are possible. I extend this framework to low dose electron microscopy, which has a stochastic instrument theory and which provides an exception to Robert Ackermann's thesis on the independence of theory and instrumentation.