Minimum message length and statistically consistent invariant (objective?) Bayesian probabilistic inference—from (medical) “evidence”
Social Epistemology 22 (4):433 – 460 (2008)
“Evidence” in the form of data collected, and analysis thereof, is fundamental to medicine, health and science. In this paper, we discuss the “evidence-based” aspect of evidence-based medicine in terms of statistical inference, acknowledging that this latter field often also goes by various near-synonymous names, such as inductive inference (amongst philosophers), econometrics (amongst economists), machine learning (amongst computer scientists) and, in more recent times, data mining (in some circles). Three issues central to this discussion of “evidence-based” are (i) whether the statistical analysis can and/or should be objective, and whether (subjective) prior knowledge can and/or should be incorporated; (ii) whether the analysis should be invariant to the framing of the problem (e.g. does it matter whether we analyse the ratio of proportions of morbidity to non-morbidity rather than simply the proportion of morbidity?); and (iii) whether, as we collect more and more data, our analysis should be able to converge arbitrarily closely to the process generating the observed data. For many problems of data analysis, it would appear that desiderata (ii) and (iii) require us to invoke at least some form of subjective (Bayesian) prior knowledge. This sits uncomfortably with the understandable but perhaps impossible desire of many medical publications that all statistical hypothesis testing be classical and non-Bayesian, i.e. that no (subjective) prior knowledge be used.
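Desiderata (ii) and (iii) can be illustrated with a toy binomial example. The sketch below is illustrative only, assuming a uniform Beta(1, 1) prior and a hypothetical morbidity proportion of 0.3 (neither comes from the paper): it shows the Bayesian posterior mean converging towards the true proportion as data accumulate (consistency), and that the posterior mean, unlike the maximum-likelihood estimate, is not invariant under reframing the problem from a proportion to its odds.

```python
import random

random.seed(0)
true_p = 0.3  # hypothetical morbidity proportion, an assumption for illustration

# Desideratum (iii), consistency: with a uniform Beta(1, 1) prior, the
# posterior after k cases of morbidity in n patients is Beta(1 + k, 1 + n - k),
# whose mean (1 + k) / (2 + n) approaches the true proportion as n grows.
for n in (10, 100, 10_000):
    k = sum(random.random() < true_p for _ in range(n))
    post_mean_p = (1 + k) / (2 + n)
    print(f"n = {n:6d}  posterior mean of proportion = {post_mean_p:.3f}")

# Desideratum (ii), invariance: reframe the problem as the odds p / (1 - p).
# The maximum-likelihood estimate transforms consistently between framings ...
mle_p = k / n
mle_odds = mle_p / (1 - mle_p)  # equals the MLE computed directly on the odds scale
# ... but the posterior mean does not: E[p / (1 - p)] under Beta(1 + k, 1 + n - k)
# is (1 + k) / (n - k), which differs from transforming E[p] after the fact.
post_mean_odds = (1 + k) / (n - k)
transformed = post_mean_p / (1 - post_mean_p)  # = (1 + k) / (1 + n - k)
print(f"posterior mean of odds = {post_mean_odds:.5f}, "
      f"transformed posterior mean of proportion = {transformed:.5f}")
```

The small discrepancy between the last two numbers shrinks as n grows but never vanishes exactly, illustrating why the framing of the problem (proportion versus ratio) can matter for Bayesian point estimates, the concern raised in desideratum (ii) above.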
Similar books and articles
Empirical Data Sets Are Algorithmically Compressible: Reply to McAllister.Charles Twardy, Steve Gardner & David Dowe - 2005 - Studies in History and Philosophy of Science Part A 36 (2):391-402.
Bayesian Model Learning Based on Predictive Entropy.Jukka Corander & Pekka Marttinen - 2006 - Journal of Logic, Language and Information 15 (1-2):5-20.
Logical Relations in a Statistical Problem.Jon Williamson, Jan-Willem Romeijn, Rolf Haenni & Gregory Wheeler - 2008 - In Benedikt Lowe, Jan-Willem Romeijn & Eric Pacuit (eds.), Proceedings of the Foundations of the Formal Sciences VI: Reasoning about probabilities and probabilistic reasoning. College Publications.
Bayesian Induction IS Eliminative Induction.James Hawthorne - 1993 - Philosophical Topics 21 (1):99-138.
Objective Bayesian Nets for Systems Modelling and Prognosis in Breast Cancer.Jon Williamson - manuscript
When Can Non‐Commutative Statistical Inference Be Bayesian?Miklós Rédei - 1992 - International Studies in the Philosophy of Science 6 (2):129-132.