Journal of Logic, Language and Information 15 (1-2):5-20 (2006)
The Bayesian paradigm is widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning is based on measuring the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution over the model parameters. The main controversy surrounding this learning method stems from the necessity of specifying proper prior distributions for all unknown parameters of a model, which ensures a complete determination of the marginal data distribution. Even for commonly used models, subjective priors may be difficult to specify precisely, and therefore several automated learning procedures have been suggested in the literature. Here we introduce a novel Bayesian learning method based on the predictive entropy of a probability model, which can combine both subjective and objective probabilistic assessments of uncertain quantities in putative models. It is shown that our approach can avoid some of the limitations of previously suggested objective Bayesian methods.
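The marginal-likelihood criterion described above can be illustrated with a simple conjugate model. The sketch below is not the paper's own method; it uses a Beta-Binomial model (a standard textbook choice) with hypothetical data, and the `predictive_entropy` helper is an assumed illustration of entropy computed from a posterior predictive distribution, not the authors' predictive-entropy criterion.

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b), via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal_likelihood(k, n, a, b):
    # Beta-Binomial marginal likelihood: the binomial likelihood for
    # k successes in n trials, integrated over a Beta(a, b) prior on
    # the success probability. This is the "expectation of the
    # likelihood with respect to the prior" from the abstract.
    log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    return log_binom + log_beta(a + k, b + n - k) - log_beta(a, b)

def predictive_entropy(k, n, a, b):
    # Shannon entropy (nats) of the posterior predictive distribution
    # for the next single observation under the Beta-Binomial model.
    p1 = (a + k) / (a + b + n)  # predictive probability of success
    p0 = 1.0 - p1
    return -(p1 * math.log(p1) + p0 * math.log(p0))

# Hypothetical data: 7 successes in 10 trials.
k, n = 7, 10

# Two candidate "models" differing only in their prior:
flat = log_marginal_likelihood(k, n, 1.0, 1.0)        # uniform Beta(1, 1) prior
concentrated = log_marginal_likelihood(k, n, 10.0, 10.0)  # prior peaked at 0.5

print(f"log marginal (flat prior):         {flat:.4f}")
print(f"log marginal (concentrated prior): {concentrated:.4f}")
print(f"predictive entropy (flat prior):   {predictive_entropy(k, n, 1.0, 1.0):.4f}")
```

Under the uniform prior, the marginal probability of every outcome k is exactly 1/(n+1), so the two models can be compared directly on how much probability mass each assigns to the observed data.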
Keywords: Bayesian inference, entropy, information-theoretic criteria, objective model learning
Similar books and articles
Evan Heit (2001). What is the Probability of the Bayesian Model, Given the Data? Behavioral and Brain Sciences 24 (4):672-673.
Joel D. Velasco (2008). The Prior Probabilities of Phylogenetic Trees. Biology and Philosophy 23 (4):455-473.
Teddy Seidenfeld (1986). Entropy and Uncertainty. Philosophy of Science 53 (4):467-491.
Gert de Cooman & Peter Walley (2002). A Possibilistic Hierarchical Model for Behaviour Under Uncertainty. Theory and Decision 52 (4):327-374.
Scott Moss & Bruce Edmonds (1994). Modelling Learning as Modelling. Philosophical Explorations.
Frederick Eberhardt & David Danks (2011). Confirmation in the Cognitive Sciences: The Problematic Case of Bayesian Models. Minds and Machines 21 (3):389-410.