Bayesian model learning based on predictive entropy

The Bayesian paradigm is widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning measures the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution on the model parameters. The main controversy surrounding this learning method stems from the necessity of specifying proper prior distributions for all unknown parameters of a model, which is required for the marginal data distribution to be fully determined. Even for commonly used models, subjective priors may be difficult to specify precisely, and several automated learning procedures have therefore been suggested in the literature. Here we introduce a novel Bayesian learning method based on the predictive entropy of a probability model, which can combine both subjective and objective probabilistic assessments of uncertain quantities in putative models. It is shown that our approach can avoid some of the limitations of previously suggested objective Bayesian methods.
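The two quantities named in the abstract can be illustrated on a toy conjugate model. The following sketch uses a Beta-Bernoulli model purely as an illustrative assumption (it is not the authors' method): the marginal data distribution is the expectation of the likelihood under the prior, available in closed form via the Beta function, and the predictive entropy is the Shannon entropy of the resulting posterior predictive distribution.

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b), via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal_likelihood(k, n, a=1.0, b=1.0):
    """Log p(data): probability of k successes in n Bernoulli trials,
    averaged over a Beta(a, b) prior on the success probability.
    This is the prior expectation of the likelihood, in closed form."""
    return log_beta(a + k, b + n - k) - log_beta(a, b)

def predictive_entropy(k, n, a=1.0, b=1.0):
    """Shannon entropy (in nats) of the posterior predictive
    distribution for the next observation, given k successes
    in n trials and a Beta(a, b) prior."""
    p1 = (a + k) / (a + b + n)  # posterior predictive P(next = 1)
    p0 = 1.0 - p1
    return -(p1 * math.log(p1) + p0 * math.log(p0))

# Example: under a uniform Beta(1, 1) prior, a single observed
# success has marginal probability 1/2, and before seeing any data
# the predictive entropy is log 2 (maximal for a binary outcome).
print(math.exp(log_marginal_likelihood(1, 1)))  # 0.5
print(predictive_entropy(0, 0))                 # log(2) ~ 0.693
```

Higher predictive entropy indicates a less committal predictive distribution; a model-learning criterion built on this quantity can trade off fit against predictive uncertainty without requiring fully subjective priors.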
Keywords: Bayesian inference, entropy, information-theoretic criteria, objective model learning
DOI 10.1007/s10849-005-9004-8