A Similarity-Weighted Informative Prior Distribution for Bayesian Multiple Regression Models

Frontiers in Psychology 12 (2021)

Abstract

Specifying accurate informative prior distributions is a matter of carefully selecting the studies that comprise the body of comparable background knowledge. Psychological research, however, consists of studies conducted under different circumstances, with different samples and varying instruments. Results of previous studies are therefore heterogeneous, and not all available results can or should contribute equally to an informative prior distribution. This implies weighting the background information according to the similarity of the previous studies to the focal study at hand. Current approaches that account for heterogeneity by weighting informative prior distributions, such as the power prior and the meta-analytic predictive prior, are either not easily accessible or incomplete. To complicate matters further, in the context of Bayesian multiple regression models there are no methods available for quantifying the similarity of a given body of background knowledge to the focal study. Consequently, the purpose of this study is threefold. First, we present a novel method for combining the aforementioned sources of heterogeneity in the similarity measure ω. The method combines a propensity-score approach, which assesses the similarity of samples, with random- and mixed-effects meta-analytic models, which quantify the heterogeneity in outcomes and study characteristics. Second, we show how to use the similarity measure ω as a weight for informative prior distributions on the substantive parameters in Bayesian multiple regression models. Third, we investigate the performance and behavior of the similarity-weighted informative prior distribution in a comprehensive simulation study, comparing it to the normalized power prior and the meta-analytic predictive prior. The similarity measure ω and the similarity-weighted informative prior distribution, as the primary results of this study, provide applied researchers with the means to specify accurate informative prior distributions.
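To make the weighting idea concrete, the sketch below shows one way a similarity weight ω ∈ (0, 1] can discount an informative prior. This is an illustrative power-prior-style discount applied to a single normal prior, not the paper's actual construction of ω (which draws on propensity scores and meta-analytic models) or its multiple-regression implementation; the function names, prior values, and simulated data are hypothetical.

```python
import numpy as np


def similarity_weighted_prior(mu0, sigma0, omega):
    """Discount an informative normal prior N(mu0, sigma0^2) by a similarity
    weight omega in (0, 1]. Raising the normal density to the power omega
    (a power-prior-style discount) yields another normal with the same mean
    and variance sigma0^2 / omega, so low similarity flattens the prior."""
    if not 0.0 < omega <= 1.0:
        raise ValueError("omega must lie in (0, 1]")
    return mu0, sigma0 / np.sqrt(omega)


def posterior_normal(mu0, sigma0, y, sigma_y):
    """Conjugate normal-normal update for a single location parameter,
    assuming a known observation standard deviation sigma_y."""
    n = len(y)
    prec_prior = 1.0 / sigma0 ** 2
    prec_data = n / sigma_y ** 2
    post_var = 1.0 / (prec_prior + prec_data)
    post_mean = post_var * (prec_prior * mu0 + prec_data * np.mean(y))
    return post_mean, np.sqrt(post_var)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.normal(loc=0.3, scale=1.0, size=50)   # focal-study data (hypothetical)
    mu0, sigma0 = 0.6, 0.10                       # informative prior from previous studies (hypothetical)
    for omega in (1.0, 0.5, 0.1):                 # decreasing similarity to the focal study
        mu_w, sd_w = similarity_weighted_prior(mu0, sigma0, omega)
        post_mean, post_sd = posterior_normal(mu_w, sd_w, y, sigma_y=1.0)
        print(f"omega={omega:.1f}: prior sd={sd_w:.3f}, "
              f"posterior mean={post_mean:.3f}, posterior sd={post_sd:.3f}")
```

Running the sketch shows the posterior moving from the informative prior mean toward the sample mean as ω decreases, which is the qualitative behavior a similarity-weighted prior is intended to produce: dissimilar background knowledge contributes less to the posterior.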

Links

PhilArchive

Similar books and articles

Bayesian model learning based on predictive entropy. Jukka Corander & Pekka Marttinen - 2006 - Journal of Logic, Language and Information 15 (1-2): 5-20.

Analytics

Added to PP
2021-05-13

Citations of this work

No citations found.
