JOURNAL ARTICLE

Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models

Dimitris Fouskakis, Ioannis Ntzoufras, David Draper

Year: 2015   Journal: Bayesian Analysis   Vol: 10 (1)   Publisher: International Society for Bayesian Analysis

Abstract

In the context of the expected-posterior prior (EPP) approach to Bayesian variable selection in linear models, we combine ideas from power-prior and unit-information-prior methodologies to simultaneously (a) produce a minimally-informative prior and (b) diminish the effect of training samples. The result is that in practice our power-expected-posterior (PEP) methodology is sufficiently insensitive to the size $n^{*}$ of the training sample, due to PEP's unit-information construction, that one may take $n^{*}$ equal to the full-data sample size $n$ and dispense with training samples altogether. This promotes stability of the resulting Bayes factors, removes the arbitrariness arising from individual training-sample selections, and greatly increases computational speed, allowing many more models to be compared within a fixed CPU budget. We find that, under an independence Jeffreys (reference) baseline prior, the asymptotics of PEP Bayes factors are equivalent to those of Schwarz's Bayesian Information Criterion (BIC), ensuring consistency of the PEP approach to model selection. Our PEP prior, due to its unit-information structure, leads to a variable-selection procedure that — in our empirical studies — (1) is systematically more parsimonious than the basic EPP with minimal training sample, while sacrificing no desirable performance characteristics to achieve this parsimony; (2) is robust to the size of the training sample, thus enjoying the advantages described above arising from the avoidance of training samples altogether; and (3) identifies maximum-a-posteriori models that achieve better out-of-sample predictive performance than that provided by standard EPPs, the $g$-prior, the hyper-$g$ prior, non-local priors, the Least Absolute Shrinkage and Selection Operator (LASSO) and Smoothly-Clipped Absolute Deviation (SCAD) methods.
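As a reading aid, the following display sketches the expected-posterior-prior construction that the abstract builds on and its power (PEP) modification; the notation ($\theta_{\ell}$ for the parameters of model $M_{\ell}$, $y^{*}$ for an imaginary training sample of size $n^{*}$, $\pi^{N}_{\ell}$ for the baseline reference prior and posterior, $m^{N}_{0}(y^{*})$ for the prior predictive of the reference model, and $\delta$ for the power parameter) is introduced here for illustration and is not quoted from this page.

$$ \pi^{EPP}_{\ell}(\theta_{\ell}) \;=\; \int \pi^{N}_{\ell}(\theta_{\ell} \mid y^{*}) \, m^{N}_{0}(y^{*}) \, dy^{*} $$

The PEP prior tempers the likelihood of the imaginary data, replacing $f_{\ell}(y^{*} \mid \theta_{\ell})$ by its density-normalized power $f_{\ell}(y^{*} \mid \theta_{\ell})^{1/\delta}$, and sets $\delta = n^{*} = n$, so that the imaginary sample contributes information roughly equivalent to a single observation (the unit-information idea) and no real training subsamples are needed. The abstract's statement about BIC then corresponds, under the standard Schwarz approximation to the marginal likelihood, to

$$ -2 \log BF_{\ell 0} \;\approx\; \mathrm{BIC}_{\ell} - \mathrm{BIC}_{0}, \qquad \mathrm{BIC}_{\ell} = -2 \log f_{\ell}(y \mid \hat{\theta}_{\ell}) + d_{\ell} \log n, $$

where $d_{\ell}$ is the number of parameters of model $M_{\ell}$; this asymptotic equivalence is what underpins the consistency claim.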

Keywords:
Prior probability, Mathematics, Statistics, Applied mathematics, Posterior probability, Gaussian, Selection (genetic algorithm), Mathematical optimization, Computer science, Artificial intelligence, Bayesian probability, Physics

Metrics

Cited By: 38
FWCI (Field-Weighted Citation Impact): 8.22
References: 43
Citation Normalized Percentile: 0.97
Is in top 1%
Is in top 10%

Topics

Statistical Methods and Inference
Physical Sciences →  Mathematics →  Statistics and Probability
Statistical Methods and Bayesian Inference
Physical Sciences →  Mathematics →  Statistics and Probability
Advanced Statistical Methods and Models
Physical Sciences →  Mathematics →  Statistics and Probability

Related Documents

JOURNAL ARTICLE

Variations of power-expected-posterior priors in normal regression models

Dimitris Fouskakis, Ioannis Ntzoufras, Konstantinos Perrakis

Journal: Computational Statistics & Data Analysis   Year: 2019   Vol: 143   Article: 106836
BOOK-CHAPTER

Bayesian Variable Selection for Linear Models Using I-Priors

Haziq Jamil, Wicher Bergsma

Studies in Systems, Decision and Control   Year: 2020   Pages: 107-132