Recent work proposes new algorithms for feature selection based on a Bayesian hierarchical model that places priors on both the identity of all features and the identity-conditioned feature-label distribution. Given training data, Bayesian inference can be used to predict the feature identities. While algorithms developed in prior work rely on certain independence assumptions, in this work we present a new low-complexity algorithm designed for a family of Bayesian models, each assuming a different block covariance structure. We show that the new algorithm, as well as the previous algorithm that assumes independent features, performs robustly across the family of models on synthetic data, and we provide results on real colon cancer microarray data.
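To make the setup concrete, the sketch below illustrates Bayesian feature selection under the independence assumption mentioned above: each feature is scored by the posterior probability that its class-conditional means differ, using conjugate normal-normal marginal likelihoods. This is a minimal hypothetical illustration, not the paper's actual model; the known-variance Gaussian setting, the prior parameters, and the function names are all assumptions made for the example.

```python
import numpy as np

def log_marginal(x, mu0=0.0, tau2=1.0, sigma2=1.0):
    """Log marginal likelihood of samples x under a Gaussian model with
    known variance sigma2 and a conjugate N(mu0, tau2) prior on the mean.
    (Illustrative choice; not the prior used in the paper.)"""
    n = len(x)
    xbar = np.mean(x)
    s = np.sum((x - xbar) ** 2)
    # Posterior variance of the mean after integrating it out analytically.
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * s / sigma2
            + 0.5 * np.log(post_var / tau2)
            - 0.5 * (xbar - mu0) ** 2 / (sigma2 / n + tau2))

def marker_posterior(x0, x1, prior_marker=0.5):
    """Posterior probability that a single feature is a 'marker', i.e. its
    class-conditional means differ, versus sharing one mean across classes."""
    log_marker = log_marginal(x0) + log_marginal(x1)       # separate means
    log_non = log_marginal(np.concatenate([x0, x1]))       # one shared mean
    log_odds = (np.log(prior_marker) - np.log1p(-prior_marker)
                + log_marker - log_non)
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Under feature independence, each feature's score is computed separately in this way; the block-covariance models in the paper drop exactly this assumption, which is what motivates the new algorithm.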
Ali Foroughi pour, Lori A. Dalton