Regularization, e.g. the lasso, has been shown to be effective in improving the prediction accuracy of quantile regression (Li and Zhu, 2008; Wu and Liu, 2009). This paper studies regularization in quantile regression from a Bayesian perspective. By proposing a hierarchical model framework, we give a generic treatment to a set of regularization approaches, including the lasso, group lasso and elastic net penalties. Gibbs samplers are derived for all cases. This is the first work to discuss regularized quantile regression with the group lasso penalty and the elastic net penalty. Both simulated and real data examples show that Bayesian regularized quantile regression methods often outperform quantile regression without regularization and their non-Bayesian counterparts with regularization.