Goran Marjanovic, Alfred O. Hero
Recently, major attention has been given to penalized log-likelihood estimators for sparse precision (inverse covariance) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex l1 norm. However, the best estimator is not always achieved with this penalty. So, to improve sparsity and reduce the biases associated with the l1 norm, one must move to non-convex penalties such as the lq (0 ≤ q < 1). In this paper we introduce the resulting non-concave lq penalized log-likelihood problem and derive the corresponding optimality conditions. A novel cyclic descent algorithm is presented for penalized log-likelihood optimization, and we show how the derived conditions can be used to reduce algorithm computation. We illustrate by comparing reconstruction quality over the range 0 ≤ q ≤ 1 for several experiments.

Index Terms — sparsity, lq penalty, non-convex, precision matrix, optimality conditions.