JOURNAL ARTICLE

On l_q estimation of sparse inverse covariance

Abstract

Recently, major attention has been given to penalized log-likelihood estimators for sparse precision (inverse covariance) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex l1 norm. However, it is not always the case that the best estimator is achieved with this penalty. So, to improve sparsity and reduce the biases associated with the l1 norm, one must move to non-convex penalties such as the lq (0 ≤ q < 1). In this paper we introduce the resulting non-concave lq penalized log-likelihood problem, and derive the corresponding optimality conditions. A novel cyclic descent algorithm is presented for penalized log-likelihood optimization, and we show how the derived conditions can be used to reduce algorithm computation. We illustrate by comparing reconstruction quality over the range 0 ≤ q ≤ 1 for several experiments.

Index Terms — sparsity, lq penalty, non-convex, precision matrix, optimality conditions.
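As a rough illustration of the objective the abstract describes, the sketch below evaluates an lq-penalized negative Gaussian log-likelihood for a candidate precision matrix. The function name, the penalty applied to all entries, and the regularization weight `lam` are illustrative assumptions, not the paper's exact formulation or algorithm.

```python
import numpy as np

def lq_penalized_negloglik(Theta, S, lam, q):
    """Negative lq-penalized Gaussian log-likelihood (to be minimized):
        -log det(Theta) + tr(S @ Theta) + lam * sum_ij |Theta_ij|^q.
    Assumption: the penalty here is applied to every entry of Theta;
    variants penalize only off-diagonal entries.
    """
    sign, logdet = np.linalg.slogdet(Theta)
    if sign <= 0:
        # Theta must be positive definite for a valid precision matrix
        return np.inf
    penalty = lam * np.sum(np.abs(Theta) ** q)
    return -logdet + np.trace(S @ Theta) + penalty

# Tiny usage example with a well-conditioned sample covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
S = np.cov(X, rowvar=False)
Theta = np.linalg.inv(S)  # unpenalized maximum-likelihood starting point
val = lq_penalized_negloglik(Theta, S, lam=0.1, q=0.5)
```

With q in [0, 1) the penalty term is non-convex, which is what motivates the optimality conditions and cyclic descent algorithm mentioned in the abstract rather than standard convex solvers.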


Metrics

Cited By: 12
Refs: 42
FWCI (Field Weighted Citation Impact): 3.17
Citation Normalized Percentile: 0.91 (in top 10%)


Topics

Sparse and Compressive Sensing Techniques
Physical Sciences → Engineering → Computational Mechanics
Direction-of-Arrival Estimation Techniques
Physical Sciences → Computer Science → Signal Processing
Statistical Methods and Inference
Physical Sciences → Mathematics → Statistics and Probability