Finding fast recovery algorithms is a problem of significant interest in compressed sensing. Many applications envision extremely large problem sizes, with at least tens of thousands of equations and hundreds of thousands of unknowns. Interior point methods for solving the best-studied recovery approach, ℓ₁ minimization, are very slow and hence ineffective at such problem sizes. Faster methods such as LARS and homotopy have been proposed for solving ℓ₁ minimization, but there is still a need for algorithms of lower computational complexity. This has motivated a fast-growing literature on greedy methods and iterative thresholding for finding the sparsest solution. However, it has been shown that most of these algorithms perform worse than ℓ₁ minimization in terms of the sparsity-measurement tradeoff. In a recent paper the authors introduced a new algorithm, called AMP, that is as fast as previously proposed iterative soft thresholding algorithms yet performs exactly as well as ℓ₁ minimization. The algorithm is inspired by message passing algorithms on bipartite graphs. The statistical properties of AMP allow the authors to propose a theoretical framework for analyzing the asymptotic performance of the algorithm, which yields very sharp predictions of several observables of the algorithm. In this paper we address several questions regarding the thresholding policy. We consider a very general thresholding policy τ(·) for the algorithm and use the maxmin framework to tune it optimally. We show that, when formulated in this generality, the maxmin thresholding policy is not unique: many different thresholding policies can lead to the same phase transition. We then propose two very simple thresholding policies that are easy to implement in practice and prove that both are maxmin.
This analysis will also shed some light on several other aspects of the algorithm, such as the least favorable distribution and similarity of all maxmin optimal thresholding policies. We will also show how one can derive the AMP algorithm from the full message passing algorithm.
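To make the setting concrete, the sketch below shows a generic AMP iteration with componentwise soft thresholding: each step applies the threshold to a pseudo-data vector and adds the Onsager correction term to the residual. This is a minimal illustration, not the paper's maxmin-optimal policy; the threshold rule used here (a constant `kappa` times the empirical residual level) and the parameter names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft thresholding: eta(x; tau) = sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp(A, y, n_iter=50, kappa=1.5):
    """Sketch of AMP for recovering a sparse x from y = A x.

    The thresholding policy here (tau proportional to the empirical
    residual standard deviation, scaled by kappa) is one simple,
    commonly used choice; it stands in for the general policy tau(.)
    tuned in the paper.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # Illustrative threshold rule: kappa times the residual level
        tau = kappa * np.linalg.norm(z) / np.sqrt(m)
        x_new = soft_threshold(x + A.T @ z, tau)
        # Onsager correction: previous residual times the fraction of
        # active coordinates over the number of measurements
        onsager = (np.count_nonzero(x_new) / m) * z
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

In a noiseless experiment with a Gaussian measurement matrix and sparsity well inside the phase-transition region, this iteration recovers the sparse vector to high accuracy; the Onsager term is what distinguishes AMP from plain iterative soft thresholding and is responsible for its ℓ₁-like tradeoff.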