This paper considers stochastic convex optimization problems with two sets of constraints: (a) deterministic constraints on the domain of the optimization variable, which are difficult to project onto; and (b) deterministic or stochastic constraints that admit efficient projection. Problems of this form arise frequently in the context of semidefinite programming, as well as when various NP-hard problems are solved approximately via semidefinite relaxation. Since projection onto the first set of constraints is difficult, it becomes necessary to explore projection-free algorithms, such as the stochastic Frank-Wolfe (FW) algorithm. The second set of constraints, on the other hand, cannot be handled in the same way and must instead be incorporated as an indicator function within the objective, thereby complicating the application of FW methods. Similar problems have been studied before, but the existing approaches suffer from slow convergence rates. This work, equipped with a momentum-based gradient tracking technique, guarantees fast convergence rates on par with the best-known rates for problems without the second set of constraints. Zeroth-order variants of the proposed algorithms are also developed and again improve upon the state-of-the-art rate results. We further propose novel trimmed FW variants that enjoy the same convergence rates as their classical counterparts but are empirically shown to require significantly fewer calls to the linear minimization oracle, speeding up the overall algorithm. The efficacy of the proposed algorithms is tested on relevant applications: sparse matrix estimation, clustering via semidefinite relaxation, and the uniform sparsest cut problem.
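As a rough illustration of the mechanics described above, here is a minimal Python sketch of one way momentum-based gradient tracking can be combined with stochastic FW iterations. The l1-ball domain, least-squares objective, noise model, step-size schedules, and all function names are illustrative assumptions, not the paper's actual algorithm or applications.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimization oracle over the l1-ball:
    argmin_{||v||_1 <= radius} <grad, v>."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

def stochastic_grad(x, A, b, noise=0.1, rng=None):
    """Noisy gradient of 0.5*||Ax - b||^2 (stand-in for a stochastic oracle)."""
    rng = rng or np.random.default_rng()
    g = A.T @ (A @ x - b)
    return g + noise * rng.standard_normal(g.shape)

def momentum_fw(A, b, radius=1.0, iters=500, seed=0):
    """Stochastic FW with a momentum-averaged gradient estimate (assumed schedules)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    d = stochastic_grad(x, A, b, rng=rng)              # tracked gradient estimate
    for t in range(1, iters + 1):
        rho = min(1.0, 4.0 / (t + 3) ** (2.0 / 3.0))   # momentum weight (illustrative)
        eta = 2.0 / (t + 2)                            # FW step size (illustrative)
        g = stochastic_grad(x, A, b, rng=rng)
        d = (1 - rho) * d + rho * g                    # momentum-based gradient tracking
        v = lmo_l1(d, radius)                          # LMO call replaces projection
        x = x + eta * (v - x)                          # convex combination stays feasible
    return x
```

The convex-combination update keeps every iterate inside the feasible set, so no projection onto the difficult constraints is ever needed. Handling the second set of constraints through an indicator function, as the abstract describes, would further modify the update and is omitted from this sketch.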