Privacy protection is gaining increased attention in distributed optimization and learning. As differential privacy is becoming a de facto standard for privacy preservation, results have recently emerged that integrate differential privacy with distributed optimization. However, to ensure differential privacy with a finite cumulative privacy budget, all existing approaches have to sacrifice provable convergence to the optimal solution. In this paper, we propose a differentially-private distributed optimization algorithm that can ensure, for the first time, both $\epsilon$-differential privacy and optimality, even on the infinite time horizon. Numerical simulation results confirm the effectiveness of the proposed approach.