JOURNAL ARTICLE

Quantization avoids saddle points in distributed optimization

Yanan Bo, Yongqiang Wang

Year: 2024   Journal: Proceedings of the National Academy of Sciences   Vol: 121 (17)   Pages: e2319625121   Publisher: National Academy of Sciences

Abstract

Distributed nonconvex optimization underpins key functionalities of numerous distributed systems, ranging from power systems, smart buildings, cooperative robots, and vehicle networks to sensor networks. Recently, it has also emerged as a promising solution to handle the enormous growth in data and model sizes in deep learning. A fundamental problem in distributed nonconvex optimization is avoiding convergence to saddle points, which significantly degrade optimization accuracy. We find that the process of quantization, which is necessary for all digital communications, can be exploited to enable saddle-point avoidance. More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization. With an easily adjustable quantization granularity, the approach allows a user to control the number of bits sent per iteration and, hence, to aggressively reduce the communication overhead. Numerical experimental results using distributed optimization and learning problems on benchmark datasets confirm the effectiveness of the approach.
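As a rough illustration of the core ingredient, the Python sketch below shows an unbiased stochastic rounding scheme whose zero-mean quantization noise perturbs a gradient iteration away from a strict saddle point. This is a minimal toy, not the authors' algorithm: the paper's scheme operates over a multi-agent network and comes with formal guarantees of convergence to second-order stationary points. The function names, toy objective, and constants below are hypothetical choices made for the demonstration.

import numpy as np

DELTA = 0.05   # quantization granularity (hypothetical value for the demo)
LR = 0.1       # gradient step size (hypothetical value for the demo)
C = 0.013      # saddle location, deliberately off the quantization lattice

def stochastic_quantize(x, delta, rng):
    # Unbiased stochastic rounding: each coordinate is rounded to one of
    # its two nearest lattice points k * delta, with probabilities chosen
    # so that E[Q(x)] = x. The zero-mean rounding noise supplies the
    # perturbation that lets the iteration leave a strict saddle point.
    scaled = np.asarray(x, dtype=float) / delta
    lower = np.floor(scaled)
    prob_up = scaled - lower                  # P(round up) = fractional part
    up = rng.random(scaled.shape) < prob_up
    return (lower + up) * delta

def grad(z):
    # Gradient of the toy objective f(x, y) = (x - C)**2 - (y - C)**2,
    # which has a strict saddle point at (C, C).
    return np.array([2.0 * (z[0] - C), -2.0 * (z[1] - C)])

rng = np.random.default_rng(0)
z = np.array([C, C])   # start exactly at the saddle, where the gradient is zero
for _ in range(30):
    # Deterministic gradient descent would stay at the saddle forever.
    # Evaluating the gradient at the quantized iterate (the value an agent
    # would actually transmit) injects a random kick that then grows
    # along the escape direction (the y-axis).
    z = z - LR * grad(stochastic_quantize(z, DELTA, rng))

print("displacement along the escape direction:", abs(z[1] - C))

Because E[Q(x)] = x, the rounding adds no systematic bias. Coarsening the granularity (larger DELTA) cuts the number of bits each message needs but enlarges the noise, while refining it does the opposite; this mirrors the communication/accuracy trade-off the abstract refers to.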

Keywords:
Quantization (signal processing), Computer science, Saddle point, Optimization problem, Mathematical optimization, Granularity, Benchmarking, Convergence (mathematics), Distributed computing, Algorithm, Mathematics

Metrics

Cited By: 4
FWCI (Field-Weighted Citation Impact): 2.88
References: 63
Citation Normalized Percentile: 0.81

Topics

Sparse and Compressive Sensing Techniques (Physical Sciences → Engineering → Computational Mechanics)
Distributed Control Multi-Agent Systems (Physical Sciences → Computer Science → Computer Networks and Communications)
Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

CONFERENCE PAPER

Accelerated Multiplicative Weights Update Avoids Saddle Points Almost Always

Yi Feng, Ioannis Panageas, Xiao Wang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence   Year: 2022   Pages: 1811-1817
JOURNAL ARTICLE

All saddle points for polynomial optimization

Anwa Zhou, Shuli Yin, Jinyan Fan

Journal: Computational Optimization and Applications   Year: 2025   Vol: 90 (3)   Pages: 721-752
BOOK-CHAPTER

Optimization of Minima and Saddle Points

Trygve Helgaker

Lecture Notes in Chemistry   Year: 1992   Pages: 295-324
CONFERENCE PAPER

Escaping Saddle Points in Constrained Optimization

Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie

Advances in Neural Information Processing Systems   Year: 2018   Vol: 31   Pages: 3629-3639