JOURNAL ARTICLE

Accelerated Distributed Stochastic Non-Convex Optimization over Time-Varying Directed Networks

Abstract

We study non-convex optimization problems in which the data are distributed across the nodes of a time-varying directed network, a setting that captures dynamic scenarios where communication between nodes is affected by delays or link failures. The network nodes, each able to access only its local objective and to query a stochastic first-order oracle for gradient estimates, collaborate by exchanging messages with their neighbors to minimize a global objective function. We propose an algorithm for non-convex optimization in such settings that combines stochastic gradient descent with momentum and gradient tracking. By analyzing the dynamics of the network system under gradient acceleration, we further prove that the oracle complexity of the proposed algorithm is $\mathcal{O}\left(1/\varepsilon^{1.5}\right)$. The results demonstrate superior performance of the proposed framework compared to state-of-the-art related methods on a variety of machine learning tasks.
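The paper's algorithm is not reproduced on this page, so the following is only a rough illustration of the ingredients the abstract names: stochastic gradients from a first-order oracle, heavy-ball momentum, gradient tracking, and time-varying mixing over a directed network. All objectives, step sizes, and the rotating-ring mixing pattern below are invented for the sketch (and the local objectives are simple quadratics rather than non-convex functions), so this should not be read as the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 2                                  # nodes, problem dimension

# Synthetic local objectives f_i(x) = 0.5 x^T A_i x - b_i^T x (illustrative only)
M = rng.normal(size=(n, d, d))
A = np.array([Mi @ Mi.T for Mi in M])        # PSD quadratics per node
b = rng.normal(size=(n, d))

def grad(i, x):
    # Stochastic first-order oracle: exact local gradient plus small noise
    return A[i] @ x - b[i] + 0.01 * rng.normal(size=d)

def mixing_matrix(t):
    # Time-varying directed ring: each node keeps half its weight and sends
    # the rest along one link whose target rotates with t (doubly stochastic here)
    W = 0.5 * np.eye(n)
    shift = 1 + t % (n - 1)
    for i in range(n):
        W[i, (i + shift) % n] += 0.5
    return W

x = rng.normal(size=(n, d))                  # local iterates
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                 # gradient-tracking variables
v = np.zeros((n, d))                         # momentum buffers
alpha, beta = 0.02, 0.9                      # step size, momentum (hand-picked)

for t in range(600):
    W = mixing_matrix(t)
    v = beta * v + y                         # heavy-ball momentum on tracked gradient
    x = W @ x - alpha * v                    # mix with neighbors, then descend
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g                    # gradient-tracking update
    g = g_new

x_bar = x.mean(axis=0)                       # nodes should approach consensus
```

The gradient-tracking update `y = W @ y + g_new - g` keeps each node's `y[i]` following the network-average gradient, which is what lets the momentum step use global rather than purely local descent directions.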

Keywords:
Oracle, Stochastic gradient descent, Computer science, Convex function, Stochastic optimization, Convex optimization, Mathematical optimization, Acceleration, Optimization problem, Gradient descent, Algorithm, Mathematics, Stochastic neural network, Artificial neural network, Artificial intelligence, Recurrent neural network

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.34
References: 31
Citation Normalized Percentile: 0.44

Topics

Sparse and Compressive Sensing Techniques (Physical Sciences → Engineering → Computational Mechanics)
Distributed Control Multi-Agent Systems (Physical Sciences → Computer Science → Computer Networks and Communications)
Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)