JOURNAL ARTICLE

Distributed subgradient-push online convex optimization on time-varying directed graphs

Abstract

This paper presents a class of subgradient-push algorithms for online distributed optimization over time-varying networks. In this setting, a private strongly convex objective function is revealed to each agent at each time step. At the next time step, the agent updates its state using this knowledge, together with information gathered only from its neighboring agents, as prescribed by a sequence of time-varying directed graphs. Under the assumption that this sequence is uniformly strongly connected, we design an algorithm, distributed over this time-varying topology, that guarantees that the individual regret, i.e., the difference between the accumulated cost of an agent's states and the best static offline cost, grows only sublinearly. Simulations illustrate our results.
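The update described in the abstract can be sketched concretely. Below is a minimal, hypothetical Python implementation of a subgradient-push iteration in the spirit of the paper's setting: each agent mixes its state over a time-varying directed graph using push-sum (column-stochastic) weights, recovers a decision estimate by dividing out the push-sum scaling, and then takes a step along the subgradient of its current private cost. The toy problem, the quadratic costs, the specific graph sequence, and the step-size schedule are all illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def subgradient_push(n, T, graphs, grad, alpha):
    """Sketch of online subgradient-push over time-varying directed graphs.

    graphs: t -> list of directed edges (j, i), meaning agent i receives from j
    grad:   (i, t, z) -> subgradient of agent i's time-t private cost at z
    alpha:  t -> diminishing step size
    """
    x = np.zeros(n)  # push-sum numerators (perturbed states)
    y = np.ones(n)   # push-sum weights
    z = np.zeros(n)  # decision estimates x / y
    for t in range(T):
        edges = graphs(t)
        # Out-degrees, counting the self-loop every agent keeps.
        out_deg = np.ones(n)
        for (j, _) in edges:
            out_deg[j] += 1
        # Each agent splits its mass equally over itself and its out-neighbors,
        # which makes the implied mixing matrix column-stochastic.
        w, v = x / out_deg, y / out_deg
        x_new, y_new = w.copy(), v.copy()  # self-loop contribution
        for (j, i) in edges:
            x_new[i] += w[j]
            y_new[i] += v[j]
        z = x_new / y_new  # de-biased decision estimate
        g = np.array([grad(i, t, z[i]) for i in range(n)])
        x = x_new - alpha(t) * g
        y = y_new
    return z

# Hypothetical example: 4 agents, strongly convex costs f_i(z) = (z - theta_i)^2,
# whose best static offline decision is the mean of the theta_i (here 2.5).
theta = np.array([1.0, 2.0, 3.0, 4.0])
ring = [(i, (i + 1) % 4) for i in range(4)]

def graphs(t):
    # Time-varying directed topology; every instantaneous graph is strongly
    # connected, so the uniform strong connectivity assumption holds. The
    # extra edge on even steps makes out-degrees unequal, so the push-sum
    # weights y genuinely matter (the mixing is not doubly stochastic).
    return ring + [(0, 2)] if t % 2 == 0 else ring

z = subgradient_push(n=4, T=3000, graphs=graphs,
                     grad=lambda i, t, zz: 2.0 * (zz - theta[i]),
                     alpha=lambda t: 0.5 / np.sqrt(t + 1.0))
print(z)  # all agents end up near the offline optimum 2.5
```

Dividing `x` by `y` is what lets the scheme cope with directed, non-doubly-stochastic communication: column-stochastic mixing alone biases states toward high-in-degree agents, and the push-sum weights cancel exactly that bias.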

Keywords:
Subgradient method, Regret, Mathematical optimization, Convex function, Convex optimization, Strongly connected component, Topology, Algorithm, Machine learning, Artificial intelligence, Computer science, Mathematics

Metrics

Cited by: 13
FWCI (Field-Weighted Citation Impact): 3.68
References: 24
Citation Normalized Percentile: 0.93 (in top 10%)


Topics

Distributed Control Multi-Agent Systems (Physical Sciences → Computer Science → Computer Networks and Communications)
Advanced Bandit Algorithms Research (Social Sciences → Decision Sciences → Management Science and Operations Research)
Advanced Wireless Network Optimization (Physical Sciences → Engineering → Electrical and Electronic Engineering)