This paper presents a class of subgradient-push algorithms for online distributed optimization over time-varying networks. In this setting, a private strongly convex objective function is revealed to each agent at each time step. At the next time step, the agent updates its state using this knowledge, together with information gathered only from its neighboring agents, where the neighbor relations are prescribed by a sequence of time-varying directed graphs. Under the assumption that this sequence is uniformly strongly connected, we design an algorithm, distributed over this time-varying topology, which guarantees that the individual regret, defined as the difference between the accumulated cost of an agent's states and the best static offline cost, grows only sublinearly. Simulations illustrate our results.
Mohammad Amin Akbari, Bahman Gharesifard, Tamás Linder
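The paper's own algorithm and analysis are not reproduced here. As a rough illustration of the setting in the abstract, the following is a minimal NumPy sketch of a generic online subgradient-push iteration (push-sum mixing followed by a subgradient step) over a time-varying directed network. The quadratic per-agent losses, the alternating directed-ring graphs, the scalar states, and the step size `1/sqrt(t)` are all illustrative assumptions, not taken from the paper; the losses here are held fixed over time, which is only a special case of the online setting.

```python
import numpy as np

def ring_adjacency(n, shift):
    """Directed ring: agent j sends to agent (j + shift) % n, plus a self-loop."""
    A = np.eye(n)
    for j in range(n):
        A[(j + shift) % n, j] = 1.0
    return A

def column_stochastic(A):
    """Each agent splits its outgoing mass equally among its out-neighbors."""
    return A / A.sum(axis=0, keepdims=True)

def online_subgradient_push(targets, T):
    """Illustrative online subgradient-push for losses f_i(x) = (x - targets[i])^2.

    Returns the agents' final estimates and their time-averaged individual
    regrets against the best static offline decision.
    """
    n = len(targets)
    x = np.zeros(n)       # perturbed states
    y = np.ones(n)        # push-sum weights correcting the directed imbalance
    z = x / y             # de-biased estimates: the decisions agents commit to
    regret = np.zeros(n)
    opt = targets.mean()  # static offline minimizer of the total quadratic cost
    for t in range(1, T + 1):
        # decisions z incur the revealed costs; accumulate regret per agent
        regret += (np.sum((z[:, None] - targets[None, :]) ** 2, axis=1)
                   - np.sum((opt - targets) ** 2))
        # time-varying digraph: alternate ring shifts so the union over any
        # two consecutive steps is strongly connected
        A = column_stochastic(ring_adjacency(n, 1 + t % 2))
        w = A @ x
        y = A @ y
        z = w / y
        grad = 2.0 * (z - targets)     # subgradient of each agent's own loss
        x = w - grad / np.sqrt(t)      # diminishing step size alpha_t = 1/sqrt(t)
    return z, regret / T
```

In this toy run the de-biased estimates `z` drift toward the static minimizer, and the time-averaged regret shrinks as the horizon grows, which is the qualitative behavior a sublinear regret bound predicts.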