Chenyuan Feng, Ahmed Arafa, Zihan Chen, Mingxiong Zhao, Tony Q. S. Quek, Howard H. Yang
This paper studies the efficiency of training a statistical model between an edge server and multiple clients via Federated Learning (FL), a machine learning method that preserves data privacy during training, over wireless networks. Owing to unreliable wireless channels and constrained communication resources, the server can select only a handful of clients for parameter updates in each communication round. To address this issue, analytical expressions are derived to characterize the FL convergence rate, accounting for key features of both the communication and algorithmic aspects, including transmission reliability, scheduling policies, and the momentum method. First, the analysis reveals that in networks with reliable connections, either carefully designed user scheduling policies or expanded bandwidth that accommodates more clients per communication round can expedite model training; these methods, however, become ineffective when connections are erratic. Second, it is verified that incorporating the momentum method into the training algorithm accelerates convergence and provides greater resilience against transmission failures. Last, extensive empirical simulations corroborate these theoretical findings and performance improvements.
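To make the setup concrete, the following is a minimal toy sketch, not the paper's actual algorithm or experiments, of FedAvg-style training with random client scheduling, Bernoulli transmission failures, and server-side momentum. The quadratic client losses, function names, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not from the paper): each client i
# holds a quadratic loss f_i(w) = 0.5 * ||w - c_i||^2, whose gradient is
# simply (w - c_i). The global minimizer is the mean of the centers.
num_clients, dim = 20, 5
centers = rng.normal(size=(num_clients, dim))
optimum = centers.mean(axis=0)

def fed_train(rounds=200, sched_size=5, success_prob=0.8,
              lr=0.5, momentum=0.9):
    """FedAvg-style loop: schedule a random client subset each round,
    drop updates whose transmission fails, and apply heavy-ball
    momentum to the aggregated gradient at the server."""
    w = np.zeros(dim)
    buf = np.zeros(dim)  # server-side momentum buffer
    for _ in range(rounds):
        scheduled = rng.choice(num_clients, size=sched_size, replace=False)
        # Each scheduled upload arrives with probability success_prob,
        # modeling an unreliable wireless link.
        arrived = [i for i in scheduled if rng.random() < success_prob]
        if not arrived:
            continue  # nothing received this round; model is unchanged
        g = np.stack([w - centers[i] for i in arrived]).mean(axis=0)
        buf = momentum * buf + g  # heavy-ball momentum update
        w = w - lr * buf
    return w

w_final = fed_train()
print("distance to optimum:", np.linalg.norm(w_final - optimum))
```

With full participation and perfectly reliable links (`sched_size=num_clients`, `success_prob=1.0`), the loop reduces to deterministic heavy-ball gradient descent on the average loss and converges to `optimum`; partial scheduling and transmission failures add noise that the momentum buffer helps smooth out.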