Federated learning (FL) with progressive training is a promising privacy-preserving and communication-efficient framework for edge intelligence. By partitioning the global model into multiple sub-models and dividing FL training into multiple stages, progressive training enables a large model to be trained gradually, significantly reducing transmission overhead without compromising learning performance. However, deploying FL with progressive training over wireless networks is hindered by limited radio and energy resources. To address this challenge, we adopt over-the-air computation (AirComp) to support FL with progressive training over wireless networks. Balancing the tradeoff between AirComp transmission distortion and the transition efficiency of progressive training, we formulate a mixed-integer optimization problem with energy and power constraints, which we decompose into several subproblems via Lyapunov optimization. We then develop a low-complexity algorithm that jointly optimizes the transmit power, receive beamforming, and transition indicator in an alternating manner. Simulation results demonstrate the effectiveness of the proposed algorithm in improving the learning performance of the considered FL system.
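To make the AirComp aggregation step concrete, below is a minimal NumPy sketch of the idea, not the paper's algorithm: devices transmit their local updates simultaneously over a fading channel, so the channel itself computes the sum needed for FL aggregation. The channel-inversion power control, the scalar-channel model, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 16        # number of devices, model dimension (assumed values)
noise_std = 0.05    # receiver noise level (assumed)

# Hypothetical local model updates and complex Rayleigh fading channels.
updates = rng.normal(size=(K, d))
h = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

# Channel-inversion transmit scaling so all signals align at the receiver;
# the common scaling factor eta is set by the weakest channel.
eta = np.min(np.abs(h)) ** 2
b = np.sqrt(eta) * np.conj(h) / np.abs(h) ** 2   # per-device precoders

# All devices transmit at once; the wireless channel sums the signals.
y = (h * b) @ updates.astype(complex)            # shape (d,)
y += noise_std * (rng.normal(size=d) + 1j * rng.normal(size=d)) / np.sqrt(2)

# The receiver de-scales to estimate the sum of updates (one aggregation).
est = np.real(y) / np.sqrt(eta)
true_sum = updates.sum(axis=0)
mse = np.mean((est - true_sum) ** 2)
```

The residual `mse` is the AirComp transmission distortion that the paper's optimization trades off against the transition efficiency of progressive training; in a full design, transmit power and receive beamforming would be optimized rather than fixed by channel inversion.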
Fangtong Zhou, Zhibin Wang, Xiliang Luo, Yong Zhou