Zhigang Yan, Dong Li, Xianhua Yu, Zhichao Zhang
Federated learning (FL) protects data privacy through local training and parameter aggregation. In practice, however, not all users need to train their local models in every round, and the model parameters must be quantized for transmission over wireless channels. In this letter, we investigate how to improve model prediction accuracy while guaranteeing the system latency. Specifically, our goal is to minimize the loss function under a latency constraint by jointly considering parameter quantization, user scheduling, and the allocation of channel bandwidth and transmit power. To make the optimization problem tractable, we first derive an upper bound on the loss function under joint quantization and scheduling, as well as an upper bound on the number of bits required for parameter aggregation; we then solve the reformulated problem based on these bounds to obtain closed-form expressions for the quantization level, the number of scheduled users, and the optimized bandwidth and power allocation. Simulation results confirm the convergence and effectiveness of the proposed algorithm.
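The parameter quantization the abstract refers to can be illustrated with a minimal sketch of unbiased stochastic uniform quantization, a common choice in quantized FL. The function name, the uniform grid over `[min, max]`, and the fixed number of levels here are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

def stochastic_quantize(w, num_levels, rng):
    """Quantize vector w onto num_levels uniform levels spanning
    [w.min(), w.max()], rounding up with probability equal to the
    fractional position so the quantizer is unbiased: E[output] = w."""
    lo, hi = w.min(), w.max()
    if hi == lo:                # constant vector: nothing to quantize
        return w.copy()
    step = (hi - lo) / (num_levels - 1)
    scaled = (w - lo) / step    # position of each entry in step units
    floor = np.floor(scaled)
    prob_up = scaled - floor    # probability of rounding to the upper level
    idx = floor + (rng.random(w.shape) < prob_up)
    return lo + idx * step

# Example: quantize a local model update to 16 levels (4 bits per entry).
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
wq = stochastic_quantize(w, num_levels=16, rng=rng)
```

Because rounding always goes to one of the two adjacent levels, the per-entry error is bounded by the step size, and averaging many independent quantizations recovers the original vector, which is why such quantizers compose well with server-side aggregation.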
Xuechen Chen, Aixiang Wang, Xiaoheng Deng, Jinsong Gui
Zhixiong Chen, Wenqiang Yi, Yansha Deng, Arumugam Nallanathan
Wenchao Xia, Wanli Wen, Kai-Kit Wong, Tony Q. S. Quek, Jun Zhang, Hongbo Zhu
Xuefeng Han, Wen Chen, Jun Li, Ming Ding, Qingqing Wu, Kang Wei, Xiumei Deng, Zhen Mei
Zheng Qin, Gang Feng, YiJing Liu, Tak-Shing Peter Yum, Fei Wang, Jun Wang