Chen Boru, Hejiao Huang, Youqiang Hu
In recent years, a distributed training framework called Federated Learning (FL) has gradually replaced traditional cloud-based centralized training. It allows mobile devices to train on their respective data locally and upload the updated models to a cloud server for aggregation, instead of uploading raw data to the base station for centralized training, thus protecting user privacy. In each round of federated learning, the cloud server sends an initialized model to the devices, the devices train the model on their local datasets, and the updated models are uploaded to the cloud server for aggregation; this process repeats until the model converges. However, this framework still suffers from high training latency and energy consumption. We formulate a nonlinear mixed-integer programming problem that minimizes training latency under an energy constraint, and solve it by decoupling it into two subproblems: resource allocation and device scheduling. For the resource allocation subproblem, we propose a binary-search-based method to find a numerical solution for the CPU frequency and transmit power allocated to each device. For the device scheduling subproblem, we propose a greedy policy that, in each iteration, selects the device requiring the least time for model training. Simulations show that the proposed policy outperforms the baselines and achieves better model accuracy under the energy constraint.
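The abstract does not give the authors' exact formulation, but the overall approach (binary search for latency-minimizing CPU frequency and transmit power under an energy budget, then greedy scheduling of the fastest devices) can be illustrated with a minimal sketch. All device parameters, the switched-capacitance energy model, and the Shannon-rate upload model below are standard assumptions, not taken from the paper.

```python
import math

KAPPA = 1e-28      # effective switched-capacitance coefficient (assumed CPU energy model)
MODEL_BITS = 1e6   # uplink model size in bits (illustrative value)

def round_energy(dev, f, p, t_up):
    """Computation energy (kappa * C * f^2) plus transmission energy (p * t_up)."""
    return KAPPA * dev["cycles"] * f ** 2 + p * t_up

def feasible(dev, T, splits=50):
    """Can the device train locally and upload within deadline T without
    exceeding its energy budget? Scan candidate compute/upload time splits."""
    for i in range(1, splits):
        t_cmp = T * i / splits
        t_up = T - t_cmp
        f = dev["cycles"] / t_cmp                    # slowest CPU frequency meeting t_cmp
        if f > dev["f_max"]:
            continue
        rate_needed = MODEL_BITS / t_up              # required uplink rate in bit/s
        snr_needed = 2 ** (rate_needed / dev["bw"]) - 1
        p = snr_needed * dev["noise"] / dev["gain"]  # invert the Shannon rate formula
        if p > dev["p_max"]:
            continue
        if round_energy(dev, f, p, t_up) <= dev["e_max"]:
            return True
    return False

def min_latency(dev, lo=1e-3, hi=100.0, iters=60):
    """Binary-search the smallest feasible per-round latency for one device."""
    if not feasible(dev, hi):
        return math.inf
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(dev, mid):
            hi = mid
        else:
            lo = mid
    return hi

def greedy_schedule(devices, k):
    """Greedy policy: pick the k devices with the smallest round latency."""
    return sorted(devices, key=min_latency)[:k]
```

For example, a device with `cycles=1e9`, `f_max=2e9`, `bw=1e6`, `noise=1e-9`, `gain=1e-6`, `p_max=0.5`, and `e_max=0.5` yields a minimum round latency of well under a second, while halving its CPU frequency cap raises the latency and pushes it down the greedy ranking.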