<p>Federated learning is the best-known approach for training machine learning models over distributed datasets without sharing local data with a central server. In many real-world settings, however, not all clients are equally beneficial to training, so selecting an appropriate subset of participants is critical for achieving high performance at low energy cost. This paper formulates client selection as a multi-agent optimization task whose goal is a trade-off between performance and energy consumption. In particular, we propose a fully decentralized client selection policy based on non-stationary multi-armed bandits, in which each client autonomously decides whether to participate in a training round without relying on the central server. The proposed solution outperforms a random client selection policy, reducing by up to 12% both the number of rounds required to reach the target accuracy and the energy consumed.</p>
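The abstract describes each client running its own non-stationary bandit to decide between two "arms": participate in the next round or abstain. The paper does not give its exact formulation here, so the following is only a minimal sketch under assumptions of my own: a two-armed, discounted-UCB bandit per client, where old observations are exponentially down-weighted so the policy stays responsive as training dynamics shift. The class name, the discount-based update, and the reward definition (e.g., accuracy gain minus energy cost) are all illustrative, not the authors' method.

```python
import math


class NonStationaryParticipationBandit:
    """Illustrative per-client two-armed bandit: arm 0 = participate,
    arm 1 = abstain. Uses a discounted UCB rule (an assumption, not
    necessarily the paper's exact policy) so that recent rewards
    dominate under non-stationarity."""

    def __init__(self, gamma=0.95, c=1.0):
        self.gamma = gamma          # discount applied to past statistics
        self.c = c                  # exploration weight
        self.counts = [0.0, 0.0]    # discounted pull counts per arm
        self.values = [0.0, 0.0]    # discounted reward sums per arm

    def choose(self):
        # Pull any never-tried arm first, then pick the highest UCB score.
        if min(self.counts) == 0.0:
            return self.counts.index(0.0)
        total = sum(self.counts)
        scores = [
            self.values[a] / self.counts[a]
            + self.c * math.sqrt(math.log(total) / self.counts[a])
            for a in (0, 1)
        ]
        return max((0, 1), key=lambda a: scores[a])

    def update(self, arm, reward):
        # Decay all past statistics, then record the new observation;
        # the reward would encode e.g. accuracy gain minus energy cost.
        for a in (0, 1):
            self.counts[a] *= self.gamma
            self.values[a] *= self.gamma
        self.counts[arm] += 1.0
        self.values[arm] += reward
```

In a round, a client would call `choose()`, train and report only if arm 0 is selected, then call `update()` with the observed reward. Because the discounted counts of an idle arm shrink geometrically, its exploration bonus grows again over time, so the client periodically re-tests the other arm instead of locking in forever.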
Filipe Maciel, Allan M. de Souza, Luiz F. Bittencourt, Leandro A. Villas, Torsten Braun