Yujun Cheng, Zhewei Zhang, Shengjin Wang
Federated Learning (FL) is a widely used distributed learning paradigm that enables real-time continuous learning while preserving client privacy. Most FL implementations assume that every edge client has sufficient computational capability to train a full Deep Neural Network (DNN) model. In practice, however, some clients have limited resources and can only train a significantly smaller local model. To address this system heterogeneity, this paper introduces Fed-SDS, an approach that adaptively tailors the sparsity strategy of each local model. Compared with existing sparse FL schemes, Fed-SDS improves convergence and model accuracy through a novel channel-wise sparsity metric, Mean Weight Magnitude with Gradient (MWMG). We compare Fed-SDS with other sparse FL methods and find that, whereas those methods can significantly degrade convergence, Fed-SDS achieves the highest task accuracy and convergence speed across a range of system- and data-heterogeneity scenarios.
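The abstract does not give MWMG's exact formula, but a channel-wise metric of this kind can be sketched in PyTorch. The sketch below is an illustration, not the paper's definition: it assumes MWMG blends each output channel's mean absolute weight with its mean absolute gradient, and the mixing coefficient `alpha`, the function names, and the top-k channel selection are all assumptions for illustration.

```python
import torch
import torch.nn as nn

def mwmg_scores(conv: nn.Conv2d, alpha: float = 0.5) -> torch.Tensor:
    """Per-output-channel importance scores.

    Assumed form of a 'Mean Weight Magnitude with Gradient' metric:
    a convex mix of mean |weight| and mean |gradient| per channel.
    `alpha` is a hypothetical mixing coefficient, not from the paper.
    Requires a prior backward() so conv.weight.grad is populated.
    """
    w = conv.weight.detach()              # shape: (out_ch, in_ch, kH, kW)
    g = conv.weight.grad.detach()         # same shape as the weights
    mean_w = w.abs().mean(dim=(1, 2, 3))  # mean weight magnitude per channel
    mean_g = g.abs().mean(dim=(1, 2, 3))  # mean gradient magnitude per channel
    return alpha * mean_w + (1.0 - alpha) * mean_g

def channel_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Boolean mask keeping the top `keep_ratio` fraction of channels,
    e.g. keep_ratio=0.25 for a client that can train a quarter-size model."""
    k = max(1, int(keep_ratio * scores.numel()))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    return mask
```

Under this reading, a resource-constrained client would score each layer's channels after a local backward pass, keep only the highest-scoring fraction its budget allows, and train the resulting sparse submodel; how Fed-SDS actually combines the two magnitudes and assigns per-client ratios is specified in the paper itself.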