Federated Learning (FL) has gained significant popularity as a means of handling large-scale data in Edge Computing (EC) applications. Due to the frequent communication between edge devices and the server, the parameter-server-based framework for FL may suffer from a communication bottleneck and degraded training efficiency. As an alternative, Hierarchical Federated Learning (HFL), which leverages edge servers as intermediaries to perform model aggregation among nearby devices, has emerged. However, existing HFL solutions fail to train effectively under the constrained and heterogeneous communication resources of edge devices. In this paper, we design a communication-efficient HFL framework, named CE-HFL, to accelerate the convergence of HFL. Concretely, we propose to adjust the global and edge aggregation frequencies in HFL according to the heterogeneous communication resources of edge devices. By performing multiple local updates before each communication round, the communication overhead on both the edge servers and the cloud server can be significantly reduced. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed method.
Xiangnan Wang, Yang Xu, Hongli Xu, Zhipeng Sun, Yunming Liao, Ji Qi
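To make the two-level aggregation schedule concrete, below is a minimal Python sketch of hierarchical FL. All names and the task are illustrative assumptions, not CE-HFL's actual algorithm: the abstract does not specify the update rule, so this sketch uses FedAvg-style uniform averaging on a synthetic least-squares problem, with a fixed edge aggregation frequency TAU_E (local steps per edge round) and global frequency TAU_G (edge rounds per cloud round), whereas CE-HFL adapts these frequencies to the devices' heterogeneous communication resources.

import numpy as np

rng = np.random.default_rng(0)
DIM, LR = 10, 0.05
N_EDGES, DEV_PER_EDGE = 3, 4
TAU_E = 5   # local gradient steps between two edge aggregations
TAU_G = 2   # edge rounds between two global (cloud) aggregations

# Synthetic linear-regression data, one (X, y) pair per device
# (hypothetical task, used only to illustrate the aggregation schedule).
true_w = rng.normal(size=DIM)

def make_device_data(n=32, noise=0.1):
    X = rng.normal(size=(n, DIM))
    return X, X @ true_w + noise * rng.normal(size=n)

clusters = [[make_device_data() for _ in range(DEV_PER_EDGE)]
            for _ in range(N_EDGES)]

def local_updates(w, X, y, steps, lr):
    # Run `steps` full-batch gradient steps on the mean squared loss.
    for _ in range(steps):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

global_w = np.zeros(DIM)
for _ in range(10):                      # global (cloud) rounds
    edge_models = []
    for devices in clusters:             # one edge server per device cluster
        edge_w = global_w.copy()
        for _ in range(TAU_G):           # edge rounds before uploading to cloud
            locals_ = [local_updates(edge_w.copy(), X, y, TAU_E, LR)
                       for X, y in devices]
            edge_w = np.mean(locals_, axis=0)    # edge aggregation
        edge_models.append(edge_w)
    global_w = np.mean(edge_models, axis=0)      # cloud aggregation

print("distance to true model:", np.linalg.norm(global_w - true_w))

In this schedule, raising TAU_E and TAU_G trades additional local computation for fewer uplink transmissions to the edge and cloud servers, which is the communication/convergence trade-off the abstract describes.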