Kean Ren, Guocheng Liao, Qian Ma, Xu Chen
Federated learning (FL) is a distributed machine learning paradigm in which clients jointly train a model without exposing their private data to a central server. However, FL faces two challenges: a technical one, the non-IID data issue, and an economic one, the incentive issue. Many existing works propose incentive mechanisms that select clients with high-quality data to tackle the non-IID issue, but they assume the server knows clients' true data quality. This assumption is hard to satisfy because such information is private. In this paper, we remove this assumption by integrating a local differentially private mechanism into the incentive mechanism. Specifically, we propose a Bayesian method for the server to estimate clients' data qualities from their privatized reports, together with an efficient algorithm that incentivizes clients with approximately high-quality data. We prove that our solution has an approximation guarantee and is incentive-compatible, individually rational, and computationally efficient. We also analyze the quality loss incurred by integrating the privacy-preserving mechanism. Extensive experiments show that our solution outperforms a mechanism that ignores the non-IID issue and is comparable to a mechanism without privacy protection.
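The abstract's core idea — clients privatize their quality scores before reporting, and the server recovers approximate qualities via Bayesian estimation — can be sketched as below. This is a minimal illustration under assumed specifics, not the paper's actual estimator: it uses a Laplace mechanism for an epsilon-LDP report of a scalar quality in [0, 1], and a Gaussian-approximate conjugate update on the server side; all function names and parameters are hypothetical.

```python
import math
import random


def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling for Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def privatize_quality(q: float, epsilon: float) -> float:
    """Client side: epsilon-LDP report of a quality score q in [0, 1].

    The identity query on [0, 1] has sensitivity 1, so Laplace noise
    with scale 1/epsilon suffices for epsilon-LDP.
    """
    q = min(max(q, 0.0), 1.0)
    return q + laplace_sample(1.0 / epsilon)


def posterior_mean(reports, epsilon, prior_mean=0.5, prior_var=0.25):
    """Server side: estimate the average client quality from noisy reports.

    Laplace(1/epsilon) noise has variance 2/epsilon**2; treating it as
    Gaussian gives a simple conjugate posterior-mean update (an
    approximation chosen for illustration).
    """
    n = len(reports)
    noise_var = 2.0 / epsilon ** 2
    like_var = noise_var / n          # variance of the sample mean
    sample_mean = sum(reports) / n
    w = prior_var / (prior_var + like_var)
    return w * sample_mean + (1.0 - w) * prior_mean
```

With many clients the noise averages out, so the server's estimate concentrates near the true mean quality even though no individual report is trustworthy — the trade-off the abstract quantifies as "quality loss" from the privacy mechanism.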
Shuyan Cheng, Peng Li, Ruchuan Wang, He Xu
Yiwei Li, Shuai Wang, Chong-Yung Chi, Tony Q. S. Quek
Bjarne Pfitzner, Max M. Maurer, Axel Winter, Christoph Riepe, Igor M. Sauer, Robin van de Water, Bert Arnrich, Johann Pratschke