Training federated learning models on personal data has raised increasing privacy concerns in many applications. One popular way to preserve data privacy is to apply local differential privacy (LDP) to federated learning. However, existing work does not provide a practical privacy-utility tradeoff, due to three issues. First, the magnitudes of model parameters in federated learning may vary drastically across layers as well as across training epochs; clipping gradients to a fixed range may therefore seriously degrade model utility when the LDP mechanism is applied. Second, if LDP is applied at each iteration of the iterative training process, the privacy budget accumulates, causing the total privacy budget to explode. Last, since the training data and computing nodes are distributed, how to train federated learning models on heterogeneous data remains an open problem. In this paper, we propose a local differentially private clustered federated learning model, which is designed to improve the privacy-utility tradeoff in clustered federated learning and to mitigate the privacy degradation caused by applying the LDP mechanism over multiple training iterations. Empirical evaluations on the MNIST dataset demonstrate that the proposed model achieves superior performance in balancing the tradeoff between privacy and utility.
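The first issue above, that a fixed clipping range interacts badly with per-layer parameter magnitudes, can be sketched as follows. This is a minimal illustration, not the paper's actual mechanism: the function name, the quantile-based per-layer threshold, and the use of Laplace noise are all assumptions made for the example.

```python
import numpy as np

def ldp_perturb(update, epsilon, clip_quantile=0.9, rng=None):
    """Adaptively clip each layer of a model update, then add Laplace noise.

    Instead of one fixed clipping range for the whole model, the bound for
    each layer is set to a high quantile of that layer's own magnitudes,
    so layers with very different scales are not over- or under-clipped.
    (Illustrative sketch only; the actual LDP mechanism may differ.)
    """
    rng = np.random.default_rng(rng)
    noisy = []
    for layer in update:
        # Per-layer adaptive clipping bound.
        c = np.quantile(np.abs(layer), clip_quantile)
        clipped = np.clip(layer, -c, c)
        # Laplace noise calibrated to the clipped sensitivity 2c / epsilon.
        noise = rng.laplace(scale=2.0 * c / epsilon, size=layer.shape)
        noisy.append(clipped + noise)
    return noisy
```

With a fixed global bound, a layer whose weights are an order of magnitude smaller than the rest receives noise far larger than its signal; the per-layer bound above keeps the noise scale proportional to each layer's actual range.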
Zaobo He, Lintao Wang, Zhipeng Cai