Recent advances in Federated Learning (FL) have enabled large-scale collaborative machine learning (ML) across vast numbers of distributed clients, ensuring both model performance and data privacy. However, most current works prioritize the interests of the central controller in FL, such as enhancing model performance and training efficiency, while neglecting the interests of the FL clients. This oversight can lead to unfair treatment of clients, discouraging their active participation in the training process and potentially undermining the sustainability of the FL ecosystem. As a result, addressing fairness issues in FL has become increasingly essential. In this thesis, we address three critical issues in Fairness-Aware Federated Learning (FAFL). Firstly, FAFL is in its nascent stages, and many researchers and practitioners lack a comprehensive understanding of it. Consequently, identifying fairness concerns in FL and establishing metrics to measure fairness are fundamental questions that must be answered to build a robust FL ecosystem. Secondly, given the dynamic nature of FL systems, adapting FL policies to changing environments and striking a balance between fairness and model performance pose significant challenges. Thirdly, most existing FL research is centered on a monopoly scenario, in which a single server selects data owners from a common pool. In practical applications, however, multiple FL servers often compete for the same FL clients. Therefore, the problem of FL job scheduling in multi-server environments remains unresolved, and adapting FL fairness policies to these non-monopoly settings is an important area of investigation.
Yuxin Shi, Zelei Liu, Zhuan Shi, Han Yu