JOURNAL ARTICLE

Robust Federated Learning With Contrastive Learning and Meta-Learning

Huan Zhang, Yuxiang Chen, Kuanching Li, Yuhui Li, Sisi Zhou, Wei Liang, Aneta Poniszewska-Marańda

Year: 2025 Journal: International Journal of Interactive Multimedia and Artificial Intelligence Publisher: International University of La Rioja

Abstract

Federated learning is regarded as an effective approach to addressing data privacy issues in the era of artificial intelligence, yet it still faces the challenges of unbalanced data distribution and client vulnerability to attacks. Current research tackles these challenges but ignores the situation where abnormal updates account for a large proportion of all updates; the aggregated model may then absorb so much abnormal information that it deviates from the normal update direction, reducing model performance. Some methods are also unsuitable for non-Independent and Identically Distributed (non-IID) settings, where the lack of information on small-category data can lead to inaccurate predictions. In this work, we propose a robust federated learning architecture, called FedCM, which integrates contrastive learning and meta-learning to mitigate the impact of poisoned client data on global model updates. The approach improves feature representations through contrastive learning, combining the extracted data characteristics with the previous round's local model to improve accuracy. Additionally, a meta-learning method based on Gaussian-noise model parameters is employed to fine-tune the local model using the global model, addressing the challenges posed by non-IID data and thereby enhancing the model's robustness. Experimental validation is conducted on real datasets, including CIFAR10, CIFAR100, and SVHN. The experimental results show that FedCM achieves the highest average model accuracy across all proportions of attacked clients. For a non-IID distribution with parameter 0.5 on CIFAR10, under attacked-client proportions of 0.2, 0.5, and 0.8, FedCM improves average accuracy over the baseline methods by 8.2%, 7.9%, and 4.6%, respectively.
Across different proportions of attacked clients, FedCM achieves at least 4.6%, 5.2%, and 0.45% improvements in average accuracy on the CIFAR10, CIFAR100, and SVHN datasets, respectively. FedCM also converges faster in all training groups, with a particularly clear advantage on the SVHN dataset, where the number of training rounds required for convergence is reduced by approximately 34.78% compared to other methods.
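The full method is not reproduced on this page, but the contrastive step the abstract describes (comparing the current local representation against the global model's representation and the previous round's local representation) matches the model-contrastive formulation popularized by MOON, which is also listed under Related Documents below. A minimal sketch of that loss, where `z_local`, `z_global`, `z_prev`, and the temperature `tau` are illustrative names rather than the paper's own notation:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """MOON-style model-contrastive loss (sketch).

    Pulls the current local model's representation toward the global
    model's representation (positive pair) and pushes it away from the
    previous-round local model's representation (negative pair).
    """
    pos = np.exp(cosine_sim(z_local, z_global) / tau)
    neg = np.exp(cosine_sim(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))
```

When the local representation aligns with the global one and differs from the stale previous-round one, the loss is small; when local training drifts back toward the previous round (e.g., under poisoned or skewed data), the loss grows, which is the corrective pressure the abstract attributes to the contrastive component.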


Topics

Privacy-Preserving Technologies in Data (Physical Sciences → Computer Science → Artificial Intelligence)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE

Robust Inference for Federated Meta-Learning

Zijian Guo, Xiudi Li, Larry Han, Tianxi Cai

Journal: Journal of the American Statistical Association Year: 2025 Vol: 120 (551) Pages: 1695-1710
JOURNAL ARTICLE

Model-Contrastive Federated Learning

Qinbin Li, Bingsheng He, Dawn Song

Year: 2021 Pages: 10708-10717
JOURNAL ARTICLE

Personalized Federated Contrastive Learning

Yupei Zhang, Yunan Xu, Shuangshuang Wei, Yifei Wang, Yuxin Li, Xuequn Shang

Journal: 2022 IEEE International Conference on Big Data (Big Data) Year: 2022 Pages: 4218-4225