JOURNAL ARTICLE

Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning

Hongsheng Hu, Xuyun Zhang, Zoran Salčić, Lichao Sun, Kim‐Kwang Raymond Choo, Gillian Dobbie

Year: 2023   Journal: IEEE Transactions on Dependable and Secure Computing   Vol: 21 (4)   Pages: 3012-3029   Publisher: IEEE Computer Society

Abstract

Federated learning (FL) is a popular approach to privacy-aware machine learning, as it allows multiple clients to collaboratively train a global model without granting others access to their private data. It is, however, known that FL can be vulnerable to membership inference attacks (MIAs), in which the training records of the global model can be distinguished from the testing records. Surprisingly, research investigating the source inference problem appears to be lacking. We also observe that identifying a training record's source client can result in privacy breaches extending beyond MIAs. For example, consider an FL application in which multiple hospitals jointly train a COVID-19 diagnosis model: a membership inference attacker can identify the medical records that have been used for training, and additionally identifying the source hospital can make patients from that hospital more vulnerable to discrimination. Seeking to address this gap in the literature, we take the first step in investigating source privacy in FL. Specifically, we propose a new inference attack (hereafter referred to as a source inference attack, or SIA), designed to enable an honest-but-curious server to identify a training record's source client. The proposed SIAs leverage Bayes' theorem, allowing the server to implement the attack in a non-intrusive manner without deviating from the defined FL protocol. We then evaluate SIAs in three different FL frameworks and show that in existing FL frameworks, clients sharing gradients, model parameters, or predictions on a public dataset will leak such source information to the server. We also conduct extensive experiments on various datasets to investigate the key factors in an SIA. The experimental results validate the efficacy of the proposed SIAs; e.g., an attack success rate of 67.1% (baseline 10%) can be achieved when the clients share model parameters with the server. Comprehensive ablation studies demonstrate that the success of an SIA is directly related to the overfitting of the local models.
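To make the attack intuition concrete, below is a minimal, hypothetical sketch in Python (PyTorch). It assumes that the Bayes'-theorem-based attack described above reduces to the server comparing how well each client's local model fits a target training record and attributing the record to the best-fitting (lowest-loss) client; the function name, tensor shapes, and loss choice are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: loss-based source inference by an honest-but-curious server.
# Assumption: the server already holds each client's local model (e.g., reconstructed
# from shared parameters) and a record (x, y) known to be in the training set.
import torch
import torch.nn.functional as F

def infer_source_client(local_models, x, y):
    """Attribute (x, y) to the client whose local model yields the lowest loss on it."""
    losses = []
    for model in local_models:
        model.eval()
        with torch.no_grad():
            logits = model(x.unsqueeze(0))                      # add a batch dimension
            losses.append(F.cross_entropy(logits, y.unsqueeze(0)).item())
    return min(range(len(losses)), key=losses.__getitem__)      # predicted source client index

Under this reading, the abstract's observation that SIA success tracks local-model overfitting is natural: the more a local model overfits its own data, the more its loss on its own training records separates from the losses of the other clients' models.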

Keywords:
Inference, Computer science, Leverage (statistics), Artificial intelligence, Machine learning, Bayesian inference, Identification (biology), Computer security, Information retrieval, Bayesian probability

Metrics

Cited By: 23
FWCI (Field Weighted Citation Impact): 5.88
References: 119
Citation Normalized Percentile: 0.95 (in top 1%; in top 10%)

Topics

Privacy-Preserving Technologies in Data
Physical Sciences →  Computer Science →  Artificial Intelligence
Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Cryptography and Data Security
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Enhance membership inference attacks in federated learning

Xinlong He, Yang Xu, Sicong Zhang, Weida Xu, Jiale Yan

Journal:   Computers & Security Year: 2023 Vol: 136 Pages: 103535-103535
JOURNAL ARTICLE

Source Inference Attacks in Federated Learning

Hongsheng Hu, Zoran Salčić, Lichao Sun, Gillian Dobbie, Xuyun Zhang

Journal:   2021 IEEE International Conference on Data Mining (ICDM) Year: 2021 Pages: 1102-1107
BOOK-CHAPTER

FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning

Zilu Yang, Yanchao Zhao, Jiale Zhang

Series: Lecture Notes in Computer Science   Year: 2023   Pages: 364-378
JOURNAL ARTICLE

Enhancing black-box membership inference attacks in federated learning

Qiang Shi, Luzhen Ren, Xinfeng He

Journal:   Journal of Information Security and Applications Year: 2025 Vol: 96 Pages: 104302-104302