JOURNAL ARTICLE

Threshold Filtering for Detecting Label Inference Attacks in Vertical Federated Learning

Lei Ding, H.-R. Bao, Qingzhe Lv, Feng Zhang, Zhouyang Zhang, Jie Han, Shuang Ding

Year: 2024
Journal: Electronics
Vol: 13 (22)
Pages: 4376
Publisher: Multidisciplinary Digital Publishing Institute

Abstract

Federated learning, an emerging machine-learning paradigm, has received widespread attention because it allows users to train locally and uses cryptographic techniques to safeguard data privacy during model aggregation. However, existing federated learning remains susceptible to privacy breaches, e.g., label inference attacks in vertical federated learning scenarios, where an adversary can infer the labels of other participants from the trained model, leading to serious privacy leakage. In this paper, we design a detection method for label inference attacks in vertical federated learning that detects attacks based on the principles by which they operate. Specifically, we design a threshold-filtering detection method that declares the model under attack when a detection statistic exceeds a set threshold. Furthermore, we define six threat-model classifications based on the adversary's different prior knowledge, allowing a comprehensive analysis of the adversary's attacks. Beyond detecting attacks, the method can also evaluate the extent of an attack on the model and the effectiveness of defenses: an evaluation module experimentally measures metrics such as attack accuracy, the F1 score, and the change in accuracy after a defense method is applied. For example, on a fully connected neural network trained on the Breast Cancer Wisconsin dataset, the attack achieves an accuracy of 86.72% and an F1 score of 0.743, and the attack accuracy is reduced to 36.36% after dispersed training. This gives users an overall grasp of the extent to which the training model is under attack before deploying it.
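The threshold-filtering idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual detection statistic derived from the attack principle is not given here, so `suspicion_score` (a toy mean-absolute-gradient statistic) and the threshold value are illustrative placeholders.

```python
def suspicion_score(gradients):
    """Toy detection statistic: mean absolute gradient magnitude
    reported by a participant. A real detector would use the
    attack-principle-derived statistic described in the paper."""
    return sum(abs(g) for g in gradients) / len(gradients)


def is_under_attack(gradients, threshold):
    """Threshold filtering: flag the model as under attack when
    the detection statistic exceeds the configured threshold."""
    return suspicion_score(gradients) > threshold


# Usage with hypothetical gradient updates from one participant.
benign = [0.01, -0.02, 0.015]
anomalous = [0.9, -1.1, 0.8]
print(is_under_attack(benign, threshold=0.5))     # False
print(is_under_attack(anomalous, threshold=0.5))  # True
```

The threshold is a tunable parameter: set too low it raises false alarms on honest participants, set too high it misses attacks, which is presumably why the evaluation module measures attack accuracy and F1 score under different settings.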

Keywords:
Inference, Computer science, Artificial intelligence, Machine learning, Pattern recognition (psychology)

Metrics

Cited By: 2
FWCI (Field-Weighted Citation Impact): 1.28
Refs: 20
Citation Normalized Percentile: 0.79


Topics

Privacy-Preserving Technologies in Data
Physical Sciences →  Computer Science →  Artificial Intelligence
Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Cryptography and Data Security
Physical Sciences →  Computer Science →  Artificial Intelligence
