D. Brindha, Ms. I N Sountharia, M Vishal, Mr. T G Mouriyan, Mr. M Sidharth, Mr. G. Aathish Kumar
Face recognition systems are widely used in security-sensitive applications, but they remain vulnerable to adversarial attacks, in which small perturbations can mislead deep learning models. Addressing these vulnerabilities is crucial for ensuring robust and reliable AI-driven security solutions. This paper proposes a multi-stage adversarial training framework that enhances the resilience of face recognition models. We integrate the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) to generate adversarial examples, enabling the model to learn from perturbed inputs. Additionally, we adopt EfficientNet, a state-of-the-art convolutional neural network, as the backbone to improve both robustness and computational efficiency. Beyond adversarial training, we introduce three key defense mechanisms: adversarial detection to identify manipulated inputs, adaptive preprocessing to mitigate adversarial effects, and ensemble learning to improve decision-making under attack conditions. Extensive experiments on Labeled Faces in the Wild (LFW) and CASIA-WebFace show that our approach significantly reduces attack success rates while maintaining high accuracy on clean images. These results highlight its effectiveness as a scalable defense strategy for face recognition systems. Future work will explore real-world deployments and further optimize computational efficiency, ensuring practical applicability in large-scale security environments.
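To make the two attack generators concrete, the following is a minimal, illustrative sketch of FGSM and PGD on a toy logistic-regression "model". The weights `w`, bias `b`, input `x`, and label `y` are invented stand-ins for the paper's face-recognition network and data; the update rules, however, are the standard ones: FGSM takes a single signed-gradient step of size eps, and PGD iterates smaller steps while projecting back into the eps-ball around the original input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(dL/dx) for binary cross-entropy loss."""
    p = sigmoid(np.dot(w, x) + b)        # model prediction in (0, 1)
    grad_x = (p - y) * w                 # gradient of BCE loss w.r.t. input x
    return x + eps * np.sign(grad_x)

def pgd(x0, y, w, b, eps, alpha, steps):
    """PGD: iterated FGSM with step size alpha, projected onto the
    L-infinity eps-ball around the original input x0."""
    x = x0.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x) + b)
        x = x + alpha * np.sign((p - y) * w)
        x = np.clip(x, x0 - eps, x0 + eps)  # project back into the eps-ball
    return x

# Toy data (hypothetical, for illustration only).
w = np.array([0.5, -1.0, 2.0])
b = 0.1
x = np.array([1.0, 2.0, -0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.1)
print(np.abs(x_adv - x))   # every coordinate moves by exactly eps = 0.1
```

Adversarial training then simply mixes such perturbed inputs into each training batch, so the model learns to classify them correctly alongside clean images.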