JOURNAL ARTICLE

Defensive dropout for hardening deep neural networks under adversarial attacks

Abstract

Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original, legitimate inputs, can mislead a DNN into classifying them as any target label. This work hardens DNNs against adversarial attacks through defensive dropout. Besides using dropout during training for the best test accuracy, we propose to apply dropout at test time as well to achieve a strong defense effect. We cast the problem of building robust DNNs as an attacker-defender two-player game, in which the attacker and the defender know each other's strategies and optimize their own strategies toward an equilibrium. Based on observations of how the test dropout rate affects test accuracy and attack success rate, we propose a defensive dropout algorithm that determines an optimal test dropout rate given the neural network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the outstanding defense effect achieved by the proposed defensive dropout. Compared with stochastic activation pruning (SAP), another defense method that introduces randomness into the DNN model, our defensive dropout achieves much larger variances of the gradients, which is the key to its improved defense effect (a much lower attack success rate). For example, defensive dropout reduces the attack success rate from 100% to 13.89% under the currently strongest attack, the C&W attack, on the MNIST dataset.
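As a minimal illustration of the test-time mechanism described above (a sketch, not the paper's implementation; the toy network, weights, and function names here are hypothetical), the snippet below applies inverted dropout to a one-layer network's hidden activations at inference time, so repeated queries on the same input yield different outputs:

```python
import random

def dropout(vec, rate, rng):
    # Inverted dropout: zero each unit with probability `rate` and scale
    # survivors by 1/(1 - rate), so the expected activation is unchanged.
    return [v / (1.0 - rate) if rng.random() >= rate else 0.0 for v in vec]

def forward(x, weights, rate, rng):
    # Toy one-layer network: ReLU hidden layer, then defensive (test-time)
    # dropout, then a sum readout standing in for a logit.
    hidden = [max(sum(xi * wij for xi, wij in zip(x, col)), 0.0)
              for col in weights]
    return sum(dropout(hidden, rate, rng))

rng = random.Random(0)
x = [1.0] * 8                                            # fixed input
weights = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(16)]
logits = [forward(x, weights, rate=0.3, rng=rng) for _ in range(5)]
print(logits)  # different value on each query of the same input
```

The per-query randomness is the point: an attacker computing gradients through this network sees a different realization each time, which raises the gradient variance that the abstract identifies as the key to the defense.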


Metrics

- Cited By: 42
- FWCI (Field-Weighted Citation Impact): 3.38
- References: 10
- Citation Normalized Percentile: 0.93 (in top 10%)


Topics

- Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
- Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
- Advanced Malware Detection Techniques (Physical Sciences → Computer Science → Signal Processing)

Related Documents

BOOK-CHAPTER

Defensive Strategy for Explainability in Deep Neural Networks Under Adversarial Attacks

Tuan Trung Mac, Tan Loc Nguyen, Bac Le

Communications in Computer and Information Science, 2025, pp. 37-51
JOURNAL ARTICLE

Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?

Ayesha Siddique, Khaza Anuarul Hoque

2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2022, pp. 364-369
BOOK-CHAPTER

Hardening Deep Neural Networks in Condition Monitoring Systems against Adversarial Example Attacks

Felix Specht, Jens Otto

Technologien für die intelligente Automation, 2020, pp. 103-111
DISSERTATION

Understanding Deep Neural Networks using Adversarial Attacks

Nakka, Krishna Kanth

Infoscience (Ecole Polytechnique Fédérale de Lausanne), 2022
JOURNAL ARTICLE

Adversarial Attacks and Defenses in Deep Neural Networks

International Journal of Artificial Intelligence Data Science and Machine Learning, 2022, Vol. 3