JOURNAL ARTICLE

Efficacy of defending deep neural networks against adversarial attacks with randomization

Abstract

Adversarial machine learning studies the vulnerabilities of machine learning techniques to adversarial attacks and potential defenses against such attacks. Both the intrinsic vulnerabilities and the incongruous, often suboptimal defenses are rooted in the standard assumption underlying machine learning: that data are independent and identically distributed (i.i.d.) samples, so the training data are representative of the general population, and a model that fits the training data accurately will perform well on test data drawn from the rest of that population. Violations of the i.i.d. assumption characterize the challenges of detecting and defending against adversarial attacks. For an informed adversary, the most effective strategy is to transform malicious data so that it appears indistinguishable from legitimate data to the target model. Current developments in adversarial machine learning suggest that the adversary can easily gain the upper hand in this arms race: the adversary only needs a local breakthrough against a stationary target, while the target model struggles to extend its predictive power to the general population, including the corrupted data. This fundamental cause of stagnation in effective defense against adversarial attacks suggests developing a moving target defense to give a machine learning model greater robustness. We investigate the feasibility and effectiveness of employing randomization to create a moving target defense for deep neural network learning models. Randomness is introduced by randomizing the input and adding small random noise to the learned parameters. An extensive empirical study is performed, covering different attack strategies and defense/detection techniques against adversarial attacks.
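The two sources of randomness the abstract describes, randomizing the input and perturbing the learned parameters at inference time, can be illustrated with a minimal sketch. This is not the authors' implementation; the linear model, the Gaussian noise, and the `input_sigma`/`weight_sigma` parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_predict(x, W, b, input_sigma=0.05, weight_sigma=0.01, rng=rng):
    """Moving-target-style prediction: each call sees a freshly
    randomized input and freshly perturbed parameters, so an attacker
    cannot probe one fixed decision function."""
    x_rand = x + rng.normal(0.0, input_sigma, size=x.shape)    # randomize the input
    W_rand = W + rng.normal(0.0, weight_sigma, size=W.shape)   # jitter learned weights
    b_rand = b + rng.normal(0.0, weight_sigma, size=b.shape)   # jitter learned biases
    logits = x_rand @ W_rand + b_rand
    return int(np.argmax(logits))

# toy usage: a 4-feature input classified into one of 3 classes
x = np.array([0.2, -0.1, 0.5, 0.3])
W = rng.normal(size=(4, 3))
b = np.zeros(3)
pred = randomized_predict(x, W, b)
```

Because the noise is resampled on every call, repeated queries by an adversary target a slightly different model each time, which is the intuition behind the moving target defense investigated here.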

Keywords:
Adversarial system, Computer science, Adversary, Artificial intelligence, Population, Machine learning, Robustness (evolution), Adversarial machine learning, Artificial neural network, Deep neural networks, Deep learning, Randomness, Computer security, Mathematics, Statistics

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.15
References: 0
Citation Normalized Percentile: 0.52


Topics

- Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
- Anomaly Detection Techniques and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
- Advanced Malware Detection Techniques (Physical Sciences → Computer Science → Signal Processing)