JOURNAL ARTICLE

Evaluating Robustness of AI Models against Adversarial Attacks

Abstract

Recently developed adversarial attacks on neural networks have grown more aggressive and dangerous, and Artificial Intelligence (AI) models are no longer sufficiently robust against them. A set of effective and reliable methods for detecting malicious attacks is therefore needed to ensure the security of AI models. Such standardized methods can also serve as a reference for researchers developing robust models and new kinds of attacks. This study proposes a method for assessing the robustness of AI models. Six commonly used image classification convolutional neural network (CNN) models were evaluated under 13 types of adversarial attacks. The robustness of the models is computed in an unbiased manner and can serve as a reference for further improvement. Unlike prior related work, our algorithm is attack-agnostic and applicable to any neural network model.
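To make the evaluation procedure concrete, below is a minimal sketch of attack-based robustness scoring, assuming a PyTorch image classifier. FGSM stands in here for one of the 13 attack types; the names fgsm_attack and robust_accuracy are illustrative, not the authors' implementation. In practice the same loop would be run for each attack in the suite and the per-attack scores aggregated into a single robustness measure.

```python
# Minimal sketch of attack-based robustness scoring (assumed setup,
# not the paper's code). FGSM stands in for one of the 13 attacks.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: perturb x by eps in the direction
    that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, attack, **attack_kwargs):
    """Fraction of attacked inputs the model still classifies correctly.
    Averaging this over a suite of attacks gives one simple,
    attack-agnostic robustness score."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = attack(model, x, y, **attack_kwargs)  # gradients enabled
        with torch.no_grad():                         # evaluation only
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```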

Keywords:
Robustness (evolution); Computer science; Adversarial system; Artificial intelligence; Artificial neural network; Machine learning; Threat model; Deep neural networks; Attack model; Data mining; Computer security

Metrics

Cited by: 23
References: 38
FWCI (Field Weighted Citation Impact): 1.17
Citation Normalized Percentile: 0.82

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Malware Detection Techniques
Physical Sciences →  Computer Science →  Signal Processing