JOURNAL ARTICLE

Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks

Abstract

With the rapid development of deep neural networks (DNNs), eXplainable AI (XAI), which provides the rationale behind a model's predictions, has become increasingly important. At the same time, DNNs are vulnerable to adversarial examples (AEs): inputs with specially crafted perturbations that cause incorrect outputs. Similar vulnerabilities may also exist in visual interpreters such as Grad-CAM, and they need to be investigated because exploiting them could, for example, lead to misdiagnosis in medical imaging. This study therefore proposes a black-box adversarial attack that uses Sep-CMA-ES (separable covariance matrix adaptation evolution strategy) to mislead the visual interpreter. The proposed method shifts the interpreter's focus area away from that of the original image while keeping the predicted label unchanged.
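The abstract only outlines the idea, so the following is a minimal, self-contained sketch of how such a query-based attack could be structured. Everything here is an assumption for illustration: `query_label` and `query_saliency` are toy NumPy stand-ins for the black-box classifier and the visual interpreter (e.g., Grad-CAM), and a simple per-coordinate (separable) evolution strategy is used in place of the paper's Sep-CMA-ES optimizer. The fitness function encodes the stated objective: move the interpreter's focus toward a chosen target location while leaving the predicted label unchanged.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
H = W = 16                       # toy image size
x_orig = rng.random((H, W))      # "clean" input image

def query_label(x):
    """Toy black-box classifier: label depends on which image half is brighter."""
    return int(x[:, : W // 2].sum() < x[:, W // 2:].sum())

def query_saliency(x):
    """Toy interpreter: a Gaussian blob centred on the brightest pixel
    (a stand-in for a Grad-CAM heatmap returned by the black box)."""
    i, j = np.unravel_index(np.argmax(x), x.shape)
    yy, xx = np.mgrid[0:H, 0:W]
    return np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / 8.0)

y_orig = query_label(x_orig)

# Desired focus location for the manipulated saliency map.
target = np.zeros((H, W))
target[2, 2] = 1.0
target_map = query_saliency(target)

def fitness(delta):
    """Lower is better: distance of the saliency map from the target focus,
    with a large penalty if the predicted label changes."""
    x_adv = np.clip(x_orig + delta.reshape(H, W), 0.0, 1.0)
    if query_label(x_adv) != y_orig:
        return 1e6
    sal = query_saliency(x_adv)
    return float(np.sum((sal - target_map) ** 2))

# (mu, lambda) evolution strategy with a per-coordinate step-size vector,
# used here as a lightweight stand-in for Sep-CMA-ES.
dim, lam, mu = H * W, 20, 5
mean = np.zeros(dim)
sigma = 0.05 * np.ones(dim)
for gen in range(50):
    pop = mean + sigma * rng.standard_normal((lam, dim))
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:mu]]          # keep the best mu samples
    mean = elite.mean(axis=0)
    sigma = 0.9 * sigma + 0.1 * elite.std(axis=0)  # crude per-coordinate adaptation

x_adv = np.clip(x_orig + mean.reshape(H, W), 0.0, 1.0)
print("best fitness:", scores.min())
print("label preserved:", query_label(x_adv) == y_orig)
```

In the real setting, the two `query_*` stubs would be replaced by queries to the target model and its interpreter, and the crude step-size update by Sep-CMA-ES, which adapts a diagonal covariance from the ranked samples and scales to high-dimensional perturbations.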

Keywords:
Adversarial system, Interpreter, Computer science, Black box, Deep neural networks, Vulnerability (computing), Artificial intelligence, Image (mathematics), Focus (optics), Artificial neural network, Visualization, Machine learning, Computer security, Programming language

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 49
Citation Normalized Percentile: 0.10

Topics

Adversarial Robustness in Machine Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Explainable Artificial Intelligence (XAI) (Physical Sciences → Computer Science → Artificial Intelligence)
Artificial Intelligence in Healthcare and Education (Health Sciences → Medicine → Health Informatics)

Related Documents

JOURNAL ARTICLE

Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks

Lifeng Huang, Shuxin Wei, Chengying Gao, Ning Liu

Journal: Pattern Recognition, Year: 2022, Vol: 131, Pages: 108831

JOURNAL ARTICLE

Query efficient black-box adversarial attack on deep neural networks

Yang Bai, Yisen Wang, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia

Journal: Pattern Recognition, Year: 2022, Vol: 133, Pages: 109037

JOURNAL ARTICLE

ANJeL: Black-box Adversarial Attack on Deep Neural Networks for Japanese Language

Ryuji Kawano, Kurihara Akimoto, Satoshi Ono

Journal: Transactions of the Japanese Society for Artificial Intelligence, Year: 2025, Vol: 40 (2), Pages: C-O52_1