JOURNAL ARTICLE

Black-box adversarial attack via overlapped shapes

Phoenix Williams, Ke Li, Geyong Min

Year: 2022 Journal: Proceedings of the Genetic and Evolutionary Computation Conference Companion Pages: 467-468

Abstract

Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerability to adversarial examples. Many works assume an attacker has total access to the targeted model. A more realistic assumption is that an attacker has access to the targeted model only by querying some input and observing its predicted class probabilities. In this paper, we propose a concept of applying techniques similar to those used within evolutionary art to generate adversarial images.
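The abstract describes a query-only setting: the attacker repeatedly overlays shapes on an image and observes the model's predicted class probabilities. A minimal sketch of that idea is below; it is not the paper's algorithm, just an illustration. The greedy accept/reject loop, the circle primitive, and the `score_fn` stand-in for a black-box model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def overlay_circle(image, cx, cy, radius, color, alpha=0.5):
    """Alpha-blend a filled circle onto an HxWx3 float image in [0, 1]."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.asarray(color)
    return out


def random_shape_attack(image, score_fn, n_queries=200):
    """Greedy black-box search (illustrative, not the paper's method):
    keep a random circle overlay only if it lowers `score_fn`, which
    stands in for the model's probability of the original class."""
    h, w, _ = image.shape
    best = image
    best_score = score_fn(best)
    for _ in range(n_queries):
        cand = overlay_circle(
            best,
            cx=int(rng.integers(0, w)),
            cy=int(rng.integers(0, h)),
            radius=int(rng.integers(2, max(3, min(h, w) // 4))),
            color=rng.random(3),
        )
        s = score_fn(cand)
        if s < best_score:  # accept only shapes that reduce the score
            best, best_score = cand, s
    return best, best_score
```

In practice `score_fn` would wrap a query to the targeted model; any monotone scoring of the returned class probabilities (e.g. the probability of the originally predicted class) fits this loop.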

Keywords:
Adversarial system, Computer science, Deep neural networks, Black box, Class (philosophy), State (computer science), Artificial intelligence, Artificial neural network, Theoretical computer science, Algorithm

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.24
Refs: 7
Citation Normalized Percentile: 0.40

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Digital Media Forensic Detection
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Anomaly Detection Techniques and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Simulator Attack+ for Black-Box Adversarial Attack

Yimu Ji, Jianyu Ding, Zhiyu Chen, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu

Journal: 2022 IEEE International Conference on Image Processing (ICIP) Year: 2022 Pages: 636-640
JOURNAL ARTICLE

Saliency Attack: Towards Imperceptible Black-box Adversarial Attack

Zeyu Dai, Shengcai Liu, Qing Li, Ke Tang

Journal: ACM Transactions on Intelligent Systems and Technology Year: 2023 Vol: 14 (3) Pages: 1-20