Abstract

Adversarial attacks are typically designed for specific tasks, and although some task-agnostic attacks exist, they are generally less effective than task-specific ones. Task-agnostic attacks such as Mimic and Fool (MaF) exploit the fact that CNN-based feature extractors are not invertible, which makes the downstream models vulnerable. However, MaF is not optimally designed because it uses the entire CNN to generate an adversarial example. This paper proposes a modified version of this approach, called Faster Mimic and Fool (Faster MaF), which requires less time and fewer resources to create an adversarial image. In the experiment, 100 random Flickr8k images were selected and the attack was tested on an Inception-V3-based captioning model. The results show that Faster MaF achieves a BLEU-4 score 13.5% and 31.1% better than MaF and OIMO, respectively. Since Faster MaF requires knowledge of the CNN, it can be considered a grey-box attack.
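The core mechanism the abstract describes, mimicking a target image's CNN features so that any downstream model treats the two images as equivalent, can be sketched as a small optimization. In the sketch below, a single random linear layer with ReLU stands in for the (truncated) feature extractor; this stand-in, along with all sizes and hyperparameters, is an assumption for illustration only, not the paper's actual setup.

```python
import numpy as np

# Sketch of the feature-mimicking idea behind Mimic and Fool: optimize an
# image so its extracted features match those of a target image. A random
# linear layer + ReLU is a stand-in for the CNN feature extractor
# (assumption for illustration; the paper uses Inception-V3).

rng = np.random.default_rng(0)

def features(x, W):
    """Stand-in feature extractor: one linear layer followed by ReLU."""
    return np.maximum(W @ x, 0.0)

W = rng.normal(size=(32, 64))        # known weights (grey-box setting)
x_target = rng.uniform(size=64)      # image whose features we mimic
x_adv = rng.uniform(size=64)         # random starting image

f_target = features(x_target, W)
loss_init = np.linalg.norm(features(x_adv, W) - f_target)

# Projected gradient descent on || f(x_adv) - f(x_target) ||^2,
# keeping "pixel" values in [0, 1].
lr = 0.005
for _ in range(2000):
    pre = W @ x_adv
    diff = np.maximum(pre, 0.0) - f_target
    grad = W.T @ (diff * (pre > 0))  # gradient through the ReLU
    x_adv = np.clip(x_adv - lr * grad, 0.0, 1.0)

loss_final = np.linalg.norm(features(x_adv, W) - f_target)
```

In this picture, Faster MaF's speedup would correspond to matching the output of only the early layers of the extractor rather than the full network, i.e. a cheaper `features` function to evaluate and differentiate.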

Keywords:
Adversarial system, Computer science, Computer security, Artificial intelligence

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 7.09
References: 36
Citation Normalized Percentile: 0.92

Topics

Emotions and Moral Behavior
Social Sciences →  Psychology →  Social Psychology
Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Mimic and Fool: A Task-Agnostic Adversarial Attack

Akshay Chaturvedi, Utpal Garain

Journal: IEEE Transactions on Neural Networks and Learning Systems, Year: 2020, Vol: 32 (4), Pages: 1801-1808
JOURNAL ARTICLE

Adversarial attack to fool object detector

Sahil Khattar, C. Rama Krishna

Journal: Journal of Discrete Mathematical Sciences and Cryptography, Year: 2020, Vol: 23 (2), Pages: 547-562
CONFERENCE PAPER

Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors

Nyee Thoang Lim, Meng Yi Kuan, Muxin Pu, Mei Kuan Lim, Chun Yong Chong

Conference: 2022 26th International Conference on Pattern Recognition (ICPR), Year: 2022, Pages: 2503-2509
CONFERENCE PAPER

Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers

Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard

Conference: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Year: 2022, Pages: 7837-7841