JOURNAL ARTICLE

Mimic and Fool: A Task-Agnostic Adversarial Attack

Akshay Chaturvedi, Utpal Garain

Year: 2020 · Journal: IEEE Transactions on Neural Networks and Learning Systems · Vol: 32 (4) · Pages: 1801-1808 · Publisher: Institute of Electrical and Electronics Engineers

Abstract

At present, adversarial attacks are designed in a task-specific fashion. However, for downstream computer vision tasks such as image captioning and image segmentation, current deep-learning systems use an image classifier such as VGG16, ResNet50, or Inception-v3 as a feature extractor. Keeping this in mind, we propose Mimic and Fool (MaF), a task-agnostic adversarial attack. Given a feature extractor, the proposed attack finds an adversarial image that mimics the image features of the original image. This ensures that the two images give the same (or similar) output regardless of the task. We randomly select 1000 MSCOCO validation images for experimentation. We perform experiments on two image captioning models, Show and Tell and Show Attend and Tell, and one visual question answering (VQA) model, namely, the end-to-end neural module network (N2NMN). The proposed attack achieves a success rate of 74.0%, 81.0%, and 87.1% for Show and Tell, Show Attend and Tell, and N2NMN, respectively. We also propose a slight modification to our attack to generate natural-looking adversarial images. In addition, we show the applicability of the proposed attack to invertible architectures. Since MaF only requires information about the feature extractor of the model, it can be considered a gray-box attack.
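The core idea in the abstract — find an adversarial image whose features mimic those of the original image under a fixed feature extractor — can be framed as minimizing ||f(x_adv) - f(x_orig)||² over the image x_adv. Below is a minimal, hedged sketch of that optimization using a toy random linear map as a stand-in for a deep feature extractor such as VGG16 (the names `feature_extractor` and `mimic_attack`, the linear model, and all hyperparameters are illustrative assumptions, not the authors' implementation; a real network would require autodiff for the gradient):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep feature extractor (e.g. a CNN's penultimate
# layer): a fixed random linear map from 256 "pixels" to 64 features.
W = rng.normal(size=(64, 256))

def feature_extractor(x):
    """Map a flattened 'image' x (256-dim) to a 64-dim feature vector."""
    return W @ x

def mimic_attack(x_orig, steps=1000, lr=1e-3):
    """Find x_adv whose features mimic those of x_orig.

    Minimizes L(x) = ||f(x) - f(x_orig)||^2 by projected gradient
    descent, starting from a random image.  For the linear f above the
    gradient is 2 * W.T @ (f(x) - f(x_orig)); for a deep extractor this
    gradient would come from backpropagation instead.
    """
    target = feature_extractor(x_orig)
    x = rng.uniform(0.0, 1.0, size=x_orig.shape)  # random initial image
    for _ in range(steps):
        residual = feature_extractor(x) - target
        grad = 2.0 * W.T @ residual
        x = np.clip(x - lr * grad, 0.0, 1.0)      # keep pixels in [0, 1]
    return x

x_orig = rng.uniform(0.0, 1.0, size=256)
x_adv = mimic_attack(x_orig)

# The adversarial image matches in feature space while remaining a
# different image in pixel space -- the property the attack exploits.
feat_gap = np.linalg.norm(feature_extractor(x_adv) - feature_extractor(x_orig))
pixel_gap = np.linalg.norm(x_adv - x_orig)
print(feat_gap, pixel_gap)
```

Because the feature map discards information (here 256 pixels map to 64 features; deep extractors are similarly many-to-one), many visually distinct images share nearly identical features, which is why a downstream captioning or VQA model built on the same extractor produces the same output for both.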

Keywords:
Closed captioning, Computer science, Adversarial system, Artificial intelligence, Classifier, Feature, Image, Extractor, Feature extraction, Pattern recognition, Computer vision

Metrics

Cited By: 34
FWCI (Field Weighted Citation Impact): 3.08
References: 48
Citation Normalized Percentile: 0.92 (in top 10%)


Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Anomaly Detection Techniques and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

CONFERENCE PAPER

Task and Model Agnostic Adversarial Attack on Graph Neural Networks

Kartik Sharma, Samidha Verma, Sourav Medya, Arnab Bhattacharya, Sayan Ranu

Journal: Proceedings of the AAAI Conference on Artificial Intelligence · Year: 2023 · Vol: 37 (12) · Pages: 15091-15099
CONFERENCE PAPER

Meta-Attack: Class-agnostic and Model-agnostic Physical Adversarial Attack

Weiwei Feng, Baoyuan Wu, Tianzhu Zhang, Yong Zhang, Yongdong Zhang

Proceedings: 2021 IEEE/CVF International Conference on Computer Vision (ICCV) · Year: 2021 · Pages: 7767-7776
JOURNAL ARTICLE

Adversarial attack to fool object detector

Sahil Khattar, C. Rama Krishna

Journal: Journal of Discrete Mathematical Sciences and Cryptography · Year: 2020 · Vol: 23 (2) · Pages: 547-562