JOURNAL ARTICLE

Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example

Hyun Kwon, Hyunsoo Yoon, Daeseon Choi

Year: 2019 Journal: IEEE Access Vol: 7 Pages: 60908-60919 Publisher: Institute of Electrical and Electronics Engineers

Abstract

Deep neural networks (DNNs) show superior performance in image and speech recognition. However, an adversarial example, created by adding small perturbations to an original sample, can cause a DNN to misclassify it. Conventional studies of adversarial examples have focused on causing misclassification by modulating the entire image. In some cases, however, a restricted adversarial example may be required, in which only certain parts of the image are modified rather than the whole, yet the DNN still misclassifies the result. For example, once a road sign has been installed, an attack may need to change only a specific part of the sign, such as by placing a sticker on it, to cause the entire image to be misidentified. As another example, an attack may need to cause a DNN to misclassify an image through minimal modulation of the image's outer border. In this paper, we propose a new restricted adversarial example that modifies only a restricted area to cause misclassification by a DNN while minimizing distortion from the original sample; the size of the restricted area is also selectable. We evaluated performance on the CIFAR-10 and ImageNet datasets, measuring the attack success rate and distortion of the restricted adversarial example while adjusting the size, shape, and position of the restricted area. The results show that the proposed scheme generates restricted adversarial examples with a 100% attack success rate while modifying only a small fraction of the whole image (approximately 14% for CIFAR-10 and 1.07% for ImageNet) and minimizing the distortion distance.
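The core idea in the abstract — optimizing a perturbation that is confined to a selectable region of the image, such as its border — can be sketched with masked projected gradient steps. The following is a minimal illustrative sketch, not the authors' algorithm: it uses a toy random linear classifier in place of a DNN, a binary mask covering the top rows of a 32×32 image (roughly the ~14–16% area regime mentioned for CIFAR-10), and a targeted cross-entropy gradient applied only inside the mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": 10 classes, logits = W @ x.flatten()
W = rng.normal(size=(10, 32 * 32))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def restricted_attack(x, target, mask, steps=100, lr=0.5):
    """Perturb only pixels where mask == 1, pushing the prediction toward `target`."""
    x_adv = x.copy()
    onehot = np.eye(10)[target]
    for _ in range(steps):
        p = softmax(W @ x_adv.ravel())
        # Gradient of targeted cross-entropy loss w.r.t. the input pixels
        grad = ((p - onehot) @ W).reshape(x.shape)
        x_adv -= lr * grad * mask          # update the restricted area only
        x_adv = np.clip(x_adv, 0.0, 1.0)   # project back to the valid pixel range
    return x_adv

x = rng.uniform(size=(32, 32))             # stand-in grayscale image
mask = np.zeros_like(x)
mask[:5, :] = 1.0                          # restrict the attack to the top border (~16% of pixels)
x_adv = restricted_attack(x, target=3, mask=mask)

print("prediction after attack:", int(np.argmax(W @ x_adv.ravel())))
print("max change outside mask:", float(np.abs(x_adv - x)[mask == 0].max()))
```

By construction, every pixel outside the mask is left bit-for-bit identical to the original, which is the defining constraint of a restricted-area attack; the paper additionally minimizes the distortion distance inside the region, which this sketch does not attempt.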

Keywords:
Adversarial system, Computer science, Image, Distortion, Artificial intelligence, Sample, Artificial neural network, Evasion, Deep neural networks, Pattern recognition, Computer vision, Telecommunications

Metrics

Cited By: 17
FWCI (Field Weighted Citation Impact): 2.15
References: 36
Citation Normalized Percentile: 0.90

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Integrated Circuits and Semiconductor Failure Analysis
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Anomaly Detection Techniques and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Restricted‐Area Adversarial Example Attack for Image Captioning Model

Hyun Kwon, Sunghwan Kim

Journal: Wireless Communications and Mobile Computing Year: 2022 Vol: 2022 (1)
JOURNAL ARTICLE

Restricted Black-Box Adversarial Attack Against DeepFake Face Swapping

Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie

Journal: IEEE Transactions on Information Forensics and Security Year: 2023 Vol: 18 Pages: 2596-2608
BOOK-CHAPTER

Restricted Area:

Konrad Klejsa

Publisher: Berghahn Books Year: 2024 Pages: 158-183
BOOK-CHAPTER

restricted area

Martin H. Weik

Year: 2000 Pages: 1486-1486