Modern artificial intelligence development has produced many text, speech, and image recognition systems. In deploying these systems, it has been found that adding a small perturbation to a recognized object can cause the system to misclassify it. Objects carrying such small perturbations are called adversarial examples, and the process of generating them is called an adversarial attack. Existing adversarial attack algorithms for images include FGSM, I-FGSM, and MI-FGSM, all of which apply a global perturbation to the image. Although they achieve strong attack performance, in real scenarios an adversarial example is often expected to defeat an artificial intelligence system with a smaller, localized perturbation. We therefore study the influence of small-scale perturbations on artificial intelligence systems. This paper proposes LC-MIFGSM, an adversarial attack algorithm based on saliency detection, which maintains the attack's effectiveness while compressing the perturbed region, improving the attack's concealment and making the generated adversarial examples better suited to real-world scenarios.
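To make the attack family named above concrete, the following is a minimal sketch of MI-FGSM (momentum iterative FGSM) with an optional binary mask restricting the perturbation to a local region, which is the general idea behind a saliency-restricted attack. The toy linear model, the hinge-style loss, and the mask here are illustrative assumptions, not the paper's actual method or saliency detector.

```python
import numpy as np

def mi_fgsm(x, grad_fn, epsilon, n_iter=10, mu=1.0, mask=None):
    """MI-FGSM sketch: accumulate an L1-normalized gradient with momentum mu,
    then take sign steps of size epsilon / n_iter, clipping back to [0, 1].
    An optional binary mask confines the perturbation to "salient" pixels."""
    x_adv = x.copy()
    g = np.zeros_like(x)
    alpha = epsilon / n_iter
    for _ in range(n_iter):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        step = alpha * np.sign(g)
        if mask is not None:
            step = step * mask        # perturb only the masked region
        x_adv = np.clip(x_adv + step, 0.0, 1.0)
    return x_adv

# Toy linear "model" with hinge-style loss max(0, 1 - y * w.x); while the
# margin is violated, d(loss)/dx = -y * w. Purely for illustration.
w = np.array([0.5, -0.3, 0.8])
y = 1
grad_fn = lambda x: -y * w

x = np.array([0.2, 0.9, 0.1])          # clean example in [0, 1]
mask = np.array([1.0, 0.0, 0.0])       # pretend only the first pixel is salient
x_adv = mi_fgsm(x, grad_fn, epsilon=0.3, mask=mask)
```

Setting `mask=None` recovers the usual global MI-FGSM; the single-step FGSM corresponds to `n_iter=1`. The mask multiplication is where a saliency map would plug in to localize the attack.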