Most security issues in deep learning stem from human-imperceptible adversarial perturbations, which can fool image recognition models and pose serious security threats to many practical applications. However, how to construct a universal adversarial perturbation for images remains an open question. In this paper, we make full use of a residual network to generate a universal perturbation, and then utilize a loss network to measure image similarity when carrying out the adversarial attack. Experimental results on the CIFAR-10 dataset show that our scheme achieves an 89% attack success rate.
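The core idea of a universal perturbation is a single additive noise vector, bounded in L-infinity norm, that fools a classifier on many images at once. The abstract does not give the training details, so the following is only a minimal NumPy sketch of that idea against a toy linear classifier; all names (`W`, `predict`, `eps`, the update rule) are illustrative assumptions, not the paper's residual-network scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for an image-recognition model
# (illustrative only; the paper uses a residual network instead).
W = rng.normal(size=(10, 32 * 32 * 3))           # 10 classes, CIFAR-10-sized inputs

def predict(x, delta):
    """Predicted class for flattened image x under perturbation delta."""
    return int(np.argmax(W @ np.clip(x + delta, 0.0, 1.0)))

eps = 8.0 / 255.0                                # L-infinity perturbation budget
delta = np.zeros(32 * 32 * 3)                    # one shared ("universal") perturbation

images = rng.uniform(0.0, 1.0, size=(64, 32 * 32 * 3))
clean_labels = np.array([int(np.argmax(W @ x)) for x in images])

# Crude sign-gradient update: for every image, step delta so as to
# lower the true-class logit, then re-project onto the eps-ball.
for _ in range(20):
    for x, y in zip(images, clean_labels):
        grad = W[y]                              # gradient of the true-class logit w.r.t. x
        delta = np.clip(delta - 0.1 * eps * np.sign(grad), -eps, eps)

# Fraction of images whose prediction flips under the single delta.
fooled = float(np.mean([predict(x, delta) != y
                        for x, y in zip(images, clean_labels)]))
```

The projection via `np.clip` keeps the perturbation imperceptibly small, which is what makes such attacks a practical threat: one precomputed `delta` can then be added to arbitrary inputs at test time.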
Jian Xu, Heng Liu, Dexin Wu, Fucai Zhou, Chong-zhi Gao, Linzhi Jiang