Grigor Bezirganyan, Henrik Sergoyan
Today, neural networks are used in many domains, most of which require reliable and correct output. Adversarial attacks undermine this reliability and make deep neural networks risky to deploy in safety-critical areas. It is therefore important to study potential attack methods in order to develop more robust networks. In this paper, we review four white-box, targeted adversarial attacks and compare them in terms of misclassification rate, targeted misclassification rate, attack duration, and imperceptibility. Our goal is to identify the attack(s) that are efficient, generate adversarial samples with small perturbations, and remain undetectable to the human eye.
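To make the setting concrete, the sketch below shows one canonical targeted white-box attack: a single-step, targeted FGSM variant in PyTorch. This is an illustrative assumption only; the paper does not state that it is among the four attacks reviewed, and the model, perturbation budget `epsilon`, and target label are hypothetical.

```python
# Minimal sketch of a targeted white-box attack (single-step targeted FGSM).
# "White-box" means the attacker can compute gradients through the model;
# "targeted" means the perturbation pushes the prediction toward a chosen
# target label rather than merely away from the true one.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target, epsilon=0.03):
    """Return an adversarial example x' with ||x' - x||_inf <= epsilon
    that nudges model(x') toward the class `target`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # Step *against* the gradient so the loss for the target label
    # decreases (an untargeted FGSM would instead step along it).
    x_adv = x_adv - epsilon * x_adv.grad.sign()
    # Assumes image inputs normalized to [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

The L-infinity budget `epsilon` ties directly to the evaluation criteria above: a smaller budget yields less perceptible perturbations but typically a lower targeted misclassification rate.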