Ao Wang, Chenhong Sui, Haipeng Wang, Danfeng Hong, Qingtao Gong, Shengwen Zhou, Jian Hu
Studies have shown that imperceptible perturbations added to natural examples can cause deep learning models to produce catastrophically wrong outputs, which severely limits the deployment of deep learning in security-sensitive applications. Adversarial training is regarded as one of the most effective defenses, but unfortunately it requires extensive training. Diffusion models, by contrast, have a notable advantage in the adaptive removal of complex noise. This paper therefore proposes a robust salient object detection framework based on a diffusion model (DRSOD). Specifically, the gradient-based PGD attack is first applied to a salient object detection model to generate adversarial examples. Then, to improve the reliability of detection results under attack, a pre-trained diffusion model is leveraged to denoise the input, mapping adversarial examples back to clean data in the original domain. This reduces the cross-domain degradation that the attack imposes on the detector. Comparative experiments on four benchmarks demonstrate the effectiveness of the proposed method.
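The attack step described above can be sketched concisely. The following is a minimal, self-contained illustration of the PGD loop in NumPy, with a toy quadratic loss standing in for the saliency model (the real framework would differentiate through the detector); all function names and hyperparameter values here are illustrative assumptions, not the paper's implementation. A comment at the end notes where the diffusion-based purification step would take over.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: repeatedly step along the sign of the
    loss gradient, then project back into the L-infinity eps-ball around x
    and into the valid image range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of the loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
    return x_adv

# Toy stand-in for a model loss: L(x) = ||x - t||^2, so grad L = 2 (x - t).
t = np.full(4, 0.5)
grad = lambda x: 2.0 * (x - t)

x0 = np.full(4, 0.6)                  # "clean" input, slightly off the target
x_adv = pgd_attack(x0, grad, eps=0.1, alpha=0.02, steps=10)
# The perturbation saturates at the eps-ball boundary: x_adv == 0.7 everywhere.

# In a DiffPure-style purification step, x_adv would next be forward-noised
# (x_t = sqrt(a_bar_t) * x_adv + sqrt(1 - a_bar_t) * eps_noise) and a
# pre-trained diffusion model would run the reverse chain from x_t to
# recover a clean-domain image before detection.
```

Note that with a real detector the `grad_fn` would come from backpropagation through the model's loss; the toy loss is used only so the sketch runs without a trained network.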
Hanchen Ye, Yuyue Zhang, Xiaoli Zhao
Shuo Zhang, Jiaming Huang, Wenbing Tang, Yan Wu, Tengjiang Hu, Xiaogang Xu, Jing Liu
Yihua Tan, Yansheng Li, Chen Chen, Jin-Gang Yu, Jinwen Tian
Peng Jiang, Nuno Vasconcelos, Jingliang Peng