Because generative models learn from and are queried with input data samples, it is essential that they remain robust to adversarial attacks that can tamper with their outputs. This paper presents an empirical analysis of the vulnerabilities of key generative models, namely GANs and VAEs, and evaluates corresponding defense schemes. In a controlled environment built from carefully constructed adversarial datasets and time-sensitive analyses, we test and compare several established adversarial training methods and defenses, including implicit generative modeling and probabilistic adversarial robustness. Our results highlight the difficulty of achieving complete robustness and point to ways of mitigating such attacks while preserving model accuracy. The analysis also exposes gaps in existing techniques, opening avenues for future research on protecting generative models. This work contributes to the ongoing discussion of adversarial robustness and offers insights for researchers and practitioners in the machine learning community.
Chih-Ling Chang, Jui-Lung Hung, Chin-Wei Tien, Chia-Wei Tien, Sy-Yen Kuo
Yiyi Tao, Yixian Shen, Hang Zhang, Yanxin Shen, Lun Wang, Chuanqi Shi, Shaoshuai Du
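The abstract names adversarial training among the evaluated defenses. As an illustrative sketch only, not the paper's actual configuration, the PyTorch snippet below shows one common baseline: PGD-style adversarial training of a small VAE, where inputs are perturbed to maximize the VAE loss and the model is then trained to reconstruct the clean target. The architecture, `eps`, `alpha`, and step counts are all assumptions.

```python
# Illustrative sketch (not the paper's method): PGD-style adversarial
# training of a VAE. All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim=784, hidden=400, latent=20):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def pgd_perturb(model, x, eps=0.1, alpha=0.02, steps=5):
    """Craft an L-inf perturbation of x that maximizes the VAE loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        recon, mu, logvar = model(x_adv)
        grad, = torch.autograd.grad(vae_loss(recon, x, mu, logvar), x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()     # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project into eps-ball
    return x_adv.detach()

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                # stand-in batch; real data replaces this
x_adv = pgd_perturb(model, x)          # worst-case inputs for the current model
recon, mu, logvar = model(x_adv)
loss = vae_loss(recon, x, mu, logvar)  # reconstruct the clean target
opt.zero_grad(); loss.backward(); opt.step()
```

Training the model to map perturbed inputs back to their clean counterparts is the standard design choice here: it encourages the encoder to absorb worst-case perturbations rather than propagate them into the generated output.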