Nimish Kumar, Himanshu Verma, Yogesh Kumar Sharma
Graph neural networks (GNNs) are a powerful tool for analyzing graph-structured data in areas such as social networks, molecular chemistry, and recommendation systems. Adversarial attacks on GNNs introduce malicious perturbations that manipulate the model's predictions while remaining hard to detect. These attacks are classified as structural or feature-based, depending on whether the attacker modifies the graph's topology or the node/edge features. To defend against such attacks, researchers have proposed countermeasures such as robust training, adversarial training, and mechanisms that detect and correct adversarial examples. These methods aim to improve the model's generalization, enforce regularization, and build defenses into the model architecture to increase its robustness. This chapter surveys recent advances in adversarial attacks on GNNs, covering attack methods, evaluation metrics, and their impact on model performance.
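To make the distinction concrete, the following is a minimal sketch (not from the chapter; all function names and parameters are illustrative) of a feature-based attack in the FGSM style: a one-layer linear GCN is evaluated on a tiny graph, and a target node's input features are nudged in the sign of the loss gradient to degrade the prediction.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_logits(A_norm, X, W):
    # One propagation step followed by a linear map (no nonlinearity,
    # so the gradient computed below is exact).
    return A_norm @ X @ W

def fgsm_feature_attack(A_norm, X, W, y, target, eps):
    # Cross-entropy loss at the target node; perturb all input features
    # in the direction that increases that loss (sign of the gradient).
    logits = gcn_logits(A_norm, X, W)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # d(loss)/d(logits), nonzero only at the target node
    g_logits = np.zeros_like(logits)
    g_logits[target] = p[target]
    g_logits[target, y[target]] -= 1.0
    # Back-propagate through logits = A_norm @ X @ W to the features
    g_X = A_norm.T @ g_logits @ W.T
    return X + eps * np.sign(g_X)
```

A structural attack would instead flip entries of `A` (adding or removing edges) while keeping `X` fixed; the defenses mentioned above, such as adversarial training, would fold perturbed examples like `fgsm_feature_attack`'s output back into the training loop.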