Exploiting Self-Refining Normal Graph Structures for Robust Defense against Unsupervised Adversarial Attacks
Bingdao Feng, Di Jin, Xiaobao Wang, Dongxiao He, Jingyi Cao, Zhen Wang
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 2793-2801.
https://doi.org/10.24963/ijcai.2025/311
Defending against adversarial attacks on graphs has become increasingly important. Graph refinement, which enhances the quality and robustness of representation learning, is a critical area that requires thorough investigation. We observe that representations learned from attacked graphs are often ineffective for refinement: perturbations cause the endpoints of perturbed edges to become more similar, making it harder for the defender to distinguish them. To address this challenge, we propose a robust unsupervised graph learning framework that utilizes cleaner graphs to learn effective representations. Specifically, we introduce an anomaly detection model based on contrastive learning to obtain a rough graph that excludes a large number of perturbed structures. We then propose the Graph Pollution Degree (GPD), a mutual information-based measure that leverages the encoder's representation capability on the rough graph to assess the trustworthiness of the predicted graph and refine the learned representations. Extensive experiments on four benchmark datasets demonstrate that our method outperforms nine state-of-the-art defense models, effectively defending against adversarial attacks and enhancing node classification performance.
Keywords:
Data Mining: DM: Mining graphs
Machine Learning: ML: Adversarial machine learning
Data Mining: DM: Anomaly/outlier detection
