Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis

Chang Cao, Han Li, Yulong Wang, Rui Wu, Hong Chen

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4797-4805. https://doi.org/10.24963/ijcai.2025/534

Recently, numerous methods have been proposed to enhance the robustness of Graph Convolutional Networks (GCNs), which are known to be vulnerable to adversarial attacks. Despite their empirical success, a significant gap remains in understanding the adversarial robustness of GCNs from a theoretical perspective. This paper addresses this gap by analyzing the generalization of multi-layer GCNs under both node and structure attacks through the framework of uniform stability. Under a smoothness assumption on the loss function, we establish the first adversarial generalization bound for GCNs in expectation. Our analysis contributes to a deeper understanding of how adversarial perturbations and graph architectures influence generalization performance, providing meaningful insights for designing robust models. Experimental results on benchmark datasets confirm the validity of our theoretical findings and highlight their practical significance.
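To make the setting concrete, the following is a minimal sketch (not the paper's exact procedure) of a node-feature attack on a single-layer GCN with a squared loss. It uses the standard symmetrically normalized adjacency A_hat = D^{-1/2}(A + I)D^{-1/2} and a one-step FGSM-style sign ascent on the features; all variable names and the toy random graph are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # node degrees (>= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_loss(A_hat, X, W, Y):
    """Squared loss of a linear one-layer GCN: ||A_hat X W - Y||_F^2."""
    R = A_hat @ X @ W - Y
    return float(np.sum(R ** 2))

def fgsm_features(A_hat, X, W, Y, eps):
    """One-step sign attack on node features (illustrative, not the paper's method).
    Analytic gradient of the squared loss w.r.t. X:
        dL/dX = 2 * A_hat^T (A_hat X W - Y) W^T
    """
    R = A_hat @ X @ W - Y
    grad_X = 2.0 * A_hat.T @ R @ W.T
    return X + eps * np.sign(grad_X)

# Toy undirected graph with random features, weights, and targets.
rng = np.random.default_rng(0)
n, d_in, d_out = 6, 4, 2
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, zero diagonal
A_hat = normalize_adj(A)
X = rng.standard_normal((n, d_in))
W = rng.standard_normal((d_in, d_out))
Y = rng.standard_normal((n, d_out))

clean_loss = gcn_loss(A_hat, X, W, Y)
X_adv = fgsm_features(A_hat, X, W, Y, eps=0.05)
adv_loss = gcn_loss(A_hat, X_adv, W, Y)
```

Adversarial training would then minimize the loss at the perturbed features `X_adv` rather than at `X`; the paper's stability analysis bounds the generalization gap of such a procedure under both feature and structure perturbations.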
Keywords:
Machine Learning: ML: Adversarial machine learning
Machine Learning: ML: Learning theory
Machine Learning: ML: Sequence and graph learning