Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4816-4823. https://doi.org/10.24963/ijcai.2019/669

Graph deep learning models, such as graph convolutional networks (GCNs), achieve state-of-the-art performance on graph-data tasks. Like other deep learning models, however, they are susceptible to adversarial attacks. Compared with non-graph data, the discrete nature of graph connections and features presents unique challenges as well as opportunities for both attack and defense. In this paper, we propose techniques for an adversarial attack and for a defense against such attacks. First, we show that the discreteness of graph connections and of the features in common datasets can be handled with the integrated gradients technique, which accurately estimates the effect of changing selected features or edges while still benefiting from parallel computation. In addition, we show that a graph manipulated by a targeted attack differs statistically from un-manipulated graphs. Based on this observation, we propose a defense approach that can detect and recover from a potential adversarial perturbation. Our experiments on a number of datasets show the effectiveness of the proposed techniques.
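To illustrate the attack side, below is a minimal sketch of integrated gradients computed over adjacency-matrix entries. It is not the authors' implementation: the model interface `model(adj, features)`, the all-zeros baseline, and the function name are assumptions for illustration. Integrated gradients average the gradient of the target logit along a straight-line path from a baseline graph to the input graph, which gives a smoother estimate of each edge's influence than a single gradient at a discrete point.

```python
import torch

def integrated_gradients_edges(model, adj, features, target_idx, target_class,
                               baseline=None, steps=20):
    """Approximate integrated gradients of the target node's logit w.r.t.
    each adjacency entry via a Riemann sum over `steps` interpolation points.

    Assumes `model(adj, features)` is differentiable in `adj` (e.g. a GCN
    whose forward pass treats the adjacency matrix as a dense tensor) and
    returns per-node class logits of shape [num_nodes, num_classes].
    """
    if baseline is None:
        baseline = torch.zeros_like(adj)  # assumed all-edges-absent baseline
    total_grad = torch.zeros_like(adj)
    for k in range(1, steps + 1):
        alpha = k / steps
        # Interpolate between the baseline graph and the input graph.
        interp = baseline + alpha * (adj - baseline)
        interp.requires_grad_(True)
        logit = model(interp, features)[target_idx, target_class]
        total_grad += torch.autograd.grad(logit, interp)[0]
    # Scale the averaged gradient by the input-baseline difference.
    return (adj - baseline) * total_grad / steps
```

In an attack setting, the entries of the returned score matrix could then be ranked to pick the edge flips that most reduce (or increase) the target logit, subject to the graph's discreteness constraints; the ranking and perturbation-budget logic is omitted here.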
Keywords:
Multidisciplinary Topics and Applications: Security and Privacy
Machine Learning: Interpretability
Machine Learning: Adversarial Machine Learning