Cluster Attack: Query-based Adversarial Attacks on Graph with Graph-Dependent Priors

Zhengyi Wang, Zhongkai Hao, Ziqiao Wang, Hang Su, Jun Zhu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 768-775. https://doi.org/10.24963/ijcai.2022/108

While deep neural networks have achieved great success in graph analysis, recent work has shown that they are vulnerable to adversarial attacks. Compared with adversarial attacks on image classification, performing adversarial attacks on graphs is more challenging because of the discrete and non-differentiable nature of a graph's adjacency matrix. In this work, we propose Cluster Attack --- a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degrade the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible. We demonstrate that a GIA problem can be equivalently formulated as a graph clustering problem; thus, the discrete optimization over the adjacency matrix can be solved in the context of graph clustering. In particular, we propose to measure the similarity between victim nodes by a metric of Adversarial Vulnerability, which reflects how the victim nodes will be affected by an injected fake node, and to cluster the victim nodes accordingly. Our attack is performed in a practical and unnoticeable query-based black-box manner, in which only a few nodes on the graph can be accessed. Theoretical analysis and extensive experiments demonstrate the effectiveness of our method, which fools the node classifiers with only a small number of queries.
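The clustering step described above can be illustrated with a minimal sketch. This is not the paper's actual Adversarial Vulnerability metric: the `vuln` vectors below are random stand-ins for per-victim vulnerability features (which the paper derives from query feedback), and the k-means routine is a generic substitute for the paper's clustering procedure. The sketch only shows the overall shape of the idea: group victim nodes by similarity of their vulnerability vectors, then attach one injected fake node to each cluster.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; initial centers are the first and last rows
    (deterministic for this illustration, not a recommended init scheme)."""
    centers = X[[0, len(X) - 1]].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical vulnerability vectors for 6 victim nodes: two well-separated
# groups, standing in for victims that respond similarly to a fake node.
rng = np.random.default_rng(1)
vuln = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(5, 1, (3, 4))])

k = 2  # injection budget: one fake node per cluster
labels = kmeans(vuln, k)

# Each injected fake node connects to the victims in its cluster.
edges = {j: np.flatnonzero(labels == j).tolist() for j in range(k)}
print(edges)
```

In the paper's setting the number of clusters corresponds to the fake-node budget, and the edges of each fake node are determined by its cluster's membership, which is what turns the discrete adjacency-matrix optimization into a clustering problem.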
Keywords:
AI Ethics, Trust, Fairness: Safety & Robustness
Machine Learning: Adversarial Machine Learning
Machine Learning: Clustering
Machine Learning: Sequence and Graph Learning