Rethinking the Promotion Brought by Contrastive Learning to Semi-Supervised Node Classification
Deli Chen, Yankai Lin, Lei Li, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2852-2858.
https://doi.org/10.24963/ijcai.2022/395
Graph Contrastive Learning (GCL) has proven highly effective at improving the performance of Semi-Supervised Node Classification (SSNC). However, existing GCL methods are generally adapted from other fields such as CV and NLP, and their underlying working mechanism in SSNC remains underexplored. In this work, we first probe the working mechanism of GCL in SSNC and find that the improvement it brings is distributed very unevenly: the gains mainly come from subgraphs with less annotated information, which is fundamentally different from contrastive learning in other fields. Existing GCL methods, however, generally ignore this uneven distribution of annotated information and apply GCL uniformly to the whole graph. To remedy this issue and further improve GCL for SSNC, we propose the Topology InFormation gain-Aware Graph Contrastive Learning (TIFA-GCL) framework, which accounts for the distribution of annotated information across the graph. Extensive experiments on six benchmark graph datasets, including the enormous OGB-Products graph, show that TIFA-GCL yields larger improvements than existing GCL methods in both transductive and inductive settings. Further experiments demonstrate the generalizability and interpretability of TIFA-GCL.
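To make the re-weighting idea concrete, the following is a minimal, hypothetical sketch (not the authors' TIFA-GCL implementation) of a node-level InfoNCE contrastive loss in which each node's contribution is weighted rather than applied uniformly across the graph. It assumes two augmented views of the same node set; the function name, weighting scheme, and hyperparameters are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F


def weighted_node_contrastive_loss(z1, z2, node_weights, temperature=0.5):
    """InfoNCE-style contrastive loss over two augmented views of node embeddings,
    with a per-node weight so the contrastive signal is not applied uniformly.

    z1, z2:       (N, d) embeddings of the same N nodes under two graph views
    node_weights: (N,) non-negative weights (hypothetically larger for nodes
                  assumed to receive less annotated information)
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Similarity of every node in view 1 to every node in view 2.
    sim = torch.mm(z1, z2.t()) / temperature  # (N, N)
    # Positive pairs lie on the diagonal (same node, two views).
    targets = torch.arange(z1.size(0), device=z1.device)
    per_node_loss = F.cross_entropy(sim, targets, reduction="none")
    # Re-weight per node instead of averaging evenly over the whole graph.
    w = node_weights / (node_weights.sum() + 1e-12)
    return (w * per_node_loss).sum()


if __name__ == "__main__":
    N, d = 8, 16
    z1, z2 = torch.randn(N, d), torch.randn(N, d)
    # Hypothetical weights, e.g. derived from distance to labeled nodes.
    weights = torch.rand(N)
    print(weighted_node_contrastive_loss(z1, z2, weights).item())
```

Under this reading, a uniform-weight choice recovers the standard evenly applied GCL objective, while non-uniform weights let the contrastive signal concentrate on regions of the graph with less annotated information.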
Keywords:
Machine Learning: Sequence and Graph Learning
Data Mining: Mining Graphs
Machine Learning: Semi-Supervised Learning