Strategy-Architecture Synergy: A Multi-View Graph Contrastive Paradigm for Consistent Representations
Shuman Zhuang, Zhihao Wu, Yuhong Chen, Zihan Fang, Jiali Yin, Ximeng Liu
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 7291-7299.
https://doi.org/10.24963/ijcai.2025/811
Facing the growing diversity of multi-view data, multi-view graph-based models have made encouraging progress in handling such data modeled as graphs. Graph Contrastive Learning (GCL) naturally fits multi-view graph data by treating its inherent views as augmentations. However, the development of GCL on multi-view graph data is still in its infancy. Challenges remain in designing strategies that coordinate preprocessing with contrastive learning, and in developing model architectures that automatically adapt to the needs of diverse views. To tackle these challenges, we propose a framework named CAMEL, which refines consistency learning by introducing a contrastive paradigm tailored to multi-view graphs. First, we theoretically analyze the positive effect of edge-dropping preprocessing on consistency and quantify the factors that influence it. Paired with a learnable model architecture, the proposed adaptive edge-dropping preprocessing strategy is guided by dynamic topology, making the heterogeneity of views more controllable and better aligned with contrastive learning. Finally, we design a neighborhood-consistency multi-view contrastive objective that enhances the interaction of consistency information by extending the set of positive samples. Extensive experiments on downstream tasks, including node classification and clustering, validate the superiority of the proposed model.
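The following is a minimal, illustrative sketch (not the authors' released code) of the two ideas summarized in the abstract: (1) per-view edge dropping whose rate is modulated by learnable edge scores, and (2) an InfoNCE-style cross-view objective whose positive set is extended to graph neighbors. All function names (drop_edges_adaptively, neighborhood_nce), hyperparameters, and the dense-adjacency setup are hypothetical assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def drop_edges_adaptively(edge_index, edge_scores, base_rate=0.3):
    """Drop each edge with a probability modulated by a learnable score.

    edge_index:  LongTensor of shape (2, E)
    edge_scores: Tensor of shape (E,), e.g. produced by a small MLP over endpoint embeddings
    """
    # Higher score -> edge judged more informative -> lower drop probability.
    drop_prob = base_rate * (1.0 - torch.sigmoid(edge_scores))
    keep_mask = torch.bernoulli(1.0 - drop_prob).bool()
    return edge_index[:, keep_mask]


def neighborhood_nce(z1, z2, adj, tau=0.5):
    """Cross-view InfoNCE where a node's positives include its graph neighbors.

    z1, z2: (N, d) embeddings of the same nodes under two views
    adj:    (N, N) dense 0/1 adjacency; adj[i, j] = 1 marks j as an extra positive of i
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)                      # (N, N) cross-view similarities
    pos_mask = adj + torch.eye(adj.size(0), device=adj.device)
    pos = (sim * pos_mask).sum(dim=1)                       # the anchor itself plus its neighbors
    return -torch.log(pos / sim.sum(dim=1)).mean()


if __name__ == "__main__":
    N, d, E = 8, 16, 20
    edge_index = torch.randint(0, N, (2, E))
    edge_scores = torch.randn(E)                            # stand-in for learned scores
    kept = drop_edges_adaptively(edge_index, edge_scores)

    adj = torch.zeros(N, N)
    adj[edge_index[0], edge_index[1]] = 1.0
    z1, z2 = torch.randn(N, d), torch.randn(N, d)           # stand-ins for two view encoders
    print(kept.shape, neighborhood_nce(z1, z2, adj).item())
```

In this sketch the edge scores stand in for the learnable, topology-guided component described in the abstract; in practice they would be produced by the model and trained jointly with the contrastive objective.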
Keywords:
Machine Learning: ML: Multi-view learning
Machine Learning: ML: Self-supervised Learning
