Going Beyond Consistency: Target-oriented Multi-view Graph Neural Network
Sujia Huang, Lele Fu, Shuman Zhuang, Yide Qiu, Bo Huang, Zhen Cui, Tong Zhang
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5426-5434.
https://doi.org/10.24963/ijcai.2025/604
Multi-view learning has emerged as a pivotal research area driven by the growing heterogeneity of real-world data, and graph neural network-based models, which represent multi-view data as multi-view graphs, have achieved remarkable performance by revealing its deep semantics. However, by assuming cross-view consistency, most approaches collect not only task-relevant (determinative) semantics but also symbiotic yet task-irrelevant (incidental) factors that obscure model inference. Furthermore, these approaches often lack rigorous theoretical analysis bridging training data to test data. To address these issues, we propose the Target-oriented Graph Neural Network (TGNN), a novel framework that goes beyond traditional consistency by prioritizing task-relevant information, ensuring alignment with the target. Specifically, TGNN employs a class-level dual-objective loss to minimize the classification similarity between determinative and incidental factors, accentuating the former while suppressing the latter during model inference. Meanwhile, to ensure consistency between the learned semantics and predictions in representation learning, we introduce a penalty term that amplifies the divergence between these two types of factors. Furthermore, we derive an upper bound on the loss discrepancy between training and test data, providing formal guarantees for generalization to test domains. Extensive experiments conducted on three types of multi-view datasets validate the superiority of TGNN.
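The class-level dual-objective idea described in the abstract can be sketched in a minimal, self-contained form. This is a hypothetical illustration only, not the authors' implementation: the function names (`dual_objective_loss`), the choice of cross-entropy for the determinative head, and the use of cosine similarity between the two heads' class distributions as the "classification similarity" term are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_objective_loss(logits_det, logits_inc, labels, beta=0.1):
    """Hypothetical sketch of a class-level dual-objective loss.

    Two terms, following the abstract's description:
      1. cross-entropy on the determinative head, accentuating
         task-relevant factors during inference;
      2. cosine similarity between the class distributions of the
         determinative and incidental heads, minimized so that
         incidental factors stop mimicking the target signal
         (the divergence-amplifying penalty is folded in here).
    """
    n = len(labels)
    p_det = softmax(logits_det)
    p_inc = softmax(logits_inc)
    # Term 1: standard cross-entropy on the determinative predictions.
    ce = -np.log(p_det[np.arange(n), labels] + 1e-12).mean()
    # Term 2: mean cosine similarity between the two heads' outputs.
    cos = (p_det * p_inc).sum(axis=1) / (
        np.linalg.norm(p_det, axis=1) * np.linalg.norm(p_inc, axis=1) + 1e-12
    )
    return ce + beta * cos.mean()
```

Under this sketch, pushing the incidental head's class distribution away from the determinative head's lowers the similarity term, which is the intended suppression effect; in a real model both heads would be trained jointly on learned graph representations.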
Keywords:
Machine Learning: ML: Multi-view learning
Machine Learning: ML: Representation learning
Machine Learning: ML: Semi-supervised learning
