Gaussian Mixture Model for Graph Domain Adaptation
Mengzhu Wang, Wenhao Ren, Yu Zhang, Yanlong Fan, Dianxi Shi, Luoxi Jing, Nan Yin
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1963-1972.
https://doi.org/10.24963/ijcai.2025/219
Unsupervised domain adaptation (UDA) has been widely studied with the goal of transferring knowledge from a label-rich source domain to a related but unlabeled target domain. Most UDA techniques achieve this by reducing the feature discrepancy between the two domains to learn domain-invariant representations. While domain-invariant representations reduce the differences between the source and target domains, oversimplifying those differences can cause the model to overlook important domain-specific features, degrading transfer performance. To address this issue, this paper proposes a novel Gaussian Mixture Model for graph domain adaptation (GMM). The model reduces the distributional bias between the source and target domains by modeling their distribution differences on a graph structure. GMM leverages the local structural information of the graph and the clustering capability of the Gaussian mixture model to automatically learn the latent mapping between the source and target domains. To the best of our knowledge, this is the first work to introduce a Gaussian mixture model into UDA. Extensive experiments on three standard benchmarks demonstrate that the proposed GMM algorithm outperforms state-of-the-art unsupervised domain adaptation methods.
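To make the core idea concrete, the following is a minimal sketch of GMM-based domain alignment, not the authors' implementation: it fits a shared Gaussian mixture on source features and compares per-component occupancy across domains. The component count, feature dimension, and occupancy-gap discrepancy are illustrative assumptions.

```python
# Hypothetical sketch of Gaussian-mixture-based domain alignment.
# Assumptions (not from the paper): 4 mixture components, 16-d features,
# and an L1 occupancy gap as the discrepancy a training loop would minimize.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 16))   # stand-in for labeled source features
tgt = rng.normal(0.5, 1.2, size=(500, 16))   # stand-in for unlabeled target features

# Fit the mixture on source features; each component acts as a latent
# cluster that samples from both domains are softly assigned to.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(src)

# Soft assignments (responsibilities) for both domains.
resp_src = gmm.predict_proba(src)            # shape (500, 4)
resp_tgt = gmm.predict_proba(tgt)

# A simple discrepancy: the gap between per-component occupancy in the
# two domains, which shrinks as their distributions are pulled together.
occ_src = resp_src.mean(axis=0)
occ_tgt = resp_tgt.mean(axis=0)
alignment_gap = np.abs(occ_src - occ_tgt).sum()
print(f"component occupancy gap: {alignment_gap:.4f}")
```

In the paper's setting this clustering would additionally be constrained by the graph's local structure; the sketch above only illustrates the mixture-based soft-assignment mechanism.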
Keywords:
Computer Vision: CV: Machine learning for vision
Computer Vision: CV: Representation learning
Computer Vision: CV: Transfer, low-shot, semi- and un- supervised learning
