Learning Robust Multi-view Representation Using Dual-masked VAEs
Jiedong Wang, Kai Guo, Peng Hu, Xi Peng, Hao Wang
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 6298-6306.
https://doi.org/10.24963/ijcai.2025/701
Most existing multi-view representation learning methods assume view-complete and noise-free data. However, such assumptions rarely hold in real-world applications. Despite advances in methods tailored to the view-missing problem or the noise problem individually, a unified approach that addresses both simultaneously remains unavailable. To this end, we propose a holistic method, called Dual-masked Variational Autoencoders (DualVAE), for learning robust multi-view representations. DualVAE combines dual-masked prediction, mixture-of-experts learning, and representation disentangling, with a joint loss function tying all components together. The key novelty lies in the dual-masked (view-mask and patch-mask) mechanism, which mimics missing views and noisy data. Extensive experiments on four multi-view datasets demonstrate the effectiveness of the proposed method and its superior performance over baselines. The code is available at https://github.com/XLearning-SCU/2025-IJCAI-DualVAE.
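To make the dual-mask idea concrete, the following is a minimal illustrative sketch (not the authors' implementation; the function name, masking ratios, and zero-fill strategy are all assumptions) of how a view-mask could drop entire views while a patch-mask corrupts random entries within the kept views:

```python
import numpy as np

def dual_mask(views, view_mask_ratio=0.3, patch_mask_ratio=0.25, rng=None):
    """Illustrative sketch of a dual-mask: a view-mask that drops whole
    views (simulating missing views) and a patch-mask that zeroes random
    entries in the kept views (simulating noisy data).

    `views` is a list of per-view feature matrices of shape (n, d_v).
    Returns the masked views and a boolean array of kept views.
    """
    rng = np.random.default_rng(rng)
    n_views = len(views)
    # View-mask: keep each view with probability 1 - view_mask_ratio,
    # but always keep at least one view so the sample is not empty.
    keep = rng.random(n_views) >= view_mask_ratio
    if not keep.any():
        keep[rng.integers(n_views)] = True
    masked = []
    for v, x in enumerate(views):
        x = x.copy()
        if not keep[v]:
            x[:] = 0.0  # whole view masked out (missing view)
        else:
            # Patch-mask: zero a random subset of feature entries.
            patch = rng.random(x.shape) < patch_mask_ratio
            x[patch] = 0.0
        masked.append(x)
    return masked, keep
```

In a masked-prediction setup, the model would receive the masked views as input and be trained to reconstruct the original, unmasked views, encouraging representations that are robust to both kinds of corruption.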
Keywords:
Machine Learning: ML: Multi-view learning
Machine Learning: ML: Representation learning
Machine Learning: ML: Clustering
