MAGE: Multimodal Alignment and Generation Enhancement via Bridging Visual and Semantic Spaces

Shaojun E, Yuchen Yang, Jiaheng Wu, Yan Zhang, Tiejun Zhao, Ziyan Chen

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 954-962. https://doi.org/10.24963/ijcai.2025/107

In recent advances in multimodal learning, effectively addressing the loss of spatial and semantic information that visual data suffers after encoding remains a critical challenge, because the performance of large multimodal models is closely tied to how well the visual encoder is coupled with the large language model. Existing approaches often suffer from vector-space gaps or semantic disparities, leading to information loss as features propagate through the model. To address these issues, we propose MAGE (Multimodal Alignment and Generation Enhancement), a novel framework that bridges the visual and textual semantic spaces through an innovative alignment mechanism. By introducing the Intelligent Alignment Network (IAN), MAGE achieves both dimensional and semantic alignment. To narrow the gap between semantically equivalent but heterogeneous data, we employ a training strategy that combines cross-entropy and mean-squared-error losses, significantly improving alignment quality. Moreover, to strengthen MAGE’s “Any-to-Any” capability, we construct a fine-tuning dataset of multimodal tool-calling instructions that expands the boundaries of the model’s output capabilities. Finally, our proposed multimodal large-model architecture, MAGE, achieves significantly better performance than comparable models across various evaluation benchmarks, including MME, MMBench, and SEED. Complete code and appendix are available at: https://github.com/GTCOM-NLP/MAGE
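As a rough illustration of the combined objective mentioned in the abstract (cross-entropy plus mean squared error), the following PyTorch sketch shows one way such a loss could be computed; the function name, tensor shapes, and the weighting factor `mse_weight` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def combined_alignment_loss(logits, target_ids, aligned_visual, text_embeds,
                            mse_weight=1.0):
    """Hypothetical combined objective: token-level cross-entropy plus an
    MSE term that pulls projected visual features toward the text embedding
    space. All names and the weighting are assumptions for illustration."""
    # Cross-entropy over the vocabulary dimension for the language targets.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))
    # Mean squared error between aligned visual features and text embeddings.
    mse = F.mse_loss(aligned_visual, text_embeds)
    return ce + mse_weight * mse
```

How the two terms are balanced (here a single scalar weight) is a design choice; the paper itself only states that the two losses are combined.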
Keywords:
Computer Vision: CV: Multimodal learning
Agent-based and Multi-agent Systems: MAS: Multi-agent planning
Natural Language Processing: NLP: Language generation