M^2LLM: Multi-view Molecular Representation Learning with Large Language Models
Jiaxin Ju, Yizhen Zheng, Huan Yee Koh, Can Wang, Shirui Pan
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 7437-7445.
https://doi.org/10.24963/ijcai.2025/827
Accurate molecular property prediction is a critical challenge with wide-ranging applications in chemistry, materials science, and drug discovery. Molecular representation methods, including fingerprints and graph neural networks (GNNs), achieve state-of-the-art results by effectively deriving features from molecular structures. However, these methods often overlook decades of accumulated semantic and contextual knowledge. Recent advances in large language models (LLMs) demonstrate remarkable reasoning abilities and prior knowledge across scientific domains, leading us to hypothesize that LLMs can generate rich molecular representations when guided to reason from multiple perspectives. To address this gap, we propose M^2LLM, a multi-view framework that integrates three perspectives: the molecular structure view, the molecular task view, and the molecular rules view. These views are fused dynamically to adapt to task requirements, and experiments demonstrate that M^2LLM achieves state-of-the-art performance on multiple benchmarks across classification and regression tasks. Moreover, we show that representations derived from LLMs achieve exceptional performance by leveraging two core functionalities: the generation of molecular embeddings through their encoding capabilities and the curation of molecular features through advanced reasoning processes.
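To make the dynamic multi-view fusion concrete, below is a minimal sketch, not the authors' implementation: it assumes each of the three views (structure, task, rules) has already been converted into a fixed-size embedding by an LLM, and fuses them with a learned, per-molecule softmax gate before a prediction head. The `MultiViewFusion` module, its gating network, and all dimensions are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of dynamic multi-view fusion:
# three LLM-derived view embeddings are weighted per molecule and
# combined before a downstream property-prediction head.
import torch
import torch.nn as nn


class MultiViewFusion(nn.Module):
    """Fuse structure / task / rules view embeddings with learned,
    per-molecule weights, then predict a molecular property."""

    def __init__(self, dim: int, out_dim: int = 1):
        super().__init__()
        self.gate = nn.Linear(dim, 1)       # scores each view embedding
        self.head = nn.Linear(dim, out_dim) # classification/regression head

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, dim) -- one embedding per view.
        scores = self.gate(views).squeeze(-1)           # (batch, n_views)
        weights = torch.softmax(scores, dim=-1)         # dynamic view weights
        fused = (weights.unsqueeze(-1) * views).sum(1)  # (batch, dim)
        return self.head(fused)


if __name__ == "__main__":
    batch, n_views, dim = 4, 3, 768          # structure, task, rules views
    views = torch.randn(batch, n_views, dim) # stand-in for LLM embeddings
    model = MultiViewFusion(dim)
    print(model(views).shape)                # torch.Size([4, 1])
```

A softmax gate is only one way to realize "fused dynamically to adapt to task requirements"; attention over views or task-conditioned weights would fit the same abstract description.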
Keywords:
Multidisciplinary Topics and Applications: MTA: Bioinformatics
Data Mining: DM: Mining graphs
Natural Language Processing: NLP: Language models
