HiTuner: Hierarchical Semantic Fusion Model Fine-Tuning on Text-Attributed Graphs

Zihan Fang, Zhiling Cai, Yuxuan Zheng, Shide Du, Yanchao Tan, Shiping Wang

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5110-5117. https://doi.org/10.24963/ijcai.2025/569

Text-Attributed Graphs (TAGs) are vital for modeling entity relationships across various domains. Graph Neural Networks have become a cornerstone for processing graph structures, while the integration of text attributes remains a prominent research direction. The development of Large Language Models (LLMs) provides new opportunities for advancing textual encoding in TAGs. However, LLMs face challenges in specialized domains due to their limited task-specific knowledge, and fine-tuning them for specific tasks demands significant resources. To address these challenges, we propose HiTuner, a novel framework that leverages fine-tuned Pre-trained Language Models (PLMs) with domain expertise as a tuner to enhance hierarchical LLM contextualized representations for modeling TAGs. Specifically, we first strategically select hierarchical hidden states of the LLM to form a set of diverse and complementary descriptions as input for a sparse projection operator. Concurrently, a hybrid representation learning scheme is developed to amalgamate the broad linguistic comprehension of LLMs with the task-specific insights of fine-tuned PLMs. Finally, HiTuner employs a confidence network to adaptively fuse the semantically augmented representations. Empirical results across benchmark datasets spanning various domains validate the effectiveness of the proposed framework. Our code is available at: https://github.com/ZihanFang11/HiTuner
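
For intuition, the sketch below gives a minimal PyTorch rendering of the fusion step the abstract describes: layer-wise LLM hidden states and a fine-tuned PLM embedding are projected into a shared space and combined per node by a confidence network. All module names, shapes, and the simplification of the sparse projection operator to a dense linear layer are illustrative assumptions for this sketch, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceFusion(nn.Module):
    """Hypothetical sketch: adaptively fuse hierarchical LLM hidden states
    with a task-specific PLM embedding via learned confidence weights."""

    def __init__(self, llm_dim: int, plm_dim: int, out_dim: int):
        super().__init__()
        # Projection of selected LLM layer states into a shared space
        # (the paper's sparse projection operator is simplified to a
        # dense linear layer here).
        self.llm_proj = nn.Linear(llm_dim, out_dim)
        self.plm_proj = nn.Linear(plm_dim, out_dim)
        # Confidence network: scores each semantically augmented view.
        self.confidence = nn.Sequential(
            nn.Linear(out_dim, out_dim // 2),
            nn.ReLU(),
            nn.Linear(out_dim // 2, 1),
        )

    def forward(self, llm_states: torch.Tensor, plm_emb: torch.Tensor) -> torch.Tensor:
        # llm_states: (L, N, llm_dim) hidden states from L selected LLM layers
        # plm_emb:    (N, plm_dim)    embedding from the fine-tuned PLM
        llm_views = self.llm_proj(llm_states)            # (L, N, out_dim)
        plm_view = self.plm_proj(plm_emb).unsqueeze(0)   # (1, N, out_dim)
        # Hybrid representation: broad LLM semantics + domain-tuned PLM view.
        views = torch.cat([llm_views, plm_view], dim=0)  # (L+1, N, out_dim)
        scores = self.confidence(views)                  # (L+1, N, 1)
        weights = F.softmax(scores, dim=0)               # per-node view weights
        return (weights * views).sum(dim=0)              # (N, out_dim) fused


# Usage with placeholder dimensions (e.g., 4 LLM layers, 100 graph nodes):
fuser = ConfidenceFusion(llm_dim=4096, plm_dim=768, out_dim=256)
fused = fuser(torch.randn(4, 100, 4096), torch.randn(100, 768))
```

The fused node representations would then feed a downstream graph learner; the softmax over views lets each node weight the LLM layers and the PLM view differently depending on its confidence scores.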
Keywords:
Machine Learning: ML: Sequence and graph learning
Natural Language Processing: NLP: Language models
Data Mining: DM: Mining graphs