CLLMRec: Contrastive Learning with LLMs-based View Augmentation for Sequential Recommendation
Fan Lu, Xiaolong Xu, Haolong Xiang, Lianyong Qi, Xiaokang Zhou, Fei Dai, Wanchun Dou
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 3153-3161.
https://doi.org/10.24963/ijcai.2025/351
Sequential recommendation generates embedding representations from historical user-item interactions to recommend the next item a user is likely to interact with. Due to the complexity and variability of these interaction histories, extracting effective user features is challenging. Recent studies have employed sequential networks, such as time-series networks and Transformers, to capture the intricate dependencies and temporal patterns in historical user-item interactions and thereby extract more effective user features. However, limited by the scarcity and suboptimal quality of data, these methods struggle to capture subtle differences between user sequences, which diminishes recommendation accuracy. To address this issue, we propose a contrastive learning framework with view augmentation based on large language models (CLLMRec), which effectively mines differences in behavioral sequences through sample generation. Specifically, CLLMRec utilizes large language models (LLMs) to augment views and expand user behavior sequence representations, providing high-quality positive and negative samples. CLLMRec then employs the augmented views for contrastive learning, capturing subtle differences in behavioral sequences while suppressing interference from irrelevant noise. Experimental results on three public datasets demonstrate that the proposed method outperforms state-of-the-art baselines and significantly enhances recommendation performance.
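The abstract describes contrasting two augmented views of each user's behavior sequence, with matching views as positives and other sequences in the batch as negatives. The paper's exact loss is not given here, so the following is only a minimal sketch of a standard InfoNCE-style contrastive objective over paired view embeddings; the function name and the toy "augmented view" construction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """Hypothetical InfoNCE contrastive loss between two batches of
    augmented sequence embeddings, each of shape [batch, dim].
    Row i of view_a and row i of view_b form a positive pair;
    every other row in the batch serves as a negative."""
    # L2-normalize so dot products become cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # [batch, batch] similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Softmax cross-entropy with the diagonal entries as the positive class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy example: two slightly perturbed "views" of three sequence embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 8))
view_a = base + 0.01 * rng.normal(size=base.shape)
view_b = base + 0.01 * rng.normal(size=base.shape)
loss = info_nce_loss(view_a, view_b)
```

Minimizing this loss pulls the two views of the same sequence together and pushes apart views of different sequences, which is the mechanism by which contrastive learning surfaces subtle inter-sequence differences.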
Keywords:
Data Mining: DM: Recommender systems
Data Mining: DM: Information retrieval
