STLSP: Integrating Structure and Text with Large Language Models for Link Sign Prediction of Networks

Lijia Ma, Haoyang Fu, Zhijie Cao, Xiongnan Jin, Qiuzhen Lin, Jianqiang Li

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 3180-3188. https://doi.org/10.24963/ijcai.2025/354

Link Sign Prediction (LSP) in signed networks is a critical task with applications in recommendation systems, community detection, and social network analysis. Existing methods rely primarily on graph neural networks to exploit structural information, often neglecting the valuable signal in edge-level textual data. Moreover, applying large language models (LLMs) to LSP faces challenges in reliability and in interpreting graph structures. To address these issues, we propose STLSP, a novel framework that integrates the Structural and Textual information of signed networks with LLMs for the LSP task. STLSP leverages structural balance theory to generate node embeddings that capture positive and negative relationships. These embeddings are transformed into natural-language representations through clustering, allowing LLMs to fully exploit the structural context. By integrating these representations with edge text, STLSP improves the accuracy and reliability of LSP. Extensive experiments on five real-world datasets demonstrate that STLSP outperforms state-of-the-art baselines, achieving an 8.7% improvement in accuracy. Moreover, STLSP performs robustly across various LLMs, making it adaptable to different computational environments. The code and data are publicly available at https://github.com/sss483/STLSP.
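The pipeline the abstract outlines — structure-aware node embeddings, clustered into groups, verbalized, and combined with edge text into an LLM prompt — can be sketched roughly as follows. This is a minimal illustration with toy two-dimensional embeddings and hypothetical helper names (`kmeans`, `build_prompt`); it is not the paper's implementation, and the actual STLSP embeddings come from structural balance theory rather than hand-picked vectors.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over lists of floats; returns a cluster id per point.

    Centers are initialized deterministically from the first k points,
    which suffices for this toy illustration.
    """
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def build_prompt(u, v, cluster_of, edge_text):
    """Verbalize structural context (cluster membership) alongside edge text."""
    return (
        f"Node {u} belongs to structural group {cluster_of[u]}; "
        f"node {v} belongs to structural group {cluster_of[v]}. "
        f'Edge text: "{edge_text}". '
        "Predict the sign of the link (positive or negative)."
    )

# Toy embeddings: two well-separated groups, standing in for
# balance-theory-derived node representations.
emb = {0: [1.0, 0.9], 1: [0.9, 1.1], 2: [-1.0, -0.8], 3: [-0.9, -1.1]}
nodes = sorted(emb)
cluster_of = dict(zip(nodes, kmeans([emb[n] for n in nodes], k=2)))

# The prompt fuses the verbalized structure with the edge's own text,
# ready to be sent to an LLM for sign prediction.
prompt = build_prompt(0, 2, cluster_of, "I strongly disagree with this user")
print(prompt)
```

Clustering the embeddings first is what lets an LLM consume graph structure at all: the model never sees raw vectors, only discrete group memberships described in natural language.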
Keywords:
Data Mining: DM: Networks
Machine Learning: ML: Feature extraction, selection and dimensionality reduction
Natural Language Processing: NLP: Information retrieval and text mining