Grounding Open-Domain Knowledge from LLMs to Real-World Reinforcement Learning Tasks: A Survey

Haiyan Yin, Hangwei Qian, Yaxin Shi, Ivor Tsang, Yew-Soon Ong

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Survey Track. Pages 10797-10806. https://doi.org/10.24963/ijcai.2025/1198

Grounding open-domain knowledge from large language models (LLMs) into real-world reinforcement learning (RL) tasks represents a transformative frontier in developing intelligent agents capable of advanced reasoning, adaptive planning, and robust decision-making in dynamic environments. In this paper, we introduce the LLM-RL Grounding Taxonomy, a systematic framework that categorizes emerging methods for integrating LLMs into RL systems by bridging their open-domain knowledge and reasoning capabilities with the task-specific dynamics, constraints, and objectives inherent to real-world RL environments. This taxonomy encompasses both training-free approaches, which leverage the zero-shot and few-shot generalization capabilities of LLMs without fine-tuning, and fine-tuning paradigms that adapt LLMs to environment-specific tasks for improved performance. We critically analyze these methodologies, highlight practical examples of effective knowledge grounding, and examine the challenges of alignment, generalization, and real-world deployment. Our work not only illustrates the potential of LLM-RL agents for enhanced decision-making, but also offers actionable insights for advancing the design of next-generation RL systems that integrate open-domain knowledge with adaptive learning.
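To make the training-free grounding idea concrete, the following is a minimal sketch of an LLM-as-policy loop: the environment state and valid actions are serialized into a prompt, the LLM's free-form reply is mapped back onto the discrete action space, and invalid replies fall back to a default. The LLM call is stubbed out here to keep the example self-contained; the prompt format, the `stub_llm` function, and the fallback rule are illustrative assumptions, not a method from the survey.

```python
# Minimal sketch of a training-free ("zero-shot") LLM-as-policy loop.
# The LLM call is stubbed; in practice it would be a chat-completion
# request to a hosted model.

def build_prompt(state, actions):
    """Serialize the RL state and valid actions into a text prompt."""
    return (
        f"You control an agent. Current state: {state}.\n"
        f"Valid actions: {', '.join(actions)}.\n"
        "Reply with exactly one valid action."
    )

def stub_llm(prompt):
    """Placeholder for an LLM API call (hypothetical). Keyword-matches
    an action so the example runs without network access."""
    if "door: closed" in prompt:
        return "open_door"
    return "move_forward"

def ground_action(reply, actions):
    """Ground the free-form LLM reply to a valid environment action,
    falling back to the first action when the reply is out of vocabulary."""
    reply = reply.strip().lower()
    return reply if reply in actions else actions[0]

def llm_policy(state, actions, llm=stub_llm):
    """One policy step: prompt the LLM, then ground its reply."""
    return ground_action(llm(build_prompt(state, actions)), actions)
```

The grounding step (`ground_action`) is the crux: it reconciles the LLM's open-domain text output with the task-specific action constraints of the RL environment, which is exactly the gap the surveyed methods address in more sophisticated ways.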
Keywords:
Natural Language Processing: NLP: Language models
Machine Learning: ML: Reinforcement learning
Natural Language Processing: NLP: Applications