Enhancing the Logical Reasoning Abilities of Large Language Models

Fengxiang Cheng

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 10969-10970. https://doi.org/10.24963/ijcai.2025/1239

Large language models (LLMs) have demonstrated impressive progress on a wide range of natural language processing tasks. However, LLMs still struggle with complex causal and logical reasoning. To advance this research direction, we first proposed a training method that distinguishes causal relationships from spurious correlations in sentiment classification tasks. We then conducted a comprehensive survey that identifies the two main challenges, complex logical question answering and logical inconsistency across different questions, and categorizes existing approaches to each. Our ongoing projects focus on two points: (1) incorporating modal and epistemic logic to evaluate and enhance LLMs' ability to handle more complex and diverse reasoning tasks, and (2) training LLMs in phases via curriculum learning to improve their logical reasoning performance.
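
As a concrete illustration of point (2), the sketch below shows one common way to set up phased curriculum training: examples are sorted by a difficulty proxy, and the training pool grows from easy to hard across phases. This is a minimal sketch of the general technique, not the method proposed in this work; the Example fields, the num_steps difficulty proxy, and the curriculum_phases helper are illustrative assumptions.

# A minimal sketch of phased (curriculum) training for logical reasoning.
# Assumption: difficulty is proxied by reasoning-chain length (num_steps);
# this is an illustration of the general technique, not the author's method.
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Example:
    prompt: str
    answer: str
    num_steps: int  # hypothetical difficulty proxy


def curriculum_phases(examples: List[Example], n_phases: int = 3) -> Iterator[List[Example]]:
    """Yield cumulative training pools, easiest examples first."""
    ordered = sorted(examples, key=lambda ex: ex.num_steps)
    phase_size = -(-len(ordered) // n_phases)  # ceiling division
    for phase in range(1, n_phases + 1):
        # Each phase re-exposes earlier (easier) data plus a harder slice.
        yield ordered[: phase * phase_size]


if __name__ == "__main__":
    data = [
        Example("p -> q, p |- ?", "q", num_steps=1),
        Example("p -> q, q -> r, p |- ?", "r", num_steps=2),
        Example("(p & q) -> r, p, q |- ?", "r", num_steps=3),
    ]
    for i, pool in enumerate(curriculum_phases(data), start=1):
        print(f"phase {i}: {len(pool)} examples")
        # fine_tune(model, pool)  # placeholder for an actual training call

Making each phase's pool cumulative, rather than disjoint, is one common design choice in curriculum schedules: it keeps easier patterns in the training mix so the model does not forget them as harder examples are introduced.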
Keywords:
Knowledge Representation and Reasoning: KRR: Learning and reasoning
Knowledge Representation and Reasoning: KRR: Applications
Machine Learning: ML: Neuro-symbolic methods
Natural Language Processing: NLP: Language models