Instantiation-based Formalization of Logical Reasoning Tasks Using Language Models and Logical Solvers
Mohammad Raza, Natasa Milic-Frayling
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4633-4641.
https://doi.org/10.24963/ijcai.2025/516
Robustness of reasoning remains a significant challenge for large language models, and addressing it is essential for the practical applicability of AI-driven reasoning systems. We introduce Semantic Self-Verification (SSV), a novel approach that addresses the key challenge in combining language models with the rigor of logical solvers: accurately formulating the reasoning problem from natural language into the formal language of the solver. SSV uses a consistency-based approach to produce strong abstract formalizations of problems using concrete instantiations that are generated by the model and verified by the solver. In addition to significantly advancing overall reasoning accuracy over the state of the art, a key novelty of this approach is a verification feature that achieves near-perfect precision while covering a significant fraction of cases, as we demonstrate on open reasoning benchmarks. We propose such *near-certain reasoning* as a new approach to reducing the need for manual verification in many cases, taking us closer to more dependable and autonomous AI reasoning systems.
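As a rough illustration of the consistency check sketched in the abstract, the following is a minimal sketch, assuming the Z3 SMT solver (via the z3-solver Python package) as the logical backend and SMT-LIB strings as the interchange between the language model and the solver. The helper names and the choice of Z3 are illustrative assumptions, not the authors' implementation: a candidate abstract formalization is accepted only if it is consistent (here, jointly satisfiable) with every model-generated, solver-verified concrete instantiation.

```python
# Minimal illustrative sketch (not the paper's implementation), assuming Z3 as the
# logical solver and SMT-LIB strings as the exchange format. Both helper
# functions below are hypothetical names.
from z3 import Solver, parse_smt2_string, sat


def consistent_with_instantiation(formalization_smt2: str, instantiation_smt2: str) -> bool:
    """Return True if the abstract formalization and the concrete instantiation
    are jointly satisfiable. The instantiation is assumed to reference only
    symbols declared in the formalization, so the two SMT-LIB fragments can be
    concatenated and parsed together."""
    solver = Solver()
    solver.add(parse_smt2_string(formalization_smt2 + "\n" + instantiation_smt2))
    return solver.check() == sat


def select_formalization(candidates, instantiations):
    """Return the first candidate formalization that is consistent with every
    concrete instantiation, or None if no candidate passes the check."""
    for candidate in candidates:
        if all(consistent_with_instantiation(candidate, inst) for inst in instantiations):
            return candidate
    return None


if __name__ == "__main__":
    # Toy abstract formalization: "x is a positive integer smaller than y".
    formalization = """
    (declare-const x Int)
    (declare-const y Int)
    (assert (> x 0))
    (assert (< x y))
    """
    # A concrete instantiation a model might propose: x = 2, y = 5.
    instantiation = "(assert (= x 2)) (assert (= y 5))"
    print(consistent_with_instantiation(formalization, instantiation))  # True
```

Joint satisfiability is used here only as a simple stand-in for the paper's consistency criterion; in the approach described above, the concrete instantiations are themselves first verified by the solver before they are used to filter candidate formalizations.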
Keywords:
Knowledge Representation and Reasoning: KRR: Learning and reasoning
Humans and AI: HAI: Intelligent user interfaces
Knowledge Representation and Reasoning: KRR: Automated reasoning and theorem proving
Knowledge Representation and Reasoning: KRR: Common-sense reasoning
