Can We Verify Step by Step for Incorrect Answer Detection?
Xin Xu, Shizhe Diao, Can Yang, Yang Wang
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 8322-8330.
https://doi.org/10.24963/ijcai.2025/925
Chain-of-Thought (CoT) prompting has marked a significant advancement in enhancing the reasoning capabilities of large language models (LLMs). Previous studies have developed various extensions of CoT that focus primarily on improving end-task performance, and there has also been research on assessing the quality of reasoning chains in CoT. This raises an intriguing question: Is it possible to predict the accuracy of LLM outputs by scrutinizing the reasoning chains they generate? To answer this research question, we introduce R2PE, a benchmark designed specifically to explore the relationship between reasoning chains and performance on reasoning tasks spanning five different domains. The benchmark aims to determine, from the reasoning steps alone, whether the final output of an LLM is incorrect. To make full use of the information in multiple reasoning chains, we propose the process discernibility score (PDS) framework, which beats the answer-checking baseline by a large margin, yielding an average increase of 5.1% in F1 score and 2.97% in AUC-PR across all 45 subsets within R2PE. We further demonstrate PDS's efficacy in improving open-domain QA accuracy. Code and data are available at https://github.com/XinXU-USTC/R2PE.git. For the appendix, please refer to https://arxiv.org/abs/2402.10528.
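For readers unfamiliar with answer-level checking, the snippet below is a minimal, hypothetical sketch of how agreement among the final answers of multiple sampled reasoning chains can be turned into a score for flagging likely incorrect outputs. It is not the paper's PDS framework, which scores the reasoning process itself; the helper names answer_agreement_score and extract_answer are illustrative assumptions and not part of the released code.

```python
# Hypothetical sketch (not the paper's PDS): score an answer by how often
# independently sampled reasoning chains arrive at the same final answer.
from collections import Counter
from typing import Callable, List

def answer_agreement_score(chains: List[str],
                           extract_answer: Callable[[str], str]) -> float:
    """Return the fraction of sampled chains whose final answer matches
    the majority answer. Low agreement can flag a likely incorrect output."""
    answers = [extract_answer(c) for c in chains]
    if not answers:
        return 0.0
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)

# Example usage with toy chains whose last line is "Answer: <value>".
if __name__ == "__main__":
    toy_chains = [
        "Step 1: ... Step 2: ...\nAnswer: 42",
        "Step 1: ... Step 2: ...\nAnswer: 42",
        "Step 1: ... Step 2: ...\nAnswer: 37",
    ]
    get_answer = lambda chain: chain.splitlines()[-1].split(":")[-1].strip()
    print(answer_agreement_score(toy_chains, get_answer))  # ~0.67
```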
Keywords:
Natural Language Processing: NLP: Applications
Natural Language Processing: NLP: Interpretability and analysis of models for NLP
