SAFE: Structured Argumentation for Fact-checking with Explanations
Xiaoou Wang, Elena Cabrio, Serena Villata
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Demo Track. Pages 11114-11118.
https://doi.org/10.24963/ijcai.2025/1274
Explainable fact-checking plays a vital role in the fight against disinformation in today’s digital landscape. With the increasing volume of unverified content online, providing justifications for fact-checking has become essential to help users make informed decisions. While recent studies provide user-friendly explanations through abstractive or extractive summarization, they often assume the availability of human-written fact-checking articles, which is not always the case. This demo introduces SAFE, an argument-based framework designed to enhance both fact-checking and its justification. Specifically, SAFE offers three key features: i) producing argument-structured summaries of human-written fact-checking articles, ii) in the absence of human-written articles, generating structured summaries based on evidence retrieved from a corpus through a jointly trained summarization and evidence retrieval system, and iii) assessing the truthfulness of a claim by analyzing the structured summary.
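The three features above form a pipeline: retrieve evidence, organize it into an argument-structured summary, then assess the claim from that structure. SAFE's actual components are jointly trained neural models; the toy sketch below uses hypothetical stand-ins (word-overlap retrieval, a keyword-based stance rule, majority-vote verdict) purely to make the data flow between the stages concrete.

```python
# Illustrative sketch only: each function is a simplified stand-in for one
# SAFE stage, not the paper's actual (neural) implementation.

def retrieve_evidence(claim, corpus, k=2):
    """Stage ii stand-in: rank corpus sentences by word overlap with the claim."""
    claim_words = set(claim.lower().split())
    ranked = sorted(corpus,
                    key=lambda s: -len(claim_words & set(s.lower().split())))
    return ranked[:k]

def build_structured_summary(claim, evidence):
    """Stages i/ii stand-in: arrange evidence as support/attack arguments.
    The negation heuristic for stance is a deliberate oversimplification."""
    return {
        "claim": claim,
        "arguments": [
            {"text": e, "stance": "attack" if "not" in e.lower() else "support"}
            for e in evidence
        ],
    }

def assess_truthfulness(summary):
    """Stage iii stand-in: verdict from the balance of support vs. attack."""
    support = sum(a["stance"] == "support" for a in summary["arguments"])
    attack = len(summary["arguments"]) - support
    return "supported" if support > attack else "refuted"

# Hypothetical mini-corpus standing in for retrieved fact-checking evidence.
corpus = [
    "Vaccines do not cause autism according to large cohort studies.",
    "The claim links vaccines and autism.",
    "Weather was sunny yesterday.",
]
claim = "Vaccines cause autism"
summary = build_structured_summary(claim, retrieve_evidence(claim, corpus))
verdict = assess_truthfulness(summary)
```

In the real system, the structured summary doubles as the user-facing justification: the support/attack arguments are what the reader sees alongside the verdict.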
Keywords:
Natural Language Processing: NLP: Sentiment analysis, stylistic analysis, and argument mining
Natural Language Processing: NLP: Applications
