Disconfounding Fake News Video Explanation with Causal Inference

Lizhi Chen, Zhong Qian, Peifeng Li, Qiaoming Zhu

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4842-4850. https://doi.org/10.24963/ijcai.2025/539

The proliferation of fake news videos on social media has heightened the demand for credible verification systems. While existing methods focus on detecting false content, generating human-readable explanations for such predictions remains a critical challenge. Current approaches suffer from spurious correlations caused by two key confounders: 1) video object bias, where co-occurring objects entangle features, leading to incorrect semantic associations; and 2) explanation aspect bias, where models over-rely on frequent aspects while neglecting rare ones. To address these issues, we propose CIFE, a causal inference framework that disentangles confounding factors to generate unbiased explanations. First, we formalize the problem through a Structural Causal Model (SCM) to identify the confounders. We then introduce two novel modules: 1) the Interventional Video-Object Detector (IVOD), which employs backdoor adjustment to decouple object-level visual semantics; and 2) the Interventional Explanation Aspect Module (IEAM), which balances aspect selection during multimodal fusion. Extensive experiments on the FakeVE dataset demonstrate the effectiveness of CIFE, which generates more faithful explanations by mitigating object entanglement and aspect imbalance. Our code is available at https://github.com/Lieberk/CIFE.
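
For readers unfamiliar with the causal machinery, backdoor adjustment replaces the observational conditional $P(Y \mid X)$ with an interventional distribution that marginalizes over the confounder $Z$ (here, co-occurring video objects for IVOD, or explanation aspects for IEAM):

    $P(Y \mid do(X)) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)$

The sketch below illustrates one common way such an intervention is approximated in deep models, using a learned confounder dictionary. It is a minimal illustration of the general technique under assumed shapes and names, not the authors' implementation from the linked repository.

    import torch
    import torch.nn.functional as F

    def backdoor_adjusted_features(x, confounders, proj):
        # Generic sketch of backdoor adjustment with a confounder
        # dictionary; NOT the exact CIFE implementation. The sum
        # over z is approximated by attending over a fixed set of
        # confounder prototypes (e.g., object-class centroids) and
        # fusing the resulting expectation back into the features.
        #   x           : (batch, dim) input features
        #   confounders : (num_z, dim) confounder dictionary (hypothetical)
        #   proj        : linear layer mapping x into the dictionary space
        scores = proj(x) @ confounders.t()   # (batch, num_z) relevance of each z
        p_z = F.softmax(scores, dim=-1)      # stands in for P(z) in the sum
        e_z = p_z @ confounders              # expectation: sum_z P(z) * z
        return x + e_z                       # intervened, deconfounded features

    # Usage (all shapes hypothetical):
    x = torch.randn(4, 256)
    confounders = torch.randn(10, 256)
    proj = torch.nn.Linear(256, 256)
    print(backdoor_adjusted_features(x, confounders, proj).shape)  # torch.Size([4, 256])

In such approaches the dictionary is often built from dataset statistics (e.g., averaged features per object class), so that rare object co-occurrences and rare explanation aspects still contribute to the sum instead of being dominated by frequent ones.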
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Machine Learning: ML: Causality
Machine Learning: ML: Generative models
Machine Learning: ML: Multi-modal learning