
Beyond Transparency: Evaluating Explainability in AI-Supported Fact-Checking

Vera Schmitt
Isabel Bezzaoui
Charlott Jakob
Premtim Sahitaj
Qianli Wang
Arthur Hilbert
Max Upravitelev
Jonas Fegert
Sebastian Möller
Veronika Solopova

July 14, 2025

The rise of Generative AI has made the creation and spread of disinformation easier than ever. In response, the EU's Digital Services Act now requires social media platforms to implement effective countermeasures. However, the sheer volume of online content renders manual verification increasingly impractical. Recent research shows that combining AI with human expertise can improve fact-checking performance, but human oversight remains crucial, especially in domains involving fundamental rights such as free speech. When ground truth is uncertain, AI systems must be both transparent and explainable. While various explainability methods have been applied to disinformation detection, they often lack human-centered evaluation of their task-specific usefulness and interpretability. In this study, we evaluate different explainability features in AI systems for fact-checking, focusing on their impact on performance, perceived usefulness, and understandability. Based on a user study (n=406) including crowdworkers and journalists, we find that explanations enhance perceived usefulness and clarity but do not consistently improve human-AI performance and can even lead to overconfidence. Moreover, whereas XAI features generally helped to increase performance, they also enabled more individual interpretation among experts and lay users, resulting in a broader variation of outcomes. This underscores the need for complementary interventions and training to mitigate overreliance and support effective human-AI collaboration in fact-checking.