
FakeXplain – Development of transparent and meaningful explanations in the context of disinformation detection


Lead
Prof. Dr. Wojciech Samek, Prof. Dr. Konrad Rieck, Prof. Dr. Sebastian Möller

FakeXplain aims to develop transparent and meaningful explanations for AI-based systems that detect disinformation, addressing the challenge of making these explanations understandable for non-technical users such as journalists and citizens. Given the growing role of generative AI models in spreading misinformation, the project focuses on creating explanations that build trust and comply with upcoming EU AI regulations.

The project will investigate various explanation methods, such as attribution techniques and chain-of-thought prompting, and develop criteria to evaluate these explanations based on factors such as transparency, robustness, and user trust. Ultimately, FakeXplain will create a framework to systematically evaluate AI explanations and integrate effective methods into existing disinformation detection tools.
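To make the first of these methods concrete, the sketch below shows a gradient-times-input attribution for a transformer text classifier, one simple instance of the attribution techniques mentioned above. It is an illustrative sketch, not project code: the model name is a generic stand-in (an actual detector would be fine-tuned on labeled disinformation data), and the example sentence is invented.

```python
# Minimal sketch: gradient-times-input attribution for a text classifier.
# Assumptions: "distilbert-base-uncased" is a stand-in model (its untuned
# classification head is randomly initialized); the input text is invented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

text = "Scientists confirm the moon landing was staged."
enc = tokenizer(text, return_tensors="pt")

# Embed the tokens manually so gradients can be taken w.r.t. the embeddings.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
pred = logits.argmax(dim=-1).item()
logits[0, pred].backward()  # backpropagate the score of the predicted class

# Gradient x input, summed over the embedding dimension, yields one
# relevance score per token; positive scores push toward the prediction.
relevance = (embeds.grad * embeds).sum(dim=-1).squeeze(0)
for token, r in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                    relevance.tolist()):
    print(f"{token:>12}  {r:+.4f}")
```

Per-token relevance scores like these are the raw technical output that a project of this kind would then need to translate into explanations that journalists and citizens can actually assess.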


Prof. Dr. Konrad Rieck

Research Group Lead

Please note that only researchers who have received funding from BIFOLD have individual profiles displayed on www.bifold.berlin.