
Lunch Talk, Dr. Vera Schmitt: From Opaque Models to Actionable Insights: Explainable NLP for Responsible AI in High-Stakes Scenarios


June 19, 2025, 12:00 - 13:00


Technische Universität Berlin, Einsteinufer 17, 10587 Berlin - Room EN 148


Contact person
Dr. Laura Wollenweber
laura.wollenweber@tu-berlin.de


Dr. Vera Schmitt

The BIFOLD Lunch Talk series gives BIFOLD members and external partners the opportunity to engage in dialogue about their research in Machine Learning and Big Data. Each Lunch Talk offers BIFOLD members, fellows and colleagues from other research institutes the chance to present their research and to network with each other. The Lunch Talk takes place at the TU Berlin.

For further information on the Lunch Talks and registration, contact: Dr. Laura Wollenweber via email.

Large language models are increasingly used in decision-critical domains, yet their opaque nature limits reliability and usability. This talk presents research from the XplaiNLP group that advances interpretable NLP methods for high-stakes applications such as (semi-)automated fact-checking and medical decision support. The XplaiNLP group works on reliable evidence retrieval, narrative monitoring approaches, and multi-level explanation techniques, such as attribution methods, natural language rationales, and counterfactuals. Model outputs and explanations are evaluated in empirical studies based on their impact on user trust, decision quality, and overall task performance. The work aims to develop intelligent decision support systems for high-stakes scenarios by aligning LLM outputs with domain knowledge, user expertise, and regulatory requirements, contributing toward actionable and responsible human-AI collaboration.


Bio: 

Dr. Vera Schmitt is head of the XplaiNLP research group at TU Berlin and the German Research Center for Artificial Intelligence (DFKI), where she leads interdisciplinary research at the intersection of natural language processing, explainable AI, and human-computer interaction. Her work focuses on interpretable and robust language technologies for high-stakes decision-making, particularly in the domains of automated fact-checking and medical AI. Dr. Schmitt has raised over €4 million in third-party research funding and is PI of multiple projects funded by the BMBF and the EU. Her research has been published in venues such as FAccT, ACL, COLING, and LREC, and she actively contributes to connecting technical innovation with regulatory frameworks such as the AI Act and the DSA.