
Dr. Vera Schmitt


Technische Universität Berlin



Dr. Vera Schmitt built up and leads the XplaiNLP group at the Quality and Usability Lab (QUL), based on funding she acquired from third-party projects. With her group she explores core NLP topics, explainable AI (XAI), HCI, and legal aspects of AI systems in the domains of disinformation detection and medical data processing. Vera completed her undergraduate studies in Politics and Public Administration (B.A.) at the University of Konstanz. During this time, she developed a keen interest in statistics and co-founded CorrelAid, a non-profit community of data science enthusiasts. Afterward, she pursued her passion for data science by enrolling in a Master's program in Data Science at Leuphana University in Lüneburg. As a member of the ChangemakerXchange, she actively contributed to CorrelAid projects in various countries, including Malaysia, Japan, and Singapore. Following this, she started a Ph.D. at the Quality and Usability Lab at TU Berlin on the economic aspects of privacy. During her Ph.D., she acquired BMBF research group funding (KI-Nachwuchsgruppen unter Leitung von Frauen) of €1.4 million to significantly extend her existing group.

Max Upravitelev, Nicolau Duran-Silva, Christian Woerle, Giuseppe Guarino, Salar Mohtaj, Jing Yang, Veronika Solopova, Vera Schmitt

Comparing LLMs and BERT-based Classifiers for Resource-Sensitive Claim Verification in Social Media

July 26, 2025
https://aclanthology.org/2025.sdp-1.26/

Max Upravitelev, Premtim Sahitaj, Arthur Hilbert, Veronika Solopova, Jing Yang, Nils Feldhus, Tatiana Anikina, Simon Ostermann, Vera Schmitt

Exploring Semantic Filtering Heuristics For Efficient Claim Verification

July 26, 2025
https://aclanthology.org/2025.fever-1.17.pdf

Vera Schmitt, Isabel Bezzaoui, Charlott Jakob, Premtim Sahitaj, Qianli Wang, Arthur Hilbert, Max Upravitelev, Jonas Fegert, Sebastian Möller, Veronika Solopova

Beyond Transparency: Evaluating Explainability in AI-Supported Fact-Checking

July 14, 2025
https://doi.org/10.1145/3733567.3735566

Qianli Wang, Nils Feldhus, Simon Ostermann, Luis Felipe Villa-Arenas, Sebastian Möller, Vera Schmitt

FITCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation

January 02, 2025
https://arxiv.org/abs/2501.00777

News
Jul 28, 2025

Effective transparency is essential for AI in intelligent decision support!

As AI-powered fact-checkers become more common in newsrooms and on social media platforms, the question is no longer whether we should use them, but how. A new study from researchers at BIFOLD and the Karlsruhe Institute of Technology (KIT) suggests that the key to trustworthy AI-supported fact-checking may lie not just in what these systems say, but in how they explain themselves.

News
BIFOLD Update | Feb 12, 2025

Expert Opinions on the Recent Success of DeepSeek

Experts from BIFOLD and TU Berlin comment on the difference between open-source models such as DeepSeek and other LLMs, and on Europe's role in the development of artificial intelligence (AI).