Dr. Vera Schmitt
Affiliation: External Partner
Dr. Vera Schmitt built up and leads the XplaiNLP group at the QUL, based on funding she acquired through third-party projects. With her group, she explores core NLP topics, xAI, HCI, and legal aspects of AI systems in the domains of disinformation detection and medical data processing. Vera completed her undergraduate studies in Politics and Public Administration (B.A.) at the University of Konstanz. During this time, she developed a keen interest in statistics and co-founded CorrelAid, a non-profit community of data science enthusiasts. Afterward, she pursued her passion for data science by enrolling in a Master's program in Data Science at Leuphana University in Lüneburg. As a member of ChangemakerXchange, she actively contributed to CorrelAid projects in various countries, including Malaysia, Japan, and Singapore. Following this, she started a Ph.D. at the Quality and Usability Lab at TU Berlin on the economic aspects of privacy. During her Ph.D., she acquired BMBF research group funding (KI-Nachwuchsgruppen unter Leitung von Frauen) of €1.4 million to significantly expand her existing group.
Yoana Tsoneva, Paul-Conrad Feig, Jiaao Li, Veronika Solopova, Neda Foroutan, Arthur Hilbert, Vera Schmitt
Selective Multimodal Retrieval for Automated Verification of Image–Text Claims
Max Upravitelev, Veronika Solopova, Charlott Jakob, Premtim Sahitaj, Sebastian Möller, Vera Schmitt
Retrieving Climate Change Disinformation by Narrative
Max Upravitelev, Veronika Solopova, Jing Yang, Charlott Jakob, Premtim Sahitaj, Ariana Sahitaj, Vera Schmitt
Multiperspectivity as a Resource for Narrative Similarity Prediction
Max Upravitelev, Veronika Solopova, Premtim Sahitaj, Ariana Sahitaj, Charlott Jakob, Sebastian Möller, Vera Schmitt
Take It All: Ensemble Retrieval for Multimodal Evidence Aggregation
Max Upravitelev, Nicolau Duran-Silva, Christian Woerle, Giuseppe Guarino, Salar Mohtaj, Jing Yang, Veronika Solopova, Vera Schmitt
Comparing LLMs and BERT-based Classifiers for Resource-Sensitive Claim Verification in Social Media
Effective transparency is indispensable for AI in intelligent decision support
As AI-powered fact-checkers become more common in newsrooms and on social media platforms, the question is no longer whether we should use them, but how. A new study from researchers at BIFOLD and the Karlsruhe Institute of Technology (KIT) reveals that the key to trustworthy AI-supported fact-checking may lie not just in what these systems say, but in how they explain themselves.
Expert Opinions on the Recent Success of DeepSeek
Experts from BIFOLD and TU Berlin on the differences between open-source models such as DeepSeek and other LLMs, and Europe's role in the development of artificial intelligence (AI).