ALL NEWS

Machine Learning | September 01, 2025

Simulations of large biomolecules with quantum accuracy

An international team of researchers from BIFOLD, the University of Luxembourg, and Google DeepMind has developed a new machine learning foundation model capable of simulating a wide variety of molecular systems – for example, large and complex biological molecules – with quantum-mechanical accuracy.
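
The announcement itself contains no code, but models of this kind are typically used as drop-in replacements for expensive quantum-chemistry solvers inside a standard molecular dynamics loop. The following minimal Python sketch illustrates that workflow with ASE; the crude classical EMT potential stands in for the actual foundation model, whose interface is not described here.

from ase.build import molecule
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase import units

# Build a small demo system; the new model targets far larger biomolecules.
atoms = molecule("H2O")

# Stand-in calculator: in practice, the ML foundation model would be attached
# here as an ASE-compatible calculator supplying quantum-accurate forces.
atoms.calc = EMT()

# Short Langevin MD run at room temperature, driven by the attached potential.
dyn = Langevin(atoms, timestep=0.5 * units.fs, temperature_K=300, friction=0.02)
dyn.run(100)
print(atoms.get_potential_energy())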

© VLDB
BIFOLD Update | August 29, 2025

VLDB 2025 Conference Contributions

Researchers from five BIFOLD research groups will participate in the 51st International Conference on Very Large Data Bases (VLDB 2025), presenting a range of papers, tutorials, demos, and workshops. The conference will run from September 1 to 5, 2025, in London, United Kingdom.

© M Esmailoghli
BIFOLD Update | August 29, 2025

Researcher Spotlight: Dr. Mahdi Esmailoghli

Dr. Mahdi Esmailoghli earned his PhD at BIFOLD/TU Berlin in 2024 with a thesis on efficient data discovery in large-scale data lakes. Now a postdoctoral researcher at Humboldt-Universität zu Berlin, he is expanding his work from data discovery to intelligent analysis pipelines, aiming to make Big Data more accessible and effective for machine learning and scientific research.

© USENIX
BIFOLD Update | August 11, 2025

USENIX 2025 Conference Contributions

BIFOLD researchers from the MLSec group will present two papers at the 34th USENIX Security Symposium (Aug 13–15, 2025, Seattle). One paper shows that virtual backgrounds in video calls can unintentionally reveal parts of a user’s real surroundings, exposing them to privacy risks.

© BIFOLD
Machine Learning | August 05, 2025

Researcher Spotlight: Dr. Temesgen Mehari

Dr. Temesgen Mehari defended his PhD in early 2025 with a thesis titled "Advancing Cardiac Health: Trustworthy and Practical Approaches to Deep 12-lead ECG Analysis." His research bridges the fields of AI and medicine, focusing on the development of explainable and robust deep learning models for ECG diagnostics.

© IGARSS
BIFOLD Update | August 01, 2025

IGARSS 2025 Conference Contributions

Researchers from BIFOLD’s RSiM and DIMA groups will present a total of six papers and organize a community-contributed session at the IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2025) taking place from August 3rd to August 8th in Brisbane, Australia.

© Hartono Creative Studio
Explainable AI | July 28, 2025

Effective transparency is essential for AI in intelligent decision support

As AI-powered fact-checkers become more common in newsrooms and on social media platforms, the question is no longer whether we should use them, but how. A new study by researchers at BIFOLD and the Karlsruhe Institute of Technology (KIT) reveals that the key to trustworthy AI-supported fact-checking may lie not just in what these systems say, but in how they explain themselves.

© BIFOLD
BIFOLD Update | July 25, 2025

Researcher Spotlight: Dr. Arnab Phani

Dr. Arnab Phani is a postdoctoral researcher at BIFOLD, where he addresses data management challenges in modern AI. Building directly on his foundational PhD work at the DAMS Lab, his current research at the DEEM Lab focuses on enhancing runtime efficiency and fostering responsible data management practices across the entire machine learning pipeline, from data cleaning and validation to training and inference.

© BIFOLD
Machine Learning | July 22, 2025

XAI 2025: Best Paper Award

Congratulations to BIFOLD researchers Simon Letzgus, Klaus-Robert Müller, and Grégoire Montavon, whose publication "XpertAI: uncovering regression model strategies for sub-manifolds" won the Best Paper Award at the 3rd World Conference on eXplainable Artificial Intelligence. BIFOLD researchers contributed on several levels to this leading conference on XAI.

© S.Kurfeß/unsplash
Machine Learning | July 17, 2025

Even the smallest number can make a big difference

Minor deviations in backend libraries such as CUDA or MKL can cause identical AI models to produce different outputs. At ICML 2025, BIFOLD researcher Konrad Rieck showed how such subtle imprecisions can be exploited, posing a significant risk to the security of AI systems.
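
Why do such tiny numerical differences matter? Floating-point addition is not associative, so two backends that reduce the same numbers in different orders can return slightly different results; near a decision boundary, that residual can flip a model's output. The short Python sketch below (an illustration, not code from the paper) demonstrates the effect with NumPy.

import numpy as np

# Floating-point addition is not associative: the order of a reduction
# changes the rounding, and thus (slightly) the result.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

s_forward = np.sum(x)        # one summation order
s_reverse = np.sum(x[::-1])  # same values, reversed order
print(s_forward == s_reverse)                    # often False
print(abs(float(s_forward) - float(s_reverse)))  # small nonzero residual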