Explaining Deep Neural Networks


Prof. Dr. Grégoire Montavon


Dept. of Mathematics and Computer Science, Institute for Computer Science
Arnimallee 7, 14195 Berlin

Explainable AI, Machine Learning, Data Science


The Junior Research Group of Prof. Dr. Grégoire Montavon advances the foundations and algorithms of explainable AI (XAI) in the context of deep neural networks. One particular focus is on closing the gap between existing XAI methods and practical desiderata. Examples include using XAI to build more trustworthy and autonomous machine learning models, and using XAI to model the behavior of complex real-world systems so that the latter become meaningfully actionable. In future research, the team will explore: (1) how to use XAI to assess on which data a deep neural network model can be trusted to perform autonomously and where it requires human intervention, and (2) how to use XAI in combination with a deep neural network to model complex real-world systems and identify actionable components. Grégoire Montavon and his team will collaborate with the members of BIFOLD’s “Explainable Artificial Intelligence Lab” (XAI-Lab).

Wojciech Samek, Gregoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

March 04, 2021

Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Gregoire Montavon

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

November 01, 2022

Jacob Kauffmann, Malte Esders, Lukas Ruff, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

From Clustering to Cluster Explanations via Neural Networks

July 07, 2022

Philipp Keyl, Michael Bockmayr, Daniel Heim, Gabriel Dernbach, Gregoire Montavon, Klaus-Robert Müller, Frederick Klauschen

Patient-level proteomic network prediction by explainable artificial intelligence

June 07, 2022

Hassan El-Hajj, Maryam Zamani, Jochen Büttner, Julius Martinetz, Oliver Eberle, Noga Shlomi, Anna Siebold, Grégoire Montavon, Klaus-Robert Müller, Holger Kantz & Matteo Valleriani

An Ever-Expanding Humanities Knowledge Graph: The Sphaera Corpus at the Intersection of Humanities, Data Management, and Machine Learning

May 16, 2022

© BIFOLD/Michael Setzpfandt
October 11, 2023

Photo recap: All Hands Meeting 2023

On October 9 and 10, 2023, BIFOLD welcomed the other German AI centers (ScaDS.AI Dresden/Leipzig, Lamarr Institute, Tübingen AI Center, MCML, and the DFKI) in Berlin. The annual meeting featured guests, partners, visitors, and researchers from all over Germany.

© BIFOLD/Michael Setzpfandt
October 10, 2023

AI centers are the foundation of the German AI ecosystem

On October 9th and 10th, 2023, the Berlin Institute for the Foundations of Learning and Data (BIFOLD) at TU Berlin invited scientists from the university AI competence centers (BIFOLD, ScaDS.AI Dresden/Leipzig, Lamarr Institute, Tübingen AI Center, and MCML) and the DFKI to Berlin to present and discuss the latest results of their research on the EUREF campus.

© Grégoire Montavon
Dr. Grégoire Montavon with the 2020 Pattern Recognition Best Paper Award in hand.
February 23, 2021

2020 Pattern Recognition Best Paper Award

A team of scientists from TU Berlin, Fraunhofer Heinrich Hertz Institute (HHI) and University of Oslo has jointly received the 2020 “Pattern Recognition Best Paper Award” and “Pattern Recognition Medal” of the international scientific journal Pattern Recognition. The award committee honored the publication “Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition” by Dr. Grégoire Montavon and Prof. Dr. Klaus-Robert Müller from TU Berlin, Prof. Dr. Alexander Binder from University of Oslo, as well as Dr. Wojciech Samek and Dr. Sebastian Lapuschkin from HHI.

Prof. Dr. Grégoire Montavon

Research Group Lead

Florian Bley

Doctoral Researcher

Gabriel Dernbach

Doctoral Researcher

Julius Hense

Doctoral Researcher

Lorenz Linhardt

Doctoral Researcher

Philip Naumann

Doctoral Researcher