Explaining Deep Neural Networks
Lead: Prof. Dr. Grégoire Montavon
Department of Mathematics and Computer Science, Institute of Computer Science
Arnimallee 7, 14195 Berlin
Research areas: Explainable AI, Machine Learning, Data Science
The Junior Research Group of Prof. Dr. Grégoire Montavon advances the foundations and algorithms of explainable AI (XAI) in the context of deep neural networks. One particular focus is on closing the gap between existing XAI methods and practical desiderata. Examples include using XAI to build more trustworthy and autonomous machine learning models, and using XAI to model the behavior of complex real-world systems so that they become meaningfully actionable. In future research, the team will explore (1) how to use XAI to assess on which data a deep neural network can be trusted to operate autonomously and on which it requires human intervention, and (2) how to use XAI in combination with a deep neural network to model complex real-world systems and identify their actionable components. Grégoire Montavon and his team will collaborate with the members of BIFOLD's "Explainable Artificial Intelligence Lab" (XAI-Lab).
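
To make concrete what an explanation of a deep neural network looks like, the sketch below implements a minimal version of layer-wise relevance propagation (LRP), the attribution technique underlying several of the publications listed below (Deep Taylor Decomposition provides its theoretical grounding). The two-layer network, its random weights, and the epsilon-rule parameters are illustrative assumptions for this sketch, not the group's reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-layer ReLU network. The weights are random placeholders that
    # stand in for a trained model; biases are zero so the LRP rule below
    # can ignore them.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    def forward(x):
        """Forward pass, keeping the hidden activation for the backward pass."""
        h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
        y = W2 @ h + b2                    # linear output layer
        return h, y

    def lrp_epsilon(a, W, relevance_out, eps=1e-6):
        """LRP epsilon rule: redistribute the relevance of a layer's outputs
        onto its inputs in proportion to the contributions a_j * w_kj."""
        z = W @ a                                   # pre-activations
        s = relevance_out / (z + eps * np.sign(z))  # stabilized ratios
        return a * (W.T @ s)                        # relevance per input unit

    x = rng.normal(size=4)
    h, y = forward(x)

    # Relevance starts at the score of the predicted class ...
    R_y = np.zeros_like(y)
    R_y[y.argmax()] = y[y.argmax()]

    # ... and is propagated layer by layer back to the input features.
    R_h = lrp_epsilon(h, W2, R_y)
    R_x = lrp_epsilon(x, W1, R_h)

    print("input relevance scores:", R_x)
    print("conservation check:", R_y.sum(), "~=", R_x.sum())

The printed relevance scores form a per-feature heatmap for the predicted class, and the conservation check illustrates a key property of LRP: up to the epsilon stabilizer, the total relevance at the input equals the output score being explained.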
Selected Publications

Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon
Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Jacob Kauffmann, Malte Esders, Lukas Ruff, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
From Clustering to Cluster Explanations via Neural Networks
Philipp Keyl, Michael Bockmayr, Daniel Heim, Gabriel Dernbach, Grégoire Montavon, Klaus-Robert Müller, Frederick Klauschen
Patient-Level Proteomic Network Prediction by Explainable Artificial Intelligence
Hassan El-Hajj, Maryam Zamani, Jochen Büttner, Julius Martinetz, Oliver Eberle, Noga Shlomi, Anna Siebold, Grégoire Montavon, Klaus-Robert Müller, Holger Kantz, Matteo Valleriani
An Ever-Expanding Humanities Knowledge Graph: The Sphaera Corpus at the Intersection of Humanities, Data Management, and Machine Learning

Photo recap: All Hands Meeting 2023
On October 9 and 10, 2023, BIFOLD welcomed the other German AI centers (ScaDS.AI Dresden/Leipzig, Lamarr Institute, Tübingen AI Center, MCML, and the DFKI) in Berlin. The annual meeting featured guests, partners, visitors, and researchers from all over Germany.

AI centers are the foundation of the German AI ecosystem
On October 9 and 10, 2023, the Berlin Institute for the Foundations of Learning and Data (BIFOLD) at TU Berlin invited scientists from the university AI competence centers (BIFOLD, ScaDS.AI Dresden/Leipzig, Lamarr Institute, Tübingen AI Center, and MCML) and the DFKI to the EUREF campus in Berlin to present and discuss the latest results of their research.

2020 Pattern Recognition Best Paper Award
A team of scientists from TU Berlin, the Fraunhofer Heinrich Hertz Institute (HHI), and the University of Oslo has jointly received the 2020 "Pattern Recognition Best Paper Award" and the "Pattern Recognition Medal" of the international scientific journal Pattern Recognition. The award committee honored the publication "Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition" by Dr. Grégoire Montavon and Prof. Dr. Klaus-Robert Müller from TU Berlin, Prof. Dr. Alexander Binder from the University of Oslo, as well as Dr. Wojciech Samek and Dr. Sebastian Lapuschkin from HHI.