Explainable AI (XAI), Trustworthy and autonomous machine learning models, Deep neural networks
The Junior Research Group of Prof. Dr. Grégoire Montavon advances the foundations and algorithms of explainable AI (XAI) in the context of deep neural networks. One particular focus is closing the gap between existing XAI methods and practical desiderata, for example, using XAI to build more trustworthy and autonomous machine learning models, or to model the behavior of complex real-world systems so that these systems become meaningfully actionable. In future research, the team will explore: (1) how XAI can be used to assess on which data a deep neural network can be trusted to perform autonomously and where it requires human intervention, and (2) how XAI can be used in combination with a deep neural network to model complex real-world systems and identify actionable components. Grégoire Montavon and his team will collaborate with the members of BIFOLD’s “Explainable Artificial Intelligence Lab” (XAI-Lab).
Pattarawat Chormai, Jan Herrmann, Klaus-Robert Müller, Grégoire Montavon
Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Philipp Keyl, Philip Bischoff, Gabriel Dernbach, Michael Bockmayr, Rebecca Fritz, David Horst, Nils Blüthgen, Grégoire Montavon, Klaus-Robert Müller, Frederick Klauschen
Single-cell gene regulatory network prediction by explainable AI
Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek
Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations
Wojciech Samek, Leila Arras, Ahmed Osman, Grégoire Montavon, Klaus-Robert Müller
Explaining the Decisions of Convolutional and Recurrent Neural Networks. In: Mathematical Aspects of Deep Learning
Philipp Keyl, Michael Bockmayr, Daniel Heim, Gabriel Dernbach, Grégoire Montavon, Klaus-Robert Müller, Frederick Klauschen