
Making the use of AI systems safe

BIFOLD Fellow Dr. Wojciech Samek and Luis Oala (Fraunhofer Heinrich Hertz Institute), together with Jan Macdonald and Maximilian März (TU Berlin), were honored with the award for “best scientific contribution” at this year’s BVM medical imaging conference. Their paper “Interval Neural Networks as Instability Detectors for Image Reconstructions” demonstrates how uncertainty quantification can be used to detect errors in deep learning models.

The award winners were announced during the virtual BVM (Bildverarbeitung für die Medizin) conference on March 9, 2021. The award for “best scientific contribution” is granted each year by the BVM Award Committee. It honors innovative research with a methodological focus on medical image processing in a medically relevant application context.

The interdisciplinary group of researchers investigated the detection of instabilities that may occur when deep learning models are used for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their use in sensitive medical applications remains controversial. A limited understanding of an AI system’s behavior creates a risk of system failure. Identifying the failure modes of AI systems is therefore an important prerequisite for their reliable deployment in medicine.

In a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates on two use cases how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. This is an important contribution to making the use of AI systems safer and more reliable.
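To give a flavor of the underlying idea, the following minimal sketch (in NumPy) shows how interval arithmetic can propagate uncertainty through a single toy layer and how unusually wide output intervals can be flagged. It is an illustration only: the function names, the fixed interval half-width eps, and the thresholding rule are assumptions made here for clarity, not the construction used in the award-winning paper, where intervals are attached to the parameters of a trained reconstruction network.

```python
import numpy as np

def interval_linear(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi):
    """Propagate an input interval through a linear layer whose weights and
    biases are themselves intervals, using standard interval arithmetic."""
    # The product of two intervals is bounded by the extrema over the four
    # endpoint products; sum these elementwise bounds over the input dimension.
    prods = np.stack([W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi])
    y_lo = prods.min(axis=0).sum(axis=1) + b_lo
    y_hi = prods.max(axis=0).sum(axis=1) + b_hi
    return y_lo, y_hi

def interval_relu(y_lo, y_hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

# Toy "network": one layer with hypothetical weight intervals of half-width
# eps around the nominal weights (all values here are illustrative).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
b = np.zeros(8)
eps = 0.05

x = rng.normal(size=4)  # stand-in for a network input
y_lo, y_hi = interval_relu(*interval_linear(x, x, W - eps, W + eps, b - eps, b + eps))

# The interval width acts as a per-output uncertainty score; unusually wide
# intervals are flagged as potentially unstable (threshold is illustrative).
uncertainty = y_hi - y_lo
flag = uncertainty > uncertainty.mean() + 2.0 * uncertainty.std()
print("uncertainty per output:", np.round(uncertainty, 3))
print("flagged as potentially unstable:", flag)
```

Roughly speaking, in the paper’s image reconstruction setting such interval widths form an uncertainty map over the reconstructed image and are compared against the actual reconstruction error; the toy threshold above merely illustrates how wide intervals can serve as a warning signal for instabilities.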

THE PAPER IN DETAIL:

Authors:
Jan Macdonald, Maximilian März, Luis Oala, Wojciech Samek

Abstract:
This work investigates the detection of instabilities that may occur when utilizing deep learning models for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their usage for sensitive medical applications remains controversial. Indeed, in a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. Such an ability is crucial to ensure a safe use of deep learning-based methods for medical image reconstruction.

Publication:
In: Bildverarbeitung für die Medizin 2021. Informatik aktuell. Springer Vieweg, Wiesbaden.
https://doi.org/10.1007/978-3-658-33198-6_79