
Dr. Marina Marie-Claire Höhne (née Vidovic)


Technische Universität Berlin
Machine Learning Department, UMI lab

Marchstr. 23, Room: MAR-4.012, D-10587 Berlin


Junior Fellow | BIFOLD

Head of Junior Research Group | UMI lab (Understandable Machine Intelligence), Technische Universität Berlin

Marina M.-C. Höhne (née Vidovic) received the Master’s degree in Technomathematics in 2012. From 2012 to 2014 she worked as a researcher at Ottobock in Vienna, Austria, on time series data and domain adaptation for controlling prosthetic devices. In 2014 she started her PhD on explainable AI and received the Dr. rer. nat. degree summa cum laude from TU Berlin in 2017. From 2017 to 2018 she took one year of maternity leave, then continued at the machine learning chair of TU Berlin as a postdoctoral researcher in 2018. From 2018 to 2020 she taught seminars and lectures in machine learning, supervised bachelor’s, master’s, and PhD students, and continued her research on explainable AI and domain adaptation. In 2020 she founded her own junior research group – the Understandable Machine Intelligence (UMI) lab – in the area of explainable AI at TU Berlin, funded by the German Federal Ministry of Education and Research.

Furthermore, she received the best paper award at the 2016 workshop on explainable AI for complex systems. She is a reviewer for NeurIPS and ICML, and she serves as a reviewer for the German Federal Ministry of Education and Research (BMBF). In 2021 she joined the Berlin Institute for the Foundations of Learning and Data (BIFOLD) as a Junior Fellow.

2017 Dr. rer. nat.: summa cum laude
2016 Best paper award at the NeurIPS Workshop on AI explainability for complex systems

  • Explainable Artificial Intelligence (XAI) – local & global
  • Robustness of Neural Networks and XAI Methods
  • Domain Adaptation
  • Representation Learning
  • Applications: EMG, EEG, Brain Computer Interfaces, Computer Vision, Bioinformatics, Digital Pathology, Climate Data Analysis

  • ProFil

Dilyara Bareeva, Marina M.-C. Höhne, Alexander Warnecke, Lukas Pirch, Klaus-Robert Müller, Konrad Rieck, Kirill Bykov

Manipulating Feature Visualizations with Gradient Slingshots

January 11, 2024
https://doi.org/10.48550/arXiv.2401.06122

Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne

Labeling Neural Representations with Inverse Recognition

November 22, 2023
https://doi.org/10.48550/arXiv.2311.13594

Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne

Visualizing the Diversity of Representations Learned by Bayesian Neural Networks

November 10, 2023
https://openreview.net/pdf?id=ZSxvyWrX6k

Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne

DORA: Exploring Outlier Representations in Deep Neural Networks

July 10, 2023
https://doi.org/10.48550/arXiv.2206.04530

News
Machine Learning | Feb 17, 2022

Shining a light into the Black Box of AI Systems

In the paper “NoiseGrad — Enhancing Explanations by Introducing Stochasticity to Model Weights,” presented at the 36th AAAI-22 Conference on Artificial Intelligence, a team of researchers, among them BIFOLD researchers Dr. Marina Höhne, Shinichi Nakajima, PhD, and Kirill Bykov, propose a new method that reduces the visual diffusion of explanations and has been shown to make existing explanation methods more robust and reliable.

News
Machine Learning | Nov 10, 2021

Intelligent machines also need control

Dr. Marina Höhne, BIFOLD Junior Fellow, was awarded two million euros in funding by the German Federal Ministry of Education and Research (BMBF) to establish a research group working on explainable artificial intelligence.