Lorenz Linhardt

Technische Universität Berlin
Explaining Deep Neural Networks

Secretariat MAR 4-1, Marchstraße 23, 10587 Berlin
https://www.tu.berlin/en/ml

Lorenz Linhardt

Doctoral Researcher

Lorenz Linhardt is a research associate in the Machine Learning Group at TU Berlin. He received an M.Sc. in Computer Science from ETH Zurich in 2019. His research interests include:

  • Explainable machine learning
  • Machine learning for medical applications
  • Neural architecture search
  • Representation learning

Publications

Lorenz Linhardt, Marco Morik, Sidney Bender, Naima Elosegui Borras

An Analysis of Human Alignment of Latent Diffusion Models

March 13, 2024
https://doi.org/10.48550/arXiv.2403.08469

Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine Hermann, Andrew K. Lampinen, Simon Kornblith

Improving neural network representations using human similarity judgments

December 10, 2023
https://arxiv.org/pdf/2306.04507.pdf

Lukas Muttenthaler, Jonas Dippel, Lorenz Linhardt, Robert A. Vandermeulen, Simon Kornblith

Human alignment of neural network representations

October 18, 2022
https://openreview.net/forum?id=b2DmQYY-XY

News
Explainable AI | Feb 23, 2024

Call for XAI-Papers!

Two research groups associated with BIFOLD are taking part in organizing the 2nd World Conference on Explainable Artificial Intelligence. Each group is hosting a special track and has already published a Call for Papers. Researchers are encouraged to submit their papers by March 5, 2024.

News
Machine Learning | Mar 13, 2023

Do computers and humans "see" alike?

Computer vision has long since moved beyond pure research and is now used in countless everyday applications, such as recognizing objects and measuring their geometric structure. A question that is rarely, if ever, asked is: To what extent do computer vision systems see the world in the same way humans do?