Lorenz Linhardt
Doctoral Researcher
Lorenz Linhardt is a research associate in the Machine Learning Group at TU Berlin. He received an M.Sc. in Computer Science from ETH Zurich in 2019.
- Explainable Machine Learning
- Machine Learning for medical applications
- Neural architecture search
- Representation learning
Theo Chow, Mario D'Onghia, Lorenz Linhardt, Zeliang Kan, Daniel Arp, Lorenzo Cavallaro, Fabio Pierazzi
Beyond the TESSERACT: Trustworthy Dataset Curation for Sound Evaluations of Android Malware Classifiers
Theo Chow, Mario D'Onghia, Lorenz Linhardt, Zeliang Kan, Daniel Arp, Lorenzo Cavallaro, Fabio Pierazzi
Breaking Out from the Tesseract: Reassessing ML-based Malware Detection under Spatio-Temporal Drift
Laure Ciernik, Lorenz Linhardt, Marco Morik, Jonas Dippel, Simon Kornblith, Lukas Muttenthaler
Objective drives the consistency of representational similarity across datasets
Jonas Loos, Lorenz Linhardt
Latent Diffusion U-Net Representations Contain Positional Embeddings and Anomalies
Lorenz Linhardt, Tom Neuhäuser, Lenka Tětková, Oliver Eberle
Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments
SaTML 2026 Conference Contributions
BIFOLD supports this year's IEEE SaTML, held from March 23 to 25 at the Technical University of Munich.
Publication Highlight – Pruning Clever-Hans strategies
Hidden Clever-Hans effects can undermine the reliability of AI models. The paper “Preemptively pruning Clever-Hans strategies in deep neural networks” introduces a method that corrects biases in neural networks without prior knowledge of faulty features.
BIFOLD researchers present three papers at ICLR 2024
The International Conference on Learning Representations (ICLR) is a CORE A-ranked conference dedicated to advancing representation learning, a branch of artificial intelligence also known as deep learning.
Call for XAI-Papers!
Two research groups associated with BIFOLD are involved in organizing the 2nd World Conference on Explainable Artificial Intelligence. Each group is hosting a special track and has already published a Call for Papers. Researchers are encouraged to submit their papers by March 5, 2024.
Do computers and humans "see" alike?
The field of computer vision has long since moved beyond pure research and is now used in countless everyday applications, such as object recognition and the measurement of geometric structures of objects. One question that is rarely asked is: to what extent do computer vision systems see the world the same way humans do?