
Dr. Thorsten Eisenhofer

Publications

Felix Weissberg, Lukas Pirch, Erik Imgrund, Jonas Möller, Thorsten Eisenhofer, Konrad Rieck

LLM-based Vulnerability Discovery through the Lens of Code Metrics

September 23, 2025
https://doi.org/10.48550/arXiv.2509.19117

Jonas Möller, Lukas Pirch, Felix Weissberg, Sebastian Baunsgaard, Thorsten Eisenhofer, Konrad Rieck

Adversarial Inputs for Linear Algebra Backends

July 13, 2025
https://www.mlsec.org/docs/2025-icml.pdf

David Beste, Grégoire Menguy, Hossein Hajipour, Mario Fritz, Antonio Emanuele Cinà, Sébastien Bardin, Thorsten Holz, Thorsten Eisenhofer, Lea Schönherr

Exploring the Potential of LLMs for Code Deobfuscation

July 10, 2025
https://doi.org/10.1007/978-3-031-97620-9_15

Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot

Learned-Database Systems Security

July 02, 2025
https://doi.org/10.48550/arXiv.2212.10318

News
BIFOLD Update | Oct 16, 2025

ACM CCS 2025: Distinguished Paper Award

Congratulations to BIFOLD researchers Erik Imgrund, Thorsten Eisenhofer and Konrad Rieck from the ML Sec group, whose paper “Exposing Security Risks in AI Weather Forecasting” received a Distinguished Paper Award at the ACM Conference on Computer and Communications Security (CCS) 2025.

News
Cyber Security | Sep 16, 2025

Attacking privacy leaks in virtual backgrounds

Peeking through the virtual curtain: A new study by the BIFOLD MLSEC group reveals that current virtual backgrounds in video calls can leak enough pixels from the surrounding environment to reconstruct objects in the real background.

News
BIFOLD Update | Apr 09, 2025

IEEE SaTML 2025 Conference Contribution

Dr. Thorsten Eisenhofer will present the paper “Verifiable and Provably Secure Machine Unlearning” at SaTML 2025. Eisenhofer is a postdoctoral researcher in the research group “Machine Learning and Security”. His paper introduces a new framework designed to verify that user data has been correctly deleted from machine learning models, supported by cryptographic proofs.