AQTIVATE Workshop on Machine Learning

Advanced Computing, Quantum Algorithms, and Data-driven Approaches for Science, Technology, and Engineering (AQTIVATE) is an interdisciplinary training program in high-performance computing, scalable algorithms, machine learning, and quantum computing, applied to research projects from physics, engineering, and biology.

The BIFOLD-AQTIVATE workshop will focus on machine learning and is structured in two parts: Part I: Basic Machine Learning, from February 12th to 26th, 2024, and Part II: Machine Learning for Physics/Chemistry, from February 27th to March 1st, 2024.

Part I, Basic Machine Learning: This part, two weeks plus one day, covers the basics of Machine Learning, including Python coding, classical methods, Bayesian learning, unsupervised/supervised learning, kernel methods, (convolutional/recurrent) neural networks with tricks for training them, explainable AI, and generative modeling. Each day pairs a lecture with an exercise session on one topic, and two week-long group exercises are provided. This part suits researchers with little background in Machine Learning who plan to use, or would like to understand, Machine Learning methods to enhance their research.

Part II, Machine Learning for Physics/Chemistry: This four-day part consists of thirteen research talks and two group exercises on Machine Learning for physics. The research talks cover molecular analysis, optimization, and generation; diffusion models for turbulent flows; Machine Learning for partial differential equation solvers; AI in fluid dynamics; generative modeling for Monte Carlo simulation; Bayesian optimization for quantum computing; and XAI for graphs. This part suits researchers interested in state-of-the-art Machine Learning approaches for physics and chemistry applications.

Organizers: BIFOLD and the Machine Learning Group at TU Berlin, chaired by Prof. Dr. Klaus-Robert Müller

Date: February 12th to March 1st, 2024
Location: Technische Universität Berlin, MAR building, Marchstr. 23, Room MAR 4.033, 10587 Berlin

Agenda

Week 1

12.02.2024 (Mon)
10:00-12:00  Lecture: Introduction to ML (Klaus-Robert Müller, Thomas Schnake)
12:00-14:00  Lunch
14:00-16:00  Exercise: Introduction to ML (Klaus-Robert Müller, Thomas Schnake)
18:00        Reception

13.02.2024 (Tue)
10:00-12:00  Lecture: Python introduction (Jannik Wolff)
12:00-14:00  Lunch
14:00-16:00  Exercise: Python introduction (Jannik Wolff)
16:00-17:00  Group Exercise 1 (Florian Bley)

14.02.2024 (Wed)
10:00-12:00  Lecture: Classical methods (Julius Hense)
12:00-14:00  Lunch
14:00-16:00  Exercise: Classical methods (Julius Hense)
16:00-17:00  Group Exercise 1 (self-supervised)

15.02.2024 (Thu)
10:00-12:00  Lecture: Bayesian methods (Sergej Dogadov)
12:00-14:00  Lunch
14:00-16:00  Exercise: Bayesian methods (Sergej Dogadov)
16:00-17:00  Group Exercise 1 (self-supervised)

16.02.2024 (Fri)
10:00-12:00  Lecture: Unsupervised Learning (Malte Esders)
12:00-14:00  Lunch
14:00-16:00  Exercise: Unsupervised Learning (Malte Esders)
16:00-17:00  Group Exercise 1 (presentation)

Week 2

19.02.2024 (Mon)
10:00-12:00  Lecture: Kernel methods (Stefan Blücher)
12:00-14:00  Lunch
14:00-16:00  Exercise: Kernel methods (Stefan Blücher)

20.02.2024 (Tue)
10:00-12:00  Lecture: Neural networks introduction (Marco Morik)
12:00-14:00  Lunch
14:00-16:00  Exercise: Neural networks introduction (Marco Morik)
16:00-17:00  Group Exercise 2 (Pattarawat Chormai)

21.02.2024 (Wed)
10:00-12:00  Lecture: Neural networks tricks (Sidney Bender)
12:00-14:00  Lunch
14:00-16:00  Exercise: Neural networks tricks (Sidney Bender)
16:00-17:00  Group Exercise 2 (self-supervised)

22.02.2024 (Thu)
10:00-12:00  Lecture: Convolutional neural networks (Saeed Salehi)
12:00-14:00  Lunch
14:00-16:00  Exercise: Convolutional neural networks (Saeed Salehi)
16:00-17:00  Group Exercise 2 (self-supervised)

23.02.2024 (Fri)
10:00-12:00  Lecture: Recurrent neural networks (Farnoush Rezaei Jafari)
12:00-14:00  Lunch
14:00-16:00  Exercise: Recurrent neural networks (Farnoush Rezaei Jafari)
16:00-17:00  Group Exercise 2 (presentation)

Week 3

26.02.2024 (Mon)
10:00-12:00  Lecture: XAI (Lorenz Linhardt)
12:00-14:00  Lunch
14:00-16:00  Lecture: Generative models (Khaled Kahouli)
16:00-17:00  Group Exercise 3 (Adrian Hill)

27.02.2024 (Tue)
10:00-12:00  Research Talk 1 (Stefaan Hessmann); Group Exercise 3 (self-supervised)
12:00-14:00  Lunch
14:00-16:00  Research Talk 2 (Niklas Gebauer); Group Exercise 4 (Jonas Lederer)
16:00-17:00  Group Exercise 4 (self-supervised)

28.02.2024 (Wed)
10:00-12:00  Group Exercises 3 and 4 (self-supervised)
12:00-14:00  Lunch
14:00-16:00  Research Talk 3 (Tianyi Li); Research Talk 4 (Kiwon Um)
16:00-17:00  Research Talk 5 (Cenk Tüysüz)
18:00        Banquet Dinner

29.02.2024 (Thu)
10:00-12:00  Group Exercises 3 and 4 (presentation)
12:00-14:00  Lunch
14:00-16:00  Research Talk 6 (Jacob Finkenrath); Research Talk 7 (Andreas Demou)
16:00-17:00  Research Talk 8 (Thomas Schnake)

01.03.2024 (Fri)
10:00-12:00  Research Talk 9 (Kim Nicoli); Research Talk 10 (Ankur Singha)
12:00-14:00  Lunch
14:00-16:00  Research Talk 11 (Lorenz Vaitl); Research Talk 12 (Elia Cellini)
16:00-17:00  Research Talk 13 (Shinichi Nakajima)

Research Talks Details

Research talk 1: Stefaan Hessmann, TU Berlin
Title: Crystal structure search accelerated by neural network force fields
Abstract: Crystalline materials exhibit a wide variety of physical properties that are determined by their three-dimensional atomic configuration. Because of this diversity and the nearly infinite number of physically meaningful structures, the discovery of novel stable crystal structures is a critical undertaking in materials science, with global optimization being the most fundamental task. Accurate quantum chemistry computations are indispensable for this task, but due to their high computational costs, the exploration of complex systems is often not feasible. To address this challenge, a large number of computationally less demanding methods have been developed.
In my talk, I will give a brief introduction to neural network force fields and how they can be applied within an active learning scheme for finding low-energy structures while reducing the necessity for expensive quantum chemistry computations.
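The core loop of such an active learning scheme fits in a few lines. The sketch below is our illustration (not the speaker's code), with an ensemble of small regressors standing in for neural network force fields and a cheap analytic function standing in for the expensive reference method:

```python
# Toy ensemble-based active learning loop on a 1D "energy surface".
# reference_energy stands in for an expensive method such as DFT.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
reference_energy = lambda x: np.sin(3 * x) + 0.5 * x**2

X = rng.uniform(-2, 2, size=(5, 1))            # small initial training set
y = reference_energy(X).ravel()

for round_ in range(5):
    # Train an ensemble; its disagreement serves as an uncertainty estimate.
    ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=s).fit(X, y) for s in range(4)]
    candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
    preds = np.stack([m.predict(candidates) for m in ensemble])
    # Query the reference method only where the models disagree most,
    # then grow the training set with the new label.
    x_new = candidates[preds.std(axis=0).argmax()].reshape(1, 1)
    X = np.vstack([X, x_new])
    y = np.append(y, reference_energy(x_new).ravel())
print(f"reference calls used: {len(y)}")
```

In the full setting, the candidate structures come from relaxation runs driven by the force fields themselves, which is what makes the reduction in reference calls valuable.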

Research talk 2: Niklas Gebauer, TU Berlin
Title: Inverse design of 3d molecular structures with conditional generative neural networks
Abstract: The exploration of novel molecules and materials is a driving force in essential fields such as medicine and renewable energies. However, finding the most suitable compounds for specific applications is extremely difficult as the space of possible solutions is vast and complex. Machine learning has the potential to tremendously speed up this exploration by sampling promising candidate structures with generative models.
In this talk, I introduce the problem setting and existing approaches, focusing on cG-SchNet, a conditional generative machine learning model developed in our group that allows sampling 3d molecules with desirable properties.

Research talk 3: Tianyi Li, University of Rome
Title: Advancing Turbulence Research with Generative Diffusion Models: From Synthetic Lagrangian Trajectories to Multi-scale Flow Reconstruction
Abstract: This talk explores the application of generative diffusion models (DMs) to complex turbulence challenges in physics, focusing on two relevant studies. The first, "Synthetic Lagrangian Turbulence by Generative Diffusion Models", introduces a machine learning approach that employs DMs to generate three-dimensional turbulence trajectories. These models are particularly adept at accurately capturing multiscale turbulence dynamics, spanning from large-scale forcing to nuanced behaviors at the inertial and dissipative scales, thus enhancing our understanding and modeling capabilities in Lagrangian turbulence.

The second study, "Multi-scale Reconstruction of Turbulent Rotating Flows with Generative Diffusion Models", shifts the focus to the use of DMs in geophysical contexts, particularly to enhancing data reconstruction for rotating turbulence. It includes a comparative analysis of DMs, such as RePaint and Palette, against generative adversarial networks (GANs) in reconstructing spatially incomplete flow fields. The results highlight the potential of DMs to offer greater mean-squared-error reduction and a more faithful statistical representation, suggesting the Palette DM as a notable tool in this context.

Overall, the presentation aims to shed light on how DMs can contribute to a deeper understanding and enhanced data augmentation capabilities in turbulence research, underlining their role in advancing physical studies and engineering applications.
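For readers unfamiliar with DMs, the training objective they share can be sketched generically (this is a standard DDPM-style loss, our illustration rather than the models from the talk): noise the data to a random step of the forward process in closed form, and train a network to predict the injected noise.

```python
# Generic DDPM-style training step: the network learns to predict the
# noise that was mixed into a clean sample at a random diffusion step.
import torch

def ddpm_loss(model, x0, alphas_cumprod):
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],))
    a = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps   # closed-form forward noising
    return torch.nn.functional.mse_loss(model(x_t, t), eps)

class TinyNet(torch.nn.Module):
    """Placeholder denoiser with a crude scalar time embedding."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
    def forward(self, x, t):
        return self.net(torch.cat([x, t.float().unsqueeze(1) / 100], dim=1))

T = 100
alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
loss = ddpm_loss(TinyNet(), torch.randn(8, 2), alphas_cumprod)
loss.backward()
```

Sampling then runs the learned reverse process step by step; conditioning on partially observed fields, as in the reconstruction study, amounts to steering that reverse process.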

Research talk 4: Kiwon Um, Telecom Paris
Title: Automatic Differentiation and Machine Learning for Numerical Simulations
Abstract: Numerical simulation of partial differential equations (PDEs) is a central tool for solving many scientific and engineering problems. To achieve accurate solutions, numerical simulation typically involves computation-intensive procedures; consequently, developing numerical methods that are both accurate and efficient has been a long-standing challenge. Recently, machine learning techniques have demonstrated a great capacity to improve conventional numerical solvers across a variety of PDE problems. In this talk, the speaker will first introduce a fundamental element of machine learning algorithms, namely automatic differentiation, and then discuss a novel machine learning approach that adopts a differentiable physics framework built on this functionality. The framework allows trainable models to interact with PDE solvers during learning, so that the models can learn better, particularly for recurrent learning tasks. Experiments will be presented demonstrating that the proposed approach can address the limitations of conventional PDE solvers. Aiming to reduce the numerical errors of given iterative PDE solvers, different learning approaches will be discussed and compared.
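The key mechanism, gradients flowing through an unrolled solver into a learned correction model, can be illustrated with a toy differentiable heat-equation step (our sketch, not the talk's framework):

```python
# A learned correction trained "in the loop" of a differentiable solver.
import torch

def heat_step(u, dt=0.1):
    # Explicit finite-difference diffusion update; fully differentiable.
    lap = torch.roll(u, 1) - 2 * u + torch.roll(u, -1)
    return u + dt * lap

torch.manual_seed(0)
correction = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))
opt = torch.optim.Adam(correction.parameters(), lr=1e-3)

u0 = torch.sin(torch.linspace(0, 6.283, 64))
target = heat_step(heat_step(heat_step(u0)))     # stand-in reference solution

u = u0
for _ in range(3):                               # unrolled "coarse" solver
    u = heat_step(u, dt=0.09) + 0.01 * correction(u)
loss = ((u - target) ** 2).mean()
loss.backward()        # autodiff carries gradients through all solver steps
opt.step()
```

Because the model sees its own outputs fed back through the solver during training, it learns to compensate for errors that accumulate over rollouts, which purely supervised one-step training struggles to achieve.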

Research talk 5: Cenk Tüysüz, DESY Berlin
Title: Introduction to Quantum Machine Learning
Abstract: Quantum computers offer a computational speed-up for certain problems, from prime number factorization to solving linear systems of equations. Motivated by these advances, the field of quantum machine learning emerged as a promising direction. In this talk, I will be introducing fundamental concepts in quantum machine learning and giving a beginner-friendly overview of the current research landscape.

Research talk 6: Jacob Finkenrath, Bergische Universität Wuppertal
Title: Localized machine-learned flow maps to accelerate Markov Chain Monte Carlo simulations
Abstract: State-of-the-art simulations of discrete gauge theories are based on Markov chains with local changes in field space, which, however, become notoriously difficult at very fine lattice spacings due to the separated topological sectors of the gauge field. Hybrid Monte Carlo (HMC) algorithms, which are very efficient at coarser lattice spacings, suffer from increasing autocorrelation times.

One approach that can overcome long autocorrelation times is based on trivializing maps, where a new gauge proposal is generated by mapping a configuration from a trivial space to the target one, distributed according to the associated Boltzmann factor. Generative models are known that approximate such maps.

In the talk, we will discuss applications to lower-dimensional models and strategies for utilizing such flow maps in large-scale applications in the search for new fundamental physics beyond the Standard Model of particle physics.
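The exactness of such map-based proposals comes from an independence Metropolis correction. Here is a minimal sketch (our toy: a fixed Gaussian plays the role of a trained flow map, and a double-well density plays the Boltzmann factor):

```python
# Independence Metropolis with a generative proposal q: the accept test
# corrects for the mismatch between q and the target p, keeping the
# chain exact even when q is only approximate.
import numpy as np

rng = np.random.default_rng(1)
log_p = lambda x: -(x**2 - 1.0) ** 2 / 0.2      # toy "Boltzmann" density
sample_q = lambda: rng.normal(0.0, 1.5)         # stand-in for a flow sample
log_q = lambda x: -0.5 * (x / 1.5) ** 2         # its log-density (up to const.)

x, chain = 0.0, []
for _ in range(10000):
    x_new = sample_q()
    # Accept with probability min(1, p(x') q(x) / (p(x) q(x'))).
    log_a = (log_p(x_new) - log_p(x)) + (log_q(x) - log_q(x_new))
    if np.log(rng.uniform()) < log_a:
        x = x_new
    chain.append(x)
print("mean over chain:", np.mean(chain))
```

Because proposals are drawn independently of the current state, a well-trained map can hop between topological sectors that local updates cross only rarely.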

Research talk 7: Andreas Demou, The Cyprus Institute
Title: Integrating data-driven models to enhance existing reduced-order models for wetting
Abstract: Using data-driven methods to solve partial differential equations (PDEs) has recently developed into a field of active research, with a proliferation of candidate deep learning architectures that may be deployed depending on the intended application. Within this context, it is important for such data-driven approaches to be capable of obtaining generalizable models. Such models may then be used for a broad range of parameters and auxiliary data that capture as diverse dynamics as possible, extending beyond the dynamics of the training datasets. This is particularly true when dealing with transport phenomena in the presence of complex heterogeneous environments, where small temporal or spatial errors can lead to markedly different behaviors in the longer term.
As highlighted in this very workshop, by incorporating physical insight into the AI architectures (e.g., conservation laws, symmetries, etc.), the developed data-driven models can exhibit improved accuracy and generalizability. A relevant and promising approach is the construction of hybrid models, combining data-driven and analytically derived reduced-order models. The available reduced-order models for any engineering/physics problem are typically imperfect and limited in their applicability, leading to significant deviations between the reduced-order prediction and the reference solution (termed the reduced-order error); nonetheless, these models encompass invaluable physical content. Coupling such reduced-order models with data-driven counterparts that are trained simply to learn the reduced-order error was found to enhance the overall performance of the resulting hybrid models compared to purely data-driven models.
This talk will present the application of these ideas to wetting hydrodynamics problems, modeling the behavior of droplets on solid substrates. Two setups are considered:
1. Horizontal, chemically heterogeneous substrates, upon which the droplet moves due to the presence of hydrophilic/hydrophobic regions. In this setup, a reduced-order model is available and is coupled to a data-driven model providing the higher-order corrections.
2. Inclined, chemically heterogeneous substrates, upon which the droplet moves mainly due to gravity, though the presence of hydrophilic/hydrophobic regions also affects the movement. In this setup, no reduced-order model is available, so we adopt a reduced-order model for horizontal substrates and include the effects of gravity via the data-driven model.
Adopting such an AI-assisted approach holds the potential to significantly reduce solution times compared to computationally intensive high-fidelity simulations. In the context of wetting hydrodynamics, such models can expedite the design of surface features for controllable droplet transport, a task highly relevant in microfabrication and biomedicine, among many other applications. In a wider context, this approach constitutes a proof of concept that may be extended to settings beyond contact line motion.
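The residual-learning construction at the heart of such hybrid models is compact enough to sketch. Below, analytic stand-ins replace both the high-fidelity reference and the reduced-order model; this is our illustration of the idea, not the talk's wetting solver:

```python
# Hybrid model: reduced-order prediction + network trained on its error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (500, 1))
reference = np.sin(2 * np.pi * x) + 0.3 * x**3   # stand-in high-fidelity data
reduced_order = np.sin(2 * np.pi * x)            # imperfect analytic model

# Learn only the reduced-order error, not the full solution.
residual_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                              random_state=0)
residual_model.fit(x, (reference - reduced_order).ravel())

hybrid = reduced_order.ravel() + residual_model.predict(x)
rmse = np.sqrt(np.mean((hybrid - reference.ravel()) ** 2))
print("hybrid RMSE:", rmse)
```

The division of labor is the point: the analytic part carries the physical content, while the network only has to represent the typically smaller and smoother error term.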

Research talk 8: Thomas Schnake, TU Berlin
Title: Extending Explainable Artificial Intelligence - Towards Structured and Application-Oriented Interpretations
Abstract: Explainable artificial intelligence (XAI) is becoming ever more important for the applicability of artificial intelligence (AI), particularly in light of the upcoming EU regulation on AI (the AI Act). Yet the interpretation of machine learning models can vary widely, depending on the data and the domain they are applied in. In many cases, a simple pixel-wise heat map is not sufficient to strengthen an analyst's intuition. We aim to explore the complex structures that neural networks can learn from, especially when dealing with graph data. In addition, we will see how the expectation of what an explanation should reflect varies significantly across domains. For example, a computational chemist most likely asks which functional group, or set of atoms, in a molecule is indicative of its mutagenicity prediction. Epidemiologists, on the other hand, would search for the most indicative infection chain to understand the prediction of disease spread during an epidemic. We will also delve into technical details of accelerating these computationally complex explanation algorithms to achieve high energy efficiency.

Research talk 9: Kim Nicoli, University of Bonn
Title: Generative AI as the new frontier of sampling algorithms in high energy physics
Abstract: The task of learning non-trivial probability densities is a crucial problem in machine learning, with countless applications in, among others, computer vision, sound synthesis, text generation, and the natural sciences. The subfield of machine learning that leverages deep learning to learn complicated probability distributions and sample from them is known as Generative AI. Recently, the relevance of this problem has been extensively studied in several domains where deep generative models have been proposed to sample non-trivial Boltzmann-like densities in, for instance, lattice quantum field theory, statistical mechanics, and quantum chemistry. Learning the underlying density, namely a normalized Boltzmann distribution, greatly improves the sampling task and opens the possibility of estimating physical observables, such as the partition function and related thermodynamic observables, which are hard to estimate using standard sampling methods. In my talk, I will discuss some applications of deep generative models in the context of high-energy physics and lattice field theory, showing how these new machine learning-based sampling paradigms hold promise to become the new state of the art for simulations in computational physics.
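A minimal version of this sampling paradigm (our toy, not the talk's models) fits a reparameterized map to an unnormalized Boltzmann density by minimizing the reverse KL divergence, which requires only evaluations of the action, not samples from the target:

```python
# Train a single affine map x = mu + exp(log_sigma) * z against an
# unnormalized "Boltzmann" density via the reverse KL divergence.
import math
import torch

torch.manual_seed(0)
log_boltzmann = lambda x: -((x - 1.5) ** 2).sum(-1)  # toy unnormalized action

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)

for step in range(500):
    z = torch.randn(256, 1)
    x = mu + log_sigma.exp() * z                     # reparameterized sample
    # log q(x) from the change-of-variables formula.
    log_q = (-0.5 * z**2 - 0.5 * math.log(2 * math.pi) - log_sigma).sum(-1)
    loss = (log_q - log_boltzmann(x)).mean()         # reverse KL (up to const.)
    opt.zero_grad(); loss.backward(); opt.step()
print(mu.item(), log_sigma.exp().item())             # ~1.5 and ~0.707
```

Importance weights exp(log p~ - log q) then turn samples from the trained model into asymptotically unbiased estimators of observables, including partition-function ratios.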

Research talk 10: Ankur Singha, TU Berlin
Title: Exploring Lattice Gauge Theories in Two Dimensions Using Generative Models
Abstract: In this talk, I will discuss generative AI methods for sampling lattice distributions in gauge-symmetric theories. Gauge symmetry plays a major role in comprehending fundamental physics through the framework of Quantum Field Theory. Our discussion will cover the concept of gauge fields in a discretized spacetime and the sampling of such lattice fields using generative models. In the conventional sampling approach, the computation of gauge-symmetric observables faces significant challenges due to large autocorrelations, as seen in observables like the topological charge; this problem is commonly referred to as topological freezing. I will demonstrate how generative models can effectively mitigate topological freezing in 2D scenarios, focusing on the flow-based approach and the Gaussian Mixture Model (GMM) approach.

Research talk 11: Lorenz Vaitl, TU Berlin
Title: Path Gradients for Normalizing Flows
Abstract: In numerous scientific and machine learning tasks, we are given an energy function that characterizes a system and serves as an unnormalized target distribution.

Normalizing Flows, a class of deep generative models, offer a promising avenue for efficient and deterministic sampling, improving upon traditional methods. I will present our work, which enhances Normalizing Flows through low-variance path gradient estimators and a novel model that significantly advances machine learning-based sampling in Lattice Gauge Theory, highlighting potential synergies between machine learning and theoretical physics.
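A simplified version of the path-gradient idea for a reparameterized model (our illustration; the talk's estimators generalize this): in the reverse-KL gradient, the score term, i.e. the derivative of log q with respect to the parameters at a fixed sample, has zero expectation, so dropping it leaves an unbiased estimator that keeps only the pathwise dependence through x = T_theta(z), often with much lower variance near the optimum.

```python
# Path gradient for a toy affine "flow": evaluate log q at *detached*
# parameters so only the dependence through x = T_theta(z) contributes.
import math
import torch

torch.manual_seed(0)
log_p = lambda x: -0.5 * ((x - 2.0) ** 2).sum(-1)  # toy unnormalized target

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)

z = torch.randn(1024, 1)
x = mu + log_sigma.exp() * z                       # pathwise sample

mu_d, ls_d = mu.detach(), log_sigma.detach()       # cut the explicit theta path
log_q_at_x = (-0.5 * ((x - mu_d) / ls_d.exp()) ** 2
              - ls_d - 0.5 * math.log(2 * math.pi)).sum(-1)

loss = (log_q_at_x - log_p(x)).mean()              # reverse KL (up to const.)
loss.backward()                                    # gradients flow via x only
print(mu.grad, log_sigma.grad)
```

At the optimum q = p, this estimator is exactly zero for every sample, which is the practical appeal of path gradients.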

Research talk 12: Elia Cellini, TU Berlin
Title: Stochastic normalizing flows as non-equilibrium transformations
Abstract: Normalizing Flows (NFs) are a class of deep generative models recently proposed as a promising alternative to traditional Markov Chain Monte Carlo methods in lattice field theory calculations. In this talk, we explore Stochastic Normalizing Flows (SNFs), a combination of NF layers and out-of-equilibrium stochastic updates: in particular, we show how SNFs share the same theoretical framework as Monte Carlo simulations based on Jarzynski's equality, a well-known result in non-equilibrium statistical mechanics that has proved highly efficient for computing free-energy differences in lattice gauge theory. We discuss the most appealing features of this extended class of generative models using numerical results from $\phi^4$ scalar field theory, Effective String Theory, and lattice gauge theory.
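The Jarzynski equality at the core of this framework, exp(-Delta F) = E[exp(-W)] over non-equilibrium work values W, is easy to verify numerically in a toy case where the work distribution is Gaussian and Delta F is known in closed form (our illustration, unrelated to the lattice systems in the talk):

```python
# Numerical check of the Jarzynski estimator Delta F = -log E[exp(-W)].
# For Gaussian work W ~ N(m, s^2), the identity gives m - s^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
m, s = 1.0, 0.5
W = rng.normal(m, s, size=200_000)     # simulated non-equilibrium work values
dF = -np.log(np.mean(np.exp(-W)))
print(dF, "vs analytic", m - s**2 / 2)
```

In an SNF, the work accumulated along each stochastic trajectory plays exactly this role, so free-energy differences come out of the same exponential average.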

Research talk 13: Shinichi Nakajima, TU Berlin
Title: Machine Learning for Quantum Computing
Abstract: This talk introduces an application of machine learning techniques to variational quantum eigensolvers (VQEs), a hybrid quantum-classical computing protocol. VQEs require solving a noisy black-box minimization problem on a classical computer, where Gaussian process regression and Bayesian optimization can improve the optimization performance. The talk focuses on how to incorporate physical prior knowledge to maximize statistical efficiency.
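A generic sketch of the classical outer loop (our illustration, not the talk's VQE-specific method): a Gaussian process with an explicit noise kernel models the measured energies, and a lower-confidence-bound rule picks the next parameter to evaluate.

```python
# Bayesian optimization of a noisy 1D black-box "energy" with a GP
# surrogate and a lower-confidence-bound acquisition rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
noisy_energy = lambda t: np.cos(t) + 0.1 * rng.normal(size=np.shape(t))

X = rng.uniform(0, 2 * np.pi, (3, 1))            # initial measurements
y = noisy_energy(X).ravel()
grid = np.linspace(0, 2 * np.pi, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    # Favour points with low predicted energy and high uncertainty.
    x_next = grid[(mean - 2.0 * std).argmin()].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, noisy_energy(x_next).ravel())
print("best measured energy:", y.min())
```

Physical prior knowledge enters through the kernel: VQE energy landscapes are trigonometric in each circuit parameter, so a periodic kernel is a natural choice, and this is the kind of structure such approaches exploit.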

Workshop Organizers

Stefaan Hessmann
Lorenz Vaitl
Dr. Ankur Singha
Dr. Tina Schwabe
Dr. Elke Witt
Dr. Shinichi Nakajima
Prof. Dr. Klaus-Robert Müller