Mathematics of Deep Learning
BERLIN, 19 - 30 August 2019, at Zuse Institute Berlin
Deep learning (DL) methodologies are currently showing tremendous success in a variety of applications, and in many cases DL-based methods outperform traditional approaches by far. At the same time, they lack a rigorous mathematical foundation. Recently, various researchers in the mathematical community have begun developing mathematical theories for DL from different angles: approaches analyzing the abstract approximation power of deep neural networks, approaches to understanding the convergence of the numerical minimization methods used in DL, and mathematical frameworks for convolutional neural networks, such as the scattering transform and convolutional sparse coding, which open a door to the methodology of compressed sensing.
The summer school will offer lectures on the theory of deep neural networks, on related questions such as generalization, expressivity, and explainability, as well as on applications of deep neural networks (e.g., to PDEs, inverse problems, and specific real-world problems).
The school spans two weeks: the first is devoted to the theory of deep neural networks, and the second focuses on applications. The format centers on 1.5-hour lectures by international experts. There will also be a poster session for the participants.
Taco Cohen (Qualcomm)
François Fleuret (IDIAP, EPF Lausanne)
Eldad Haber (University of British Columbia)
Robert Jenssen (Tromsø)
Andreas Krause (ETH Zurich)
Gitta Kutyniok (TU Berlin)
Ben Leimkuhler (U Edinburgh)
Klaus-Robert Müller (TU Berlin)
Frank Noé (FU Berlin)
Christof Schütte (FU Berlin, ZIB)
Vladimir Spokoiny (HU Berlin, WIAS)
René Vidal (Johns Hopkins University)
The application deadline was 8 April 2019.
There is no registration fee.