# 401-3650-68L Numerical Analysis Seminar: Mathematics of Deep Neural Network Approximation

Semester | Autumn Semester 2019 |

Lecturers | C. Schwab |

Periodicity | yearly recurring course |

Language of instruction | English |

Comment | Number of participants limited to 6. Consent of instructor needed. |

Abstract | The seminar will review recent _mathematical results_ on the approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques for obtaining approximation rate estimates (in terms of neural network size and connectivity) on various classes of input data, including, in particular, selected types of PDE solutions. |

Learning objective | |

Content | Presentation of the Seminar: Deep neural networks (DNNs) have recently attracted substantial interest and attention because they outperform the best established techniques in a number of tasks (chess, Go, shogi, autonomous driving, language translation, image classification, etc.). In big-data analysis, DNNs have achieved remarkable performance in computer vision, speech recognition, and natural language processing. In many cases, these successes have been achieved by heuristic implementations combined with massive compute power and training data. For a bird's-eye view, see https://arxiv.org/abs/1901.05639 and, more mathematical and closer to the seminar theme, https://arxiv.org/abs/1901.02220.

The seminar will review recent _mathematical results_ on the approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques for obtaining approximation rate estimates (in terms of neural network size and connectivity) on various classes of input data, including, in particular, selected types of PDE solutions. Mathematical results indicate that DNNs can match or outperform the best approximation rates known to date. Particular cases comprise: high-dimensional parametric maps, analytic and holomorphic maps, maps containing multi-scale features that arise as solution classes of PDEs, and classes of maps that are invariant under group actions.

Format of the Seminar: The seminar format consists of oral student presentations combined with a written report. Student presentations will be based on a recent research paper selected in two meetings at the start of the semester.

Grading of the Seminar: A passing grade requires a) a one-hour oral presentation with Q&A from the seminar group and b) a typed seminar report ("Ausarbeitung") on several key aspects of the paper under review. Each seminar topic allows expansion into a semester thesis or a master's thesis in the MSc MATH or MSc Applied MATH.

Disclaimer: The seminar will _not_ address recent developments in DNN software (e.g., TensorFlow), algorithmic training heuristics, or programming techniques for DNN training in various specific applications. |
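To give a flavor of the kind of approximation-rate result the seminar addresses, the following is a minimal NumPy sketch (an illustration added here, not part of the seminar materials) of Yarotsky's classical "sawtooth" construction: a ReLU network of depth proportional to m that approximates x² on [0, 1] with sup-norm error exactly 4^{-(m+1)}, i.e., accuracy improving exponentially in depth.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sawtooth(x):
    """Hat function g on [0, 1], realized by one hidden ReLU layer
    with 3 neurons: g(0) = g(1) = 0, g(1/2) = 1, piecewise linear."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def approx_square(x, m):
    """Depth-O(m) ReLU emulation of x**2 on [0, 1]:
    f_m(x) = x - sum_{s=1}^m g^{(s)}(x) / 4**s,
    where g^{(s)} is the s-fold composition of the sawtooth.
    f_m is the piecewise-linear interpolant of x**2 on a
    dyadic grid with 2**m intervals."""
    g, out = x, x.astype(float)
    for s in range(1, m + 1):
        g = sawtooth(g)
        out = out - g / 4.0**s
    return out

# The sup-error 4**-(m+1) is attained at the midpoints of the dyadic
# intervals; each extra composition (a fixed-width extra layer) divides
# the error by 4.
x = np.linspace(0.0, 1.0, 257)  # dyadic grid containing those midpoints
for m in (1, 3, 5):
    err = np.max(np.abs(approx_square(x, m) - x**2))
    print(f"m = {m}: sup-error {err:.3e}, bound {4.0**-(m + 1):.3e}")
```

The point of constructions like this one is that network *depth*, not just width, is what buys the high approximation rates discussed in the seminar.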