Christoph Schwab: Catalogue data in Spring Semester 2019
Name | Prof. Dr. Christoph Schwab
Field | Mathematics
Address | Seminar für Angewandte Mathematik, ETH Zürich, HG G 57.1, Rämistrasse 101, 8092 Zürich, SWITZERLAND
Phone | +41 44 632 35 95
Fax | +41 44 632 10 85
E-Mail | christoph.schwab@sam.math.ethz.ch
URL | http://www.sam.math.ethz.ch/~schwab
Department | Mathematics
Relationship | Full Professor
Number | Title | ECTS | Hours | Lecturers
---|---|---|---|---
401-1652-10L | Numerical Mathematics I | 6 ECTS | 3V + 2U | C. Schwab
Abstract | This course provides an introduction to numerical methods for second-semester students of mathematics. It covers methods of linear algebra (linear systems of equations, matrix eigenvalue problems) and of analysis (root finding for functions, numerical interpolation, integration, and approximation), in both theory and implementation.
Learning objective | Knowledge of the fundamental numerical methods, and "numerical competence": applying numerical methods to problem solving; mathematical proof techniques for establishing stability, consistency, and convergence of the methods; and their implementation in MATLAB.
Content | Rounding errors, linear systems of equations, nonlinear equations (scalar and systems), interpolation, extrapolation, linear and nonlinear least-squares problems, elementary optimization methods, numerical integration.
Lecture notes | Lecture notes and a reading list are available on the course website.
Literature | Lecture notes are made available to students enrolled in the ETH BSc Mathematics programme. _In addition_, the following is recommended: Quarteroni, Sacco and Saleri, Numerische Mathematik 1 + 2, Springer Verlag 2002.
Prerequisites / Notice | Admission requirements: Linear Algebra I and Analysis I in the ETH BSc Mathematics programme, together with concurrent enrolment in Linear Algebra II and Analysis II. Weekly homework sets are an integral part of the course; the homework includes MATLAB programming exercises and is graded.
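As an illustration of the root-finding topic listed in the course content above (this sketch is not part of the course materials, which use MATLAB; Python is used here as a stand-in):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for the scalar root-finding problem f(x) = 0.

    f: function, df: its derivative, x0: initial guess.
    Converges quadratically near a simple root.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)  # Newton update: x_{k+1} = x_k - f(x_k) / f'(x_k)
    return x

# Example: approximate sqrt(2) as the positive root of x^2 - 2 = 0
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Stability, consistency, and convergence analyses of iterations like this one are exactly the proof techniques named in the learning objective.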
401-3650-19L | Numerical Analysis Seminar: Mathematics of Deep Neural Network Approximation (number of participants limited to 6) | 4 ECTS | 2S | C. Schwab
Abstract | This seminar will review recent _mathematical results_ on the approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques for obtaining approximation rate estimates (in terms of neural network size and connectivity) on various classes of input data including, in particular, selected types of PDE solutions.
Learning objective |
Content | Presentation of the seminar: Deep Neural Networks (DNNs) have recently attracted substantial interest and attention because they outperform the best established techniques in a number of tasks (chess, Go, shogi, autonomous driving, language translation, image classification, etc.). In many cases, these successes have been achieved by heuristic implementations combined with massive compute power and training data. For a bird's-eye overview, see https://arxiv.org/abs/1901.05639 and, more mathematical and closer to the seminar theme, https://arxiv.org/abs/1901.02220. This seminar will review recent _mathematical results_ on the approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques for obtaining approximation rate estimates (in terms of neural network size and connectivity) on various classes of input data including, in particular, selected types of PDE solutions. Mathematical results indicate that DNNs can match or outperform the best approximation results known to date. Particular cases comprise: high-dimensional parametric maps, analytic and holomorphic maps, maps containing multiscale features which arise as solution classes from PDEs, and classes of maps which are invariant under group actions. The seminar format will be oral student presentations in the first half of May 2019, combined with a written report. Student presentations will be based on a recent research paper selected in two meetings at the start of the semester (end of February).
Literature | Partial reading list:
- Dmitry Yarotsky, "Error bounds for approximations with deep ReLU networks", arXiv:1610.01145
- Dmitry Yarotsky, "Quantified advantage of discontinuous weight selection in approximations with deep neural networks", arXiv:1705.01365
- Philipp Petersen and Felix Voigtlaender, "Optimal approximation of piecewise smooth functions using deep ReLU neural networks", arXiv:1709.05289
- Ch. Schwab and J. Zech, "Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ", Analysis and Applications, Singapore, 17/1 (2019), pp. 19-55
- Dmitry Yarotsky, "Optimal approximation of continuous functions by very deep ReLU networks", arXiv:1802.03620
- Dmitry Yarotsky, "Universal approximations of invariant maps by neural networks", arXiv:1804.10306
- Juncai He, Lin Li, Jinchao Xu and Chunyue Zheng, "ReLU Deep Neural Networks and Linear Finite Elements", arXiv:1807.03973
- Philipp Grohs, Dmytro Perekrestenko, Dennis Elbrächter and Helmut Bölcskei, "Deep Neural Network Approximation Theory", arXiv:1901.02220
- J. A. A. Opschoor, P. C. Petersen and Ch. Schwab, "Deep ReLU Networks and High-Order Finite Element Methods", Res. Report 2019-07, SAM, ETH
- Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse and Tuan Anh Nguyen, "A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations", arXiv:1901.10854
Prerequisites / Notice | Each seminar topic allows expansion to a semester paper or a master thesis in the MSc Mathematics or MSc Applied Mathematics. Disclaimer: the seminar will _not_ address recent developments in DNN software, such as training heuristics or programming techniques for DNN training in various specific applications.
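To give the flavour of the ReLU/finite-element connection studied in the He-Li-Xu-Zheng and Opschoor-Petersen-Schwab papers on the reading list (an illustrative sketch, not seminar material): the piecewise-linear hat function on [0, 1], the basic building block of linear finite elements, is represented *exactly* by a single-hidden-layer ReLU network with three neurons.

```python
def relu(t):
    """Rectified linear unit activation."""
    return max(t, 0.0)

def hat_relu(x):
    """Exact ReLU-network representation of the hat function
    h(x) = min(2x, 2 - 2x) on [0, 1] (and 0 outside):

        h(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)

    One hidden layer, three neurons, no approximation error.
    """
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)
```

Since piecewise-linear interpolants are sums of shifted and scaled hat functions, any linear finite-element function can be emulated by a ReLU network of comparable size, which is the starting point for the approximation rate estimates discussed in the seminar.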
401-5650-00L | Zurich Colloquium in Applied and Computational Mathematics | 0 ECTS | 1K | R. Abgrall, R. Alaifari, H. Ammari, R. Hiptmair, A. Jentzen, S. Mishra, S. Sauter, C. Schwab
Abstract | Research colloquium
Learning objective |