Christoph Schwab: Catalogue data in Spring Semester 2021
|Name||Prof. Dr. Christoph Schwab|
Seminar für Angewandte Mathematik
ETH Zürich, HG G 57.1
|Telephone||+41 44 632 35 95|
|Fax||+41 44 632 10 85|
|401-1652-10L||Numerical Analysis I||6 credits||3V + 2U||C. Schwab|
|Abstract||This course will give an introduction to numerical methods, aimed at mathematics majors. It covers numerical linear algebra, quadrature, interpolation and approximation methods as well as their error analysis and implementation.|
|Objective||Knowledge of the fundamental numerical methods as well as 'numerical literacy': application of numerical methods for the solution of application problems, mathematical foundations of numerical methods, and basic mathematical techniques for the analysis of stability, consistency and convergence of numerical methods.|
|Content||Rounding errors, solution of linear systems of equations, nonlinear equations, interpolation (polynomial as well as trigonometric), least squares problems, extrapolation, numerical quadrature, elementary optimization methods.|
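As a small taste of the listed material (a minimal sketch, not part of the official syllabus; the course homework itself uses MATLAB), the composite trapezoidal rule is a typical quadrature method whose error analysis is covered in such a course:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals on [a, b].

    For a twice continuously differentiable integrand f, the error
    is O(h^2) with h = (b - a) / n.
    """
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return h * s

# Example: integrate sin on [0, pi]; the exact value is 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
```

Halving h should reduce the error by roughly a factor of four, which is the kind of convergence-rate statement the course proves and verifies numerically.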
|Lecture notes||Lecture Notes and reading list will be available.|
|Literature||Lecture Notes (German or English) will be made available to students of ETH BSc MATH.|
Quarteroni, Sacco and Saleri, Numerische Mathematik 1 + 2, Springer Verlag 2002 (in German).
There is an English version of this text, containing both German volumes, from the same publisher. If you feel more comfortable with English, you can follow this text as well. Content and indexing are identical in the German and English texts.
|Prerequisites / Notice||Admission Requirements:|
Linear Algebra I, Analysis I in ETH BSc MATH
Parallel enrolment in
Linear Algebra II, Analysis II in ETH BSc MATH
Weekly homework assignments involving MATLAB programming
are an integral part of the course.
Submitted solutions will be graded.
|401-3650-19L||Numerical Analysis Seminar: Deep Neural Network Approximation||4 credits||2S||C. Schwab|
Does not take place this semester.
|Abstract||This seminar will review recent _mathematical results_ on approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques to obtain approximation rate estimates (in terms of neural network size and connectivity) on various classes of input data including, in particular, selected types of PDE solutions.|
|Content||Presentation of the Seminar:|
Deep Neural Networks (DNNs) have recently attracted substantial
interest and attention due to outperforming the best established
techniques in a number of tasks (Chess, Go, Shogi,
autonomous driving, language translation, image classification, etc.).
In many cases, these successes have been achieved by
heuristic implementations combined
with massive compute power and training data.
The seminar will address mathematical results on
the approximation and expressive power of DNNs.
Mathematical results indicate that DNNs can match or outperform
the best approximation rates known to date.
Particular cases comprise:
high-dimensional parametric maps,
analytic and holomorphic maps,
maps containing multi-scale features which arise as solution classes from PDEs,
classes of maps which are invariant under group actions.
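To illustrate the kind of construction that appears in this literature (a minimal sketch under my own naming, not drawn from the seminar papers themselves), a one-hidden-layer ReLU network with three neurons represents the piecewise-linear hat function on [0, 1] exactly, and composing hats yields the sawtooth functions used as building blocks in several approximation-rate proofs (e.g. Yarotsky-type constructions):

```python
def relu(x):
    """Rectified linear unit, the activation assumed throughout."""
    return max(x, 0.0)

def hat(x):
    """One-hidden-layer ReLU network (3 neurons) realizing the hat
    function on [0, 1]: value 1 at x = 0.5, value 0 at x = 0 and x = 1,
    linear in between."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth(x, depth):
    """Composing the hat with itself `depth` times gives a sawtooth
    with 2**(depth - 1) teeth -- depth of the network buys exponentially
    many oscillations, the key point in these expressivity results."""
    for _ in range(depth):
        x = hat(x)
    return x
```

The point of such examples is quantitative: network size and depth translate directly into approximation rates for the function classes listed above.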
|Prerequisites / Notice||Each seminar topic allows expansion into a semester thesis or a master thesis in the MSc MATH or MSc Applied MATH.|
The seminar format will be oral student presentations in
the first half of May 2021, combined with a written report.
Student presentations will be
based on a recent research paper selected in two meetings
at the start of the semester (end of February).
The seminar will _not_ address recent developments in DNN software,
such as training heuristics, or programming techniques
for DNN training in various specific applications.
|401-5650-00L||Zurich Colloquium in Applied and Computational Mathematics||0 credits||1K||R. Abgrall, R. Alaifari, H. Ammari, R. Hiptmair, S. Mishra, S. Sauter, C. Schwab|