Catalogue data in Autumn Semester 2018

Mathematics Bachelor
Early enrolment for seminars in myStudies is encouraged, so that we can recognise the need for additional seminars in good time. Some seminars have waiting lists. Nevertheless, please register for at most two mathematics seminars.
401-3110-68L Fractal Geometry (restricted registration)
Number of participants limited to 12.
Registration for the seminar will only become effective once confirmed by the organisers. Please contact Link.
W, 4 credits, 2S; M. Einsiedler, further speakers
Abstract: Introductory seminar about the mathematical foundations of fractal geometry and its applications in various areas of mathematics:
- classical examples
- notions of dimension and their calculation
- local structure
- projections, products, intersections

Possible Applications:
- Dynamical Systems: iterated function systems, self-similar and self-affine sets
- Pure Mathematics: the Kakeya problem, fractal groups and rings, graphs of functions
- Complex Dynamics: Julia sets and the Mandelbrot set, Vitushkin's conjecture
- Number Theory: distribution of digits, continued fractions, Diophantine approximation
- Probability Theory: random fractals, Brownian motion
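As a minimal illustrative sketch of the "notions of dimension and their calculation" topic (not part of the seminar materials; the function name is ours): a self-similar set that is the union of m copies of itself scaled by a ratio r has similarity dimension d solving m * r^d = 1, provided the open set condition holds.

```python
# Similarity dimension of a self-similar set: if a set is the union of
# `num_copies` copies of itself scaled by `ratio`, its dimension d solves
# num_copies * ratio**d = 1, i.e. d = log(num_copies) / log(1/ratio).
import math

def similarity_dimension(num_copies: int, ratio: float) -> float:
    """Dimension of a self-similar set (assuming the open set condition)."""
    return math.log(num_copies) / math.log(1 / ratio)

# Middle-thirds Cantor set: 2 copies scaled by 1/3.
cantor = similarity_dimension(2, 1/3)      # log 2 / log 3, about 0.6309
# Sierpinski triangle: 3 copies scaled by 1/2.
sierpinski = similarity_dimension(3, 1/2)  # log 3 / log 2, about 1.585
print(f"{cantor:.4f} {sierpinski:.4f}")
```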
Literature: Kenneth Falconer, Fractal Geometry: Mathematical Foundations and Applications.
Prerequisites / Notice: Content of the first two years of the ETH Bachelor programme in mathematics, especially measure theory and topology. Some applications require complex analysis and probability theory.

To obtain the 4 credit points, each student is expected to give two one-hour talks and to attend the seminar regularly.
401-3160-68L Representation Theory: Groups, Algebras and Quivers (restricted registration)
Number of participants limited to 14.
W, 4 credits, 2S; G. Felder, further speakers
Abstract: Representation theory of groups, algebras and quivers, taught through examples and problem solving.
Objective: The students will learn basic notions and techniques of representation theory and how to apply these techniques in concrete situations and examples.
Content: Introduction to representation theory with many examples. Lie algebras and the universal enveloping algebra. Schur's lemma, representations of a matrix algebra. Jordan-Hölder theorem, extensions. Category O for sl(2). Representations of finite groups. Burnside's theorem, Frobenius reciprocity. Representations of symmetric groups. Representations of GL_2(F_q). Quivers. McKay correspondence.
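The character theory of finite groups mentioned above can be tried out on the smallest nonabelian group. The following sketch (illustrative only, not from the seminar materials; all names are ours) verifies the orthogonality relations for the three irreducible characters of S3.

```python
# Character table of S3 and the orthogonality relations: a concrete
# instance of the representation theory of finite groups covered in [E].
from itertools import permutations

group = list(permutations(range(3)))  # S3 as permutations of {0, 1, 2}

def fixed_points(g):
    return sum(1 for i, x in enumerate(g) if i == x)

def chi_trivial(g):
    return 1

def chi_sign(g):
    """Sign of the permutation, computed by sorting with transpositions."""
    sign, p = 1, list(g)
    for i in range(3):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def chi_std(g):
    # Standard 2-dim representation: permutation rep = trivial + standard,
    # so its character is (#fixed points) - 1.
    return fixed_points(g) - 1

def inner(chi1, chi2):
    """<chi1, chi2> = (1/|G|) sum_g chi1(g) chi2(g) (characters are real here)."""
    return sum(chi1(g) * chi2(g) for g in group) / len(group)

chars = [chi_trivial, chi_sign, chi_std]
# Orthonormality of irreducible characters: the Gram matrix is the identity.
table = [[inner(a, b) for b in chars] for a in chars]
print(table)
```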
Literature: The main reference is:

[E] P. Etingof, O. Goldberg, S. Hensel, T. Liu, A. Schwendner, D. Vaintrob, E. Yudovina, Introduction to Representation Theory, with historical interludes by S. Gerovitch, AMS 2010.
Much of the material (but not the historical interludes) can be found at Link.

Additional literature that might be helpful:
[F] W. Fulton, Representation Theory: A First Course.
[L] S. Lang, Algebra.
[S] J.-P. Serre, Lie Algebras and Lie Groups.
[BGP] I. N. Bernstein, I. M. Gel'fand and V. A. Ponomarev, "Coxeter functors and Gabriel's theorem", Russ. Math. Surv. 28.
[D] Igor Dolgachev, "McKay's correspondence for cocompact discrete subgroups of SU(1,1)", available at Link.
Prerequisites / Notice: Linear algebra and basic notions of algebra. Please refresh (or learn) the basic notions of multilinear algebra so that you can solve the first problems on tensor products of vector spaces in [E].

Each participant gives a presentation and is assigned a set of exercises, whose solutions are published on the seminar wiki.
401-3200-64L Proofs from THE BOOK (restricted registration)
Number of participants limited to 24.
W, 4 credits, 2S; M. Burger, further speakers
Objective: The goal of this seminar is to learn how to present mathematics. The seminar is based on the book "Proofs from THE BOOK" by Aigner and Ziegler, which presents fundamental theorems from all areas of mathematics together with their "most beautiful" proofs. The choice of topics is therefore broad and there is something for every taste.
401-3350-68L Introduction to Optimal Transport (restricted registration)
Number of participants limited to 11.
W, 4 credits, 2S; A. Figalli, further speakers
Abstract: Introductory seminar on the theory of optimal transport.
Starting from Monge's and Kantorovich's statements of the optimal transport problem, we will develop the duality theory needed to prove the fundamental theorem of Brenier.
After some applications, we will study the properties of the Wasserstein space and conclude by introducing the dynamical point of view on the problem.
Content: Given two distributions of mass, it is natural to ask what is the "best way" to transport one into the other. What are mathematically acceptable definitions of "distributions of mass" and "transporting one into the other"?
Measures are perfectly suited to play the role of the distributions of mass, whereas a map that pushes forward one measure onto the other is the equivalent of transporting the distributions. By "best way" we mean that we want to minimize the transportation cost of the map.

The original problem of Monge is to understand whether an optimal map exists and to study its properties. In order to attack the problem we will need to relax the formulation (Kantorovich's statement) and to apply a little duality theory. The main theorem we will prove in this direction is Brenier's theorem, which (under certain conditions) answers the existence question for optimal maps affirmatively.
The Helmholtz decomposition and the isoperimetric inequality will then follow rather easily as applications of the theory.
Finally, we will see how the optimal transport problem gives a natural way to define a distance on the space of probability measures (the Wasserstein distance), and we will study some of its properties.
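A minimal numerical illustration (not part of the seminar materials; the function name is ours): in one dimension, the optimal transport plan between two empirical measures with equally many unit-mass atoms simply matches the sorted atoms, which gives the Wasserstein-1 distance directly.

```python
# Wasserstein-1 distance between two empirical measures on the real line,
# each with n equal-mass atoms: the optimal plan matches sorted atoms.
def wasserstein1_1d(xs, ys):
    """W1 between (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i}."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Translating a distribution by t costs exactly t:
shift_cost = wasserstein1_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
print(shift_cost)  # 0.5
```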
Literature: "Optimal Transport, Old and New", C. Villani

"Optimal Transport for Applied Mathematicians", F. Santambrogio
Prerequisites / Notice: The students are expected to have mastered the content of the first two years taught at ETH, especially measure theory.
The seminar is mainly intended for Bachelor students.

To obtain the 4 credit points, each student is expected to give two one-hour talks and to attend the seminar regularly. In addition, some exercises will be assigned.

Further information can be found at Link
401-3620-68L Student Seminar in Statistics: Statistical Learning with Sparsity (restricted registration)
Number of participants limited to 24.

Mainly for students from the Mathematics Bachelor and Master programmes who, in addition to the introductory course unit 401-2604-00L Probability and Statistics, have attended at least one core or elective course in statistics. Also offered in the Master programmes in Statistics and Data Science.
W, 4 credits, 2S; M. Mächler, M. H. Maathuis, N. Meinshausen, S. van de Geer
Abstract: We study selected chapters from the 2015 book "Statistical Learning with Sparsity" by Trevor Hastie, Rob Tibshirani and Martin Wainwright.

(details see below)
Objective: During this seminar, we will study roughly one chapter of the book per week. You will obtain a good overview of the field of sparse and high-dimensional modelling in modern statistics.
Moreover, you will practice your self-study and presentation skills.
Content: (From the book's preface:) "... summarize the actively developing field of statistical learning with sparsity.
A sparse statistical model is one having only a small number of nonzero parameters or weights. It represents a classic case of “less is more”: a sparse model can be much easier to estimate and interpret than a dense model.
In this age of big data, the number of features measured on a person or object can be large, and might be larger than the number of observations. The sparsity assumption allows us to tackle such problems and extract useful and reproducible patterns from big datasets."

For the presentation of the material, you may occasionally need to consult additional published research, e.g. for the chapter on "High-Dimensional Inference".
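A minimal sketch of the mechanism behind the book's title (illustrative only; the function name is ours): with an orthonormal design, the lasso acts on each least-squares coefficient by soft-thresholding, setting small coefficients exactly to zero and thereby producing sparse models.

```python
# Soft-thresholding: the lasso's effect on a single coefficient b.
# It solves argmin_x 0.5*(x - b)**2 + lam*|x| in closed form, shrinking b
# toward zero by lam and returning exactly 0 when |b| <= lam.
def soft_threshold(b: float, lam: float) -> float:
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

coeffs = [3.0, -0.4, 1.2, 0.1]
sparse = [soft_threshold(b, 0.5) for b in coeffs]
print(sparse)  # small coefficients are zeroed out, large ones shrunk
```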
Lecture notes: Website with groups, FAQ, topics, slides, and R scripts.
Literature: Trevor Hastie, Robert Tibshirani, Martin Wainwright (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. Monographs on Statistics and Applied Probability 143. Chapman & Hall/CRC. ISBN 9781498712170.

Access:

- Link
(full access from within the ETH network or via VPN)

- Author's website (includes errata, an updated pdf, and data)
Prerequisites / Notice: We require at least one course in statistics in addition to the 4th-semester course Introduction to Probability and Statistics, as well as some experience with the statistical software R.

Topics will be assigned during the first meeting.
401-3640-13L Seminar in Applied Mathematics: Shape Calculus (restricted registration)
Number of participants limited to 10.
W, 4 credits, 2S; R. Hiptmair
Abstract: Shape calculus studies the dependence of solutions of partial differential equations on deformations of the domain and/or interfaces. It is the foundation of gradient methods for shape optimization. The seminar will rely on several sections of monographs and research papers covering analytical and numerical aspects of shape calculus.
Objective:
* Understanding of concepts like shape derivative, shape gradient, shape Hessian, and adjoint problem.
* Ability to derive analytical formulas for shape gradients.
* Knowledge about numerical methods for the computation of shape gradients.
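Shape-derivative formulas of the kind studied in the seminar can be sanity-checked numerically. The following sketch (illustrative only, not from the seminar materials) compares the Hadamard boundary formula with a finite difference for the area functional on a disk perturbed in the normal direction with unit speed.

```python
# Shape derivative of J(Omega) = |Omega| on a disk of radius r: for a
# normal perturbation with speed 1, the Hadamard formula gives
# dJ = integral over the boundary of 1 = perimeter = 2*pi*r.
# A finite difference in the radius reproduces this value.
import math

def area(r):
    return math.pi * r**2

r, eps = 2.0, 1e-6
fd = (area(r + eps) - area(r)) / eps   # finite-difference approximation
exact = 2 * math.pi * r                # boundary (Hadamard) formula
print(abs(fd - exact) < 1e-4)  # True
```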

1. The velocity method and Eulerian shape gradients: Main reference [SZ92, Sect. 2.8–2.11, 2.1, 2.18], covering the "velocity method", the Hadamard structure theorem and formulas for shape gradients of particular functionals. Several other sections of [SZ92, Ch. 2] provide foundations and auxiliary results and should be browsed, too.

2. Material derivatives and shape derivatives, based on [SZ92, Sect. 2.25–2.32].

3. Shape calculus with exterior calculus, following [HL13] (without Sections 5 & 6). Based on classical vector analysis, the formulas are also derived in [SZ92, Sects. 2.19, 2.20] and [DZ10, Ch. 9, Sect. 5]. Important background and supplementary information about the shape Hessian can be found in [DZ91, BZ97] and [DZ10, Ch. 9, Sect. 6].

4. Shape derivatives of solutions of PDEs using exterior calculus [HL17], see also [HL13, Sects. 5 & 6]. From the perspective of classical calculus the topic is partly covered in [SZ92, Sects. 3.1–3.2].

5. Shape gradients under PDE constraints according to [Pag16, Sect. 2.1] including a presentation of the adjoint method for differentiating constrained functionals [HPUU09, Sect. 1.6]. Related information can be found in [DZ10, Ch. 10, Sect. 2.5] and [SZ92, Sect. 3.3].

6. Approximation of shape gradients following [HPS14]. Comparison of discrete shape gradients based on volume and boundary formulas, see also [DZ10, Ch. 10, Sect. 2.5].

7. Optimal shape design based on boundary integral equations following [Epp00b], with some additional information provided in [Epp00a].

8. Convergence in elliptic shape optimization as discussed in [EHS07]. Relies on results reported in [Epp00b] and [DP00]. Discusses Ritz-Galerkin discretization of optimality conditions for normal displacement parameterization.

9. Shape optimization by pursuing diffeomorphisms according to [HP15], see also [Pag16, Ch. 3] for more details, and [PWF17] for extensions.

10. Distributed shape derivative via averaged adjoint method following [LS16].

[BZ97] Dorin Bucur and Jean-Paul Zolésio. Anatomy of the shape Hessian via Lie brackets. Annali di Matematica Pura ed Applicata, 173:127–143, 1997.

[DP00] Marc Dambrine and Michel Pierre. About stability of equilibrium shapes. M2AN Math. Model. Numer. Anal., 34(4):811–834, 2000.

[DZ91] Michel C. Delfour and Jean-Paul Zolésio. Velocity method and Lagrangian formulation for the computation of the shape Hessian. SIAM J. Control Optim.,
29(6):1414–1442, 1991.

[DZ10] M.C. Delfour and J.-P. Zolésio. Shapes and Geometries, volume 22 of Advances in Design and Control. SIAM, Philadelphia, 2nd edition, 2010.

[EHS07] Karsten Eppler, Helmut Harbrecht, and Reinhold Schneider. On convergence in elliptic shape optimization. SIAM J. Control Optim., 46(1):61–83, 2007.

[Epp00a] Karsten Eppler. Boundary integral representations of second derivatives in shape optimization. Discuss. Math. Differ. Incl. Control Optim., 20(1):63–78, 2000.
German-Polish Conference on Optimization—Methods and Applications (Żagań,

[Epp00b] Karsten Eppler. Optimal shape design for elliptic equations via BIE-methods. Int. J. Appl. Math. Comput. Sci., 10(3):487–516, 2000.

[HL13] Ralf Hiptmair and Jingzhi Li. Shape derivatives in differential forms I: an intrinsic perspective. Ann. Mat. Pura Appl. (4), 192(6):1077–1098, 2013.

[HL17] R. Hiptmair and J.-Z. Li. Shape derivatives in differential forms II: Application
to scattering problems. Report 2017-24, SAM, ETH Zürich, 2017. To appear in
Inverse Problems.

[HP15] Ralf Hiptmair and Alberto Paganini. Shape optimization by pursuing diffeomorphisms. Comput. Methods Appl. Math., 15(3):291–305, 2015.

[HPS14] R. Hiptmair, A. Paganini, and S. Sargheini. Comparison of approximate shape gradients. BIT Numerical Mathematics, 55:459–485, 2014.

[HPUU09] M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich. Optimization with PDE constraints, volume 23 of Mathematical Modelling: Theory and Applications. Springer, New York, 2009.

[LS16] Antoine Laurain and Kevin Sturm. Distributed shape derivative via averaged adjoint method and applications. ESAIM Math. Model. Numer. Anal., 50(4):1241–1267, 2016.

[Pag16] A. Paganini. Numerical shape optimization with finite elements. ETH dissertation no. 23212, ETH Zurich, 2016.

[PWF17] A. Paganini, F. Wechsung, and P. E. Farrell. Higher-order moving mesh methods for PDE-constrained shape optimization. Preprint arXiv:1706.03117 [math.NA], 2017.

[SZ92] J. Sokolowski and J.-P. Zolesio. Introduction to shape optimization, volume 16 of Springer Series in Computational Mathematics. Springer, Berlin, 1992.
Prerequisites / Notice: Knowledge of analysis and functional analysis; knowledge of PDEs is an advantage, and so is some familiarity with numerical methods for PDEs.
401-3650-68L Numerical Analysis Seminar: Mathematics of Deep Neural Network Approximation (restricted registration)
Number of participants limited to 6.
W, 4 credits, 2S; C. Schwab
Abstract: This seminar will review recent (2016-) _mathematical results_ on the approximation power of deep neural networks (DNNs). The focus will be on mathematical proof techniques for obtaining approximation rate estimates (in terms of neural network size and connectivity) for various classes of input data.
Content: Presentation of the seminar:
Deep Neural Networks (DNNs) have recently attracted substantial interest and attention due to outperforming the best established techniques in a number of application areas (chess, Go, autonomous driving, language translation, image classification, etc.). In many cases, these successes have been achieved by heuristics-based implementations with massive compute power and training data.

Here, too, there is mounting mathematical evidence that DNNs match or outperform the best known mathematical approximation results.

Particular cases comprise:
high-dimensional parametric maps,
analytic and holomorphic maps,
maps containing multi-scale features which arise as solution classes from PDEs,
classes of maps which are invariant under group actions.
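A minimal concrete instance of ReLU expressivity (illustrative only, not one of the seminar papers): |x| is represented *exactly* by a one-hidden-layer ReLU network, |x| = relu(x) + relu(-x), and composing "hat" functions built from three ReLU units is the basic step in Yarotsky-style approximation of smooth functions by deep ReLU networks.

```python
# Exact ReLU representations of |x| and of the piecewise-linear "hat"
# function on [0, 1]; iterating the hat is the building block of deep
# ReLU approximations of x**2 in Yarotsky's construction.
def relu(x):
    return max(x, 0.0)

def abs_net(x):
    """One hidden layer, two ReLU units: |x| = relu(x) + relu(-x)."""
    return relu(x) + relu(-x)

def hat(x):
    """Hat function: 2x on [0, 1/2], 2 - 2x on [1/2, 1], built from 3 ReLUs."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

print([abs_net(x) for x in (-2.0, 0.0, 3.5)])  # [2.0, 0.0, 3.5]
print(hat(0.25), hat(0.5), hat(0.75))          # 0.5 1.0 0.5
```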

The format will be oral student presentations in December 2018, based on a recent research paper selected in two meetings at the start of the semester.
Literature: Partial reading list:

DNN Expression Rate Analysis of High-dimensional PDEs: Application to
Option Pricing
Authors: Dennis Elbrächter, Philipp Grohs, Arnulf Jentzen, Christoph Schwab

Topological properties of the set of functions generated by neural networks of fixed size
Authors: Philipp Petersen, Mones Raslan, Felix Voigtlaender

Universal approximations of invariant maps by neural networks
Author: Dmitry Yarotsky

Optimal approximation of continuous functions by very deep ReLU networks
Author: Dmitry Yarotsky

Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Authors: Philipp Petersen, Felix Voigtlaender

Neural networks and rational functions
Author: Matus Telgarsky

The power of deeper networks for expressing natural functions
Authors: David Rolnick, Max Tegmark

Quantified advantage of discontinuous weight selection in approximations with deep neural networks
Author: Dmitry Yarotsky

Error bounds for approximations with deep ReLU networks
Author: Dmitry Yarotsky

Deep vs. shallow networks: an approximation theory perspective
Authors: Hrushikesh Mhaskar, Tomaso Poggio

Benefits of depth in neural networks
Author: Matus Telgarsky
Prerequisites / Notice: Each seminar topic can be expanded into a semester paper or a Master's thesis in the MSc Mathematics or MSc Applied Mathematics programme.

The seminar will _not_ address recent developments in DNN software, such as training heuristics or programming techniques for specific applications.