## Siddhartha Mishra: Catalogue data in Spring Semester 2023

Name | Prof. Dr. Siddhartha Mishra
---|---
Field of specialisation | Applied Mathematics
Address | Seminar for Applied Mathematics, ETH Zürich, HG G 57.2, Rämistrasse 101, 8092 Zürich, SWITZERLAND
Phone | +41 44 632 75 63
E-mail | siddhartha.mishra@sam.math.ethz.ch
URL | https://people.math.ethz.ch/~smishra
Department | Mathematics
Relationship | Full Professor

Number | Title | ECTS | Hours | Lecturers
---|---|---|---|---

401-4656-DRL | Deep Learning in Scientific Computing. Only for ETH D-MATH doctoral students and for doctoral students from the Institute of Mathematics at UZH; the latter need to send an email to Jessica Bolsinger (info@zgsm.ch) with the course number and the subject "Graduate course registration (ETH)". | 1 KP | 2V + 1U | S. Mishra, B. Moseley

Short description | Machine learning, particularly deep learning, is increasingly applied to perform, enhance, and accelerate computer simulations of models in science and engineering. This course presents a highly topical selection of themes in the general area of deep learning in scientific computing, with an emphasis on the application of deep learning algorithms to systems modeled by PDEs.

Learning objective | The objective of this course is to introduce students to advanced applications of deep learning in scientific computing. The focus is on the design and implementation of algorithms as well as on the underlying theory that guarantees their reliability. We will provide several examples of applications in science and engineering where deep-learning-based algorithms outperform state-of-the-art methods.

Content | A selection of the following topics will be presented in the lectures:

1. Issues with traditional methods for scientific computing, such as finite element and finite volume methods, particularly for PDE models with high-dimensional state and parameter spaces.
2. Introduction to deep learning: artificial neural networks, supervised learning, stochastic gradient descent algorithms for training, and different architectures (convolutional neural networks, recurrent neural networks, ResNets).
3. Theoretical foundations: universal approximation properties of neural networks, bias-variance decomposition, and bounds on approximation and generalization errors.
4. Supervised deep learning for solution fields and observables of high-dimensional parametric PDEs; use of low-discrepancy sequences and multi-level training to reduce generalization error.
5. Uncertainty quantification for PDEs with supervised learning algorithms.
6. Deep neural networks as reduced-order models and prediction of solution fields.
7. Active learning algorithms for PDE-constrained optimization.
8. Recurrent neural networks and prediction of time series for dynamical systems.
9. Physics-informed neural networks (PINNs) for the forward problem for PDEs, with applications to high-dimensional PDEs.
10. PINNs for inverse problems for PDEs: parameter identification, optimal control, and data assimilation.

All the algorithms will be illustrated on a variety of PDEs: diffusion models, Black-Scholes-type PDEs from finance, wave equations, Euler and Navier-Stokes equations, hyperbolic systems of conservation laws, and dispersive PDEs, among others.

Lecture notes | Lecture notes will be provided at the end of the course.

Literature | All the material in the course is based on research articles written in the last 1-2 years. The relevant references will be provided.

Prerequisites / Notice | Students should be familiar with numerical methods for PDEs, for instance from courses such as Numerical Methods for PDEs for CSE, Numerical Analysis of Elliptic and Parabolic PDEs, Numerical Methods for Hyperbolic PDEs, or Computational Methods for Engineering Applications. Some familiarity with basic concepts in machine learning will be beneficial. The exercises in the course rely on standard machine learning frameworks such as Keras, TensorFlow, or PyTorch, so competence in Python is helpful.
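The stochastic gradient descent training named in topic 2 of the content list can be illustrated in a few lines. The following is not course material but a minimal numpy sketch, assuming a one-hidden-layer tanh network fitted to sin(πx) by mini-batch SGD with hand-written backpropagation; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: fit u(x) = sin(pi * x) on [-1, 1]
x = rng.uniform(-1.0, 1.0, size=(256, 1))
y = np.sin(np.pi * x)

# One hidden layer with tanh activation
n_hidden = 32
W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)

lr, batch = 0.05, 32
for step in range(5000):
    idx = rng.integers(0, len(x), batch)      # random mini-batch
    xb, yb = x[idx], y[idx]
    h = np.tanh(xb @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - yb                           # gradient of 0.5*MSE w.r.t. pred
    # Backward pass (chain rule)
    gW2 = h.T @ err / batch; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)            # tanh'(z) = 1 - tanh(z)^2
    gW1 = xb.T @ dh / batch; gb1 = dh.mean(0)
    # SGD update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

The same loop underlies the framework-based exercises; Keras, TensorFlow, or PyTorch merely automate the backward pass via automatic differentiation.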

401-4656-21L | Deep Learning in Scientific Computing. Aimed at students in a Master's programme in mathematics, engineering, or physics. | 6 KP | 2V + 1U | S. Mishra, B. Moseley

Short description | Machine learning, particularly deep learning, is increasingly applied to perform, enhance, and accelerate computer simulations of models in science and engineering. This course presents a highly topical selection of themes in the general area of deep learning in scientific computing, with an emphasis on the application of deep learning algorithms to systems modeled by PDEs.

Learning objective | The objective of this course is to introduce students to advanced applications of deep learning in scientific computing. The focus is on the design and implementation of algorithms as well as on the underlying theory that guarantees their reliability. We will provide several examples of applications in science and engineering where deep-learning-based algorithms outperform state-of-the-art methods.

Content | A selection of the following topics will be presented in the lectures:

1. Issues with traditional methods for scientific computing, such as finite element and finite volume methods, particularly for PDE models with high-dimensional state and parameter spaces.
2. Introduction to deep learning: artificial neural networks, supervised learning, stochastic gradient descent algorithms for training, and different architectures (convolutional neural networks, recurrent neural networks, ResNets).
3. Theoretical foundations: universal approximation properties of neural networks, bias-variance decomposition, and bounds on approximation and generalization errors.
4. Supervised deep learning for solution fields and observables of high-dimensional parametric PDEs; use of low-discrepancy sequences and multi-level training to reduce generalization error.
5. Uncertainty quantification for PDEs with supervised learning algorithms.
6. Deep neural networks as reduced-order models and prediction of solution fields.
7. Active learning algorithms for PDE-constrained optimization.
8. Recurrent neural networks and prediction of time series for dynamical systems.
9. Physics-informed neural networks (PINNs) for the forward problem for PDEs, with applications to high-dimensional PDEs.
10. PINNs for inverse problems for PDEs: parameter identification, optimal control, and data assimilation.

All the algorithms will be illustrated on a variety of PDEs: diffusion models, Black-Scholes-type PDEs from finance, wave equations, Euler and Navier-Stokes equations, hyperbolic systems of conservation laws, and dispersive PDEs, among others.

Lecture notes | Lecture notes will be provided at the end of the course.

Literature | All the material in the course is based on research articles written in the last 1-2 years. The relevant references will be provided.

Prerequisites / Notice | Students should be familiar with numerical methods for PDEs, for instance from courses such as Numerical Methods for PDEs for CSE, Numerical Analysis of Elliptic and Parabolic PDEs, Numerical Methods for Hyperbolic PDEs, or Computational Methods for Engineering Applications. Some familiarity with basic concepts in machine learning will be beneficial. The exercises in the course rely on standard machine learning frameworks such as Keras, TensorFlow, or PyTorch, so competence in Python is helpful.
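Topic 4 of the content list mentions low-discrepancy sequences as training points that reduce generalization error. As a hedged illustration (not course material), the following pure-Python sketch generates points of the classical Halton sequence, whose 2-D variant with coprime bases fills the unit square more evenly than independent random samples; the function name and bases are illustrative choices.

```python
def halton(index: int, base: int) -> float:
    """Return element `index` (starting at 1) of the van der Corput
    sequence in the given base, by reversing the base-`base` digits
    of `index` behind the radix point."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# 2-D Halton points use coprime bases, conventionally 2 and 3.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
for p in points:
    print(f"({p[0]:.4f}, {p[1]:.4f})")
```

The first coordinates come out as 1/2, 1/4, 3/4, 1/8, ..., i.e. each new point lands in the largest remaining gap, which is the intuition behind using such sequences for training-set design.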

401-5000-00L | Zurich Colloquium in Mathematics | 0 KP | | R. Abgrall, M. Iacobelli, A. Bandeira, A. Iozzi, S. Mishra, R. Pandharipande, university lecturers

Short description | The lectures aim to give an overview of "what is going on" in important areas of contemporary mathematics to a wider, non-specialised audience of mathematicians.

Learning objective |

401-5650-00L | Zurich Colloquium in Applied and Computational Mathematics | 0 KP | 1K | R. Abgrall, R. Alaifari, H. Ammari, R. Hiptmair, S. Mishra, S. Sauter, C. Schwab

Short description | Research colloquium

Learning objective |