Niao He: Catalogue data in Spring Semester 2021
Name | Prof. Dr. Niao He |
Field | Computer Science |
Address | Professur für Informatik, ETH Zürich, OAT Y 21.1, Andreasstrasse 5, 8092 Zürich, SWITZERLAND |
E-Mail | niao.he@inf.ethz.ch |
URL | https://odi.inf.ethz.ch/ |
Department | Computer Science |
Relationship | Associate Professor |
Number | Title | ECTS | Hours | Lecturers |
---|---|---|---|---|
252-0945-12L | Doctoral Seminar Machine Learning (FS21). Only for Computer Science PhD students. This doctoral seminar is intended for PhD students affiliated with the Institute for Machine Learning. Other PhD students who work on machine learning projects or related topics need approval by at least one of the organizers to register for the seminar. | 2 credits | 1S | N. He, M. Sachan, J. M. Buhmann, T. Hofmann, A. Krause, G. Rätsch |
Abstract | An essential aspect of any research project is dissemination of the findings arising from the study. Here we focus on oral communication, which includes appropriate selection of material, preparation of visual aids (slides and/or posters), and presentation skills. | | | |
Learning objective | The seminar participants should learn how to prepare and deliver scientific talks as well as how to deal with technical questions. Participants are also expected to actively contribute to discussions during presentations by others, thus learning and practicing critical thinking skills. | | | |
Prerequisites / Notice | This doctoral seminar of the Machine Learning Laboratory of ETH is intended for PhD students of the lab who work on a machine learning project. | | | |
261-5110-00L | Optimization for Data Science | 10 credits | 3V + 2U + 4A | B. Gärtner, D. Steurer, N. He |
Abstract | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science. | | | |
Learning objective | Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science. | | | |
Content | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science. In the first part of the course, we will give a brief introduction to convex optimization, with some basic motivating examples from machine learning. We will then analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, Nesterov's accelerated method, proximal and splitting algorithms, subgradient descent, stochastic gradient descent, variance-reduced methods, Newton's method, and quasi-Newton methods. The emphasis will be on analysis techniques that occur repeatedly in convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization. In the second part, we will discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms to solve computational problems arising in data science. We will learn about this paradigm and develop a unified perspective on it through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we will discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation. (A small illustrative sketch of two of the first-order methods listed here follows the table.) | | | |
Prerequisites / Notice | As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary. | | | |
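
As a rough illustration of two of the first-order methods listed in the course content above, the following minimal Python sketch compares plain gradient descent with Nesterov's accelerated method on a small synthetic strongly convex quadratic. The problem instance, the 1/L step size, and the iteration count are arbitrary choices made only for this illustration; they are not taken from the course material.

```python
import numpy as np

# Minimal illustrative sketch (not course material): plain gradient descent
# versus Nesterov's accelerated method on a synthetic strongly convex
# quadratic f(x) = 0.5 * x^T A x - b^T x. Instance size, step size 1/L,
# and iteration count are arbitrary choices for this demonstration.

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q.T @ Q + np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # exact minimizer, used only to measure error

def grad(x):
    """Gradient of f at x."""
    return A @ x - b

L = np.linalg.eigvalsh(A).max()    # smoothness constant = largest eigenvalue of A

def gradient_descent(steps=200):
    x = np.zeros(n)
    for _ in range(steps):
        x = x - grad(x) / L        # classical 1/L step size
    return x

def nesterov(steps=200):
    x = y = np.zeros(n)
    t = 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                          # gradient step at extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

for name, x in (("gradient descent", gradient_descent()),
                ("Nesterov", nesterov())):
    print(f"{name}: ||x - x*|| = {np.linalg.norm(x - x_star):.2e}")
```

On a quadratic like this, both iterates approach the minimizer, and the accelerated method typically reaches a given accuracy in fewer iterations; the convergence analyses treated in the first part of the course make this kind of comparison precise.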