Niao He: Catalogue data in Spring Semester 2021

Name: Prof. Dr. Niao He
Field: Computer Science
Address:
Professorship of Computer Science
ETH Zürich, OAT Y 21.1
Andreasstrasse 5
8092 Zürich
SWITZERLAND
E-mail: niao.he@inf.ethz.ch
URL: https://odi.inf.ethz.ch/
Department: Computer Science
Relationship: Associate Professor

Number | Title | ECTS | Hours | Lecturers
252-0945-12L | Doctoral Seminar Machine Learning (FS21)
Only for Computer Science Ph.D. students.

This doctoral seminar is intended for PhD students affiliated with the Institute for Machine Learning. Other PhD students who work on machine learning projects or related topics need approval by at least one of the organizers to register for the seminar.
2 credits | 1S | N. He, M. Sachan, J. M. Buhmann, T. Hofmann, A. Krause, G. Rätsch
Abstract: An essential aspect of any research project is the dissemination of the findings arising from the study. Here we focus on oral communication, which includes the appropriate selection of material, the preparation of visual aids (slides and/or posters), and presentation skills.
Learning objective: Seminar participants should learn how to prepare and deliver scientific talks and how to deal with technical questions. Participants are also expected to contribute actively to the discussions during presentations by others, thus learning and practicing critical-thinking skills.
Prerequisites / Notice: This doctoral seminar of the Machine Learning Laboratory of ETH is intended for PhD students working on a machine learning project, i.e., the PhD students of the ML lab.
261-5110-00L | Optimization for Data Science | 10 credits | 3V + 2U + 4A | B. Gärtner, D. Steurer, N. He
Abstract: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science.
Learning objective: Understanding the theoretical guarantees (and their limits) of optimization methods relevant to data science. Learning general paradigms for dealing with optimization problems arising in data science.
Content: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science.

In the first part of the course, we give a brief introduction to convex optimization, with some basic motivating examples from machine learning. We then analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, Nesterov's accelerated method, proximal and splitting algorithms, subgradient descent, stochastic gradient descent, variance-reduced methods, Newton's method, and quasi-Newton methods. The emphasis is on analysis techniques that recur in the convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization.
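As a rough illustration of two of the first-order methods named above (a minimal sketch under simplified assumptions, not course material), the following NumPy snippet compares plain gradient descent with Nesterov's accelerated method on a synthetic smooth convex quadratic; the problem instance, step size, and iteration counts are illustrative choices.

    import numpy as np

    # Illustrative sketch: gradient descent vs. Nesterov's accelerated
    # method on f(x) = 0.5 * x^T A x - b^T x, with gradient A x - b.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    A = A.T @ A + np.eye(50)          # symmetric positive definite
    b = rng.standard_normal(50)
    L = np.linalg.eigvalsh(A).max()   # smoothness constant (largest eigenvalue)

    def grad(x):
        return A @ x - b

    x_star = np.linalg.solve(A, b)    # exact minimizer, for reference

    # Gradient descent with the classical 1/L step size: O(1/k) suboptimality.
    x = np.zeros(50)
    for _ in range(200):
        x = x - grad(x) / L

    # Nesterov's accelerated method: O(1/k^2) suboptimality.
    y, x_acc, t = np.zeros(50), np.zeros(50), 1.0
    for _ in range(200):
        x_next = y - grad(y) / L
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x_acc)
        x_acc, t = x_next, t_next

    print("GD error:      ", np.linalg.norm(x - x_star))
    print("Nesterov error:", np.linalg.norm(x_acc - x_star))

On such well-conditioned instances the accelerated iterate is typically markedly closer to the minimizer after the same number of gradient evaluations, which is the contrast the convergence analyses in the course make precise.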

In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms for computational problems arising in data science. We develop a unified perspective on this paradigm through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, and robust estimation.
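As a rough illustration of the relaxation paradigm (using the simpler L1 relaxation for sparse recovery rather than the sum-of-squares hierarchy covered in the course), the following sketch assumes the third-party cvxpy library; the problem dimensions and data are illustrative.

    import numpy as np
    import cvxpy as cp

    # Illustrative sketch: compressed sensing via the L1 convex relaxation.
    # We observe b = A x0 for a sparse x0 and recover it by minimizing the
    # L1 norm subject to the linear measurements.
    rng = np.random.default_rng(1)
    n, m, k = 200, 60, 5              # ambient dimension, measurements, sparsity
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    b = A @ x0

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
    prob.solve()

    print("recovery error:", np.linalg.norm(x.value - x0))

The combinatorial problem (find the sparsest x with A x = b) is intractable in general; replacing the sparsity count with the convex L1 norm yields a tractable program that, under suitable conditions on A, provably recovers x0. This replace-hard-objective-by-convex-surrogate step is the pattern the course generalizes.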
Prerequisites / Notice: As background, we require the material taught in the course "252-0209-00L Algorithms, Probability, and Computing". Participants need not have actually taken that course, but they should be prepared to catch up if necessary.