Bernd Gärtner: Catalogue data in Spring Semester 2018

Name: Prof. Dr. Bernd Gärtner
Address:
Inst. f. Theoretische Informatik
ETH Zürich, OAT Z 15
Andreasstrasse 5
8092 Zürich
SWITZERLAND
Telephone: +41 44 632 70 26
Fax: +41 44 632 10 63
E-mail: gaertner@inf.ethz.ch
URL: http://people.inf.ethz.ch/gaertner/
Department: Computer Science
Relationship: Adjunct Professor

Number: 252-4202-00L
Title: Seminar in Theoretical Computer Science
ECTS: 2 credits
Hours: 2S
Lecturers: E. Welzl, B. Gärtner, M. Hoffmann, J. Lengler, A. Steger, B. Sudakov
Abstract: Presentation of recent publications in theoretical computer science, including results by diploma, Master's and doctoral candidates.
Objective: To get an overview of current research in the areas covered by the participating research groups, and to present results from the literature.
Number: 252-4220-00L
Title: A Taste of Research: Algorithms and Combinatorics
Note: Restricted registration; number of participants limited to 16.
ECTS: 2 credits
Hours: 2S
Lecturers: B. Gärtner, A. Steger, M. Ghaffari
Abstract: Students work together with lecturers on open problems in algorithms and combinatorics.
Objective: The goal is to learn and practice important research techniques: literature search, understanding and presenting research papers, developing ideas in the group, testing conjectures with the computer, and writing up results.
Content: Work on original research papers and open problems in the areas of algorithms and combinatorics.
Lecture notes: Not available.
Literature: Will be announced in the seminar.
Prerequisites / Notice: Passed exam in Algorithms, Probability, and Computing.
Number: 261-5110-00L
Title: Optimization for Data Science
ECTS: 8 credits
Hours: 3V + 2U + 2A
Lecturers: B. Gärtner, D. Steurer
Abstract: This course gives an overview of modern optimization methods, with applications in machine learning and data science in particular.
Objective: Understanding the theoretical and practical aspects of relevant optimization methods used in data science. Learning general paradigms for dealing with optimization problems arising in data science.
Content: This course gives an overview of modern optimization methods, with applications in machine learning and data science in particular.

In the first part of the course, we discuss how classical first- and second-order methods such as gradient descent and Newton's method can be adapted to scale to large datasets, in theory and in practice. We also cover some newer algorithms and paradigms that have been developed specifically in the context of data science. The emphasis is not so much on the application of these methods (many of which are covered in other courses) as on understanding and analyzing the methods themselves.
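
To give a flavor of this first theme, here is a minimal sketch (illustrative only; the concrete problem, step sizes, and constants are our own choices, not course material): stochastic gradient descent scales plain gradient descent to large datasets by replacing the full gradient, whose cost grows with the dataset size, with the gradient at a single randomly chosen data point.

# Minimal sketch: stochastic gradient descent on a least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)    # noisy linear measurements

def sgd_least_squares(A, b, steps=50_000, lr=0.02):
    """Minimize (1/n) * sum_i (a_i^T x - b_i)^2, one random sample per step."""
    x = np.zeros(A.shape[1])
    for t in range(steps):
        i = rng.integers(A.shape[0])                 # pick one data point
        grad_i = 2.0 * (A[i] @ x - b[i]) * A[i]      # gradient of the i-th term
        x -= (lr / np.sqrt(t + 1)) * grad_i          # decaying step size
    return x

x_hat = sgd_least_squares(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Each stochastic step costs O(d) rather than the O(nd) of a full gradient step, which is precisely what makes the method scale.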

In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms for computational problems arising in data science. We develop a unified perspective on this paradigm through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation.
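
As a small illustrative instance of the relaxation paradigm (much simpler than the sum-of-squares hierarchy itself; the problem sizes and regularization parameter below are our own choices): in sparse linear regression, the combinatorial constraint "at most k nonzero coefficients" is commonly relaxed to an l1 penalty, yielding the convex LASSO problem, which can be solved by proximal gradient descent (ISTA).

# Minimal sketch: l1 relaxation for sparse linear regression, solved by ISTA.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 100, 400, 5                           # fewer measurements than unknowns
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true                                  # noiseless measurements

def ista(A, b, lam=0.01, steps=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, b)
print("true support:     ", np.sort(support))
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])

In this regime (n comfortably above k log d), the convex relaxation typically recovers the support of the sparse signal exactly, even though n < d.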
Prerequisites / Notice: As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary.