Bernd Gärtner: Catalogue data in Spring Semester 2018
Name | Prof. Dr. Bernd Gärtner |
Address | Inst. f. Theoretische Informatik, ETH Zürich, OAT Z 15, Andreasstrasse 5, 8092 Zürich, SWITZERLAND |
Telephone | +41 44 632 70 26 |
Fax | +41 44 632 10 63 |
E-mail | gaertner@inf.ethz.ch |
URL | http://people.inf.ethz.ch/gaertner/ |
Department | Computer Science |
Relationship | Adjunct Professor |
Number | Title | ECTS | Hours | Lecturers |
---|---|---|---|---|
252-4202-00L | Seminar in Theoretical Computer Science | 2 credits | 2S | E. Welzl, B. Gärtner, M. Hoffmann, J. Lengler, A. Steger, B. Sudakov |
Abstract | Presentation of recent publications in theoretical computer science, including results by diploma, master's, and doctoral candidates. |
Objective | To get an overview of current research in the areas covered by the participating research groups, and to present results from the literature. |
252-4220-00L | A Taste of Research: Algorithms and Combinatorics (number of participants limited to 16) | 2 credits | 2S | B. Gärtner, A. Steger, M. Ghaffari |
Abstract | Students work together with lecturers on open problems in algorithms and combinatorics. |
Objective | The goal is to learn and practice important research techniques: literature search, understanding and presenting research papers, developing ideas in the group, testing conjectures with the computer, and writing up results. |
Content | Work on original research papers and open problems in the areas of algorithms and combinatorics. |
Lecture notes | Not available. |
Literature | Will be announced in the seminar. |
Prerequisites / Notice | Passed exam in Algorithms, Probability, and Computing. |
261-5110-00L | Optimization for Data Science | 8 credits | 3V + 2U + 2A | B. Gärtner, D. Steurer |
Abstract | This course provides an overview of modern optimization methods, with applications in machine learning and data science in particular. |
Objective | Understanding the theoretical and practical aspects of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science. |
Content | In the first part of the course, we discuss how classical first- and second-order methods such as gradient descent and Newton's method can be adapted to scale to large datasets, in theory and in practice (a minimal illustrative sketch of this theme follows this entry). We also cover some newer algorithms and paradigms that have been developed specifically in the context of data science. The emphasis is not so much on the application of these methods (many of which are covered in other courses), but on understanding and analyzing the methods themselves. In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms for computational problems arising in data science. We develop a unified perspective on this paradigm through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation. |
Prerequisites / Notice | As background, we require the material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken that course, but they should be prepared to catch up if necessary. |
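As a rough illustration of the theme named in the Content field above, and not of the actual course material, the following sketch contrasts classical gradient descent, whose per-step cost grows with the number of data points, against mini-batch stochastic gradient descent on a synthetic least-squares problem. All names, sizes, and step-size choices here are assumptions made for the example.

```python
# Illustrative sketch (not course material): adapting gradient descent to
# large datasets via stochastic mini-batches, on a least-squares objective
#   f(x) = (1/2n) * ||A x - b||^2,  with gradient (1/n) * A^T (A x - b).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: n samples, d features (sizes chosen arbitrarily).
n, d = 10_000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)


def full_gradient_descent(A, b, steps=200, lr=0.1):
    """Classical gradient descent: each step touches all n rows of A."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b) / len(b)
        x -= lr * grad
    return x


def minibatch_sgd(A, b, steps=200, lr=0.1, batch=64):
    """Stochastic variant: each step uses an unbiased gradient estimate
    from a random mini-batch, so the per-step cost is O(batch * d)
    instead of O(n * d)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        idx = rng.integers(0, len(b), size=batch)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch
        x -= lr * grad
    return x


for name, x_hat in [("GD ", full_gradient_descent(A, b)),
                    ("SGD", minibatch_sgd(A, b))]:
    print(name, "distance to x_true:", np.linalg.norm(x_hat - x_true))
```

The design point the sketch illustrates: the mini-batch gradient is an unbiased estimate of the full gradient, so each step becomes roughly n/batch times cheaper at the price of added variance. Making this trade-off precise is the kind of analysis the course description refers to.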