Thomas Hofmann: Catalogue data in Spring Semester 2023 |
Name | Prof. Dr. Thomas Hofmann |
Field | Data Analytics |
Address | Dep. Informatik ETH Zürich, CAB F 48.1 Universitätstrasse 6 8092 Zürich SWITZERLAND |
E-mail | thomas.hofmann@inf.ethz.ch |
URL | http://www.inf.ethz.ch/department/faculty-profs/person-detail.html?persid=148752 |
Department | Computer Science |
Relationship | Full Professor |
Number | Title | ECTS | Hours | Lecturers |
---|---|---|---|---|
252-0055-00L | Information Theory | 4 credits | 2V + 1U | T. Hofmann |
Abstract | This short course on information theory will introduce fundamental concepts such as entropy, information, sufficiency, typicality, and concentration, and will present a range of topics from data coding, statistics, inference, decision-making, and learning that relate in interesting ways to information theory. |
Learning objective | The goal of the course is to familiarize students with the foundations of information theory and to illustrate its practical use across a wide range of applications. |
Content | Part 1: Information - Entropy & Information, Sufficiency, Typicality & Concentration. Part 2: Coding - Data Compression, Rate-Distortion Theory, Channel Coding. Part 3: Inference - Statistical Inference, Maximum Entropy Inference, Algorithmic Complexity. Part 4: Decisions - Betting Games, Optimal Investment, Evolution. Part 5: Learning - Memory, Auto-encoding. (May be subject to change.) |
Lecture notes | A script will be distributed over the course of the semester. |
Literature | T. Cover, J. Thomas: Elements of Information Theory, John Wiley, 2006 (2nd edition). D. MacKay: Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003. |
Prerequisites / Notice | Prerequisites: Probability and Statistics |
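The entropy concept from Part 1 of the course content admits a one-line numerical illustration. The following sketch is not part of the course materials; it simply computes the Shannon entropy H(X) = -Σᵢ pᵢ log₂ pᵢ of a discrete distribution, with the usual convention that zero-probability outcomes contribute nothing:

```python
from math import log2

def entropy(p):
    """Shannon entropy H(X) = -sum(p_i * log2(p_i)) in bits.

    Zero-probability terms are skipped, following the convention 0 * log 0 = 0.
    """
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * log2(pi) for pi in p if pi > 0)

entropy([0.5, 0.5])              # fair coin: 1 bit
entropy([1.0])                   # deterministic outcome: 0 bits
entropy([0.25, 0.25, 0.25, 0.25])  # uniform over 4 outcomes: 2 bits
```

A uniform distribution over n outcomes attains the maximum entropy log₂ n, which is why the fair coin gives exactly 1 bit.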
252-0945-16L | Doctoral Seminar Machine Learning (FS23). Only for Computer Science Ph.D. students. This doctoral seminar is intended for PhD students affiliated with the Institute for Machine Learning; other PhD students who work on machine learning projects or related topics need approval from at least one of the organizers to register for the seminar. | 2 credits | 1S | N. He, V. Boeva, J. M. Buhmann, R. Cotterell, T. Hofmann, A. Krause, M. Sachan, J. Vogt, F. Yang |
Abstract | An essential aspect of any research project is dissemination of the findings arising from the study. Here we focus on oral communication, which includes: appropriate selection of material, preparation of visual aids (slides and/or posters), and presentation skills. |
Learning objective | Seminar participants should learn how to prepare and deliver scientific talks and how to handle technical questions. Participants are also expected to contribute actively to discussions during presentations by others, thus learning and practicing critical-thinking skills. |
Prerequisites / Notice | This doctoral seminar of the Machine Learning Laboratory of ETH is intended for PhD students who work on a machine learning project, i.e., for the PhD students of the ML lab. |
263-0008-00L | Computational Intelligence Lab. Only for Master's students; otherwise, special permission from the study administration of D-INFK is required. | 8 credits | 2V + 2U + 3A | T. Hofmann |
Abstract | This laboratory course teaches fundamental concepts in computational science and machine learning with a special emphasis on matrix factorization and representation learning. The class covers techniques such as dimension reduction, data clustering, sparse coding, and deep learning, as well as a wide spectrum of related use cases and applications. |
Learning objective | Students acquire fundamental theoretical concepts and methodologies from machine learning and learn how to apply these techniques to build intelligent systems that solve real-world problems. They learn to successfully develop solutions to application problems by following the key steps of modeling, algorithm design, implementation, and experimental validation. This lab course has a strong focus on practical assignments. Students work in groups of three to four people to develop solutions to three application problems: 1. collaborative filtering and recommender systems, 2. text sentiment classification, and 3. road segmentation in aerial imagery. For each of these problems, students submit their solutions to an online evaluation and ranking system and receive feedback in terms of numerical accuracy and computational speed. In the final part of the course, students combine and extend one of their previous promising solutions and write up their findings in an extended abstract in the style of a conference paper. (Disclaimer: the offered projects may be subject to change from year to year.) |
Content | See course description. |
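The matrix-factorization emphasis of this course, and its collaborative-filtering assignment in particular, can be illustrated in a few lines. The sketch below is not course material; it shows truncated-SVD low-rank approximation of a toy user-item rating matrix, one standard matrix-factorization technique (the matrix `R` and rank choices are invented for illustration):

```python
import numpy as np

# Toy rating matrix (rows: users, columns: items) -- illustrative values only
R = np.array([[5., 4., 1.],
              [4., 5., 2.],
              [1., 2., 5.]])

def low_rank_approx(M, k):
    """Best rank-k approximation of M in the Frobenius norm, via truncated SVD.

    By the Eckart-Young theorem, keeping the k largest singular values
    minimizes the Frobenius reconstruction error over all rank-k matrices.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

R2 = low_rank_approx(R, 2)  # rank-2 reconstruction of the ratings
```

In a recommender setting, the low-rank reconstruction smooths the observed ratings and can be read as predicted scores; higher rank reduces reconstruction error but fits noise more closely.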
263-5002-00L | Generative Visual Models. The deadline for deregistering expires at the end of the second week of the semester. Students who are still registered after that date but do not attend the seminar will officially fail the seminar. | 4 credits | 2S + 2A | T. Hofmann |
Abstract | This seminar investigates generative models for image synthesis, which can be controlled via language prompts and visual seeding. The relevant methods will be explained in a few initial classes. Participants will study the research literature and develop project ideas in small groups, which will then be implemented. Presentation of research papers, project ideas, and results is a key component. |
Learning objective | The goal of this class is for participants to find, read, understand, and critically assess research literature in order to reach the current state of knowledge in the field. Moreover, the project work aims to enrich these readings with hands-on experience and allows students to develop creative ideas of their own. This is meant to provide a holistic research experience in small teams. |
Content | Phase 1: Introduction & Background. During the first weeks of the semester, lectures will provide the technical background needed to understand visual generative models. This includes a historical overview as well as technical deep dives into specialized topics such as stable diffusion and contrastive learning. There will also be a tutorial on a suitable software framework for exploring and fine-tuning such models. Each participant will complete a graded pen & paper exercise to check progress (20% of the grade; graded on correctness of the answers). Phase 2: Reading & Planning. In the second phase, participants will split into teams (ideal size: 3) and perform independent reading and planning towards a project idea. Paper suggestions and project sketches will be distributed to provide guidance and inspiration. During this time, participants are also expected to familiarize themselves with the experimental setup (we will host models locally on our GPU servers) and perform some simple warm-up or proof-of-concept experiments to inform the project definition. Each group will give a 15+5 min project pitch and will give and receive feedback from other teams (30% of the grade; graded on creativity of the idea, clarity of project articulation, and recognition of existing work). Phase 3: Project Execution & Presentation. In the third phase, teams will implement their project and run the designed experiments to answer the articulated research questions or goals. Participants will have (limited) access to local GPU servers. Each group will produce a written project report and deliver a presentation (50% of the grade; graded on success of the project, quality of the experiments, and quality of the slides/presentation). |
Prerequisites / Notice | This hybrid course unit is open to Master's students enrolled in the Computer Science or Data Science Master's program. Enrollment is limited to 20 students. A sufficient background in machine learning (e.g. 252-0220-00L Intro ML, 252-0535-00L Advanced ML) is assumed. The workload during Phases 1-2 will be moderate, but during Phase 3 we expect more intense team work. |
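Contrastive learning, one of the deep-dive topics named in Phase 1, can be sketched without any course material: the InfoNCE-style objective scores each image-text or image-image pair against all others in a batch and penalizes the model unless matching pairs are the most similar. The minimal NumPy version below is purely illustrative (function name, temperature value, and embeddings are invented for this sketch):

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z_a should match row i of z_b.

    Embeddings are L2-normalized, pairwise cosine similarities are scaled by a
    temperature, and the loss is the cross-entropy of picking the matching
    partner (the diagonal) among all candidates in the batch.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # similarity of every pair
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal
```

When matching pairs are already the most similar entries in the batch, the loss is near zero; mismatched pairings drive it up, which is the gradient signal that pulls paired embeddings together.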