Search result: Catalogue data in Autumn Semester 2022

Computer Science Master Information
Minors
Minor in Machine Learning
252-0535-00L Advanced Machine Learning
Type W, 10 ECTS credits, 3V + 2U + 4A. Lecturers: J. M. Buhmann, C. Cotrini Jimenez
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Objective: Students will become familiar with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistical knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, in which they implement and apply well-known algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting (a minimal bagging sketch follows this topic list)
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
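
As a concrete illustration of the ensemble topic above, here is a minimal bagging sketch in Python (an illustrative toy implementation using numpy and scikit-learn, not taken from the course material):

    # Bagging: train B trees on bootstrap resamples, aggregate by majority vote.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_fit(X, y, n_estimators=25, seed=0):
        rng = np.random.default_rng(seed)
        n = len(X)
        models = []
        for _ in range(n_estimators):
            idx = rng.integers(0, n, size=n)  # bootstrap resample with replacement
            models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return models

    def bagging_predict(models, X):
        votes = np.stack([m.predict(X) for m in models])  # shape (B, n_samples)
        # majority vote per sample across the ensemble
        return np.apply_along_axis(
            lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)
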
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature: C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
252-3005-00L Natural Language Processing (Restricted registration)
Number of participants limited to 400.
Type W, 7 ECTS credits, 3V + 3U + 1A. Lecturer: R. Cotterell
Abstract: This course presents topics in natural language processing with an emphasis on modern techniques, primarily focusing on statistical and deep learning approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.
Objective: The objective of the course is to learn the basic concepts in the statistical processing of natural languages. The course will be project-oriented so that the students can also gain hands-on experience with state-of-the-art tools and techniques.
Content: This course presents an introduction to general topics and techniques used in natural language processing today, primarily focusing on statistical approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.
Literature: Lectures will draw on textbooks such as the one by Jurafsky and Martin where appropriate, but will also make use of original research and survey papers.
263-2400-00L Reliable and Trustworthy Artificial Intelligence
Type W, 6 ECTS credits, 2V + 2U + 1A. Lecturer: M. Vechev
Abstract: Creating reliable, secure, robust, and fair machine learning models is a core challenge in artificial intelligence, and one of fundamental importance. The goal of the course is both to teach the mathematical foundations of this new and emerging area and to introduce students to the latest and most exciting research in the space.
Objective: Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of engineering and research problems. To facilitate deeper understanding, the course includes a group coding project where students will build a system based on the learned material.
Content: The course is split into three parts:

Robustness in Deep Learning
---------------------------------------

- Adversarial attacks and defenses on deep learning models (a minimal attack sketch follows this list).
- Automated certification of deep learning models (covering the major trends: convex relaxations and branch-and-bound methods as well as randomized smoothing).
- Certified training of deep neural networks to satisfy given properties (combining symbolic and continuous methods).
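
As a hedged illustration of the adversarial-attack bullet above (a standard FGSM-style sketch, not the course's official material), assuming the gradient of the loss with respect to the input is available:

    import numpy as np

    def fgsm_attack(x, grad_x, epsilon=0.03):
        # Fast Gradient Sign Method: step in the direction of the sign of the
        # loss gradient, staying inside an L-infinity ball of radius epsilon.
        x_adv = x + epsilon * np.sign(grad_x)
        return np.clip(x_adv, 0.0, 1.0)  # keep inputs in the valid pixel range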

Privacy of Machine Learning
-------------------------------------

- Threat models (e.g., stealing data, poisoning, membership inference, etc.).
- Attacking federated machine learning (across modalities such as vision, natural language, and tabular data).
- Differential privacy for defending machine learning (a minimal noise-mechanism sketch follows this list).
- Enforcing regulations with guarantees (e.g., via provable data minimization).
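
For illustration only (not course material), the Laplace mechanism is the textbook way to make a numeric query differentially private; a minimal sketch, assuming the query's L1 sensitivity is known:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, seed=0):
        # Release true_value with epsilon-differential privacy by adding
        # Laplace noise with scale sensitivity / epsilon.
        rng = np.random.default_rng(seed)
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)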

Fairness of Machine Learning
---------------------------------------

- Introduction to fairness (motivation, definitions).
- Enforcing individual fairness with guarantees (e.g., for vision or tabular data).
- Enforcing group fairness with guarantees.

More information here: Link.
Prerequisites / Notice: While not a formal requirement, the course assumes familiarity with basics of machine learning (especially linear algebra, gradient descent, and neural networks, as well as basic probability theory). These topics are usually covered in “Intro to ML” classes at most institutions (e.g., “Introduction to Machine Learning” at ETH).

For solving assignments, some programming experience in Python is expected.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed)
Personal Competencies: Creative Thinking (assessed); Critical Thinking (assessed)
263-3210-00L Deep Learning (Restricted registration)
Number of participants limited to 320.
Type W, 8 ECTS credits, 3V + 2U + 2A. Lecturers: T. Hofmann, F. Perez Cruz, N. Perraudin
Abstract: Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Objective: In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
Prerequisites / Notice: This is an advanced-level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial on how to train deep networks with tools like Torch or TensorFlow, although that may be a side benefit.

Participation in the course is subject to the following condition:
- Students must have taken the exam in Advanced Machine Learning (252-0535-00) or have acquired equivalent knowledge; see the exhaustive list below:

- Advanced Machine Learning (Link)
- Computational Intelligence Lab (Link)
- Introduction to Machine Learning (Link)
- Statistical Learning Theory (Link)
- Computational Statistics (Link)
- Probabilistic Artificial Intelligence (Link)
263-5005-00L Artificial Intelligence in Education
Type W, 3 ECTS credits, 1V + 0.5U. Lecturers: M. Sachan, T. Sinha
Abstract: Artificial intelligence (AI) methods have been shown to have a profound impact on educational technologies, where the great variety of tasks and data types enables us to benefit from AI techniques in many different ways. We will review relevant methods and applications of AI in various educational technologies, and work on problem sets and projects to solve problems in education with the help of AI.
Objective: The course will be centered around exploring methodological and system-focused perspectives on designing AI systems for education and analyzing educational data using AI methods. Students will be expected to (a) engage in presentations and active in-class and asynchronous discussion, and (b) work on problem sets exemplifying the use of educational data mining techniques.
Content: The course will start with an introduction to data mining techniques (e.g., prediction, structured discovery, visualization, and relationship mining) relevant to analyzing educational data. We will then continue with topics on personalization in AI-based educational technologies (e.g., learner modeling and knowledge tracing, self-improving AIED systems) while showcasing exemplary applications in areas such as content curation and dialog-based tutoring. Finally, we will cover ethical challenges associated with using AI in student-facing settings. Face-to-face meetings will be held every fortnight, although students will be expected to work individually on weekly tasks (e.g., discussing relevant literature, working on problems, preparing seminar presentations).
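
To make the knowledge-tracing topic above concrete, here is a minimal sketch of the classic Bayesian Knowledge Tracing update (standard textbook parameters; illustrative only, not course code):

    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        # One Bayesian Knowledge Tracing step: posterior probability that a
        # student has mastered a skill after observing one answer.
        if correct:
            evidence = p_know * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_know) * p_guess)
        else:
            evidence = p_know * p_slip
            posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
        # account for learning between practice opportunities
        return posterior + (1 - posterior) * p_learn
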
Lecture notes: Lecture slides will be made available on the course website.
Literature: No textbook is required, but there will be regularly assigned readings from the research literature, linked on the course website.
Prerequisites / Notice: There are no prerequisites for this class. However, it will help if the student has taken an undergraduate- or graduate-level class in statistics, data science, or machine learning. This class is appropriate for advanced undergraduates and master's students in Computer Science, as well as PhD students in other departments.
263-5210-00L Probabilistic Artificial Intelligence (Restricted registration)
Type W, 8 ECTS credits, 3V + 2U + 2A. Lecturer: A. Krause
Abstract: This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics.
Objective: How can we build systems that perform well in uncertain environments? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as robotics. The course is designed for graduate students.
Content: Topics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs; a value-iteration sketch follows this list)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
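
As promised above, a minimal value-iteration sketch for a finite MDP (illustrative toy code under standard assumptions, not course material):

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        # P: (A, S, S) transition probabilities, R: (S, A) expected rewards.
        V = np.zeros(P.shape[1])
        while True:
            # Bellman backup: Q[s, a] = R[s, a] + gamma * E_{s'}[V(s')]
            Q = R + gamma * np.einsum('asn,n->as', P, V).T
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
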
Prerequisites / Notice: Solid basic knowledge of statistics, algorithms, and programming.
The material covered in the course "Introduction to Machine Learning" is considered a prerequisite.
263-5255-00L Foundations of Reinforcement Learning (Restricted registration)
Does not take place this semester; the course will be offered again in FS23.
Number of participants limited to 190.
Type W, 5 ECTS credits, 2V + 2A. Lecturer: N. He
Abstract: Reinforcement learning (RL) has been in the limelight of many recent breakthroughs in artificial intelligence. This course focuses on theoretical and algorithmic foundations of reinforcement learning, through the lens of optimization, modern approximation, and learning theory. The course targets M.S. students with strong research interests in reinforcement learning, optimization, and control.
Objective: This course aims to provide students with an advanced introduction to RL theory and algorithms, and to bring them near the frontier of this active research field.

By the end of the course, students will be able to
- Identify the strengths and limitations of various reinforcement learning algorithms;
- Formulate and solve sequential decision-making problems by applying relevant reinforcement learning tools;
- Generalize or discover “new” applications, algorithms, or theories of reinforcement learning towards conducting independent research on the topic.
Content: Basic topics include fundamentals of Markov decision processes, approximate dynamic programming, linear programming and primal-dual perspectives of RL, model-based and model-free RL, policy gradient and actor-critic algorithms, and Markov games and multi-agent RL. If time allows, we will also discuss advanced topics such as batch RL, inverse RL, and causal RL. The course places strong emphasis on an in-depth understanding of the mathematical modeling and theoretical properties of RL algorithms.
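
For orientation, the Bellman optimality equation underlying the dynamic-programming topics above, in standard notation (not taken from the course notes):

    V^{*}(s) = \max_{a \in \mathcal{A}} \Big[ r(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big]
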
Lecture notes: Lecture notes will be posted on Moodle.
Literature: Dynamic Programming and Optimal Control, Vol. I & II, Dimitri Bertsekas.
Reinforcement Learning: An Introduction, Second Edition, Richard Sutton and Andrew Barto.
Algorithms for Reinforcement Learning, Csaba Szepesvári.
Reinforcement Learning: Theory and Algorithms, Alekh Agarwal, Nan Jiang, Sham M. Kakade.
Prerequisites / Notice: Students are expected to have a strong mathematical background in linear algebra, probability theory, optimization, and machine learning.
263-5300-00L Guarantees for Machine Learning (Restricted registration)
Number of participants limited to 30.
Type W, 7 ECTS credits, 3V + 1U + 2A. Lecturers: F. Yang, A. Sanyal
Abstract: This course is aimed at advanced master's and doctoral students who want to conduct independent research on theory for modern machine learning (ML). It teaches standard methods from statistical learning theory commonly used to prove theoretical guarantees for ML algorithms. This knowledge is then applied in independent project work to understand and follow up on recent theoretical ML results.
Objective: By the end of the semester, students should be able to:

- understand a good fraction of theory papers published in the typical ML venues. For this purpose, students will learn common mathematical techniques from statistical learning in the first part of the course and apply this knowledge in the project work

- critically examine recently published work in terms of relevance and find impactful (novel) research problems. This will be an integral part of the project work and involves experimental as well as theoretical questions

- outline a possible approach to prove a conjectured theorem, e.g., by reducing it to more tractable subproblems. This will be practiced in in-person exercises, homework assignments, and potentially in the final project

- effectively communicate and present the problem motivation, new insights and results to a technical audience. This will be primarily learned via the final presentation and report as well as during peer-grading of peer talks.
Content: This course covers foundational methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms. It touches on the following topics:
- concentration bounds (a representative inequality follows this list)
- uniform convergence and empirical process theory
- regularization for non-parametric statistics (e.g. in RKHS, neural networks)
- high-dimensional learning
- computational and statistical learnability (information-theoretic, PAC, SQ)
- overparameterized models, implicit bias and regularization
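
As a representative example of the concentration bounds listed above (the standard form, not course-specific material), Hoeffding's inequality for i.i.d. random variables X_1, ..., X_n taking values in [a, b]:

    \Pr\left( \left| \frac{1}{n} \sum_{i=1}^{n} X_i - \mathbb{E}[X_1] \right| \ge t \right) \le 2 \exp\!\left( -\frac{2 n t^{2}}{(b-a)^{2}} \right)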

The project work focuses on current theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to
- how overparameterized models generalize (statistically) and converge (computationally)
- complexity measures and approximation theoretic properties of randomly initialized and trained neural networks
- generalization of robust learning (adversarial or distribution-shift robustness)
- private and fair learning
Prerequisites / Notice: Students should have a very strong mathematical background (real analysis, probability theory, linear algebra) and solid knowledge of core concepts in machine learning taught in courses such as “Introduction to Machine Learning”, “Regression”/“Statistical Modelling”. In addition to these prerequisites, this class requires a high degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs.

Students have usually taken a subset of: Fundamentals of Mathematical Statistics, Probabilistic AI, Neural Network Theory, Optimization for Data Science, Advanced ML, Statistical Learning Theory, and Probability Theory (D-MATH).
Competencies
Subject-specific Competencies: Concepts and Theories (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed)
Social Competencies: Communication (assessed); Cooperation and Teamwork (assessed)
Personal Competencies: Creative Thinking (assessed); Critical Thinking (assessed)
263-5353-00L Philosophy of Language and Computation
Type W, 5 ECTS credits, 2V + 1U + 1A. Lecturers: R. Cotterell, J. L. Gastaldi
Abstract: Understand the philosophical underpinnings of language-based artificial intelligence.
Objective: This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which centers on statistical machine learning applied to natural language data.
Content: This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which centers on statistical machine learning applied to natural language data. The course is a year-long journey, but the second half (Spring 2023) does not depend on the first (Fall 2022), and thus either half may be taken independently. In each semester, we divide the class time into three modules. Each module is centered around a philosophical topic. In the first semester we will discuss structuralism, recursive structure, and logic, and in the second semester we will focus on language games, information, and pragmatics. The modules will be four weeks long. During the first two weeks of a module, we will read and discuss original texts and supplementary criticism. During the second two weeks, we will read recent NLP papers and discuss how the authors of those works are building, perhaps implicitly or unwittingly, on philosophical insights into our conception of language.
Literature: The literature will be provided by the instructors on the class website.