Catalogue data in Spring Semester 2022

Statistics Master
The following courses belong to the curriculum of the Master's Programme in Statistics. The corresponding credits do not count as external credits even for course units where an enrolment at ETH Zurich is not possible.
Master Studies (Programme Regulations 2020)
Core Courses
Statistical Modelling
Course units are offered in the autumn semester.
Applied Statistics
Number | Title | Type | ECTS | Hours | Lecturers
401-3632-00L | Computational Statistics | W | 8 credits | 3V + 1U | N. Meinshausen
Abstract: We discuss modern statistical methods for data analysis, including methods for data exploration, prediction and inference. We pay attention to algorithmic aspects, theoretical properties and practical considerations. The class is hands-on and methods are applied using the statistical programming language R.
Learning objective: The student obtains an overview of modern statistical methods for data analysis, including their algorithmic aspects and theoretical properties. The methods are applied using the statistical programming language R.
Content: See the class website.
Prerequisites / Notice: At least one semester of (basic) probability and statistics.

Programming experience is helpful but not required.
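
For a flavour of the hands-on approach, here is a minimal R sketch (our own illustration, not course material) comparing a linear model with a smoothing spline on held-out data:

```r
# Minimal sketch: fit a linear model and a smoothing spline on simulated
# data and compare their prediction error on a test set.
set.seed(1)
n <- 200
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = 0.3)
train <- sample(n, 100)

fit_lm <- lm(y ~ x, data = data.frame(x, y)[train, ])
fit_ss <- smooth.spline(x[train], y[train])

x_test <- x[-train]; y_test <- y[-train]
mse_lm <- mean((y_test - predict(fit_lm, data.frame(x = x_test)))^2)
mse_ss <- mean((y_test - predict(fit_ss, x_test)$y)^2)
c(linear = mse_lm, spline = mse_ss)  # the spline captures the nonlinearity
```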
Mathematical Statistics
Course units are offered in the autumn semester.
Subject Specific Electives
NumberTitleTypeECTSHoursLecturers
252-3900-00L | Big Data for Engineers
This course is not intended for Computer Science and Data Science MSc students!
W | 6 credits | 2V + 2U + 1A | G. Fourny
Abstract: This course is part of the series of database lectures offered to all ETH departments, together with Information Systems for Engineers. It introduces the most recent advances in the database field: how do we scale storage and querying to petabytes of data with trillions of records? How do we deal with heterogeneous data sets? How do we deal with alternative data shapes such as trees and graphs?
Learning objective: This course is complementary to Information Systems for Engineers, as the two cover different periods of database history and practice -- you can even take both lectures at the same time.

The key challenge of the information society is to turn data into information, information into knowledge, and knowledge into value. This has become increasingly complex: data comes in larger volumes, in more diverse shapes, and from more varied sources. Data is more heterogeneous and less structured than it was forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.

This combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".

Unfortunately, the quantity of data produced and available -- now in the zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were and still are needed. Harnessing them requires a deep understanding of data not only in the large, but also in the small.

The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture is on paradigms and core design ideas, while today's technologies serve as supporting illustrations thereof.

After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies for addressing each business use case efficiently and consistently.
Content: This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe.

It specifically targets students with a scientific or engineering background, rather than a Computer Science background.

We take the monolithic, one-machine relational stack from the 1970s, smash it down and rebuild it on top of large clusters: starting with distributed storage, and going all the way up to syntax, models, validation, processing, indexing, and querying. A broad range of aspects is covered, with a focus on how they all fit together in the big picture of the Big Data ecosystem.

No data is harmed during this course; however, please be psychologically prepared that our data may not always be in normal form.

- physical storage: distributed file systems (HDFS), object storage (S3), key-value stores

- logical storage: document stores (MongoDB), column stores (HBase)

- data formats and syntaxes (XML, JSON, RDF, CSV, YAML, protocol buffers, Avro)

- data shapes and models (tables, trees)

- type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +)

- an overview of functional, declarative programming languages across data shapes (SQL, JSONiq)

- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)

- paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark)

- resource management (YARN)

- what a data center is made of and why it matters (racks, nodes, ...)

- underlying architectures (internal machinery of HDFS, HBase, Spark)

- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)

- applications.

Large scale analytics and machine learning are outside of the scope of this course.
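
For orientation, the core query paradigms listed above can be illustrated in a few lines of base R (an illustration only; the course itself works in SQL and JSONiq):

```r
# Illustration of selection, projection, join, grouping and ordering
# on plain data frames; all names here are made up for the example.
orders <- data.frame(id = 1:6,
                     customer = c("a", "b", "a", "c", "b", "a"),
                     amount   = c(10, 25, 5, 40, 15, 30))
customers <- data.frame(customer = c("a", "b", "c"),
                        country  = c("CH", "DE", "FR"))

sel     <- orders[orders$amount > 10, ]               # selection
proj    <- sel[, c("customer", "amount")]             # projection
joined  <- merge(proj, customers, by = "customer")    # join
grouped <- aggregate(amount ~ country, joined, sum)   # grouping
grouped[order(-grouped$amount), ]                     # ordering
```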
Literature: Papers from scientific conferences and journals. References will be given as part of the course material during the semester.
Prerequisites / Notice: This course is not intended for Computer Science and Data Science students. Computer Science and Data Science students interested in Big Data MUST attend the Master's level Big Data lecture, offered in Fall.

Requirements: programming knowledge (Java, C++, Python, PHP, ...) as well as basic knowledge of databases (SQL). If you have already built your own website with a backend SQL database, this is perfect.

Attendance is especially recommended for those who attended Information Systems for Engineers last Fall, which introduced the "good old databases of the 1970s" (SQL, tables and cubes). However, this is not a strict requirement, and it is also possible to take the two lectures in reverse order.
Competencies
- Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
- Method-specific Competencies: Analytical Competencies (assessed); Decision-making (assessed); Media and Digital Technologies (fostered); Problem-solving (assessed); Project Management (fostered)
- Social Competencies: Communication (assessed); Cooperation and Teamwork (fostered); Customer Orientation (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered); Sensitivity to Diversity (fostered); Negotiation (assessed)
- Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (assessed); Critical Thinking (assessed); Integrity and Work Ethics (fostered); Self-awareness and Self-reflection (fostered); Self-direction and Self-management (fostered)
252-0220-00L | Introduction to Machine Learning | Restricted registration
Limited number of participants. Preference is given to students in programmes in which the course is being offered. All other students will be waitlisted. Please do not contact Prof. Krause for any questions in this regard. If necessary, please contact studiensekretariat@inf.ethz.ch
W | 8 credits | 4V + 2U + 1A | A. Krause, F. Yang
Abstract: The course introduces the foundations of learning and making predictions based on data.
Learning objective: The course will introduce the foundations of learning and making predictions from data. We will study basic concepts such as trading off goodness of fit against model complexity. We will discuss important machine learning algorithms used in practice, and provide hands-on experience in a course project.
Content:
- Linear regression (overfitting, cross-validation/bootstrap, model selection, regularization, [stochastic] gradient descent)
- Linear classification: Logistic regression (feature selection, sparsity, multi-class)
- Kernels and the kernel trick (Properties of kernels; applications to linear and logistic regression); k-nearest neighbor
- Neural networks (backpropagation, regularization, convolutional neural networks)
- Unsupervised learning (k-means, PCA, neural network autoencoders)
- The statistical perspective (regularization as prior; loss as likelihood; learning as MAP inference)
- Statistical decision theory (decision making based on statistical models and utility functions)
- Discriminative vs. generative modeling (benefits and challenges in modeling joint vs. conditional distributions)
- Bayes' classifiers (Naive Bayes, Gaussian Bayes; MLE)
- Bayesian approaches to unsupervised learning (Gaussian mixtures, EM)
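
As a toy illustration of two of the ingredients listed above, regularization and stochastic gradient descent, here is a hedged R sketch (our own, not the course's code):

```r
# Stochastic gradient descent for L2-regularized linear regression.
# Minimizes (1/2)(x_i' b - y_i)^2 + (lambda/2)||b||^2 per sample.
set.seed(42)
n <- 500; p <- 5
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))
beta_true <- c(2, -1, 0.5, 0, 1)
y <- X %*% beta_true + rnorm(n)

lambda <- 0.1; eta <- 0.01
beta <- rep(0, p)
for (epoch in 1:50) {
  for (i in sample(n)) {                      # one random pass per epoch
    resid <- sum(X[i, ] * beta) - y[i]
    grad  <- resid * X[i, ] + lambda * beta   # gradient of one-sample loss
    beta  <- beta - eta * grad
  }
}
round(cbind(true = beta_true, sgd = beta), 2) # estimates shrink towards 0
```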
Literature: Textbook: Kevin Murphy, Machine Learning: A Probabilistic Perspective, MIT Press.
Prerequisites / Notice: Designed to provide a basis for the following courses:
- Advanced Machine Learning
- Deep Learning
- Probabilistic Artificial Intelligence
- Seminar "Advanced Topics in Machine Learning"
401-4632-15L | Causality
Does not take place this semester.
W | 4 credits | 2G | to be announced
Abstract: In statistics, we are used to searching for the best predictors of some random variable. In many situations, however, we are interested in predicting a system's behavior under manipulations. For such an analysis, we require knowledge about the underlying causal structure of the system. In this course, we study concepts and theory behind causal inference.
Learning objective: After this course, you should be able to
- understand the language and concepts of causal inference
- know the assumptions under which one can infer causal relations from observational and/or interventional data
- describe and apply different methods for causal structure learning
- given data and a causal structure, derive causal effects and predictions of interventional experiments
Prerequisites / Notice: Basic knowledge of probability theory and regression.
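
A minimal R sketch of the core idea (our illustration, assuming a simple linear structural model): given the causal structure, adjusting for the confounder recovers the causal effect, while the naive regression does not.

```r
# Toy structural model: Z -> X, Z -> Y, X -> Y, causal effect of X on Y is 1.
set.seed(7)
n <- 10000
z <- rnorm(n)
x <- 2 * z + rnorm(n)
y <- 1 * x + 3 * z + rnorm(n)
coef(lm(y ~ x))["x"]      # biased: picks up the confounding path through Z
coef(lm(y ~ x + z))["x"]  # adjusted: close to the causal effect 1
```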
401-3602-00L | Applied Stochastic Processes
Does not take place this semester.
W | 8 credits | 3V + 1U | not available
Abstract: Poisson processes; renewal processes; Markov chains in discrete and in continuous time; some applications.
Learning objective: Stochastic processes are a way to describe and study the behaviour of systems that evolve in some random way. In this course, the evolution will be with respect to a scalar parameter interpreted as time, so that we discuss the temporal evolution of the system. We present several classes of stochastic processes, analyse their properties and behaviour, and show by some examples how they can be used. The main emphasis is on theory; in that sense, "applied" should be understood to mean "applicable".
Literature:
- R. N. Bhattacharya and E. C. Waymire, "Stochastic Processes with Applications", SIAM (2009), available online: http://epubs.siam.org/doi/book/10.1137/1.9780898718997
- R. Durrett, "Essentials of Stochastic Processes", Springer (2012), available online: http://link.springer.com/book/10.1007/978-1-4614-3615-7/page/1
- M. Lefebvre, "Applied Stochastic Processes", Springer (2007), available online: http://link.springer.com/book/10.1007/978-0-387-48976-6/page/1
- S. I. Resnick, "Adventures in Stochastic Processes", Birkhäuser (2005)
Prerequisites / Notice: Familiarity with (measure-theoretic) probability theory as treated in the course "Probability Theory" (401-3601-00L).
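
For intuition, a homogeneous Poisson process can be simulated in a few lines of R via its i.i.d. exponential interarrival times (an illustration, not course material):

```r
# Event times of a Poisson process with the given rate: cumulative sums
# of exponential interarrival times. N(t) is then ~ Poisson(rate * t).
set.seed(1)
rate <- 2                            # events per unit time
arrivals <- cumsum(rexp(100, rate))  # first 100 event times
N_t <- function(t) sum(arrivals <= t)
N_t(10)                              # count up to t = 10, ~ Poisson(20)
```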
401-3642-00L | Brownian Motion and Stochastic Calculus | W | 10 credits | 4V + 1U | M. Schweizer
Abstract: This course gives an introduction to Brownian motion and stochastic calculus. It includes the construction and properties of Brownian motion, basics of Markov processes in continuous time and of Lévy processes, and stochastic calculus for continuous semimartingales.
Learning objective: This course gives an introduction to Brownian motion and stochastic calculus. The following topics are planned:
- Definition and construction of Brownian motion
- Some important properties of Brownian motion
- Basics of Markov processes in continuous time
- Stochastic calculus, including stochastic integration for continuous semimartingales, Itô's formula, Girsanov's theorem, stochastic differential equations and connections with partial differential equations
- Basics of Lévy processes
Lecture notes: Lecture notes will be made available in class.
Literature:
- R.F. Bass, Stochastic Processes, Cambridge University Press (2001).
- I. Karatzas, S. Shreve, Brownian Motion and Stochastic Calculus, Springer (1991).
- J.-F. Le Gall, Brownian Motion, Martingales, and Stochastic Calculus, Springer (2016).
- D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, Springer (2005).
- L.C.G. Rogers, D. Williams, Diffusions, Markov Processes and Martingales, vol. 1 and 2, Cambridge University Press (2000).
Prerequisites / Notice: Familiarity with measure-theoretic probability as in the standard D-MATH course "Probability Theory" will be assumed. Textbook accounts can be found for example in
- J. Jacod, P. Protter, Probability Essentials, Springer (2004).
- R. Durrett, Probability: Theory and Examples, Cambridge University Press (2010).
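
For intuition, a Brownian path can be approximated by summing independent Gaussian increments; a minimal R sketch (illustration only, not course material):

```r
# Discretized Brownian motion on [0, 1]: increments are independent
# N(0, dt), so the path is their cumulative sum started at W_0 = 0.
set.seed(1)
n <- 1000; dt <- 1 / n
W <- c(0, cumsum(rnorm(n, mean = 0, sd = sqrt(dt))))
plot(seq(0, 1, length.out = n + 1), W, type = "l",
     xlab = "t", ylab = expression(W[t]))
```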
401-6228-00L | Programming with R for Reproducible Research | W | 1 credit | 1G | M. Mächler
Abstract: Deeper understanding of R: function calls, rather than "commands".
Reproducible research and data analysis via Sweave and Rmarkdown.
Limits of floating point arithmetic.
Understanding how functions work. Environments, packages, namespaces.
Closures, i.e., functions returning functions.
Lists and [mc]lapply() for easy parallelization.
Performance measurement and improvements.
Learning objective: Learn to understand R as a (very versatile and flexible) programming language, and learn about some of its lower-level functionality that is needed to understand *why* R works the way it does.
ContentSee "Skript": https://github.com/mmaechler/ProgRRR/tree/master/ETH
Lecture notes: Material available from GitHub:
https://github.com/mmaechler/ProgRRR/tree/master/ETH
(typically updated during the course)
Literature: Norman Matloff (2011). The Art of R Programming: A Tour of Statistical Software Design. No Starch Press, San Francisco. In stock at Polybuchhandlung (CHF 42.-).

More material, notably H. Wickham's "Advanced R": see my ProgRRR GitHub page.
Prerequisites / Notice: R knowledge on the same level as after *both* parts of the ETH lecture 401-6217-00L Using R for Data Analysis and Graphics.

An interest in digging deeper than average R users do.

Bring your own laptop with a recent version of R installed.
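
Two of the themes above, closures and [mc]lapply(), in a minimal sketch (our illustration; the `parallel` package ships with R, and mclapply()'s forking is not available on Windows):

```r
# A closure: make_power() returns a function that captures p
# in its enclosing environment.
make_power <- function(p) function(x) x^p
square <- make_power(2)
square(1:4)                                  # 1 4 9 16

# Parallelized lapply over list elements via forking.
library(parallel)
unlist(mclapply(1:4, function(i) i^2,
                mc.cores = 2))               # on Windows use mc.cores = 1
```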
401-4627-00L | Empirical Process Theory and Applications | W | 4 credits | 2V | S. van de Geer
Abstract: Empirical process theory provides a rich toolbox for studying the properties of empirical risk minimizers, such as least squares and maximum likelihood estimators, support vector machines, etc.
Learning objective
Content: In this series of lectures, we start by considering exponential inequalities, including concentration inequalities, for the deviation of averages from their mean. We furthermore present some notions from approximation theory, because this enables us to assess the modulus of continuity of empirical processes. We introduce, e.g., the Vapnik-Chervonenkis dimension: a combinatorial concept (from learning theory) of the "size" of a collection of sets or functions. As statistical applications, we study consistency and exponential inequalities for empirical risk minimizers, and asymptotic normality in semi-parametric models. We moreover examine regularization and model selection.
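
As a representative example of the exponential inequalities mentioned above, Hoeffding's inequality bounds the deviation of an average of bounded independent variables from its mean:

```latex
% Hoeffding's inequality for independent X_1, ..., X_n with a <= X_i <= b:
\mathbb{P}\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} \bigl( X_i - \mathbb{E}[X_i] \bigr) \right| \ge t \right)
\;\le\; 2\,\exp\!\left( -\frac{2 n t^2}{(b-a)^2} \right)
```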
401-3629-00L | Quantitative Risk Management | W | 4 credits | 2V + 1U | P. Cheridito
Abstract: This course introduces methods from probability theory and statistics that can be used to model financial risks. Topics addressed include loss distributions, risk measures, extreme value theory, multivariate models, copulas, dependence structures and operational risk.
Learning objective: The goal is to learn the most important methods from probability theory and statistics used in financial risk modeling.
Content:
1. Introduction
2. Basic Concepts in Risk Management
3. Empirical Properties of Financial Data
4. Financial Time Series
5. Extreme Value Theory
6. Multivariate Models
7. Copulas and Dependence
8. Operational Risk
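
A minimal R sketch of two standard risk measures in this area (our illustration with simulated heavy-tailed losses; not taken from the course material):

```r
# Empirical Value-at-Risk and Expected Shortfall at level 0.99.
set.seed(1)
losses <- rt(10000, df = 4)        # heavy-tailed toy loss sample
alpha <- 0.99
VaR <- quantile(losses, alpha)     # empirical alpha-quantile of the losses
ES  <- mean(losses[losses > VaR])  # average loss beyond VaR
c(VaR = unname(VaR), ES = ES)
```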
Lecture notes: Course material is available at https://people.math.ethz.ch/~patrickc/qrm
Literature: A. J. McNeil, R. Frey and P. Embrechts, Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton, 2015 (Revised Edition). http://press.princeton.edu/titles/10496.html
Prerequisites / Notice: The course corresponds to the Risk Management requirement for the SAA ("Aktuar SAV Ausbildung") as well as for the Master of Science UZH-ETH in Quantitative Finance.
261-5110-00L | Optimization for Data Science | W | 10 credits | 3V + 2U + 4A | B. Gärtner, N. He
Abstract: This course provides an in-depth theoretical treatment of optimization methods that are relevant in data science.
Learning objective: Understanding the guarantees and limits of relevant optimization methods used in data science. Learning theoretical paradigms and techniques to deal with optimization problems arising in data science.
Content: This course provides an in-depth theoretical treatment of classical and modern optimization methods that are relevant in data science.

After a general discussion about the role that optimization has in the process of learning from data, we give an introduction to the theory of (convex) optimization. Based on this, we present and analyze algorithms in the following four categories: first-order methods (gradient and coordinate descent, Frank-Wolfe, subgradient and mirror descent, stochastic and incremental gradient methods); second-order methods (Newton and quasi-Newton methods); non-convexity (local convergence, provable global convergence, cone programming, convex relaxations); min-max optimization (extragradient methods).

The emphasis is on the motivations and design principles behind the algorithms, on provable performance bounds, and on the mathematical tools and techniques to prove them. The goal is to equip students with a fundamental understanding of why optimization algorithms work and what their limits are. This understanding will be of help in selecting suitable algorithms in a given application, but providing concrete practical guidance is not our focus.
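
To make the first-order setting concrete, here is a hedged R sketch (our own, not course material) of gradient descent on least squares with the classical step size 1/L:

```r
# Gradient descent on f(b) = 0.5 * ||X b - y||^2 with step size 1/L,
# where L = largest eigenvalue of X'X is the smoothness constant.
set.seed(3)
n <- 100; p <- 10
X <- matrix(rnorm(n * p), n)
y <- X %*% rnorm(p) + rnorm(n)

L_smooth <- max(eigen(crossprod(X), only.values = TRUE)$values)
beta <- rep(0, p)
for (k in 1:500) {
  grad <- crossprod(X, X %*% beta - y)   # gradient X'(X b - y)
  beta <- beta - (1 / L_smooth) * grad
}
max(abs(beta - qr.solve(X, y)))          # close to the least-squares solution
```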
Prerequisites / Notice: A solid background in analysis and linear algebra; some background in theoretical computer science (computational complexity, analysis of algorithms); the ability to understand and write mathematical proofs.
252-0526-00L | Statistical Learning Theory
Does not take place this semester.
W | 8 credits | 3V + 2U + 2A | J. M. Buhmann
Abstract: The course covers advanced methods of statistical learning:

- Variational methods and optimization.
- Deterministic annealing.
- Clustering for diverse types of data.
- Model validation by information theory.
Learning objective: The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning.
Content:
- Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing.

- Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures.

- Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation.

- Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models.
Lecture notes: A draft of a script will be provided. Lecture slides will be made available.
Literature: Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001.

L. Devroye, L. Györfi, and G. Lugosi: A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
Prerequisites / Notice: Knowledge of machine learning ("Introduction to Machine Learning" and/or "Advanced Machine Learning") and basic knowledge of statistics.
227-0432-00L | Learning, Classification and Compression | W | 4 credits | 2V + 1U | E. Riegler
Abstract: The course takes a theoretical approach to learning theory and classification, and gives an introduction to lossy and lossless compression for general sets and measures. We mainly focus on a probabilistic approach, where an underlying distribution must be learned/compressed. The concepts acquired in the course are of broad and general interest in the data sciences.
Learning objective: After attending this lecture and participating in the exercise sessions, students will have acquired a working knowledge of learning theory, classification, and compression.
Content:
1. Learning Theory
(a) Framework of Learning
(b) Hypothesis Spaces and Target Functions
(c) Reproducing Kernel Hilbert Spaces
(d) Bias-Variance Tradeoff
(e) Estimation of Sample and Approximation Error

2. Classification
(a) Binary Classifier
(b) Support Vector Machines (separable case)
(c) Support Vector Machines (nonseparable case)
(d) Kernel Trick

3. Lossy and Lossless Compression
(a) Basics of Compression
(b) Compressed Sensing for General Sets and Measures
(c) Quantization and Rate Distortion Theory for General Sets and Measures
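
For orientation, the separable (hard-margin) support vector machine of part 2(b) can be written as the convex program

```latex
% Hard-margin SVM: maximize the margin 2/||w|| over separating hyperplanes
\min_{w,\, b} \; \frac{1}{2}\lVert w \rVert^2
\quad \text{subject to} \quad
y_i \bigl( \langle w, x_i \rangle + b \bigr) \ge 1, \qquad i = 1, \dots, n
```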
Lecture notes: Detailed lecture notes will be provided.
Prerequisites / Notice: This course is aimed at students with a solid background in measure theory and linear algebra, and basic knowledge of functional analysis.
636-0702-00L | Statistical Models in Computational Biology | W | 6 credits | 2V + 1U + 2A | N. Beerenwinkel
Abstract: The course offers an introduction to graphical models and their application to complex biological systems. Graphical models combine a statistical methodology with efficient algorithms for inference in settings of high dimension and uncertainty. The unifying graphical model framework is developed and used to examine several classical and topical computational biology methods.
Learning objective: The goal of this course is to establish the common language of graphical models for applications in computational biology and to see this methodology at work for several real-world data sets.
Content: Graphical models are a marriage between probability theory and graph theory. They combine the notion of probabilities with efficient algorithms for inference among many random variables. Graphical models play an important role in computational biology, because they explicitly address two features that are inherent to biological systems: complexity and uncertainty. We will develop the basic theory and the common underlying formalism of graphical models and discuss several computational biology applications. Topics covered include conditional independence, Bayesian networks, Markov random fields, Gaussian graphical models, EM algorithm, junction tree algorithm, model selection, Dirichlet process mixture, causality, the pair hidden Markov model for sequence alignment, probabilistic phylogenetic models, phylo-HMMs, microarray experiments and gene regulatory networks, protein interaction networks, learning from perturbation experiments, time series data and dynamic Bayesian networks. Some of the biological applications will be explored in small data analysis problems as part of the exercises.
Lecture notes: none
Literature:
- Airoldi EM (2007) Getting started in probabilistic graphical models. PLoS Comput Biol 3(12): e252. doi:10.1371/journal.pcbi.0030252
- Bishop CM. Pattern Recognition and Machine Learning. Springer, 2007.
- Durbin R, Eddy S, Krogh A, Mitchison G. Biological Sequence Analysis. Cambridge University Press, 2004.
401-6222-00L | Robust and Nonlinear Regression | W | 2 credits | 2G | A. F. Ruckstuhl
Abstract: In the first part, the basic ideas of robust fitting techniques are explained both theoretically and practically, using regression models and explorative multivariate analysis.

The second part addresses the challenges of fitting nonlinear regression functions and finding reliable confidence intervals.
Learning objective: Participants are familiar with common robust fitting methods for linear regression models as well as for exploratory multivariate analysis, and are able to assess their suitability for the data at hand.

They know the challenges that arise in fitting nonlinear regression functions, and the difference between classical and profile-based methods for determining confidence intervals.

They can apply the discussed methods in practice using the statistics software R.
Content:
Robust fitting: influence function, breakdown point, regression M-estimation, regression MM-estimation, robust inference, covariance estimation with high breakdown point, application in principal component analysis and linear discriminant analysis.

Nonlinear regression: the nonlinear regression model, estimation methods, approximate tests and confidence intervals, profile t plot, profile traces, parameter transformation, prediction and calibration.
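
A minimal R sketch of both themes (our illustration, assuming the recommended package MASS for rlm(); not the course's material):

```r
# Robust M-estimation versus least squares on data with gross outliers.
library(MASS)
set.seed(1)
x <- 1:30
y <- 2 + 0.5 * x + rnorm(30, sd = 0.5)
y[c(5, 20)] <- y[c(5, 20)] + 15      # two gross outliers

coef(lm(y ~ x))    # least squares: pulled towards the outliers
coef(rlm(y ~ x))   # M-estimate: close to the true (2, 0.5)

# Nonlinear regression: nls() fits, e.g., an exponential decay model;
# confint() then gives profile-based confidence intervals.
d <- data.frame(t = seq(0, 5, 0.25))
d$y <- 3 * exp(-0.8 * d$t) + rnorm(nrow(d), sd = 0.1)
fit <- nls(y ~ a * exp(-b * t), data = d, start = list(a = 1, b = 1))
confint(fit)
```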
Lecture notes: Lecture notes are available.
Prerequisites / Notice: This is a block course held on three Mondays in June.
401-8618-00L | Statistical Methods in Epidemiology (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: STA408

Mind the enrolment deadlines at UZH:
https://www.uzh.ch/cmsssl/en/studies/application/deadlines.html
W | 5 credits | 3G | University lecturers
Abstract: Analysis of case-control and cohort studies. The most relevant measures of effect (odds and rate ratios) are introduced, and methods for adjusting for confounders (Mantel-Haenszel, regression) are thoroughly discussed. Advanced topics such as measurement error and propensity score adjustments are also covered. We will outline statistical methods for case-crossover and case series studies etc.
Learning objective
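
As a toy illustration of the odds ratio mentioned above (our example, not course material; fisher.test() is base R):

```r
# Odds ratio from a 2x2 case-control table, plus an exact test.
tab <- matrix(c(30, 70,    # exposed:   cases, controls
                15, 85),   # unexposed: cases, controls
              nrow = 2, byrow = TRUE,
              dimnames = list(c("exposed", "unexposed"),
                              c("case", "control")))
OR <- (tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])
OR                  # (30*85)/(70*15) ~ 2.43
fisher.test(tab)    # conditional MLE of the odds ratio with a CI
```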
401-4626-00L | Advanced Statistical Modelling: Mixed Models | W | 4 credits | 2V | M. Mächler
Abstract: Mixed models, i.e., (generalized or non-)linear mixed-effects models, extend traditional regression models by adding "random effect" terms.

In applications, such models are called "hierarchical models", "repeated measures" or "split plot designs". Mixed models are widely used and appropriate in an era of complex data measured on living creatures, from biology to the human sciences.
Learning objective:
- Becoming aware of how mixed models are more realistic and more powerful in many cases than traditional ("fixed-effects only") regression models.

- Learning to fit such models to data correctly, critically interpreting results for such model fits, and hence learning to work the creative cycle of responsible statistical data analysis:
"fit -> interpret & diagnose -> modify the fit -> interpret & ...."

- Becoming aware of computational and methodological limitations of these models, even when using state-of-the-art software.
Content: The lecture builds on various examples and uses R, notably the `lme4` package, to illustrate concepts. The relevant R scripts are made available online.

Inference (significance of factors, confidence intervals) will focus on the more realistic *un*balanced situation, where classical (ANOVA, sums of squares, etc.) methods are known to be deficient. Hence, Maximum Likelihood (ML) and its variant, "REML", will be used for estimation and inference.
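
A minimal sketch of such a fit (our illustration, assuming `lme4` is installed; the `sleepstudy` data ship with `lme4`):

```r
# A linear mixed-effects model with random intercepts and slopes per
# subject, fit by REML (the default in lme4).
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(fit)      # fixed effects, random-effect variances, REML criterion
confint(fit)      # profile-based confidence intervals
```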
Lecture notes: We will work with an unfinished book proposal by Prof. Douglas Bates, Wisconsin, USA, which is itself a mixture of theory and worked R code examples.

These lecture notes and all R scripts are made available from
https://github.com/mmaechler/MEMo
Literature: (see web page and lecture notes)
Prerequisites / Notice:
- We assume a good working knowledge of multiple linear regression ("the general linear model") and an intermediate (not beginner's) knowledge of model-based statistics (estimation, confidence intervals, ...).

Typically this means at least two classes of (math based) statistics, say
1. Intro to probability and statistics
2. (Applied) regression, including matrix-vector notation Y = Xb + E

- Basic (1 semester) "Matrix calculus" / linear algebra is also assumed.

- If familiarity with [R](https://www.r-project.org/) is not given, it should be acquired during the course (by the student on their own initiative).
401-8628-00L | Survival Analysis (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: STA425

Mind the enrolment deadlines at UZH:
https://www.uzh.ch/cmsssl/en/studies/application/deadlines.html
W | 3 credits | 1.5G | University lecturers
Abstract: The analysis of survival times, or, in more general terms, the analysis of time-to-event variables, is concerned with models for censored observations. Because we cannot always wait until the event of interest actually happens, the methods discussed here are required for an appropriate handling of incomplete observations where we only know that the event of interest did not happen within ...
Learning objective
Content: During the course, we will study the most important methods and models for censored data, including
- general concepts of censoring,
- simple summary statistics,
- estimation of survival curves,
- frequentist inference for two and more groups, and
- regression models for censored observations
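
A minimal R sketch of these steps (our illustration, assuming the `survival` package and its `lung` data; not the UZH course's material):

```r
# Kaplan-Meier survival curves by group and a log-rank test.
library(survival)
fit <- survfit(Surv(time, status) ~ sex, data = lung)
summary(fit, times = c(180, 360))                # survival at ~6/12 months
survdiff(Surv(time, status) ~ sex, data = lung)  # log-rank test
plot(fit, xlab = "Days", ylab = "Survival probability")
```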
227-0434-10L | Mathematics of Information | W | 8 credits | 3V + 2U + 2A | H. Bölcskei
Abstract: The class focuses on mathematical aspects of

1. Information science: Sampling theorems, frame theory, compressed sensing, sparsity, super-resolution, spectrum-blind sampling, subspace algorithms, dimensionality reduction

2. Learning theory: Approximation theory, greedy algorithms, uniform laws of large numbers, Rademacher complexity, Vapnik-Chervonenkis dimension
Learning objective: The aim of the class is to familiarize the students with the most commonly used mathematical theories in data science, high-dimensional data analysis, and learning theory. The class consists of lectures and exercise sessions with homework problems.
Content:
Mathematics of Information

1. Signal representations: Frame theory, wavelets, Gabor expansions, sampling theorems, density theorems

2. Sparsity and compressed sensing: Sparse linear models, uncertainty relations in sparse signal recovery, super-resolution, spectrum-blind sampling, subspace algorithms (ESPRIT), estimation in the high-dimensional noisy case, Lasso

3. Dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma

Mathematics of Learning

4. Approximation theory: Nonlinear approximation theory, best M-term approximation, greedy algorithms, fundamental limits on compressibility of signal classes, Kolmogorov-Tikhomirov epsilon-entropy of signal classes, optimal compression of signal classes

5. Uniform laws of large numbers: Rademacher complexity, Vapnik-Chervonenkis dimension, classes with polynomial discrimination
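
As a small illustration of the dimensionality-reduction part (our sketch, not course material), a random Gaussian projection approximately preserves pairwise distances, as the Johnson-Lindenstrauss Lemma predicts:

```r
# Project n points from R^d down to R^k with a random Gaussian matrix
# and inspect the distortion of all pairwise distances.
set.seed(1)
n <- 50; d <- 1000; k <- 200
X <- matrix(rnorm(n * d), n)                    # n points in R^d
P <- matrix(rnorm(d * k, sd = 1 / sqrt(k)), d)  # random projection to R^k
Y <- X %*% P
ratio <- dist(Y) / dist(X)                      # per-pair distortion
range(ratio)                                    # concentrated around 1
```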
Lecture notes: Detailed lecture notes will be provided at the beginning of the semester.
Prerequisites / Notice: This course is aimed at students with a background in basic linear algebra, analysis, statistics, and probability.

We encourage students who are interested in mathematical data science to take both this course and "401-4944-20L Mathematics of Data Science" by Prof. A. Bandeira. The two courses are designed to be complementary.

H. Bölcskei and A. Bandeira
401-3904-22L | Convex Optimization | W | 6 credits | 3G | A. A. Kurpisz
Abstract: Introduction to Convex Optimization with a focus on algorithms and the numerous applications of Convex Optimization.
Learning objective: The main goal of this course is to obtain a solid understanding of classical Convex Optimization techniques and their numerous applications, including in Data Science, Machine Learning, and, more generally, in science and engineering. Apart from building up a solid foundational understanding of Convex Optimization, students also get hands-on experience through regular coding exercises. This aims at providing a holistic view on the process of identifying, modeling, and solving a wide range of computational questions that can be cast as Convex Optimization problems.
Content: Key topics include:
- Introduction to Convex Optimization.
- Subclasses of Convex Optimization: Semidefinite Programming, Second-Order Cone Programming and Geometric Programming.
- Applications of Convex Optimization in science and engineering.
- Algorithms for Convex Optimization.
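
To make the algorithmic side concrete, a hedged R sketch (our own, not course material) of projected gradient descent for nonnegative least squares, a simple constrained convex problem:

```r
# Projected gradient descent for min 0.5 * ||A x - b||^2 subject to x >= 0:
# a gradient step with step size 1/L followed by projection onto x >= 0.
set.seed(2)
n <- 100; p <- 8
A <- matrix(rnorm(n * p), n)
b <- A %*% runif(p) + rnorm(n)

L <- max(eigen(crossprod(A), only.values = TRUE)$values)  # smoothness
x <- rep(0, p)
for (k in 1:1000) {
  g <- crossprod(A, A %*% x - b)   # gradient A'(A x - b)
  x <- pmax(x - g / L, 0)          # step + projection onto the constraint
}
x                                  # nonnegative solution
```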
Lecture notes: A script will be provided.
Literature: Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511804441
Prerequisites / Notice: Background in Linear Programming is recommended.
Competencies
- Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
- Method-specific Competencies: Analytical Competencies (assessed); Decision-making (assessed); Media and Digital Technologies (fostered); Problem-solving (assessed); Project Management (fostered)
- Social Competencies: Communication (assessed); Cooperation and Teamwork (fostered); Customer Orientation (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered); Sensitivity to Diversity (fostered); Negotiation (fostered)
- Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (assessed); Critical Thinking (fostered); Integrity and Work Ethics (fostered); Self-awareness and Self-reflection (fostered); Self-direction and Self-management (fostered)
263-5300-00L | Guarantees for Machine Learning | Restricted registration
Does not take place this semester.
Number of participants limited to 30.

The course will take place next autumn semester 2022.
W | 7 credits | 3G + 3A | F. Yang
Abstract: This course is aimed at advanced Master's and doctoral students who want to conduct independent research on theory for modern machine learning (ML). It teaches classical and recent methods in statistical learning theory commonly used to prove theoretical guarantees for ML algorithms. The knowledge is then applied in independent project work that focuses on understanding modern ML phenomena.
Learning objective:

- acquire enough mathematical background to understand a good fraction of theory papers published in the typical ML venues. For this purpose, students will learn common mathematical techniques from statistics and optimization in the first part of the course and apply this knowledge in the project work
- critically examine recently published work in terms of relevance and determine impactful (novel) research problems. This will be an integral part of the project work and involves experimental as well as theoretical questions
- find and outline an approach (some subproblem) to prove a conjectured theorem. This will be practiced in lectures/exercises and homework, and potentially in the final project.
- effectively communicate and present the problem motivation, new insights and results to a technical audience. This will be primarily learned via the final presentation and report as well as during peer-grading of peer talks.
Content: This course covers foundational methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms, including the following topics:
- concentration bounds
- uniform convergence and empirical process theory
- high-dimensional statistics (e.g. sparsity)
- regularization for non-parametric statistics (e.g. in RKHS, neural networks)
- implicit regularization via gradient descent (e.g. margins, early stopping)
- minimax lower bounds

The project work focuses on current theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to
- how overparameterization could help generalization (RKHS, NN)
- how overparameterization could help optimization (non-convex optimization, loss landscape)
- complexity measures and approximation-theoretic properties of randomly initialized and trained NN
- generalization of robust learning (adversarial robustness, standard and robust error tradeoff, distribution shift)
Prerequisites / Notice: It is absolutely necessary for students to have a strong mathematical background (basic real analysis, probability theory, linear algebra) and good knowledge of core concepts in machine learning taught in courses such as "Introduction to Machine Learning" and "Regression"/"Statistical Modelling". In addition to these prerequisites, this class requires a high degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs.

Students have usually taken a subset of Fundamentals of Mathematical Statistics, Probabilistic AI, Neural Network Theory, Optimization for Data Science, Advanced ML, Statistical Learning Theory, Probability Theory (D-MATH)