Catalogue data in Spring Semester 2021
Mathematics Master
Electives: For the Master's degree in Applied Mathematics, the following additional condition (not shown in myStudies) must be satisfied: at least 15 of the required 28 credits from core courses and electives must be acquired in areas of applied mathematics and further application-oriented fields.
Electives: Applied Mathematics and Further Application-Oriented Fields
Selection: Probability Theory, Statistics
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
401-4611-21L | Rough Path Theory | W | 4 credits | 2V | A. Allan, J. Teichmann
Abstract | The aim of this course is to provide an introduction to the theory of rough paths, with a particular focus on their integration theory and associated rough differential equations, and how the theory relates to and enhances the field of stochastic calculus.
Learning objective | Our first motivation will be to understand the limitations of classical notions of integration to handle paths of very low regularity, and to see how the rough integral succeeds where other notions fail. We will construct rough integrals and establish solutions of differential equations driven by rough paths, as well as the continuity of these objects with respect to the paths involved, and their consistency with stochastic integration and SDEs. Various applications and extensions of the theory will then be discussed.
Lecture notes | Lecture notes will be provided by the lecturer.
Literature | P. K. Friz and M. Hairer, A Course on Rough Paths, with an Introduction to Regularity Structures, Springer (2014); P. K. Friz and N. B. Victoir, Multidimensional Stochastic Processes as Rough Paths, Cambridge University Press (2010).
Prerequisites / Notice | The aim will be to make the course as self-contained as possible, but some knowledge of stochastic analysis is highly recommended. The course "Brownian Motion and Stochastic Calculus" would be ideal, but is not strictly required.
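One motivation the abstract mentions, the failure of classical calculus for paths of low regularity, can be seen in a short simulation (an illustrative sketch, not material from the course): the left-point Riemann sums of the integral of Brownian motion against itself converge to the Itô value (W(1)^2 - 1)/2, not to the smooth-path chain-rule value W(1)^2/2.

```python
import random
import math

random.seed(0)

# Illustrative only: approximate the Ito integral of W dW over [0, 1]
# by left-point Riemann sums along a simulated Brownian path.  For a
# smooth path the chain rule would give W(1)**2 / 2, but for Brownian
# motion the limit is (W(1)**2 - 1) / 2; the extra -1/2 is the Ito
# correction, one symptom of the low regularity that rough path theory
# is built to handle.
n = 100_000
dt = 1.0 / n
w = 0.0
riemann_sum = 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    riemann_sum += w * dw      # left-point evaluation (Ito convention)
    w += dw

ito_identity = (w * w - 1.0) / 2.0   # (W(1)**2 - t) / 2 with t = 1
print(riemann_sum, ito_identity)
```

The gap between `riemann_sum` and `w * w / 2` stays near 1/2 no matter how fine the partition, which is exactly why new integration theories are needed for such paths.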
401-4626-00L | Advanced Statistical Modelling: Mixed Models (does not take place this semester) | W | 4 credits | 2V | M. Mächler
Abstract | Mixed models, i.e. (generalized or non-) linear mixed-effects models, extend traditional regression models by adding "random effect" terms. In applications, such models are called "hierarchical models", "repeated measures" or "split plot designs". Mixed models are widely used and appropriate in an era of complex data measured on living creatures, from biology to the human sciences.
Learning objective | - Becoming aware of how mixed models are, in many cases, more realistic and more powerful than traditional ("fixed-effects only") regression models. - Learning to fit such models to data correctly, to interpret the results critically, and hence to work the creative cycle of responsible statistical data analysis: "fit -> interpret & diagnose -> modify the fit -> interpret & ...". - Becoming aware of computational and methodological limitations of these models, even when using state-of-the-art software.
Content | The lecture will build on various examples and use R, notably the `lme4` package, to illustrate concepts. The relevant R scripts are made available online. Inference (significance of factors, confidence intervals) will focus on the more realistic *un*balanced situation, where classical methods (ANOVA, sums of squares, etc.) are known to be deficient. Hence, maximum likelihood (ML) and its variant REML will be used for estimation and inference.
Lecture notes | We will work with an unfinished book proposal by Prof. Douglas Bates, Wisconsin, USA, which is itself a mixture of theory and worked R code examples. These lecture notes and all R scripts are made available from https://github.com/mmaechler/MEMo
Literature | (see web page and lecture notes)
Prerequisites / Notice | - We assume a good working knowledge of multiple linear regression ("the general linear model") and an intermediate (not beginner's) knowledge of model-based statistics (estimation, confidence intervals, ...). Typically this means at least two classes of (math-based) statistics, say 1. an introduction to probability and statistics, and 2. (applied) regression, including the matrix-vector notation Y = X b + E. - Basic (one-semester) "matrix calculus" / linear algebra is also assumed. - If familiarity with [R](https://www.r-project.org/) is not given, it should be acquired during the course (by the student on their own initiative).
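The course itself works in R with `lme4`; as a language-agnostic illustration of the "random effect" idea from the abstract, here is a minimal Python sketch of a random-intercept model (all numbers are made up for the example). Each group gets its own random intercept, so observations within a group are correlated, which is exactly what a "fixed-effects only" regression ignores.

```python
import random
import statistics

random.seed(42)

# Illustrative sketch (the course uses R's `lme4`, not this code):
# simulate the random-intercept model  y_ij = beta0 + u_i + e_ij,
# where each group i draws its own intercept u_i ~ N(0, sigma_u**2)
# and e_ij ~ N(0, sigma_e**2) is the residual noise.
beta0 = 10.0
sigma_u = 2.0        # between-group standard deviation
sigma_e = 1.0        # within-group (residual) standard deviation
n_groups, n_per_group = 200, 20

group_means = []
all_y = []
for _ in range(n_groups):
    u = random.gauss(0.0, sigma_u)
    ys = [beta0 + u + random.gauss(0.0, sigma_e) for _ in range(n_per_group)]
    group_means.append(statistics.mean(ys))
    all_y.extend(ys)

# Var(group mean) = sigma_u**2 + sigma_e**2 / n_per_group = 4.05 here,
# far larger than sigma_e**2 / n_per_group = 0.05 alone: ignoring the
# random effect badly understates group-level uncertainty.
var_group_means = statistics.variance(group_means)
print(round(var_group_means, 2))
```

Fitting such a model (estimating `sigma_u` and `sigma_e` from data) is what ML/REML in `lme4` does; the sketch only shows why the random-effect term matters.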
401-4627-00L | Empirical Process Theory and Applications | W | 4 credits | 2V | S. van de Geer
Abstract | Empirical process theory provides a rich toolbox for studying the properties of empirical risk minimizers, such as least squares and maximum likelihood estimators, support vector machines, etc.
Learning objective |
Content | In this series of lectures, we will start by considering exponential inequalities, including concentration inequalities, for the deviation of averages from their mean. We furthermore present some notions from approximation theory, because this enables us to assess the modulus of continuity of empirical processes. We introduce, e.g., the Vapnik–Chervonenkis dimension: a combinatorial concept (from learning theory) for the "size" of a collection of sets or functions. As statistical applications, we study consistency and exponential inequalities for empirical risk minimizers, and asymptotic normality in semi-parametric models. We moreover examine regularization and model selection.
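The exponential inequalities the course starts from can be made concrete with Hoeffding's inequality (a standard example, chosen here for illustration; the course may use different ones): for n i.i.d. variables in [0, 1] with mean mu, P(|mean - mu| >= t) <= 2 exp(-2 n t^2). A quick simulation checks the bound empirically.

```python
import random
import math

random.seed(1)

# Illustrative sketch: Hoeffding's inequality for averages of n i.i.d.
# variables taking values in [0, 1] with mean mu:
#     P(|mean - mu| >= t)  <=  2 * exp(-2 * n * t**2).
# We estimate the left-hand side for Bernoulli(0.5) averages and verify
# it stays below the bound.
n, t, trials = 100, 0.1, 2000
bound = 2.0 * math.exp(-2.0 * n * t * t)   # = 2 * e**(-2), about 0.271

exceed = 0
for _ in range(trials):
    mean = sum(random.random() < 0.5 for _ in range(n)) / n
    if abs(mean - 0.5) >= t:
        exceed += 1

freq = exceed / trials
print(freq, "<=", round(bound, 3))
```

The observed frequency is well below the bound; Hoeffding is not tight, and sharper concentration inequalities, of the kind studied in the course, close part of that gap.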
401-4632-15L | Causality | W | 4 credits | 2G | C. Heinze-Deml
Abstract | In statistics, we are used to searching for the best predictors of some random variable. In many situations, however, we are interested in predicting a system's behavior under manipulations. Such an analysis requires knowledge of the underlying causal structure of the system. In this course, we study the concepts and theory behind causal inference.
Learning objective | After this course, you should be able to: - understand the language and concepts of causal inference - know the assumptions under which one can infer causal relations from observational and/or interventional data - describe and apply different methods for causal structure learning - given data and a causal structure, derive causal effects and predictions of interventional experiments
Prerequisites / Notice | Prerequisites: basic knowledge of probability theory and regression
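The last learning objective, deriving causal effects given data and a causal structure, can be illustrated with the classic back-door adjustment (a standard textbook device; the variable names and numbers below are invented for this sketch, not taken from the course).

```python
import random

random.seed(7)

# Illustrative sketch with made-up numbers: a binary confounder Z
# influences both treatment X and outcome Y (Z -> X, Z -> Y, X -> Y).
# The naive contrast  E[Y|X=1] - E[Y|X=0]  mixes the causal effect with
# the confounding path, while the back-door adjustment
#     sum_z ( E[Y|X=1,Z=z] - E[Y|X=0,Z=z] ) * P(Z=z)
# recovers the effect of the intervention do(X=x).
true_effect = 1.0
n = 200_000
data = []
for _ in range(n):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)               # Z -> X
    y = true_effect * x + 2.0 * z + random.gauss(0.0, 1.0)  # X -> Y <- Z
    data.append((z, x, y))

def mean_y(rows):
    rows = list(rows)
    return sum(r[2] for r in rows) / len(rows)

naive = mean_y(r for r in data if r[1]) - mean_y(r for r in data if not r[1])

adjusted = 0.0
for z in (False, True):
    stratum = [r for r in data if r[0] == z]
    diff = mean_y(r for r in stratum if r[1]) - mean_y(r for r in stratum if not r[1])
    adjusted += diff * len(stratum) / n

print(round(naive, 2), round(adjusted, 2))
```

The naive contrast overshoots the true effect of 1.0 because treated units are disproportionately in the high-Z stratum; stratifying on Z removes that bias.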
401-6102-00L | Multivariate Statistics (does not take place this semester) | W | 4 credits | 2G | not available
Abstract | Multivariate statistics deals with joint distributions of several random variables. This course introduces the basic concepts and provides an overview of classical and modern methods of multivariate statistics. We will consider the theory behind the methods as well as their applications.
Learning objective | After the course, you should be able to: - describe the various methods and the concepts and theory behind them - identify adequate methods for a given statistical problem - use the statistical software "R" to efficiently apply these methods - interpret the output of these methods
Content | Visualization / Principal component analysis / Multidimensional scaling / The multivariate normal distribution / Factor analysis / Supervised learning / Cluster analysis
Lecture notes | None
Literature | The course will be based on class notes and books that are available electronically via the ETH library.
Prerequisites / Notice | Target audience: This course is the more theoretical version of "Applied Multivariate Statistics" (401-0102-00L) and is targeted at students with a math background. Prerequisite: a basic course in probability and statistics. Note: the courses 401-0102-00L and 401-6102-00L are mutually exclusive; you may register for at most one of these two course units.
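One item from the content list, principal component analysis, reduces in two dimensions to an eigendecomposition that can be written in closed form. The following sketch (example data invented for illustration, not course material) computes the eigenvalues of a 2x2 sample covariance matrix and the share of variance carried by the first principal component.

```python
import math

# Illustrative sketch of PCA in the simplest two-dimensional case: the
# principal components are the eigenvectors of the sample covariance
# matrix, and for a symmetric 2x2 matrix the eigenvalues have a closed
# form.  The data below are arbitrary example numbers.
x = [2.5, 0.5, 2.2, 1.9, 3.1, 2.3, 2.0, 1.0, 1.5, 1.1]
y = [2.4, 0.7, 2.9, 2.2, 3.0, 2.7, 1.6, 1.1, 1.6, 0.9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((a - mx) ** 2 for a in x) / (n - 1)          # Var(x)
syy = sum((b - my) ** 2 for b in y) / (n - 1)          # Var(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)  # Cov(x, y)

# Eigenvalues of [[sxx, sxy], [sxy, syy]]:
#   lambda = (sxx + syy)/2 +- sqrt( ((sxx - syy)/2)**2 + sxy**2 )
half_trace = (sxx + syy) / 2.0
disc = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
lam1, lam2 = half_trace + disc, half_trace - disc

explained = lam1 / (lam1 + lam2)  # share of total variance on the first PC
print(round(lam1, 3), round(lam2, 3), round(explained, 3))
```

In higher dimensions the same computation is done numerically (e.g. `prcomp` in R), but the interpretation is identical: eigenvalues are variances along the principal axes.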
401-4637-67L | On Hypothesis Testing | W | 4 credits | 2V | F. Balabdaoui
Abstract | This course is a review of the main results in decision theory.
Learning objective | The goal of this course is to present a review of the most fundamental results in statistical testing. This entails reviewing the Neyman-Pearson Lemma for simple hypotheses and the Karlin-Rubin Theorem for parametric families with monotone likelihood ratio. The students will also encounter the important concept of p-values and their use in some multiple testing situations. Further methods for constructing tests will also be presented, including likelihood ratio and chi-square tests. Some non-parametric tests will be reviewed, such as the Kolmogorov goodness-of-fit test and the two-sample Wilcoxon rank test. The most important theoretical results will be re-proved and also illustrated via different examples. Four sessions of exercises will be scheduled (the students will be handed an exercise sheet a week before discussing solutions in class).
Literature | - Statistical Inference (Casella & Berger) - Testing Statistical Hypotheses (Lehmann & Romano)
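The Neyman-Pearson setting from the learning objective can be sketched in a few lines (an illustrative example with invented parameters, not an exercise from the course): for the simple hypotheses H0: X_i ~ N(0, 1) versus H1: X_i ~ N(1, 1), the likelihood ratio is monotone in the sample mean, so the most powerful level-alpha test rejects when the mean exceeds z_alpha / sqrt(n).

```python
import math
import random

random.seed(3)

# Illustrative sketch: Neyman-Pearson test of H0: N(0,1) vs H1: N(1,1).
# The likelihood ratio is increasing in the sample mean, so the most
# powerful level-alpha test rejects for mean >= z_alpha / sqrt(n), and
# the p-value of an observed mean m is P(Z >= m * sqrt(n)).

def std_normal_sf(z):
    """P(Z >= z) for standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

n, alpha = 25, 0.05
z_alpha = 1.6449                    # upper 5% point of N(0, 1)
cutoff = z_alpha / math.sqrt(n)

# Simulate under H0: the rejection frequency should be close to alpha.
trials, rejections = 5000, 0
for _ in range(trials):
    m = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    if m >= cutoff:
        rejections += 1

p_value_example = std_normal_sf(0.5 * math.sqrt(n))   # p-value of mean 0.5
print(rejections / trials, round(p_value_example, 4))
```

The simulated type-I error rate hovers around 0.05, matching the chosen level; the Karlin-Rubin Theorem extends this "reject for large mean" optimality to whole monotone-likelihood-ratio families.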