Search result: Catalogue data in Spring Semester 2020
Statistics Master
The following courses belong to the curriculum of the Master's Programme in Statistics. The corresponding credits do not count as external credits even for course units where an enrolment at ETH Zurich is not possible.
Specialization Areas and Electives
Statistical and Mathematical Courses
Number | Title | Type | ECTS | Hours | Lecturers | |
---|---|---|---|---|---|---|
401-4632-15L | Causality | W | 4 credits | 2G | C. Heinze-Deml | |
Abstract | In statistics, we are used to searching for the best predictors of some random variable. In many situations, however, we are interested in predicting a system's behavior under manipulations. For such an analysis, we require knowledge about the underlying causal structure of the system. In this course, we study concepts and theory behind causal inference. | |||||
Learning objective | After this course, you should be able to: understand the language and concepts of causal inference; know the assumptions under which one can infer causal relations from observational and/or interventional data; describe and apply different methods for causal structure learning; and, given data and a causal structure, derive causal effects and predictions of interventional experiments. | |||||
Prerequisites / Notice | Prerequisites: basic knowledge of probability theory and regression | |||||
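To make the abstract's distinction between prediction and prediction under manipulation concrete, here is a minimal sketch in base R (illustrative only; simulated data and made-up coefficients, not course material). In a linear model where Z confounds X and Y, the plain regression coefficient of Y on X differs from the causal effect, while adjusting for Z recovers it:

```r
set.seed(1)
n <- 1e5
Z <- rnorm(n)                      # confounder
X <- 0.8 * Z + rnorm(n)            # X is caused by Z
Y <- 0.5 * X + 0.7 * Z + rnorm(n)  # causal effect of X on Y is 0.5

coef(lm(Y ~ X))["X"]      # confounded: noticeably larger than 0.5
coef(lm(Y ~ X + Z))["X"]  # adjusting for the confounder: close to 0.5
```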
401-4627-00L | Empirical Process Theory and Applications | W | 4 credits | 2V | S. van de Geer | |
Abstract | Empirical process theory provides a rich toolbox for studying the properties of empirical risk minimizers, such as least squares and maximum likelihood estimators, support vector machines, etc. | |||||
Learning objective | ||||||
Content | In this series of lectures, we will start by considering exponential inequalities, including concentration inequalities, for the deviation of averages from their mean. We furthermore present some notions from approximation theory, because this enables us to assess the modulus of continuity of empirical processes. We introduce, e.g., the Vapnik-Chervonenkis dimension: a combinatorial concept (from learning theory) of the "size" of a collection of sets or functions. As statistical applications, we study consistency and exponential inequalities for empirical risk minimizers, and asymptotic normality in semi-parametric models. We moreover examine regularization and model selection. | |||||
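As a flavour of the exponential inequalities the lectures start from (a standard result, not specific to this course), Hoeffding's inequality bounds the deviation of an average of independent bounded variables $X_1,\dots,X_n \in [a,b]$ from its mean:

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^n \big(X_i - \mathbb{E}X_i\big)\right| \ge t\right) \;\le\; 2\exp\!\left(-\frac{2nt^2}{(b-a)^2}\right).$$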
401-3632-00L | Computational Statistics | W | 8 credits | 3V + 1U | M. H. Maathuis | |
Abstract | We discuss modern statistical methods for data analysis, including methods for data exploration, prediction and inference. We pay attention to algorithmic aspects, theoretical properties and practical considerations. The class is hands-on and methods are applied using the statistical programming language R. | |||||
Learning objective | The student obtains an overview of modern statistical methods for data analysis, including their algorithmic aspects and theoretical properties. The methods are applied using the statistical programming language R. | |||||
Content | See the class website | |||||
Prerequisites / Notice | At least one semester of (basic) probability and statistics. Programming experience is helpful but not required. | |||||
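As a taste of the hands-on style described above, a minimal sketch in base R (simulated data; illustrative only): compare a linear fit and a smoothing spline by held-out test error.

```r
set.seed(1)
n <- 200
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = 0.3)
train <- sample(n, 150)
dat <- data.frame(x, y)

fit_lm <- lm(y ~ x, data = dat[train, ])        # rigid linear model
fit_sp <- smooth.spline(x[train], y[train])     # flexible smoother

mse <- function(pred) mean((y[-train] - pred)^2)  # test mean squared error
mse(predict(fit_lm, newdata = dat[-train, ]))
mse(predict(fit_sp, x = x[-train])$y)           # the spline wins on this data
```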
401-3602-00L | Applied Stochastic Processes (does not take place this semester) | W | 8 credits | 3V + 1U | not available | 
Abstract | Poisson processes; renewal processes; Markov chains in discrete and in continuous time; some applications. | |||||
Learning objective | Stochastic processes are a way to describe and study the behaviour of systems that evolve in some random way. In this course, the evolution will be with respect to a scalar parameter interpreted as time, so that we discuss the temporal evolution of the system. We present several classes of stochastic processes, analyse their properties and behaviour and show by some examples how they can be used. The main emphasis is on theory; in that sense, "applied" should be understood to mean "applicable". | |||||
Literature | - R. N. Bhattacharya and E. C. Waymire, "Stochastic Processes with Applications", SIAM (2009), available online: http://epubs.siam.org/doi/book/10.1137/1.9780898718997 - R. Durrett, "Essentials of Stochastic Processes", Springer (2012), available online: http://link.springer.com/book/10.1007/978-1-4614-3615-7/page/1 - M. Lefebvre, "Applied Stochastic Processes", Springer (2007), available online: http://link.springer.com/book/10.1007/978-0-387-48976-6/page/1 - S. I. Resnick, "Adventures in Stochastic Processes", Birkhäuser (2005) | |||||
Prerequisites / Notice | Prerequisites are familiarity with (measure-theoretic) probability theory as it is treated in the course "Probability Theory" (401-3601-00L). | |||||
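For illustration (not part of the official entry), a minimal base-R sketch of one of the basic constructions treated here: a homogeneous Poisson process on [0, T] built from i.i.d. exponential inter-arrival times.

```r
set.seed(1)
lambda <- 2; Tmax <- 10
gaps <- rexp(ceiling(3 * lambda * Tmax), rate = lambda)  # more gaps than needed
arrivals <- cumsum(gaps)
arrivals <- arrivals[arrivals <= Tmax]                   # keep events in [0, Tmax]
length(arrivals)   # number of events: Poisson(lambda * Tmax) distributed
```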
401-3642-00L | Brownian Motion and Stochastic Calculus | W | 10 credits | 4V + 1U | W. Werner | |
Abstract | This course covers some basic objects of stochastic analysis. In particular, the following topics are discussed: construction and properties of Brownian motion, stochastic integration, Ito's formula and applications, stochastic differential equations and connection with partial differential equations. | |||||
Learning objective | This course covers some basic objects of stochastic analysis. In particular, the following topics are discussed: construction and properties of Brownian motion, stochastic integration, Ito's formula and applications, stochastic differential equations and connection with partial differential equations. | |||||
Lecture notes | Lecture notes will be distributed in class. | |||||
Literature | - J.-F. Le Gall, Brownian Motion, Martingales, and Stochastic Calculus, Springer (2016). - I. Karatzas, S. Shreve, Brownian Motion and Stochastic Calculus, Springer (1991). - D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, Springer (2005). - L.C.G. Rogers, D. Williams, Diffusions, Markov Processes and Martingales, vol. 1 and 2, Cambridge University Press (2000). - D.W. Stroock, S.R.S. Varadhan, Multidimensional Diffusion Processes, Springer (2006). | |||||
Prerequisites / Notice | Familiarity with measure-theoretic probability as in the standard D-MATH course "Probability Theory" will be assumed. Textbook accounts can be found for example in - J. Jacod, P. Protter, Probability Essentials, Springer (2004). - R. Durrett, Probability: Theory and Examples, Cambridge University Press (2010). | |||||
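For illustration (not course material), a minimal base-R sketch of the discretized construction of Brownian motion on [0, 1], together with a numerical check that its quadratic variation is close to t = 1:

```r
set.seed(1)
n <- 1e4
dt <- 1 / n
B <- c(0, cumsum(rnorm(n, sd = sqrt(dt))))   # random-walk approximation of B(t)
sum(diff(B)^2)                               # quadratic variation: close to 1
plot(seq(0, 1, length.out = n + 1), B, type = "l", xlab = "t", ylab = "B(t)")
```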
401-6228-00L | Programming with R for Reproducible Research | W | 1 credit | 1G | M. Mächler | |
Abstract | Deeper understanding of R: function calls, rather than "commands". Reproducible research and data analysis via Sweave and Rmarkdown. Limits of floating point arithmetic. Understanding how functions work. Environments, packages, namespaces. Closures, i.e., functions returning functions. Lists and [mc]lapply() for easy parallelization. Performance measurement and improvements. | |||||
Learning objective | Learn to understand R as a (very versatile and flexible) programming language and learn about some of its lower level functionalities which are needed to understand *why* R works the way it does. | |||||
Content | See "Skript": https://github.com/mmaechler/ProgRRR/tree/master/ETH | |||||
Lecture notes | Material available from Github https://github.com/mmaechler/ProgRRR/tree/master/ETH (typically will be updated during course) | |||||
Literature | Norman Matloff (2011). The Art of R Programming - A Tour of Statistical Software Design. No Starch Press, San Francisco. In stock at Polybuchhandlung (CHF 42.-). More material, notably H. Wickham's "Advanced R": see my ProgRRR GitHub page. | |||||
Prerequisites / Notice | R knowledge at the same level as after *both* parts of the ETH lecture 401-6217-00L Using R for Data Analysis and Graphics. An interest in digging deeper than average R users do. Bring your own laptop with a recent version of R installed. | |||||
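Two of the abstract's themes in a minimal R sketch (illustrative only): a closure, i.e. a function returning a function, and lapply()/parallel::mclapply() for painless parallelization over a list.

```r
make_power <- function(p) function(x) x^p   # closure: p is captured in the environment
square <- make_power(2)
square(1:4)                                 # 1 4 9 16

slow_sq <- function(x) { Sys.sleep(0.1); x^2 }
res1 <- lapply(1:8, slow_sq)                            # sequential
res2 <- parallel::mclapply(1:8, slow_sq, mc.cores = 2)  # forked; Unix-alikes only
identical(res1, res2)                                   # TRUE
```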
401-3629-00L | Quantitative Risk Management | W | 4 credits | 2V + 1U | P. Cheridito | |
Abstract | This course introduces methods from probability theory and statistics that can be used to model financial risks. Topics addressed include loss distributions, risk measures, extreme value theory, multivariate models, copulas, dependence structures and operational risk. | |||||
Learning objective | The goal is to learn the most important methods from probability theory and statistics used in financial risk modeling. | |||||
Content | 1. Introduction 2. Basic Concepts in Risk Management 3. Empirical Properties of Financial Data 4. Financial Time Series 5. Extreme Value Theory 6. Multivariate Models 7. Copulas and Dependence 8. Operational Risk | |||||
Lecture notes | Course material is available on https://people.math.ethz.ch/~patrickc/qrm | |||||
Literature | Quantitative Risk Management: Concepts, Techniques and Tools. A. J. McNeil, R. Frey and P. Embrechts. Princeton University Press, Princeton, 2015 (Revised Edition). http://press.princeton.edu/titles/10496.html | |||||
Prerequisites / Notice | The course corresponds to the Risk Management requirement for the SAA ("Aktuar SAV Ausbildung") as well as for the Master of Science UZH-ETH in Quantitative Finance. | |||||
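For illustration (simulated losses, not course material), a minimal base-R sketch of two risk measures from the syllabus, empirical Value-at-Risk and expected shortfall:

```r
set.seed(1)
L <- rt(1e5, df = 4)         # heavy-tailed loss sample (Student t)
alpha <- 0.99
VaR <- quantile(L, alpha)    # empirical alpha-quantile of the loss distribution
ES  <- mean(L[L > VaR])      # average loss given that VaR is exceeded
c(VaR = unname(VaR), ES = ES)
```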
401-4658-00L | Computational Methods for Quantitative Finance: PDE Methods | W | 6 credits | 3V + 1U | C. Schwab | |
Abstract | Introduction to principal methods of option pricing. Emphasis on PDE-based methods. Prerequisites: MATLAB programming and knowledge of numerical mathematics at ETH BSc level. | |||||
Learning objective | Introduce the main methods for efficient numerical valuation of derivative contracts in Black-Scholes markets as well as in markets that are incomplete due to Lévy processes or stochastic volatility. Develop implementations of the pricing methods in MATLAB, using finite-difference and finite-element based methods for the solution of the pricing integro-differential equation. | |||||
Content | 1. Review of option pricing. Wiener and Lévy price process models. Deterministic, local and stochastic volatility models. 2. Finite difference methods for option pricing. Relation to bi- and multinomial trees. European contracts. 3. Finite difference methods for Asian, American and barrier type contracts. 4. Finite element methods for European and American style contracts. 5. Pricing under local and stochastic volatility in Black-Scholes markets. 6. Finite element methods for option pricing under Lévy processes. Treatment of integro-differential operators. 7. Stochastic volatility models for Lévy processes. 8. Techniques for multidimensional problems. Baskets in a Black-Scholes setting and stochastic volatility models in Black-Scholes and Lévy markets. 9. Introduction to sparse grid option pricing techniques. | |||||
Lecture notes | There will be English, typed lecture notes as well as MATLAB software for registered participants in the course. | |||||
Literature | R. Cont and P. Tankov: Financial Modelling with Jump Processes, Chapman and Hall Publ. 2004. Y. Achdou and O. Pironneau: Computational Methods for Option Pricing, SIAM Frontiers in Applied Mathematics, SIAM Publishers, Philadelphia 2005. D. Lamberton and B. Lapeyre: Introduction to Stochastic Calculus Applied to Finance (second edition), Chapman & Hall/CRC Financial Mathematics Series, Taylor & Francis Publ. Boca Raton, London, New York 2008. J.-P. Fouque, G. Papanicolaou and K.-R. Sircar: Derivatives in Financial Markets with Stochastic Volatility, Cambridge University Press, Cambridge, 2000. N. Hilber, O. Reichmann, Ch. Schwab and Ch. Winter: Computational Methods for Quantitative Finance, Springer Finance, Springer, 2013. | |||||
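The course's software is MATLAB; purely for illustration, here is a minimal sketch in R of the simplest method in its toolbox, an explicit finite-difference scheme for the Black-Scholes PDE, stepped backwards from the payoff of a European call (grid and stability constants are ad hoc choices):

```r
r <- 0.03; sigma <- 0.2; K <- 100; Tmat <- 1
M <- 200; Smax <- 300
dS <- Smax / M
S <- (0:M) * dS
dt <- 0.9 / (sigma^2 * M^2)            # heuristic explicit-scheme stability bound
N <- ceiling(Tmat / dt); dt <- Tmat / N

V <- pmax(S - K, 0)                    # payoff at maturity
i <- 1:(M - 1)                         # interior grid points
for (n in 1:N) {
  delta <- (V[i + 2] - V[i]) / (2 * dS)             # central first derivative
  gamma <- (V[i + 2] - 2 * V[i + 1] + V[i]) / dS^2  # central second derivative
  V[i + 1] <- V[i + 1] + dt * (0.5 * sigma^2 * S[i + 1]^2 * gamma +
                               r * S[i + 1] * delta - r * V[i + 1])
  V[1] <- 0                                         # boundary at S = 0
  V[M + 1] <- Smax - K * exp(-r * n * dt)           # deep in-the-money boundary
}
V[which.min(abs(S - K))]               # approximate at-the-money call price
```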
401-2284-00L | Measure and Integration | W | 6 credits | 3V + 2U | F. Da Lio | |
Abstract | Introduction to abstract measure and integration theory, including the following topics: Caratheodory extension theorem, Lebesgue measure, convergence theorems, L^p-spaces, Radon-Nikodym theorem, product measures and Fubini's theorem, measures on topological spaces | |||||
Learning objective | Basic acquaintance with the abstract theory of measure and integration | |||||
Content | Introduction to abstract measure and integration theory, including the following topics: Caratheodory extension theorem, Lebesgue measure, convergence theorems, L^p-spaces, Radon-Nikodym theorem, product measures and Fubini's theorem, measures on topological spaces | |||||
Lecture notes | New lecture notes in English will be made available during the course | |||||
Literature | 1. L. Evans and R. F. Gariepy, "Measure Theory and Fine Properties of Functions". 2. Walter Rudin, "Real and Complex Analysis". 3. R. Bartle, "The Elements of Integration and Lebesgue Measure". 4. The notes by Prof. Michael Struwe, Spring Semester 2013: https://people.math.ethz.ch/~struwe/Skripten/AnalysisIII-FS2013-12-9-13.pdf 5. The notes by Prof. Urs Lang, Spring Semester 2019: https://people.math.ethz.ch/~lang/mi.pdf 6. P. Cannarsa & T. D'Aprile, Lecture Notes on Measure Theory and Functional Analysis: http://www.mat.uniroma2.it/~cannarsa/cam_0607.pdf | |||||
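As a flavour of the convergence theorems listed above (standard material, not specific to these notes), the dominated convergence theorem: if $f_n \to f$ $\mu$-almost everywhere and $|f_n| \le g$ for some $g \in L^1(\mu)$, then

$$\lim_{n\to\infty}\int f_n \, d\mu \;=\; \int f \, d\mu, \qquad \text{and in fact} \quad \lim_{n\to\infty}\int |f_n - f| \, d\mu = 0.$$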
401-3903-11L | Geometric Integer Programming | W | 6 credits | 2V + 1U | J. Paat | |
Abstract | Integer programming is the task of minimizing a linear function over all the integer points in a polyhedron. This lecture introduces the key concepts of an algorithmic theory for solving such problems. | |||||
Learning objective | The purpose of the lecture is to provide a geometric treatment of the theory of integer optimization. | |||||
Content | Key topics are: - Lattice theory and the polynomial time solvability of integer optimization problems in fixed dimension. - Structural properties of integer sets that reveal other parameters affecting the complexity of integer problems - Duality theory for integer optimization problems from the vantage point of lattice free sets. | |||||
Lecture notes | Not available; blackboard presentation. | |||||
Literature | Lecture notes will be provided. Other helpful materials include Bertsimas, Weismantel: Optimization over Integers, 2005 and Schrijver: Theory of linear and integer programming, 1986. | |||||
Prerequisites / Notice | "Mathematical Optimization" (401-3901-00L) | |||||
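In symbols, the problem described in the abstract reads (with integer data $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$, $c \in \mathbb{Z}^n$):

$$\min\left\{\, c^{\top}x \;:\; Ax \le b,\ x \in \mathbb{Z}^n \,\right\};$$

dropping the integrality constraint $x \in \mathbb{Z}^n$ yields the linear programming relaxation.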
401-4944-20L | Mathematics of Data Science | W | 8 credits | 4G | A. Bandeira | |
Abstract | Mostly self-contained, but fast-paced, introductory Master's-level course on various theoretical aspects of algorithms that aim to extract information from data. | |||||
Learning objective | Introduction to various mathematical aspects of Data Science. | |||||
Content | These topics lie in overlaps of (Applied) Mathematics with: Computer Science, Electrical Engineering, Statistics, and/or Operations Research. Each lecture will feature a couple of Mathematical Open Problem(s) related to Data Science. The main mathematical tools used will be Probability and Linear Algebra, and a basic familiarity with these subjects is required. There will also be some Graph Theory, Representation Theory, and Applied Harmonic Analysis, among others (although knowledge of these tools is not assumed). The topics treated will include Dimension reduction, Manifold learning, Sparse recovery, Random Matrices, Approximation Algorithms, Community detection in graphs, and several others. | |||||
Lecture notes | https://people.math.ethz.ch/~abandeira/TenLecturesFortyTwoProblems.pdf | |||||
Prerequisites / Notice | The main mathematical tools used will be Probability, Linear Algebra (and real analysis), and a working knowledge of these subjects is required. In addition to these prerequisites, this class requires a certain degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs. We encourage students who are interested in mathematical data science to take both this course and "227-0434-10L Mathematics of Information" taught by Prof. H. Bölcskei. The two courses are designed to be complementary. A. Bandeira and H. Bölcskei | |||||
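One of the listed topics, community detection in graphs, in a minimal base-R sketch (a toy two-block stochastic block model with made-up parameters; illustrative only): the sign pattern of the second eigenvector of the adjacency matrix recovers the planted communities.

```r
set.seed(1)
n <- 200; p <- 0.08; q <- 0.02                 # within / between edge probabilities
z <- rep(c(1, -1), each = n / 2)               # planted communities
P <- ifelse(outer(z, z) == 1, p, q)
A <- matrix(rbinom(n * n, 1, P), n, n)
A[lower.tri(A)] <- t(A)[lower.tri(A)]; diag(A) <- 0   # symmetric, no self-loops

v <- eigen(A, symmetric = TRUE)$vectors[, 2]   # second eigenvector
max(mean(sign(v) == z), mean(sign(v) == -z))   # fraction recovered, up to sign flip
```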
227-0434-10L | Mathematics of Information | W | 8 credits | 3V + 2U + 2A | H. Bölcskei | |
Abstract | The class focuses on mathematical aspects of 1. Information science: Sampling theorems, frame theory, compressed sensing, sparsity, super-resolution, spectrum-blind sampling, subspace algorithms, dimensionality reduction 2. Learning theory: Approximation theory, uniform laws of large numbers, Rademacher complexity, Vapnik-Chervonenkis dimension | |||||
Learning objective | The aim of the class is to familiarize the students with the most commonly used mathematical theories in data science, high-dimensional data analysis, and learning theory. The class consists of the lecture, exercise sessions with homework problems, and of a research project, which can be carried out either individually or in groups. The research project consists of either 1. software development for the solution of a practical signal processing or machine learning problem or 2. the analysis of a research paper or 3. a theoretical research problem of suitable complexity. Students are welcome to propose their own project at the beginning of the semester. The outcomes of all projects have to be presented to the entire class at the end of the semester. | |||||
Content | Mathematics of Information 1. Signal representations: Frame theory, wavelets, Gabor expansions, sampling theorems, density theorems 2. Sparsity and compressed sensing: Sparse linear models, uncertainty relations in sparse signal recovery, matching pursuits, super-resolution, spectrum-blind sampling, subspace algorithms (MUSIC, ESPRIT, matrix pencil), estimation in the high-dimensional noisy case, Lasso 3. Dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma Mathematics of Learning 4. Approximation theory: Nonlinear approximation theory, fundamental limits on compressibility of signal classes, Kolmogorov-Tikhomirov epsilon-entropy of signal classes, optimal compression of signal classes, recovery from incomplete data, information-based complexity, curse of dimensionality 5. Uniform laws of large numbers: Rademacher complexity, Vapnik-Chervonenkis dimension, classes with polynomial discrimination, blessings of dimensionality | |||||
Lecture notes | Detailed lecture notes will be provided at the beginning of the semester and as we go along. | |||||
Prerequisites / Notice | This course is aimed at students with a background in basic linear algebra, analysis, statistics, and probability. We encourage students who are interested in mathematical data science to take both this course and "401-4944-20L Mathematics of Data Science" by Prof. A. Bandeira. The two courses are designed to be complementary. H. Bölcskei and A. Bandeira | |||||
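The random-projection idea behind the Johnson-Lindenstrauss Lemma (item 3 of the content list) in a minimal base-R sketch (illustrative dimensions only): pairwise distances are roughly preserved when projecting from d = 1000 down to k = 100 dimensions.

```r
set.seed(1)
d <- 1000; k <- 100; m <- 20
X <- matrix(rnorm(m * d), m, d)                    # m points in R^d
R <- matrix(rnorm(k * d, sd = 1 / sqrt(k)), k, d)  # scaled random projection
Y <- X %*% t(R)                                    # projected points in R^k

ratio <- as.vector(dist(Y)) / as.vector(dist(X))   # per-pair distortion
range(ratio)                                       # concentrated around 1
```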
261-5110-00L | Optimization for Data Science | W | 8 credits | 3V + 2U + 2A | B. Gärtner, D. Steurer | |
Abstract | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science. | |||||
Learning objective | Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science. | |||||
Content | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science. In the first part of the course, we will give a brief introduction to convex optimization, with some basic motivating examples from machine learning. Then we will analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, projected gradient descent, subgradient descent, stochastic gradient descent, Nesterov's accelerated method, Newton's method, and quasi-Newton methods. The emphasis will be on analysis techniques that occur repeatedly in convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization. In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms to solve computational problems arising in data science. We will learn about this paradigm and develop a unified perspective on it through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we will discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation. | |||||
Prerequisites / Notice | As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary. | |||||
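For illustration (not the course's notation), a minimal base-R sketch of the most classical method analysed here: gradient descent on a smooth convex quadratic with step size 1/L, the setting of the textbook O(1/t) convergence guarantee.

```r
set.seed(1)
A <- crossprod(matrix(rnorm(100 * 10), 100, 10)) / 100   # positive semidefinite
b <- rnorm(10)
gradf <- function(x) A %*% x - b      # gradient of f(x) = x'Ax/2 - b'x
L <- max(eigen(A, symmetric = TRUE, only.values = TRUE)$values)  # smoothness constant

x <- numeric(10)
for (t in 1:500) x <- x - (1 / L) * gradf(x)   # constant step size 1/L
max(abs(gradf(x)))                    # near 0: x is close to solve(A, b)
```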
252-0220-00L | Introduction to Machine Learning Limited number of participants. Preference is given to students in programmes in which the course is being offered. All other students will be waitlisted. Please do not contact Prof. Krause for any questions in this regard. If necessary, please contact studiensekretariat@inf.ethz.ch | W | 8 credits | 4V + 2U + 1A | A. Krause | |
Abstract | The course introduces the foundations of learning and making predictions based on data. | |||||
Learning objective | The course will introduce the foundations of learning and making predictions from data. We will study basic concepts such as trading off goodness of fit and model complexity. We will discuss important machine learning algorithms used in practice, and provide hands-on experience in a course project. | |||||
Content | - Linear regression (overfitting, cross-validation/bootstrap, model selection, regularization, [stochastic] gradient descent) - Linear classification: Logistic regression (feature selection, sparsity, multi-class) - Kernels and the kernel trick (properties of kernels; applications to linear and logistic regression); k-nearest neighbor - Neural networks (backpropagation, regularization, convolutional neural networks) - Unsupervised learning (k-means, PCA, neural network autoencoders) - The statistical perspective (regularization as prior; loss as likelihood; learning as MAP inference) - Statistical decision theory (decision making based on statistical models and utility functions) - Discriminative vs. generative modeling (benefits and challenges in modeling joint vs. conditional distributions) - Bayes' classifiers (Naive Bayes, Gaussian Bayes; MLE) - Bayesian approaches to unsupervised learning (Gaussian mixtures, EM) | |||||
Literature | Textbook: Kevin Murphy, Machine Learning: A Probabilistic Perspective, MIT Press | |||||
Prerequisites / Notice | Designed to provide a basis for following courses: - Advanced Machine Learning - Deep Learning - Probabilistic Artificial Intelligence - Seminar "Advanced Topics in Machine Learning" | |||||
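A minimal base-R sketch of the kind of supervised-learning workflow the content list starts from (simulated data; illustrative only): logistic regression with a train/test split.

```r
set.seed(1)
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y <- rbinom(n, 1, plogis(1.5 * x1 - x2))   # labels from a logistic model
dat <- data.frame(x1, x2, y)
train <- sample(n, 400)

fit <- glm(y ~ x1 + x2, family = binomial, data = dat[train, ])
pred <- predict(fit, newdata = dat[-train, ], type = "response") > 0.5
mean(pred == (dat$y[-train] == 1))         # test accuracy
```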
252-0526-00L | Statistical Learning Theory | W | 7 credits | 3V + 2U + 1A | J. M. Buhmann, C. Cotrini Jimenez | |
Abstract | The course covers advanced methods of statistical learning: - Variational methods and optimization. - Deterministic annealing. - Clustering for diverse types of data. - Model validation by information theory. | |||||
Learning objective | The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning. | |||||
Content | - Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing. - Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures. - Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation. - Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models. | |||||
Lecture notes | A draft of a script will be provided. Lecture slides will be made available. | |||||
Literature | Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001. L. Devroye, L. Gyorfi, and G. Lugosi: A probabilistic theory of pattern recognition. Springer, New York, 1996 | |||||
Prerequisites / Notice | Knowledge of machine learning (introduction to machine learning and/or advanced machine learning). Basic knowledge of statistics. | |||||
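A toy base-R sketch of the deterministic-annealing flavour of clustering described above (a simplified one-dimensional illustration, not the course's algorithm): soft assignments follow a Gibbs distribution whose temperature is lowered over the iterations.

```r
set.seed(1)
x <- c(rnorm(50, -3), rnorm(50, 3))          # two groups in one dimension
mu <- c(-0.1, 0.1)                           # nearly symmetric start
for (Temp in rev(seq(0.05, 5, length.out = 50))) {  # cool the temperature
  d2 <- outer(x, mu, function(a, b) (a - b)^2)
  w <- exp(-(d2 - apply(d2, 1, min)) / Temp)        # stabilized Gibbs weights
  w <- w / rowSums(w)                               # soft assignments
  mu <- colSums(w * x) / colSums(w)                 # re-estimate centroids
}
mu                                           # close to the true centres -3, 3
```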
252-3900-00L | Big Data for Engineers This course is not intended for Computer Science and Data Science MSc students! | W | 6 credits | 2V + 2U + 1A | G. Fourny | |
Abstract | This course is part of the series of database lectures offered to all ETH departments, together with Information Systems for Engineers. It introduces the most recent advances in the database field: how do we scale storage and querying to Petabytes of data, with trillions of records? How do we deal with heterogeneous data sets? How do we deal with alternate data shapes like trees and graphs? | |||||
Learning objective | This course is complementary to Information Systems for Engineers, as the two cover different periods of database history and practice -- you can even take both lectures at the same time. The key challenge of the information society is to turn data into information, information into knowledge, and knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations. This combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm". Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were and are still needed. Harnessing them must involve a deep understanding of data not only in the large, but also in the small. The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on the paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof. After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies together for addressing each business use case efficiently and consistently. | |||||
Content | This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. It targets specifically students with a scientific or engineering, but not Computer Science, background. We take the monolithic, one-machine relational stack from the 1970s, smash it down and rebuild it on top of large clusters: starting with distributed storage, and all the way up to syntax, models, validation, processing, indexing, and querying. A broad range of aspects is covered with a focus on how they all fit together in the big picture of the Big Data ecosystem. No data is harmed during this course, however, please be psychologically prepared that our data may not always be in normal form. - physical storage: distributed file systems (HDFS), object storage (S3), key-value stores - logical storage: document stores (MongoDB), column stores (HBase) - data formats and syntaxes (XML, JSON, RDF, CSV, YAML, protocol buffers, Avro) - data shapes and models (tables, trees) - type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +) - an overview of functional, declarative programming languages across data shapes (SQL, JSONiq) - the most important query paradigms (selection, projection, joining, grouping, ordering, windowing) - paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark) - resource management (YARN) - what a data center is made of and why it matters (racks, nodes, ...) - underlying architectures (internal machinery of HDFS, HBase, Spark) - optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing) - applications. Large scale analytics and machine learning are outside of the scope of this course. | |||||
Literature | Papers from scientific conferences and journals. References will be given as part of the course material during the semester. | |||||
Prerequisites / Notice | This course is not intended for Computer Science and Data Science students. Computer Science and Data Science students interested in Big Data MUST attend the Master's level Big Data lecture, offered in Fall. Requirements: programming knowledge (Java, C++, Python, PHP, ...) as well as basic knowledge of databases (SQL). If you have already built your own website with a backend SQL database, this is perfect. Attendance is especially recommended to those who attended Information Systems for Engineers last Fall, which introduced the "good old databases of the 1970s" (SQL, tables and cubes). However, this is not a strict requirement, and it is also possible to take the lectures in reverse order. | |||||
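The two-stage MapReduce paradigm mentioned in the content list, mimicked in a few lines of base R (a single-machine toy; real engines such as Hadoop or Spark distribute the same stages across a cluster):

```r
docs <- c("big data for engineers", "data shapes and data models")

mapped  <- unlist(lapply(docs, function(d) strsplit(d, " ")[[1]]))  # map: emit words
shuffle <- split(rep(1L, length(mapped)), mapped)                   # group by key
vapply(shuffle, sum, integer(1))                                    # reduce: count
```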
263-5300-00L | Guarantees for Machine Learning | W | 5 credits | 2V + 2A | F. Yang | |
Abstract | This course teaches classical and recent methods in statistics and optimization commonly used to prove theoretical guarantees for machine learning algorithms. The knowledge is then applied in project work that focuses on understanding phenomena in modern machine learning. | |||||
Learning objective | This course is aimed at advanced master and doctorate students who want to understand and/or conduct independent research on theory for modern machine learning. For this purpose, students will learn common mathematical techniques from statistical learning theory. In independent project work, they then apply their knowledge and go through the process of critically questioning recently published work, finding relevant research questions and learning how to effectively present research ideas to a professional audience. | |||||
Content | This course teaches some classical and recent methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms, including topics in - concentration bounds, uniform convergence - high-dimensional statistics (e.g. Lasso) - prediction error bounds for non-parametric statistics (e.g. in kernel spaces) - minimax lower bounds - regularization via optimization The project work focuses on active theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to - how overparameterization could help generalization (interpolating models, linearized NN) - how overparameterization could help optimization (non-convex optimization, loss landscape) - complexity measures and approximation theoretic properties of randomly initialized and trained NN - generalization of robust learning (adversarial robustness, standard and robust error tradeoff) - prediction with calibrated confidence (conformal prediction, calibration) | |||||
Prerequisites / Notice | It's absolutely necessary for students to have a strong mathematical background (basic real analysis, probability theory, linear algebra) and good knowledge of core concepts in machine learning taught in courses such as "Introduction to Machine Learning" and "Regression"/"Statistical Modelling". It's also helpful to have taken an optimization or approximation theory course. In addition to these prerequisites, this class requires a certain degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs. | |||||
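As a flavour of the concentration and uniform-convergence material listed above (a standard bound, not specific to this course): for a class $\mathcal{F}$ of $[0,1]$-valued functions and an i.i.d. sample $X_1,\dots,X_n$, with probability at least $1-\delta$,

$$\sup_{f\in\mathcal{F}}\left(\mathbb{E}[f] - \frac{1}{n}\sum_{i=1}^n f(X_i)\right) \;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}},$$

where $\mathfrak{R}_n(\mathcal{F})$ denotes the Rademacher complexity of $\mathcal{F}$.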
636-0702-00L | Statistical Models in Computational Biology | W | 6 credits | 2V + 1U + 2A | N. Beerenwinkel | |
Abstract | The course offers an introduction to graphical models and their application to complex biological systems. Graphical models combine a statistical methodology with efficient algorithms for inference in settings of high dimension and uncertainty. The unifying graphical model framework is developed and used to examine several classical and topical computational biology methods. | |||||
Learning objective | The goal of this course is to establish the common language of graphical models for applications in computational biology and to see this methodology at work for several real-world data sets. | |||||
Content | Graphical models are a marriage between probability theory and graph theory. They combine the notion of probabilities with efficient algorithms for inference among many random variables. Graphical models play an important role in computational biology, because they explicitly address two features that are inherent to biological systems: complexity and uncertainty. We will develop the basic theory and the common underlying formalism of graphical models and discuss several computational biology applications. Topics covered include conditional independence, Bayesian networks, Markov random fields, Gaussian graphical models, EM algorithm, junction tree algorithm, model selection, Dirichlet process mixture, causality, the pair hidden Markov model for sequence alignment, probabilistic phylogenetic models, phylo-HMMs, microarray experiments and gene regulatory networks, protein interaction networks, learning from perturbation experiments, time series data and dynamic Bayesian networks. Some of the biological applications will be explored in small data analysis problems as part of the exercises. | |||||
Lecture notes | no | |||||
Literature | - Airoldi EM (2007) Getting started in probabilistic graphical models. PLoS Comput Biol 3(12): e252. doi:10.1371/journal.pcbi.0030252 - Bishop CM. Pattern Recognition and Machine Learning. Springer, 2007. - Durbin R, Eddy S, Krogh A, Mitchison G. Biological Sequence Analysis. Cambridge University Press, 2004 | |||||
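The conditional-independence structure that graphical models encode, in a minimal base-R sketch (simulated Gaussian data; illustrative only): in the chain X -> Y -> Z, X and Z are marginally dependent but conditionally independent given Y.

```r
set.seed(1)
n <- 1e4
X <- rnorm(n); Y <- X + rnorm(n); Z <- Y + rnorm(n)   # chain X -> Y -> Z
cor(X, Z)                                  # clearly non-zero marginally
cor(resid(lm(X ~ Y)), resid(lm(Z ~ Y)))    # partial correlation given Y: ~ 0
```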
701-0104-00L | Statistical Modelling of Spatial Data | W | 3 credits | 2G | A. J. Papritz | |
Abstract | In environmental sciences one often deals with spatial data. When analysing such data the focus is either on exploring their structure (dependence on explanatory variables, autocorrelation) and/or on spatial prediction. The course provides an introduction to geostatistical methods that are useful for such analyses. | |||||
Learning objective | The course will provide an overview of the basic concepts and stochastic models that are used to model spatial data. In addition, participants will learn a number of geostatistical techniques and acquire familiarity with R software that is useful for analyzing spatial data. | |||||
Content | After an introductory discussion of the types of problems and the kind of data that arise in environmental research, an introduction into linear geostatistics (models: stationary and intrinsic random processes, modelling large-scale spatial patterns by linear regression, modelling autocorrelation by variogram; kriging: mean square prediction of spatial data) will be taught. The lectures will be complemented by data analyses that the participants have to do themselves. | |||||
Lecture notes | Slides, descriptions of the problems for the data analyses and solutions to them will be provided. | |||||
Literature | P.J. Diggle & P.J. Ribeiro Jr. 2007. Model-based Geostatistics. Springer. Bivand, R. S., Pebesma, E. J. & Gómez-Rubio, V. 2013. Applied Spatial Data Analysis with R. Springer. | |||||
Prerequisites / Notice | Familiarity with linear regression analysis (e.g. equivalent to the first part of the course 401-0649-00L Applied Statistical Regression) and with the software R (e.g. 401-6215-00L Using R for Data Analysis and Graphics (Part I), 401-6217-00L Using R for Data Analysis and Graphics (Part II)) is required for attending the course. | |||||
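The variogram mentioned above, computed empirically in a minimal base-R sketch (simulated data; dedicated R packages do this properly): bin squared increments of the observations by distance class.

```r
set.seed(1)
n <- 300
xy <- matrix(runif(2 * n), n, 2)             # random locations in the unit square
z <- sin(4 * xy[, 1]) + rnorm(n, sd = 0.2)   # spatially structured values

h <- as.vector(dist(xy))                     # pairwise distances
g <- 0.5 * as.vector(dist(z))^2              # semivariance of each pair
bins <- cut(h, breaks = seq(0, 0.5, by = 0.05))   # distance classes up to 0.5
tapply(g, bins, mean)                        # gamma(h): rises, then levels off
```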
401-6222-00L | Robust and Nonlinear Regression | W | 2 credits | 1V + 1U | A. F. Ruckstuhl | |
Abstract | In a first part, the basic ideas of robust fitting techniques are explained theoretically and practically using regression models and explorative multivariate analysis. The second part addresses the challenges of fitting nonlinear regression functions and finding reliable confidence intervals. | |||||
Learning objective | Participants are familiar with common robust fitting methods for linear regression models as well as for exploratory multivariate analysis and are able to assess their suitability for the data at hand. They know the challenges that arise in fitting nonlinear regression functions, and know the difference between classical and profile-based methods to determine confidence intervals. They can apply the discussed methods in practice by using the statistics software R. | |||||
Content | Robust fitting: influence function, breakdown point, regression M-estimation, regression MM-estimation, robust inference, covariance estimation with high breakdown point, application in principal component analysis and linear discriminant analysis. Nonlinear regression: the nonlinear regression model, estimation methods, approximate tests and confidence intervals, profile t plot, profile traces, parameter transformation, prediction and calibration. | |||||
Lecture notes | Lecture notes are available | |||||
Prerequisites / Notice | This is a block course held on three Mondays in June.
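A minimal illustration of the robust-fitting part (simulated data; MASS ships with R): a Huber M-estimate via MASS::rlm stays close to the true line when least squares is pulled away by gross outliers.

```r
set.seed(1)
x <- 1:30
y <- 2 + 0.5 * x + rnorm(30, sd = 0.5)
y[c(5, 20)] <- y[c(5, 20)] + 15    # two gross outliers

coef(lm(y ~ x))                    # least squares: pulled towards the outliers
coef(MASS::rlm(y ~ x))             # Huber M-estimation: close to (2, 0.5)
```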