Search result: Catalogue data in Spring Semester 2020
Data Science Master | ||||||
Core Courses | ||||||
Data Analysis | ||||||
Information and Learning | ||||||
Number | Title | Type | ECTS | Hours | Lecturers | 
---|---|---|---|---|---|---|
227-0434-10L | Mathematics of Information | W | 8 credits | 3V + 2U + 2A | H. Bölcskei | 
Abstract | The class focuses on mathematical aspects of 1. Information science: Sampling theorems, frame theory, compressed sensing, sparsity, super-resolution, spectrum-blind sampling, subspace algorithms, dimensionality reduction 2. Learning theory: Approximation theory, uniform laws of large numbers, Rademacher complexity, Vapnik-Chervonenkis dimension | |||||
Objective | The aim of the class is to familiarize the students with the most commonly used mathematical theories in data science, high-dimensional data analysis, and learning theory. The class consists of the lecture, exercise sessions with homework problems, and a research project, which can be carried out either individually or in groups. The research project consists of either 1. software development for the solution of a practical signal processing or machine learning problem or 2. the analysis of a research paper or 3. a theoretical research problem of suitable complexity. Students are welcome to propose their own project at the beginning of the semester. The outcomes of all projects have to be presented to the entire class at the end of the semester. | |||||
Content | Mathematics of Information 1. Signal representations: Frame theory, wavelets, Gabor expansions, sampling theorems, density theorems 2. Sparsity and compressed sensing: Sparse linear models, uncertainty relations in sparse signal recovery, matching pursuits, super-resolution, spectrum-blind sampling, subspace algorithms (MUSIC, ESPRIT, matrix pencil), estimation in the high-dimensional noisy case, Lasso 3. Dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma (stated after this entry) Mathematics of Learning 4. Approximation theory: Nonlinear approximation theory, fundamental limits on compressibility of signal classes, Kolmogorov-Tikhomirov epsilon-entropy of signal classes, optimal compression of signal classes, recovery from incomplete data, information-based complexity, curse of dimensionality 5. Uniform laws of large numbers: Rademacher complexity, Vapnik-Chervonenkis dimension, classes with polynomial discrimination, blessings of dimensionality | |||||
Lecture notes | Detailed lecture notes will be provided at the beginning of the semester and as we go along. | |||||
Prerequisites / Notice | This course is aimed at students with a background in basic linear algebra, analysis, statistics, and probability. We encourage students who are interested in mathematical data science to take both this course and "401-4944-20L Mathematics of Data Science" by Prof. A. Bandeira. The two courses are designed to be complementary. H. Bölcskei and A. Bandeira | |||||
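For reference, the Johnson-Lindenstrauss Lemma listed under dimensionality reduction admits the following standard statement (a general formulation, not taken from the course's lecture notes):

```latex
% Johnson-Lindenstrauss Lemma (standard form): for any 0 < eps < 1 and any
% m points x_1, ..., x_m in R^d, there is a linear map f : R^d -> R^k
% with k = O(eps^{-2} log m) such that, for all i, j,
\[
(1-\varepsilon)\,\lVert x_i - x_j \rVert_2^2
\;\le\; \lVert f(x_i) - f(x_j) \rVert_2^2
\;\le\; (1+\varepsilon)\,\lVert x_i - x_j \rVert_2^2 .
\]
% A suitably scaled random projection satisfies this with high probability.
```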
Statistics | ||||||
Number | Title | Type | ECTS | Hours | Lecturers | 
401-3632-00L | Computational Statistics | W | 8 credits | 3V + 1U | M. H. Maathuis | 
Abstract | We discuss modern statistical methods for data analysis, including methods for data exploration, prediction and inference. We pay attention to algorithmic aspects, theoretical properties and practical considerations. The class is hands-on and methods are applied using the statistical programming language R. | |||||
Objective | The student obtains an overview of modern statistical methods for data analysis, including their algorithmic aspects and theoretical properties. The methods are applied using the statistical programming language R. | |||||
Prerequisites / Notice | At least one semester of (basic) probability and statistics. Programming experience is helpful but not required. | |||||
Data Management and Data Processing | ||||||
Number | Title | Type | ECTS | Hours | Lecturers | 
261-5110-00L | Optimization for Data Science | W | 8 credits | 3V + 2U + 2A | B. Gärtner, D. Steurer | 
Abstract | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science. | |||||
Objective | Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science. | |||||
Content | This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science. In the first part of the course, we will give a brief introduction to convex optimization, with some basic motivating examples from machine learning. Then we will analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, projected gradient descent (a minimal sketch follows this entry), subgradient descent, stochastic gradient descent, Nesterov's accelerated method, Newton's method, and Quasi-Newton methods. The emphasis will be on analysis techniques that occur repeatedly in convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization. In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms to solve computational problems arising in data science. We will learn about this paradigm and develop a unified perspective on it through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we will discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation. | |||||
Prerequisites / Notice | As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary. | |||||
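To make one of the listed first-order methods concrete, here is a minimal sketch of projected gradient descent for a least-squares objective over a Euclidean ball; the objective, step size, and constraint set are illustrative assumptions, not course material.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_gradient_descent(A, b, steps=200, radius=1.0):
    """Minimize 0.5*||Ax - b||^2 subject to ||x||_2 <= radius."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)             # gradient of the smooth objective
        x = project_l2_ball(x - grad / L, radius)  # gradient step, then projection
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
print(projected_gradient_descent(A, b))
```

The 1/L step size is the standard choice for smooth convex objectives; convergence rates for such iterations are exactly the kind of guarantee the course analyzes.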
Elective Core Courses | ||||||
Number | Title | Type | ECTS | Hours | Lecturers | 
151-0566-00L | Recursive Estimation | W | 4 credits | 2V + 1U | R. D'Andrea | 
Abstract | Estimation of the state of a dynamic system based on a model and observations in a computationally efficient way. | |||||
Objective | Learn the basic recursive estimation methods and their underlying principles. | |||||
Content | Introduction to state estimation; probability review; Bayes' theorem; Bayesian tracking; extracting estimates from probability distributions; Kalman filter (a minimal sketch follows this entry); extended Kalman filter; particle filter; observer-based control and the separation principle. | |||||
Lecture notes | Lecture notes available on course website: http://www.idsc.ethz.ch/education/lectures/recursive-estimation.html | |||||
Prerequisites / Notice | Requirements: Introductory probability theory and matrix-vector algebra. | |||||
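As a taste of the material, a minimal scalar Kalman filter for a random-walk state observed in noise might look as follows; the model and noise levels are illustrative assumptions, not taken from the course.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.1**2):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k,
    with process noise variance q and measurement noise variance r."""
    x, p = 0.0, 1.0                 # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                   # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with the measurement innovation
        p = (1.0 - k) * p           # posterior variance
        estimates.append(x)
    return estimates

rng = np.random.default_rng(1)
z = 1.5 + 0.1 * rng.normal(size=100)   # noisy measurements of a constant state
print(kalman_1d(z)[-1])                # estimate approaches 1.5
```

The same predict/update pattern generalizes to the full matrix-vector form treated in the course.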
227-0150-00L | Systems-on-chip for Data Analytics and Machine Learning Previously "Energy-Efficient Parallel Computing Systems for Data Analytics" | W | 6 credits | 4G | L. Benini | 
Abstract | Systems-on-chip architecture and related design issues with a focus on machine learning and data analytics applications. It will cover multi-cores, many-cores, vector engines, GP-GPUs, application-specific processors and heterogeneous compute accelerators. Special emphasis is given to energy-efficiency issues and hardware-software techniques for power and energy minimization. | |||||
Objective | To give an in-depth understanding of the links and dependencies between architectures and their energy-efficient implementation, and to provide comprehensive exposure to state-of-the-art systems-on-chip platforms for machine learning and data analytics. Practical experience will also be gained through practical exercises and mini-projects (hardware and software) assigned on specific topics. | |||||
Content | The course will cover advanced system-on-chip architectures, with an in-depth view on design challenges related to advanced silicon technology and state-of-the-art system integration options (nanometer silicon technology, novel storage devices, three-dimensional integration, advanced system packaging). The emphasis will be on programmable parallel architectures with application focus on machine learning and data analytics. The main SoC architectural families will be covered: namely, multi- and many-cores, GPUs, vector accelerators, application-specific processors, heterogeneous platforms. The course will cover the complex design choices required to achieve scalability and energy proportionality. The course will also delve into system design, touching on hardware-software tradeoffs and full-system analysis and optimization taking into account non-functional constraints and quality metrics, such as power consumption, thermal dissipation, reliability and variability. The application focus will be on machine learning both in the cloud and at the edges (near-sensor analytics). | |||||
Lecture notes | Slides will be provided to accompany lectures. Pointers to scientific literature will be given. Exercise scripts and tutorials will be provided. | |||||
Literature | John L. Hennessy, David A. Patterson: Computer Architecture: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design), 6th edition, 2017. | |||||
Prerequisites / Notice | Knowledge of digital design at the level of "Design of Digital Circuits SS12" is required. Knowledge of basic VLSI design at the level of "VLSI I: Architectures of VLSI Circuits" is required. | |||||
227-0155-00L | Machine Learning on Microcontrollers Registration in this class requires the permission of the instructors. Class size will be limited to 30. Preference is given to students in the MSc EEIT. | W | 6 credits | 3G + 2A | M. Magno, L. Benini | 
Abstract | Machine Learning (ML) and artificial intelligence are pervading the digital society. Today, even low-power embedded systems are incorporating ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet-of-things”, using low-power microcontrollers/processors (ARM-Cortex-M; RISC-V). | |||||
Objective | Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras…). The main objective is to study in detail how Machine Learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers. | |||||
Content | The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include: - Sensors and sensor data acquisition with low-power embedded systems - Machine Learning: Overview of supervised and unsupervised learning, with a focus on supervised learning (Bayes Decision Theory, Decision Trees, Random Forests, kNN-Methods, Support Vector Machines, Convolutional Networks and Deep Learning) - Low-power embedded systems and their architecture. Low-power microcontrollers (ARM-Cortex M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip. - Low-power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems. The laboratory exercises will show how to address concrete design problems, such as motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards. Presentations from Ph.D. students and a visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects. | |||||
Lecture notes | Lecture notes and exercise sheets. Books will be suggested during the course. | |||||
Prerequisites / Notice | Prerequisites: Good experience in C language programming. Microprocessors and computer architecture. Basics of Digital Signal Processing. Some exposure to machine learning concepts is also desirable. | |||||
227-0224-00L | Stochastic Systems | W | 4 credits | 2V + 1U | F. Herzog | 
Abstract | Probability. Stochastic processes. Stochastic differential equations. Itô calculus. Kalman filters. Stochastic optimal control. Applications in financial engineering. | |||||
Objective | Stochastic dynamic systems. Optimal control and filtering of stochastic systems. Examples in technology and finance. | |||||
Content | - Stochastic processes - Stochastic calculus (Itô) - Stochastic differential equations (a simulation sketch follows this entry) - Discrete-time stochastic difference equations - Stochastic processes: AR, MA, ARMA, ARMAX, GARCH - Kalman filter - Stochastic optimal control - Applications in finance and engineering | |||||
Lecture notes | H. P. Geering et al., Stochastic Systems, Measurement and Control Laboratory, 2007, and handouts | |||||
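As a small illustration of the stochastic differential equations listed above, here is a minimal Euler-Maruyama simulation of geometric Brownian motion, a standard model in the financial applications mentioned; the drift, volatility, and step count are arbitrary illustrative choices.

```python
import numpy as np

def euler_maruyama_gbm(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n=1000, seed=0):
    """Simulate dX_t = mu*X_t dt + sigma*X_t dW_t on [0, T] with n steps."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(scale=np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dw
    return x

print(euler_maruyama_gbm()[-1])   # one sample of X_T
```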
227-0420-00L | Information Theory II Does not take place this semester. | W | 6 credits | 2V + 2U | A. Lapidoth | 
Abstract | This course builds on Information Theory I. It introduces additional topics in single-user communication, connections between Information Theory and Statistics, and Network Information Theory. | |||||
Objective | The course has two objectives: to introduce the students to the key information theoretic results that underlie the design of communication systems and to equip the students with the tools that are needed to conduct research in Information Theory. | |||||
Content | Differential entropy, maximum entropy, the Gaussian channel and water filling (summarized after this entry), the entropy-power inequality, Sanov's Theorem, Fisher information, the broadcast channel, the multiple-access channel, Slepian-Wolf coding, and the Gelfand-Pinsker problem. | |||||
Lecture notes | n/a | |||||
Literature | T. M. Cover and J. A. Thomas: Elements of Information Theory, second edition, Wiley, 2006. | |||||
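For orientation, the water-filling solution mentioned in the course content is the classical power allocation for parallel Gaussian channels (the standard result, as found e.g. in Cover and Thomas):

```latex
% Water filling over parallel Gaussian channels with noise levels N_i
% and total power budget P:
\[
P_i = \left(\nu - N_i\right)^{+}, \qquad \sum_i P_i = P,
\qquad
C = \sum_i \frac{1}{2}\log\!\left(1 + \frac{P_i}{N_i}\right),
\]
% where the "water level" nu is chosen so the total power constraint holds:
% power is poured into the least noisy channels first.
```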
227-0558-00L | Principles of Distributed Computing | W | 7 credits | 2V + 2U + 2A | R. Wattenhofer, M. Ghaffari | 
Abstract | We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques. | |||||
Objective | Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week. | |||||
Content | Distributed computing models and paradigms, e.g. message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems. Distributed algorithms, e.g. leader election (a toy simulation follows this entry), coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds. | |||||
Lecture notes | Available. Our course script is used at dozens of other universities around the world. | |||||
Literature | Lecture Notes by Roger Wattenhofer, used at about a dozen universities around the world. Hagit Attiya, Jennifer Welch: Distributed Computing: Fundamentals, Simulations and Advanced Topics, McGraw-Hill, 1998, ISBN 0-07-709352-6. Thomas Cormen, Charles Leiserson, Ronald Rivest: Introduction to Algorithms, The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8. Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger: Dissemination of Information in Communication Networks, Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2. Frank Thomson Leighton: Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Morgan Kaufmann Publishers, San Francisco, CA, 1991, ISBN 1-55860-117-1. David Peleg: Distributed Computing: A Locality-Sensitive Approach, Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8. | |||||
Prerequisites / Notice | Course prerequisites: Interest in algorithmic problems. (No particular course needed.) | |||||
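To make one of the listed topics concrete, here is a minimal sketch simulating synchronous leader election on a unidirectional ring by forwarding the maximum identifier; this toy simulation rests on simplifying assumptions (unique ids, synchronous rounds, known ring size) and is not taken from the course script.

```python
def ring_leader_election(ids):
    """Synchronous max-forwarding leader election on a unidirectional ring.
    Each round, every node sends the largest id it has seen to its successor;
    after n rounds all nodes agree on the maximum id (the leader)."""
    n = len(ids)
    seen = list(ids)                      # largest id seen by each node so far
    for _ in range(n):                    # n synchronous rounds
        msgs = seen[:]                    # messages sent in this round
        for v in range(n):
            seen[v] = max(seen[v], msgs[(v - 1) % n])  # receive from predecessor
    return seen.index(max(ids))           # position of the elected leader

print(ring_leader_election([3, 7, 2, 9, 4]))   # node with id 9 wins -> 3
```

This naive scheme uses O(n^2) messages; message-complexity improvements and matching lower bounds are typical course material.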
227-0560-00L | Deep Learning for Autonomous Driving Registration in this class requires the permission of the instructors. Class size will be limited to 80 students. Preference is given to EEIT, INF and RSC students. | W | 6 credits | 3V + 2P | D. Dai, A. Liniger | 
Abstract | Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with an emphasis on the practical use of deep learning throughout. | |||||
Objective | Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control. After attending this course, students will: 1) understand the core technologies of building a self-driving car; 2) have a good overview over the current state of the art in self-driving cars; 3) be able to critically analyze and evaluate current research in this area; 4) be able to implement basic systems for multiple autonomous driving tasks. | |||||
Content | We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control. The course covers the following main areas: I) Foundation a) Fundamentals of a self-driving car b) Fundamentals of deep learning II) Perception a) Semantic segmentation and lane detection b) Depth estimation with images and sparse LiDAR data c) 3D object detection with images and LiDAR data d) Object tracking and motion prediction III) Localization a) GPS-based and vision-based localization b) Visual odometry and LiDAR odometry IV) Path Planning and Control a) Path planning for autonomous driving b) Motion planning and vehicle control c) Imitation learning and reinforcement learning for self-driving cars The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems: - Sensor calibration and synchronization to obtain multimodal driving data; - Semantic segmentation and depth estimation with deep neural networks (a toy sketch follows this entry); - Learning to drive with images and map data directly (a.k.a. end-to-end driving) | |||||
Lecture notes | The lecture slides will be provided as a PDF. | |||||
Prerequisites / Notice | This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image. | |||||
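Since the exercises mention PyTorch, a minimal fully convolutional network for dense per-pixel prediction (a toy stand-in for the semantic segmentation exercises) might be sketched as follows; the layer sizes and number of classes are hypothetical choices, and this is not the course's actual exercise code.

```python
import torch
import torch.nn as nn

# Toy fully convolutional network: per-pixel class scores for segmentation.
# Channel widths and the class count (5) are arbitrary illustrative values.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 5, kernel_size=1),     # 1x1 conv maps features to class logits
)

images = torch.randn(2, 3, 64, 64)       # batch of 2 RGB images
logits = model(images)                   # shape: (2, 5, 64, 64)
labels = torch.randint(0, 5, (2, 64, 64))  # random per-pixel ground truth
loss = nn.CrossEntropyLoss()(logits, labels)
print(logits.shape, loss.item())
```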
252-0211-00L | Information Security | W | 8 credits | 4V + 3U | D. Basin, S. Capkun, R. Sasse | 
Abstract | This course provides an introduction to Information Security. The focus is on fundamental concepts and models, basic cryptography, protocols and system security, and privacy and data protection. While the emphasis is on foundations, case studies will be given that examine different realizations of these ideas in practice. | |||||
Objective | Master fundamental concepts in Information Security and their application to system building. (See objectives listed below for more details.) | |||||
Content | 1. Introduction and Motivation (OBJECTIVE: Broad conceptual overview of information security) Motivation: implications of IT on society/economy, Classical security problems, Approaches to defining security and security goals, Abstractions, assumptions, and trust, Risk management and the human factor, Course overview. 2. Foundations of Cryptography (OBJECTIVE: Understand basic cryptographic mechanisms and applications) Introduction, Basic concepts in cryptography: Overview, Types of Security, computational hardness, Abstraction of channel security properties, Symmetric encryption, Hash functions, Message authentication codes (a minimal example follows this entry), Public-key distribution, Public-key cryptosystems, Digital signatures, Application case studies, Comparison of encryption at different layers, VPN, SSL, Digital payment systems, blind signatures, e-cash, Time stamping 3. Key Management and Public-key Infrastructures (OBJECTIVE: Understand the basic mechanisms relevant in an Internet context) Key management in distributed systems, Exact characterization of requirements, the role of trust, Public-key Certificates, Public-key Infrastructures, Digital evidence and non-repudiation, Application case studies, Kerberos, X.509, PGP. 4. Security Protocols (OBJECTIVE: Understand network-oriented security, i.e., how to employ building blocks to secure applications in (open) networks) Introduction, Requirements/properties, Establishing shared secrets, Principal and message origin authentication, Environmental assumptions, Dolev-Yao intruder model and variants, Illustrative examples, Formal models and reasoning, Trace-based interleaving semantics, Inductive verification, or model-checking for falsification, Techniques for protocol design, Application case study 1: from Needham-Schroeder Shared-Key to Kerberos, Application case study 2: from DH to IKE. 5. Access Control and Security Policies (OBJECTIVES: Study system-oriented security, i.e., policies, models, and mechanisms) Motivation (relationship to CIA, relationship to Crypto) and examples. Concepts: policies versus models versus mechanisms, DAC and MAC, Modeling formalism, Access Control Matrix Model, Role-Based Access Control, Bell-LaPadula, Harrison-Ruzzo-Ullman, Information flow, Chinese Wall, Biba, Clark-Wilson, System mechanisms: Operating Systems, Hardware Security Features, Reference Monitors, File-system protection, Application case studies 6. Anonymity and Privacy (OBJECTIVE: Examine protection goals beyond standard CIA and corresponding mechanisms) Motivation and Definitions, Privacy, policies and policy languages, mechanisms, problems, Anonymity: simple mechanisms (pseudonyms, proxies), Application case studies: mix networks and crowds. 7. Larger application case study: GSM, mobility | |||||
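To make the notion of a message authentication code from part 2 concrete, here is a minimal example using Python's standard library; the key and message are placeholders for illustration.

```python
import hmac
import hashlib

key = b"a-shared-secret-key"            # placeholder key for illustration
message = b"transfer 100 CHF to Bob"

# Sender computes a MAC tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time to avoid
# leaking information through timing.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True: message is authentic
```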
252-0526-00L | Statistical Learning Theory | W | 7 credits | 3V + 2U + 1A | J. M. Buhmann, C. Cotrini Jimenez | 
Abstract | The course covers advanced methods of statistical learning: - Variational methods and optimization. - Deterministic annealing. - Clustering for diverse types of data. - Model validation by information theory. | |||||
Objective | The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning. | |||||
Content | - Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing (summarized after this entry). - Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures. - Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation. - Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models. | |||||
Lecture notes | A draft of a script will be provided. Lecture slides will be made available. | |||||
Literature | Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001. L. Devroye, L. Györfi, G. Lugosi: A Probabilistic Theory of Pattern Recognition, Springer, New York, 1996. | |||||
Prerequisites / Notice | Knowledge of machine learning ("Introduction to Machine Learning" and/or "Advanced Machine Learning"). Basic knowledge of statistics. | |||||
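The maximum-entropy and deterministic-annealing ideas mentioned in the course content can be summarized by the Gibbs distribution, the minimizer of the free energy at temperature T (a standard formulation, not necessarily the course's notation):

```latex
% Gibbs distribution minimizing the free energy F(p) = E_p[R] - T H(p)
% over distributions p, for a cost function R(x) at temperature T:
\[
p^{*}(x) \;=\; \frac{\exp\!\left(-R(x)/T\right)}
                    {\sum_{x'} \exp\!\left(-R(x')/T\right)}.
\]
% Annealing lowers T gradually: high T favors entropy (smooth solutions),
% low T favors low cost (sharp solutions).
```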
252-0538-00L | Shape Modeling and Geometry Processing | W | 6 credits | 2V + 1U + 2A | O. Sorkine Hornung | 
Abstract | This course covers the fundamentals and some of the latest developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and polygonal meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing, and topics in digital shape fabrication. | |||||
Objective | The students will learn how to design, program and analyze algorithms and systems for interactive 3D shape modeling and geometry processing. | |||||
Content | Recent advances in 3D geometry processing have created a plenitude of novel concepts for the mathematical representation and interactive manipulation of geometric models. This course covers the fundamentals and some of the latest developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and triangle meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing and digital shape fabrication. | |||||
Lecture notes | Slides and course notes | |||||
Prerequisites / Notice | Prerequisites: Visual Computing, Computer Graphics or an equivalent class. Experience with C++ programming. Solid background in linear algebra and analysis. Some knowledge of differential geometry, computational geometry and numerical methods is helpful but not a strict requirement. | |||||
252-0579-00L | 3D Vision | W | 5 credits | 3G + 1A | M. Pollefeys, V. Larsson | 
Abstract | The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry (the epipolar constraint is summarized after this entry), structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization. | |||||
Objective | After attending this course, students will: 1. understand the core concepts for recovering 3D shape of objects and scenes from images and video. 2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications. 3. have a good overview over the current state of the art in 3D vision. 4. be able to critically analyze and assess current research in this area. | |||||
Content | The goal of this course is to teach the core techniques required for robotic and augmented reality applications: How to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D Vision in the form of short lectures, followed by student presentations discussing the current state-of-the-art. The main focus of this course are student projects on 3D Vision topics, with an emphasis on robotic vision and virtual and augmented reality applications. | |||||
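For orientation, the epipolar constraint at the heart of the two-view geometry topics above is usually written as follows (standard notation, not specific to this course):

```latex
% Epipolar constraint relating corresponding image points x and x'
% (homogeneous coordinates) in two views:
\[
x'^{\top} F \, x = 0, \qquad E = K'^{\top} F K = [t]_{\times} R,
\]
% where F is the fundamental matrix, E the essential matrix, K and K' the
% camera intrinsics, and (R, t) the relative rotation and translation
% between the two cameras.
```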
252-3005-00L | Natural Language Understanding Does not take place this semester. Will take place again in HS20. | W | 5 credits | 2V + 1U + 1A | Not yet known | 
Abstract | This course presents topics in natural language processing with an emphasis on modern techniques, primarily focusing on statistical and deep learning approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems. | |||||
Objective | The objective of the course is to learn the basic concepts in the statistical processing of natural languages. The course will be project-oriented so that the students can also gain hands-on experience with state-of-the-art tools and techniques. | |||||
Content | This course presents an introduction to general topics and techniques used in natural language processing today, primarily focusing on statistical approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems. | |||||
Literature | Lectures will make use of textbooks such as the one by Jurafsky and Martin where appropriate, but will also make use of original research and survey papers. | |||||
261-5130-00L | Research in Data Science Only for Data Science MSc. | W | 6 credits | 13A | Professors | 
Abstract | Independent work under the supervision of a core or adjunct faculty member of data science. | |||||
Objective | Independent work under the supervision of a core or adjunct faculty member of data science. Approval by the director of studies is required for a non-DS professor. | |||||
Content | Project done under the supervision of an approved professor. | |||||
Prerequisites / Notice | Only students who have passed at least one core course in Data Management and Processing, and one core course in Data Analysis, can start with a research project. A project description must be submitted at the start of the project to the studies administration. | |||||
263-0007-00L | Advanced Systems Lab Only for master students, otherwise a special permission by the study administration of D-INFK is required. | W | 8 credits | 3V + 2U + 2A | M. Püschel, C. Zhang | 
Abstract | This course introduces the student to the foundations and state-of-the-art techniques in developing high-performance software for mathematical functionality occurring in various fields in computer science. The focus is on optimizing for a single core and includes optimizing for the memory hierarchy, for special instruction sets, and the possible use of automatic performance tuning. | |||||
Objective | Software performance (i.e., runtime) arises through the complex interaction of an algorithm, its implementation, the compiler used, and the microarchitecture the program is run on. The first goal of the course is to provide the student with an understanding of this "vertical" interaction, and hence software performance, for mathematical functionality. The second goal is to teach a systematic strategy for using this knowledge to write fast software for numerical problems. This strategy will be trained in several homework assignments and a semester-long group project. | |||||
Content | The fast evolution and increasing complexity of computing platforms pose a major challenge for developers of high-performance software for engineering, science, and consumer applications: it becomes increasingly harder to harness the available computing power. Straightforward implementations may lose as much as one or two orders of magnitude in performance. On the other hand, creating optimal implementations requires the developer to have an understanding of algorithms, capabilities and limitations of compilers, and the target platform's architecture and microarchitecture. This interdisciplinary course introduces the student to the foundations and state-of-the-art techniques in high-performance mathematical software development using important functionality such as matrix operations, transforms, filters, and others as examples. The course will explain how to optimize for the memory hierarchy, take advantage of special instruction sets, and other details of current processors that require optimization. The concept of automatic performance tuning is introduced. The focus is on optimization for a single core; thus, the course complements others on parallel and distributed computing. Finally, a general strategy for performance analysis and optimization is introduced that the students will apply in group projects that accompany the course. | |||||
Prerequisites / Notice | Solid knowledge of the C programming language and matrix algebra. | |||||
263-0008-00L | Computational Intelligence Lab Only for master students, otherwise a special permission by the study administration of D-INFK is required. | W | 8 credits | 2V + 2U + 3A | T. Hofmann | 
Abstract | This laboratory course teaches fundamental concepts in computational science and machine learning with a special emphasis on matrix factorization and representation learning. The class covers techniques like dimension reduction, data clustering, sparse coding, and deep learning as well as a wide spectrum of related use cases and applications. | |||||
Objective | Students acquire fundamental theoretical concepts and methodologies from machine learning and learn how to apply these techniques to build intelligent systems that solve real-world problems. They learn to successfully develop solutions to application problems by following the key steps of modeling, algorithm design, implementation and experimental validation. This lab course has a strong focus on practical assignments. Students work in groups of three to four people, to develop solutions to three application problems: 1. Collaborative filtering and recommender systems, 2. Text sentiment classification, and 3. Road segmentation in aerial imagery. For each of these problems, students submit their solutions to an online evaluation and ranking system, and get feedback in terms of numerical accuracy and computational speed. In the final part of the course, students combine and extend one of their previous promising solutions, and write up their findings in an extended abstract in the style of a conference paper. (Disclaimer: The offered projects may be subject to change from year to year.) | |||||
Content | see course description | |||||
263-2925-00L | Program Analysis for System Security and Reliability | W | 6 credits | 2V + 1U + 2A | P. Tsankov | 
Abstract | Security issues in modern systems (blockchains, datacenters, AI) result in billions in losses due to hacks. This course introduces the security issues in modern systems and state-of-the-art automated techniques for building secure and reliable systems. The course has a practical focus and covers systems built by successful ETH spin-offs. | |||||
Objective | * Learn about security issues in modern systems -- blockchains, smart contracts, AI-based systems (e.g., autonomous cars), data centers -- and why they are challenging to address. * Understand how the latest automated analysis techniques work, both discrete and probabilistic. * Understand how these techniques combine with machine-learning methods, both supervised and unsupervised. * Understand how to use these methods to build reliable and secure modern systems. * Learn about new open problems that if solved can lead to research and commercial impact. | |||||
Content | Part I: Security of Blockchains - We will cover existing blockchains (e.g., Ethereum, Bitcoin), how they work, what the core security issues are, and how these have led to massive financial losses. - We will show how to extract useful information about smart contracts and transactions using interactive analysis frameworks for querying blockchains (e.g. Google's Ethereum BigQuery). - We will discuss the state-of-the-art security tools (e.g., https://securify.ch) for ensuring that smart contracts are free of security vulnerabilities. - We will study the latest automated reasoning systems (e.g., https://verx.ch) for checking custom (temporal) properties of smart contracts and illustrate their operation on real-world use cases. - We will study how the underlying methods for automated reasoning and testing (e.g., abstract interpretation, symbolic execution, fuzzing) are used to build such tools. Part II: Security of Datacenters and Networks - We will show how to ensure that datacenters and ISPs are secured using declarative reasoning methods (e.g., Datalog). We will also see how to automatically synthesize secure configurations (e.g. using SyNET and NetComplete) which lead to desirable behaviors, thus automating the job of the network operator and avoiding critical errors. - We will discuss how to apply modern discrete probabilistic inference (e.g., PSI and Bayonet) to reason about probabilistic network properties (e.g., the probability of a packet reaching a destination if links fail). Part III: Machine Learning for Security - We will discuss how machine learning models for structured prediction are used to address security tasks, including de-obfuscation of binaries (Debin: https://debin.ai), Android APKs (DeGuard: http://apk-deguard.com) and JavaScript (JSNice: http://jsnice.org). - We will study how to leverage program abstractions in combination with clustering techniques to learn security rules for cryptography APIs from large codebases. - We will study how to automatically learn to identify security vulnerabilities related to the handling of untrusted inputs (cross-site scripting, SQL injection, path traversal, remote code execution) from large codebases. To gain a deeper understanding, the course will involve a hands-on programming project where the methods studied in the class will be applied. | |||||
263-3710-00L | Machine Perception Number of participants limited to 200. | W | 5 credits | 2V + 1U + 1A | O. Hilliges | 
Abstract | Recent developments in neural networks (aka “deep learning”) have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and intelligent UIs. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual tasks. | |||||
Objective | Students will learn about fundamental aspects of modern deep learning approaches for perception. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics and HCI. The final project assignment will involve training a complex neural network architecture and applying it on a real-world dataset of human activity. The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human input into computing systems. In particular, students should be able to develop systems that deal with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others. | |||||
Content | We will focus on teaching: how to set up the problem of machine perception, the learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models. The course covers the following main areas: I) Foundations of deep learning. II) Probabilistic deep learning for generative modelling of data (latent variable models, generative adversarial networks and auto-regressive models). III) Deep learning in computer vision, human-computer interaction and robotics. Specific topics include: I) Deep learning basics: a) Neural Networks and training (i.e., backpropagation) b) Feedforward Networks c) Timeseries modelling (RNN, GRU, LSTM) d) Convolutional Neural Networks for classification II) Probabilistic Deep Learning: a) Latent variable models (VAEs; the training objective is sketched after this entry) b) Generative adversarial networks (GANs) c) Autoregressive models (PixelCNN, PixelRNN, TCNs) III) Deep Learning techniques for machine perception: a) Fully Convolutional architectures for dense per-pixel tasks (i.e., instance segmentation) b) Pose estimation and other tasks involving human activity c) Deep reinforcement learning IV) Case studies from research in computer vision, HCI, robotics and signal processing | |||||
Literature | Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning, MIT Press, 2016. | |||||
Prerequisites / Notice | This is an advanced grad-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat basics of machine learning. Please take note of the following conditions: 1) The number of participants is limited to 200 students (MSc and PhDs). 2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge. 3) All practical exercises will require basic knowledge of Python and will use libraries such as TensorFlow, scikit-learn and scikit-image. We will provide introductions to TensorFlow and other libraries that are needed but will not provide introductions to basic programming or Python. The following courses are strongly recommended as prerequisite: * "Visual Computing" or "Computer Vision" The course will be assessed by a final written examination in English. No course materials or electronic devices can be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials and the exercises. | |||||
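As a pointer to the probabilistic deep learning part, the training objective of the VAEs mentioned under II.a is the evidence lower bound, in its standard form:

```latex
% Evidence lower bound (ELBO) maximized when training a VAE with
% encoder q(z|x), decoder p(x|z), and prior p(z):
\[
\log p(x) \;\ge\;
\mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]
\;-\; \mathrm{KL}\!\left(q(z \mid x)\,\middle\|\,p(z)\right).
\]
% The first term rewards reconstruction; the KL term regularizes the
% approximate posterior toward the prior.
```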
263-4400-00L | Advanced Graph Algorithms and Optimization Number of participants limited to 30. | W | 5 credits | 3G + 1A | R. Kyng | 
Abstract | This course will cover a number of advanced topics in optimization and graph algorithms. | |||||
Objective | The course will take students on a deep dive into modern approaches to graph algorithms using convex optimization techniques. By studying convex optimization through the lens of graph algorithms, students should develop a deeper understanding of fundamental phenomena in optimization. The course will cover some traditional discrete approaches to various graph problems, especially flow problems, and then contrast these approaches with modern, asymptotically faster methods based on combining convex optimization with spectral and combinatorial graph theory. | |||||
Content | Students should leave the course understanding key concepts in optimization such as first- and second-order optimization, convex duality, multiplicative weights and dual-based methods, acceleration, preconditioning, and non-Euclidean optimization. Students will also be familiarized with central techniques in the development of graph algorithms in the past 15 years, including graph decomposition techniques, sparsification, oblivious routing, and spectral and combinatorial preconditioning. | |||||
Prerequisites / Notice | This course is targeted toward masters and doctoral students with an interest in theoretical computer science. Students should be comfortable with design and analysis of algorithms, probability, and linear algebra. Having passed the course Algorithms, Probability, and Computing (APC) is highly recommended, but not formally required. If you are not sure whether you're ready for this class or not, please consult the instructor. | |||||
263-5300-00L | Guarantees for Machine Learning | W | 5 credits | 2V + 2A | F. Yang | 
Abstract | This course teaches classical and recent methods in statistics and optimization commonly used to prove theoretical guarantees for machine learning algorithms. The knowledge is then applied in project work that focuses on understanding phenomena in modern machine learning. | |||||
Objective | This course is aimed at advanced master's and doctoral students who want to understand and/or conduct independent research on theory for modern machine learning. For this purpose, students will learn common mathematical techniques from statistical learning theory. In independent project work, they then apply their knowledge and go through the process of critically questioning recently published work, finding relevant research questions and learning how to effectively present research ideas to a professional audience. | |||||
Content | This course teaches some classical and recent methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms, including topics in - concentration bounds, uniform convergence (an example bound follows this entry) - high-dimensional statistics (e.g. Lasso) - prediction error bounds for non-parametric statistics (e.g. in kernel spaces) - minimax lower bounds - regularization via optimization The project work focuses on active theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to - how overparameterization could help generalization (interpolating models, linearized NN) - how overparameterization could help optimization (non-convex optimization, loss landscape) - complexity measures and approximation theoretic properties of randomly initialized and trained NN - generalization of robust learning (adversarial robustness, standard and robust error tradeoff) - prediction with calibrated confidence (conformal prediction, calibration) | |||||
Prerequisites / Notice | It is absolutely necessary for students to have a strong mathematical background (basic real analysis, probability theory, linear algebra) and good knowledge of core concepts in machine learning taught in courses such as "Introduction to Machine Learning", "Regression" / "Statistical Modelling". It is also helpful to have taken an optimization or approximation theory course. In addition to these prerequisites, this class requires a certain degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs. | |||||
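As an example of the concentration bounds listed first in the course content, Hoeffding's inequality is a classical result of this kind:

```latex
% Hoeffding's inequality: for independent X_1, ..., X_n with
% X_i in [a_i, b_i] and S_n = X_1 + ... + X_n,
\[
\mathbb{P}\!\left(\left|S_n - \mathbb{E}[S_n]\right| \ge t\right)
\;\le\; 2 \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).
\]
% Bounds of this type underlie uniform convergence arguments in
% statistical learning theory.
```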
401-0674-00L | Numerical Methods for Partial Differential Equations Not for BSc/MSc Mathematics students | W | 10 credits | 2G + 2U + 2P + 4A | R. Hiptmair | 
Abstract | Derivation, properties, and implementation of fundamental numerical methods for a few key partial differential equations: convection-diffusion, heat equation, wave equation, conservation laws. Implementation in C++ based on a finite element library. | |||||
Objective | Main skills to be acquired in this course: * Ability to implement fundamental numerical methods for the solution of partial differential equations efficiently. * Ability to modify and adapt numerical algorithms guided by awareness of their mathematical foundations. * Ability to select and assess numerical methods in light of the predictions of theory. * Ability to identify features of a PDE (= partial differential equation) based model that are relevant for the selection and performance of a numerical algorithm. * Ability to understand research publications on theoretical and practical aspects of numerical methods for partial differential equations. * Skills in the efficient implementation of finite element methods on unstructured meshes. This course is neither a course on the mathematical foundations and numerical analysis of methods nor a course that merely teaches recipes and how to apply software packages. | |||||
Content | 1 Second-Order Scalar Elliptic Boundary Value Problems 1.2 Equilibrium Models: Examples 1.3 Sobolev spaces 1.4 Linear Variational Problems 1.5 Equilibrium Models: Boundary Value Problems 1.6 Diffusion Models (Stationary Heat Conduction) 1.7 Boundary Conditions 1.8 Second-Order Elliptic Variational Problems 1.9 Essential and Natural Boundary Conditions 2 Finite Element Methods (FEM) 2.2 Principles of Galerkin Discretization 2.3 Case Study: Linear FEM for Two-Point Boundary Value Problems 2.4 Case Study: Triangular Linear FEM in Two Dimensions 2.5 Building Blocks of General Finite Element Methods 2.6 Lagrangian Finite Element Methods 2.7 Implementation of Finite Element Methods 2.7.1 Mesh Generation and Mesh File Format 2.7.2 Mesh Information and Mesh Data Structures 2.7.2.1 LehrFEM++ Mesh: Container Layer 2.7.2.2 LehrFEM++ Mesh: Topology Layer 2.7.2.3 LehrFEM++ Mesh: Geometry Layer 2.7.3 Vectors and Matrices 2.7.4 Assembly Algorithms 2.7.4.1 Assembly: Localization 2.7.4.2 Assembly: Index Mappings 2.7.4.3 Distribute Assembly Schemes 2.7.4.4 Assembly: Linear Algebra Perspective 2.7.5 Local Computations 2.7.5.1 Analytic Formulas for Entries of Element Matrices 2.7.5.2 Local Quadrature 2.7.6 Treatment of Essential Boundary Conditions 2.8 Parametric Finite Element Methods 3 FEM: Convergence and Accuracy 3.1 Abstract Galerkin Error Estimates 3.2 Empirical (Asymptotic) Convergence of Lagrangian FEM 3.3 A Priori (Asymptotic) Finite Element Error Estimates 3.4 Elliptic Regularity Theory 3.5 Variational Crimes 3.6 FEM: Duality Techniques for Error Estimation 3.7 Discrete Maximum Principle 3.8 Validation and Debugging of Finite Element Codes 4 Beyond FEM: Alternative Discretizations [dropped] 5 Non-Linear Elliptic Boundary Value Problems [dropped] 6 Second-Order Linear Evolution Problems 6.1 Time-Dependent Boundary Value Problems 6.2 Parabolic Initial-Boundary Value Problems 6.3 Linear Wave Equations 7 Convection-Diffusion Problems [dropped] 8 Numerical Methods for Conservation Laws 8.1 Conservation Laws: Examples 8.2 Scalar Conservation Laws in 1D 8.3 Conservative Finite Volume (FV) Discretization 8.4 Timestepping for Finite-Volume Methods 8.5 Higher-Order Conservative Finite-Volume Schemes (A toy 1D finite element solve is sketched after this entry.) | |||||
Lecture notes | The lecture will be taught in flipped classroom format: - Video tutorials for all thematic units will be published online. - Tablet notes accompanying the videos will be made available to the audience as PDF. - A comprehensive lecture document will cover all aspects of the course. | |||||
Literature | Chapters of the following books provide supplementary reading (detailed references in course material): * D. Braess: Finite Elemente, Theorie, schnelle Löser und Anwendungen in der Elastizitätstheorie, Springer 2007 (available online). * S. Brenner and R. Scott. Mathematical theory of finite element methods, Springer 2008 (available online). * A. Ern and J.-L. Guermond. Theory and Practice of Finite Elements, volume 159 of Applied Mathematical Sciences. Springer, New York, 2004. * Ch. Großmann and H.-G. Roos: Numerical Treatment of Partial Differential Equations, Springer 2007. * W. Hackbusch. Elliptic Differential Equations. Theory and Numerical Treatment, volume 18 of Springer Series in Computational Mathematics. Springer, Berlin, 1992. * P. Knabner and L. Angermann. Numerical Methods for Elliptic and Parabolic Partial Differential Equations, volume 44 of Texts in Applied Mathematics. Springer, Heidelberg, 2003. * S. Larsson and V. Thomée. Partial Differential Equations with Numerical Methods, volume 45 of Texts in Applied Mathematics. Springer, Heidelberg, 2003. * R. LeVeque. Finite Volume Methods for Hyperbolic Problems. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, UK, 2002. However, study of supplementary literature is not important for following the course. | |||||
Prerequisites / Notice | Mastery of basic calculus and linear algebra is taken for granted. Familiarity with fundamental numerical methods (solution methods for linear systems of equations, interpolation, approximation, numerical quadrature, numerical integration of ODEs) is essential. Important: Coding skills and experience in C++ are essential. Homework assignments involve substantial coding, partly based on a C++ finite element library. The written examination will be computer based and will comprise coding tasks. | |||||
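To give a small taste of the implementation side, a piecewise-linear finite element solve of the 1D Poisson problem -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions can be sketched in a few lines; this is an illustration in Python only, whereas the course itself works in C++ with LehrFEM++.

```python
import numpy as np

def fem_poisson_1d(f, n=50):
    """Linear FEM for -u'' = f on (0,1), u(0) = u(1) = 0, uniform mesh."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior mesh nodes
    # Stiffness matrix for hat functions: tridiagonal (2, -1) scaled by 1/h.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    rhs = f(x) * h                               # trapezoidal-rule load vector
    return x, np.linalg.solve(A, rhs)

# Manufactured solution u = sin(pi x) for f = pi^2 sin(pi x):
x, u = fem_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))     # small O(h^2) error
```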
401-3052-05L | Graph Theory | W | 5 credits | 2V + 1U | B. Sudakov | 
Abstract | Basic notions, trees, spanning trees, Cayley's formula (stated after this entry), vertex and edge connectivity, 2-connectivity, Mader's theorem, Menger's theorem, Eulerian graphs, Hamilton cycles, Dirac's theorem, matchings, theorems of Hall, König and Tutte, planar graphs, Euler's formula, basic non-planar graphs, graph colorings, greedy colorings, Brooks' theorem, 5-colorings of planar graphs | |||||
Objective | The students will get an overview of the most fundamental questions concerning graph theory. We expect them to understand the proof techniques and to use them autonomously on related problems. | |||||
Lecture notes | Lectures will be given at the blackboard only. | |||||
Literature | West, D.: "Introduction to Graph Theory" Diestel, R.: "Graph Theory" Further literature links will be provided in the lecture. | |||||
Prerequisites / Notice | Students are expected to have a mathematical background and should be able to write rigorous proofs. NOTICE: This course unit was previously offered as 252-1408-00L Graphs and Algorithms. | |||||
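For reference, Cayley's formula mentioned in the course abstract states:

```latex
% Cayley's formula: the number of labeled trees on n vertices is
\[
t(n) = n^{\,n-2},
\]
% e.g. t(4) = 16 labeled trees on 4 vertices.
```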
401-3052-10L | Graph Theory | W | 10 credits | 4V + 1U | B. Sudakov | 
Abstract | Basics, trees, Cayley's formula, matrix tree theorem, connectivity, theorems of Mader and Menger, Eulerian graphs, Hamilton cycles, theorems of Dirac, Ore, Erdős-Chvátal, matchings, theorems of Hall, König, Tutte, planar graphs, Euler's formula, Kuratowski's theorem, graph colorings, Brooks' theorem, 5-colorings of planar graphs, list colorings, Vizing's theorem, Ramsey theory, Turán's theorem | |||||
Objective | The students will get an overview of the most fundamental questions concerning graph theory. We expect them to understand the proof techniques and to use them autonomously on related problems. | |||||
Lecture notes | Lectures will be given at the blackboard only. | |||||
Literature | West, D.: "Introduction to Graph Theory" Diestel, R.: "Graph Theory" Further literature links will be provided in the lecture. | |||||
Prerequisites / Notice | Students are expected to have a mathematical background and should be able to write rigorous proofs. | |||||
401-3602-00L | Applied Stochastic Processes Does not take place this semester. | W | 8 credits | 3V + 1U | no information | 
Abstract | Poisson processes (a simulation sketch follows this entry); renewal processes; Markov chains in discrete and in continuous time; some examples and applications. | |||||
Objective | Stochastic processes are used to describe the evolution of systems that develop in a random fashion. In this course, the evolution is with respect to a scalar parameter interpreted as time, so that we study the temporal evolution of such systems. The course presents several classes of stochastic processes, examines their properties and behavior, and shows by means of some examples how these processes can be used. The main emphasis is on theory; "applied" should thus be understood in the sense of "applicable". | |||||
Literature | R. N. Bhattacharya and E. C. Waymire, "Stochastic Processes with Applications", SIAM (2009), available online: http://epubs.siam.org/doi/book/10.1137/1.9780898718997 R. Durrett, "Essentials of Stochastic Processes", Springer (2012), available online: http://link.springer.com/book/10.1007/978-1-4614-3615-7/page/1 M. Lefebvre, "Applied Stochastic Processes", Springer (2007), available online: http://link.springer.com/book/10.1007/978-0-387-48976-6/page/1 S. I. Resnick, "Adventures in Stochastic Processes", Birkhäuser (2005) | |||||
Prerequisites / Notice | Prerequisites are familiarity with (measure-theoretic) probability theory as it is treated in the course "Probability Theory" (401-3601-00L). | |||||
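As a small illustration of the Poisson processes covered in this course, here is a minimal simulation via i.i.d. exponential interarrival times; the rate and horizon are arbitrary example values.

```python
import numpy as np

def poisson_process_times(rate, horizon, seed=0):
    """Sample the arrival times of a homogeneous Poisson process with the
    given rate on [0, horizon], using i.i.d. exponential interarrivals."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # interarrival ~ Exp(rate)
        if t > horizon:
            return np.array(times)
        times.append(t)

arrivals = poisson_process_times(rate=2.0, horizon=10.0)
print(len(arrivals))   # on average rate * horizon = 20 arrivals
```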
401-4632-15L | Causality | W | 4 KP | 2G | C. Heinze-Deml | |
Kurzbeschreibung | In statistics, we are used to search for the best predictors of some random variable. In many situations, however, we are interested in predicting a system's behavior under manipulations. For such an analysis, we require knowledge about the underlying causal structure of the system. In this course, we study concepts and theory behind causal inference. | |||||
Lernziel | After this course, you should be able to: understand the language and concepts of causal inference; know the assumptions under which one can infer causal relations from observational and/or interventional data; describe and apply different methods for causal structure learning; and, given data and a causal structure, derive causal effects and predictions of interventional experiments (a toy intervention example follows this entry). | |||||
Voraussetzungen / Besonderes | Prerequisites: basic knowledge of probability theory and regression | |||||
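To make the distinction between prediction and prediction under manipulation concrete, here is a toy example in Python with an invented linear structural causal model; it illustrates the general idea only and is not a method taught verbatim in the course.

```python
# Toy structural causal model (all coefficients invented): a confounder
# Z -> X and Z -> Y, plus the causal edge X -> Y with coefficient 1.
# Regressing Y on X mixes causation with confounding; intervening
# do(X = x) cuts the Z -> X edge and recovers the causal effect.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def sample(do_x=None):
    z = rng.normal(size=n)
    x = 2.0 * z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 1.0 * x + 3.0 * z + rng.normal(size=n)   # true causal effect: 1.0
    return x, y

x, y = sample()
print("observational slope:", np.cov(x, y)[0, 1] / x.var())  # ~2.2, biased

_, y1 = sample(do_x=1.0)
_, y0 = sample(do_x=0.0)
print("interventional effect:", y1.mean() - y0.mean())        # ~1.0
```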
401-4944-20L | Mathematics of Data Science | W | 8 KP | 4G | A. Bandeira | |
Kurzbeschreibung | A mostly self-contained but fast-paced introductory master's-level course on various theoretical aspects of algorithms that aim to extract information from data. | |||||
Lernziel | Introduction to various mathematical aspects of Data Science. | |||||
Inhalt | These topics lie in the overlap of (Applied) Mathematics with Computer Science, Electrical Engineering, Statistics, and/or Operations Research. Each lecture will feature a couple of mathematical open problems related to Data Science. The main mathematical tools used will be Probability and Linear Algebra, and a basic familiarity with these subjects is required. The course will also draw on some Graph Theory, Representation Theory, and Applied Harmonic Analysis, among others, although knowledge of these tools is not assumed. The topics treated will include Dimension reduction, Manifold learning, Sparse recovery, Random Matrices, Approximation Algorithms, Community detection in graphs, and several others. (A small dimension-reduction sketch follows this entry.) | |||||
Skript | https://people.math.ethz.ch/~abandeira/TenLecturesFortyTwoProblems.pdf | |||||
Voraussetzungen / Besonderes | The main mathematical tools used will be Probability and Linear Algebra (and real analysis), and a working knowledge of these subjects is required. In addition to these prerequisites, this class requires a certain degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs. We encourage students who are interested in mathematical data science to take both this course and "227-0434-10L Mathematics of Information" taught by Prof. H. Bölcskei. The two courses are designed to be complementary. A. Bandeira and H. Bölcskei | |||||
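As a taste of one of the listed topics, dimension reduction by random projection, the following Python sketch checks empirically that a scaled Gaussian matrix approximately preserves pairwise distances (the Johnson-Lindenstrauss phenomenon); the dimensions and seed are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 30, 2_000, 200            # n points in R^d, projected to R^k

X = rng.normal(size=(n, d))         # data points as rows
G = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection
Y = X @ G                           # projected points

def pairwise(A):
    """Euclidean distances between all distinct pairs of rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(A), 1)]

ratio = pairwise(Y) / pairwise(X)
print("distance ratios: min %.3f, max %.3f" % (ratio.min(), ratio.max()))
# Typically close to 1, even though k << d.
```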
401-6102-00L | Multivariate Statistics Does not take place this semester. | W | 4 KP | 2G | not specified | |
Kurzbeschreibung | Multivariate Statistics deals with joint distributions of several random variables. This course introduces the basic concepts and provides an overview over classical and modern methods of multivariate statistics. We will consider the theory behind the methods as well as their applications. | |||||
Lernziel | After the course, you should be able to: describe the various methods and the concepts and theory behind them; identify adequate methods for a given statistical problem; use the statistical software "R" to apply these methods efficiently; and interpret the output of these methods. | |||||
Inhalt | Visualization / Principal component analysis / Multidimensional scaling / The multivariate Normal distribution / Factor analysis / Supervised learning / Cluster analysis (a minimal PCA sketch follows this entry) | |||||
Skript | None | |||||
Literatur | The course will be based on class notes and books that are available electronically via the ETH library. | |||||
Voraussetzungen / Besonderes | Target audience: This course is the more theoretical version of "Applied Multivariate Statistics" (401-0102-00L) and is targeted at students with a math background. Prerequisite: A basic course in probability and statistics. Note: The courses 401-0102-00L and 401-6102-00L are mutually exclusive. You may register for at most one of these two course units. | |||||
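The course itself works in R; purely as an illustration of one listed method, here is principal component analysis via the singular value decomposition, sketched in Python on invented toy data.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated toy data

Xc = X - X.mean(axis=0)                  # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / (s**2).sum()          # share of variance per component
scores = Xc @ Vt.T                       # principal component scores
print("variance explained:", np.round(explained, 3))
print("scores of first observation:", np.round(scores[0], 3))
```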
402-0448-01L | Quantum Information Processing I: Concepts Together with the experimentally oriented part 402-0448-02L QIP II, this theoretically oriented part QIP I forms the experimental core subject "Quantum Information Processing" in the Physics Master's programme, worth a total of 10 ECTS credit points; both parts are offered in the spring semester. | W | 5 KP | 2V + 1U | P. Kammerlander | |
Kurzbeschreibung | The course will cover the key concepts and ideas of quantum information processing, including descriptions of quantum algorithms that give the quantum computer the power to solve problems outside the reach of any classical supercomputer. Key concepts such as quantum error correction will be described. These ideas provide fundamental insights into the nature of quantum states and measurement. | |||||
Lernziel | We aim to provide an overview of the central concepts in Quantum Information Processing, including insights into the advantages to be gained from using quantum mechanics and the range of techniques based on quantum error correction which enable the elimination of noise. | |||||
Inhalt | The topics covered in the course will include quantum circuits, gate decomposition and universal sets of gates, efficiency of quantum circuits, quantum algorithms (Shor, Grover, Deutsch-Jozsa, ...), error correction, fault-tolerant design, entanglement, teleportation and dense coding, teleportation of gates, and cryptography. (A two-qubit statevector sketch follows this entry.) | |||||
Skript | More details to follow. | |||||
Literatur | Nielsen, M. A. and Chuang, I. L.: "Quantum Computation and Quantum Information", Cambridge University Press | |||||
Voraussetzungen / Besonderes | Basic knowledge in the formalism of quantum states, unitary evolution and quantum measurement is recommended. | |||||
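As a minimal illustration of entanglement, one of the concepts listed above, the following Python sketch builds the Bell state (|00> + |11>)/sqrt(2) by hand with a Hadamard and a CNOT; it is a toy statevector calculation, not course material.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

psi = np.array([1, 0, 0, 0], dtype=float)      # start in |00>
psi = np.kron(H, I) @ psi                      # Hadamard on the first qubit
psi = CNOT @ psi                               # entangle the two qubits

print("amplitudes of |00>,|01>,|10>,|11>:", np.round(psi, 3))
# -> [0.707, 0, 0, 0.707]: measuring either qubit alone is uniformly
# random, yet the two measurement outcomes are perfectly correlated.
```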
701-0104-00L | Statistical Modelling of Spatial Data | W | 3 KP | 2G | A. J. Papritz | |
Kurzbeschreibung | In environmental sciences one often deals with spatial data. When analysing such data, the focus is on exploring their structure (dependence on explanatory variables, autocorrelation) and/or on spatial prediction. The course provides an introduction to geostatistical methods that are useful for such analyses. | |||||
Lernziel | The course will provide an overview of the basic concepts and stochastic models that are used to model spatial data. In addition, participants will learn a number of geostatistical techniques and acquire familiarity with R software that is useful for analyzing spatial data. | |||||
Inhalt | After an introductory discussion of the types of problems and the kinds of data that arise in environmental research, the course gives an introduction to linear geostatistics (models: stationary and intrinsic random processes, modelling large-scale spatial patterns by linear regression, modelling autocorrelation by the variogram; kriging: mean-square prediction of spatial data). The lectures will be complemented by data analyses that the participants carry out themselves. (A minimal kriging sketch follows this entry.) | |||||
Skript | Slides, descriptions of the problems for the data analyses and solutions to them will be provided. | |||||
Literatur | P. J. Diggle & P. J. Ribeiro Jr., Model-based Geostatistics, Springer (2007); R. S. Bivand, E. J. Pebesma & V. Gómez-Rubio, Applied Spatial Data Analysis with R, Springer (2013) | |||||
Voraussetzungen / Besonderes | Familiarity with linear regression analysis (e.g. equivalent to the first part of the course 401-0649-00L Applied Statistical Regression) and with the software R (e.g. 401-6215-00L Using R for Data Analysis and Graphics (Part I), 401-6217-00L Using R for Data Analysis and Graphics (Part II)) is required for attending the course.
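To give a flavor of the kriging predictor mentioned above, here is a minimal simple-kriging sketch in Python (the course itself uses R); the exponential covariance model, the known zero mean, and all locations and values are invented for illustration.

```python
# Simple kriging with known mean 0 and an assumed exponential
# covariance C(h) = sill * exp(-h / corr_range).
import numpy as np

rng = np.random.default_rng(3)
sill, corr_range = 1.0, 2.0                  # assumed model parameters

def cov(h):
    """Assumed exponential covariance as a function of distance h."""
    return sill * np.exp(-h / corr_range)

obs_xy = rng.uniform(0, 10, size=(20, 2))    # 20 observation locations
obs_z = rng.normal(size=20)                  # toy observed values
target = np.array([5.0, 5.0])                # location to predict at

K = cov(np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1))
k0 = cov(np.linalg.norm(obs_xy - target, axis=-1))

w = np.linalg.solve(K, k0)                   # simple-kriging weights
print("prediction:", w @ obs_z)              # weighted sum of observations
print("kriging variance:", sill - w @ k0)    # mean-square prediction error
```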