Catalogue data in Autumn Semester 2022

Computer Science Master Information
Majors
Major in Data Management Systems
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
263-3010-00L | Big Data | W | 10 credits | 3V + 2U + 4A | G. Fourny
Restricted registration.
Abstract: The key challenge of the information society is to turn data into information, information into knowledge, knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.
Learning objective: Do you want to be able to query your own data productively and efficiently in your future semester projects, master's thesis, or PhD thesis? Are you looking for something beyond the Python+Pandas hype? This course teaches you how to do so, as well as the dos and don'ts.

"Big Data" refers to the case when the amount of data is very large (100 GB and more), or when the data is not completely structured (or messy). The Big Data revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".

Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it are needed. Harnessing them must involve a deep understanding of data not only in the large, but also in the small.

The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on the paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.

After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies together to address each of your projects efficiently and consistently.
Content: This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. We take the monolithic, one-machine relational stack from the 1970s, smash it down and rebuild it on top of large clusters: starting with distributed storage, and all the way up to syntax, models, validation, processing, indexing, and querying. A broad range of aspects is covered with a focus on how they all fit together in the big picture of the Big Data ecosystem.

No data is harmed during this course, however, please be psychologically prepared that our data may not always be in third normal form.

- physical storage: distributed file systems (HDFS), object storage (S3), key-value stores

- logical storage: document stores (MongoDB), column stores (HBase), graph databases (neo4j), data warehouses (ROLAP)

- data formats and syntaxes (XML, JSON, RDF, Turtle, CSV, XBRL, YAML, protocol buffers, Avro)

- data shapes and models (tables, trees, graphs, cubes)

- type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +)

- an overview of functional, declarative programming languages across data shapes (SQL, XQuery, JSONiq, Cypher, MDX)

- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)

- paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark)

- resource management (YARN)

- what a data center is made of and why it matters (racks, nodes, ...)

- underlying architectures (internal machinery of HDFS, HBase, Spark, neo4j)

- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)

- applications.

Large scale analytics and machine learning are outside of the scope of this course.
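For illustration only (not part of the course material): the two-stage MapReduce paradigm listed above can be sketched in a few lines of plain Python. The function names map_phase, shuffle, and reduce_phase are made up for this sketch; a real deployment would run on a cluster framework such as Hadoop or Spark.

    from collections import defaultdict

    def map_phase(documents):
        # Map: emit (word, 1) pairs for every word in every document.
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def shuffle(pairs):
        # Shuffle: group all emitted values by key, as the framework does between stages.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Reduce: aggregate the values of each key.
        return {word: sum(counts) for word, counts in groups.items()}

    docs = ["big data is big", "data is processed fast"]
    print(reduce_phase(shuffle(map_phase(docs))))
    # {'big': 2, 'data': 2, 'is': 2, 'processed': 1, 'fast': 1}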
Literature: Course textbook: https://ghislainfourny.github.io/big-data-textbook/

Papers from scientific conferences and journals. References will be given as part of the course material during the semester.
Prerequisites / Notice: The lecture is hybrid, meaning you can attend with us in the lecture hall, join on Zoom, or watch the recordings on YouTube later. Exercise sessions are held in person.

This course, in the autumn semester, is only intended for:
- Computer Science students
- Data Science students
- CBB students with a Computer Science background

Mobility students in CS are also welcome and encouraged to attend. If you experience any issue while registering, please contact the study administration and you will be gladly added.

For students of all other departments interested in this fascinating topic: I would love to have you visit my lectures as well! So there is a series of two courses specially designed for you:
- "Information Systems for Engineers" (SQL, relational databases): this Fall
- "Big Data for Engineers" (similar to Big Data, but adapted for non-computer scientists): Spring 2023
There is no hard dependency, so you can take them in either order, but it may be more enjoyable to start with Information Systems for Engineers.

Students who successfully completed Big Data for Engineers are not allowed to enrol in the course Big Data.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed), Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed), Decision-making (assessed), Media and Digital Technologies (fostered), Problem-solving (fostered)
Social Competencies: Communication (fostered), Sensitivity to Diversity (fostered), Negotiation (fostered)
Personal Competencies: Creative Thinking (fostered), Critical Thinking (fostered), Integrity and Work Ethics (fostered)
263-3845-00L | Data Management Systems | W | 8 credits | 3V + 1U + 3A | G. Alonso
Abstract: The course will cover the implementation aspects of data management systems, using relational database engines as a starting point to cover the basic concepts of efficient data processing, and then expanding those concepts to modern implementations in data centers and the cloud.
Learning objective: The goal of the course is to convey the fundamental aspects of efficient data management from a systems implementation perspective: storage, access, organization, indexing, consistency, concurrency, transactions, distribution, query compilation vs. interpretation, data representations, etc. Using conventional relational engines as a starting point, the course will aim at providing an in-depth coverage of the latest technologies used in data centers and the cloud to implement large-scale data processing in various forms.
Content: The course will first cover fundamental concepts in data management: storage, locality, query optimization, declarative interfaces, concurrency control and recovery, buffer managers, and management of the memory hierarchy, presenting them in a system-independent manner. The course will place a special emphasis on understanding these basic principles, as they are key to understanding what problems existing systems try to address. It will then proceed to explore their implementation in modern relational engines supporting SQL, and then expand to the range of systems used in the cloud: key-value stores, geo-replication, query as a service, serverless, large-scale analytics engines, etc.
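A minimal, illustrative sketch of the buffer-manager idea mentioned above (not taken from the course; the capacity and the read_page callback are assumptions made up for this example):

    from collections import OrderedDict

    class BufferPool:
        """Toy LRU buffer manager: keeps at most `capacity` pages in memory."""

        def __init__(self, capacity, read_page):
            self.capacity = capacity
            self.read_page = read_page   # callback that fetches a page from disk
            self.frames = OrderedDict()  # page_id -> page contents, in LRU order

        def get(self, page_id):
            if page_id in self.frames:
                self.frames.move_to_end(page_id)        # hit: mark as most recently used
                return self.frames[page_id]
            if len(self.frames) >= self.capacity:
                self.frames.popitem(last=False)         # full: evict the least recently used page
            self.frames[page_id] = self.read_page(page_id)  # miss: load the page from disk
            return self.frames[page_id]

    pool = BufferPool(capacity=2, read_page=lambda pid: f"contents of page {pid}")
    pool.get(1); pool.get(2); pool.get(1); pool.get(3)   # loading page 3 evicts page 2
    print(list(pool.frames))                             # [1, 3]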
Literature: The main source of information for the course will be articles and research papers describing the architecture of the systems discussed. The list of papers will be provided at the beginning of the course.
Prerequisites / Notice: The course requires students to have completed the Data Modelling and Databases course at the Bachelor level, as it assumes knowledge of databases and SQL.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed), Techniques and Technologies (assessed)
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Learning objective: Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistical knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
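A purely illustrative sketch of the bagging idea from the "Ensembles" topic above (not course material; fit_stump is a made-up base learner and the data set is invented):

    import random, statistics

    def bag(fit, data, n_models=25, seed=0):
        """Train n_models copies of `fit` on bootstrap resamples and vote."""
        rng = random.Random(seed)
        models = []
        for _ in range(n_models):
            sample = [rng.choice(data) for _ in data]   # bootstrap: sample with replacement
            models.append(fit(sample))
        return lambda x: statistics.mode(m(x) for m in models)  # majority vote

    def fit_stump(sample):
        # Made-up base learner: threshold at the midpoint of the two class means.
        xs0 = [x for x, y in sample if y == 0]
        xs1 = [x for x, y in sample if y == 1]
        m0 = sum(xs0) / len(xs0) if xs0 else 0.0
        m1 = sum(xs1) / len(xs1) if xs1 else 1.0
        threshold = (m0 + m1) / 2
        return lambda x: 1 if x > threshold else 0

    data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]      # (feature, label) pairs
    predict = bag(fit_stump, data)
    print(predict(0.05), predict(0.95))                  # 0 1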
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature: C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge of analysis, statistics, and numerical methods for CSE, as well as practical programming experience for solving the assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
252-1414-00L | System Security | W | 7 credits | 2V + 2U + 2A | S. Capkun, S. Shinde
Abstract: The first part of the course covers general security concepts and hardware-based support for security.
In the second part, the focus is on system design and methodologies for building secure systems.
Learning objective: In this lecture, students learn about the security requirements and capabilities that are expected from modern hardware, operating systems, and other software environments. An overview of available technologies, algorithms and standards is given, with which these requirements can be met.
Content: The first part of the lecture covers hardware-based security concepts. Topics include the concept of physical and software-based side channel attacks on hardware resources, architectural support for security (e.g., memory management and permissions, disk encryption), and trusted execution environments (Intel SGX, ARM TrustZone, AMD SEV, and RISC-V Keystone).

In the second part, the focus is on system design and methodologies for building secure systems. Topics include: common software faults (e.g., buffer overflows, etc.), bug detection, writing secure software (design, architecture, QA, testing), compiler-supported security (e.g., control-flow integrity), and language-supported security (e.g., memory safety).

Alongside the lectures, model cases will be elaborated and evaluated in the exercises.
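A tiny, hedged illustration of the side-channel theme above (not lecture material; the secret value is invented): a byte-by-byte comparison that returns at the first mismatch leaks timing information, whereas the standard library's hmac.compare_digest compares in time independent of where the inputs differ.

    import hmac

    SECRET = b"correct horse battery staple"

    def naive_check(guess: bytes) -> bool:
        # Leaks timing: returns as soon as the first byte differs,
        # so an attacker can recover the secret one byte at a time.
        if len(guess) != len(SECRET):
            return False
        for a, b in zip(guess, SECRET):
            if a != b:
                return False
        return True

    def constant_time_check(guess: bytes) -> bool:
        # The comparison time does not depend on the position of the first mismatch.
        return hmac.compare_digest(guess, SECRET)

    print(naive_check(b"wrong guess"), constant_time_check(SECRET))   # False True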
263-2800-00L | Design of Parallel and High-Performance Computing | W | 9 credits | 3V + 2U + 3A | T. Hoefler, M. Püschel
Restricted registration; number of participants limited to 125.
Abstract: Advanced topics in parallel and high-performance computing.
Learning objective: Understand concurrency paradigms and models from a higher perspective and acquire skills for designing, structuring and developing possibly large parallel high-performance software systems. Become able to distinguish parallelism in problem space and in machine space. Become familiar with important technical concepts and with concurrency folklore.
Content: We will cover all aspects of high-performance computing, ranging from architecture through programming up to algorithms. We will start with a discussion of caches and cache coherence in practical computer systems. We will dive into parallel programming concepts such as memory models, locks, and lock-free programming. We will cover performance modeling and parallel design principles as well as basic parallel algorithms.
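A minimal, illustrative sketch of why the "locks" topic matters, phrased in Python rather than in the C used in the course (the helper slow_add_one is invented for the example): the unsynchronized read-modify-write of a shared counter can lose updates, while the lock-protected version cannot.

    import threading

    counter = 0
    lock = threading.Lock()

    def slow_add_one(x):
        # The call boundary widens the window in which another thread can interleave.
        return x + 1

    def worker(n, use_lock):
        global counter
        for _ in range(n):
            if use_lock:
                with lock:                           # critical section: executed atomically
                    counter = slow_add_one(counter)
            else:
                counter = slow_add_one(counter)      # unsynchronized: updates may be lost

    def run(use_lock, n_threads=4, n=100_000):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(n, use_lock)) for _ in range(n_threads)]
        for t in threads: t.start()
        for t in threads: t.join()
        return counter

    print("with lock:   ", run(True))    # always 400000
    print("without lock:", run(False))   # may be (much) less than 400000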
Prerequisites / Notice: This class is intended for the Computer Science Master's curriculum. Students must have basic knowledge of programming in C as well as of computer science theory. Students should be familiar with the material covered in the ETH computer science first-year courses "Parallele Programmierung (parallel programming)" and "Algorithmen und Datenstrukturen (algorithms and data structures)" or equivalent courses.
263-3210-00L | Deep Learning | W | 8 credits | 3V + 2U + 2A | T. Hofmann, F. Perez Cruz, N. Perraudin
Restricted registration; number of participants limited to 320.
Abstract: Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Learning objective: In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
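As a hedged, minimal illustration of the mathematical foundations mentioned above (not the course's own material; the network size, data, and learning rate are arbitrary): one forward pass, one backward pass, and one gradient-descent step for a tiny two-layer network, written with numpy only.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                       # one input example with 3 features
    y = np.array([1.0])                          # its regression target

    W1, b1 = 0.1 * rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = 0.1 * rng.normal(size=(1, 4)), np.zeros(1)
    lr = 0.1

    # Forward pass: linear -> ReLU -> linear, squared-error loss.
    z1 = W1 @ x + b1
    h = np.maximum(z1, 0.0)
    y_hat = W2 @ h + b2
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: chain rule, layer by layer.
    d_yhat = y_hat - y                           # dL/dy_hat
    dW2, db2 = np.outer(d_yhat, h), d_yhat
    dh = W2.T @ d_yhat
    dz1 = dh * (z1 > 0)                          # ReLU derivative
    dW1, db1 = np.outer(dz1, x), dz1

    # One gradient-descent step on all parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    print("loss before the step:", float(loss))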
Prerequisites / Notice: This is an advanced-level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial of how to train deep networks with tools like Torch or TensorFlow, although that may be a side benefit.

Participation in the course is subject to the following condition:
- Students must have taken the exam in Advanced Machine Learning (252-0535-00) or have acquired equivalent knowledge, see exhaustive list below:

Advanced Machine Learning
https://ml2.inf.ethz.ch/courses/aml/

Computational Intelligence Lab
http://da.inf.ethz.ch/teaching/2019/CIL/

Introduction to Machine Learning
https://las.inf.ethz.ch/teaching/introml-S19

Statistical Learning Theory
http://ml2.inf.ethz.ch/courses/slt/

Computational Statistics
https://stat.ethz.ch/lectures/ss19/comp-stats.php

Probabilistic Artificial Intelligence
https://las.inf.ethz.ch/teaching/pai-f18
263-3850-00L | Informal Methods | W | 5 credits | 2G + 2A | D. Cock
Abstract: Formal methods are increasingly a key part of the methodological toolkit of systems programmers - those writing operating systems, databases, and distributed systems. This course is about how to apply concepts, techniques, and principles from formal methods to such software systems, and how to get into the habit of thinking formally about systems design even when writing low-level C code.
Learning objective: This course is about equipping students whose focus is systems with the insights and conceptual tools provided by formal methods, and thereby enabling them to become better systems programmers.
By the end of the course, students should be able to seamlessly integrate basic concepts from formal methods into how they conceive, design, implement, reason about, and debug computer systems.

The goal is not to provide a comprehensive introduction to formal methods - this is well covered by other courses in the department. Instead, it is intended to provide students in computer systems (who may or may not have existing background knowledge of formal methods) with a basis for applying formal methods in their work.
Content: This course does not assume prior knowledge of formal methods, and will start with a quick review of topics such as static vs. dynamic reasoning, variants and invariants, program algebra and refinement, etc. However, it is strongly recommended that students have already taken one of the introductory formal methods courses at ETH (or an equivalent elsewhere) before taking this course - the emphasis is on reinforcing these concepts by applying them, not on teaching them from scratch.

Instead, the majority of the course will be about how to apply these techniques to actual, practical code in real systems. We will work both from real systems code written by students taking the course and from practical systems developed using formal techniques; in particular, the verified seL4 microkernel will be a key case study. We will also focus on informal, pen-and-paper arguments for correctness of programs and systems rather than using theorem provers or automated verification tools; again, these latter techniques are well covered in other courses (and recommended as a complement to this one).
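As a small, hedged illustration of the "variants and invariants" review topic (not taken from the course), the informal argument can be written right next to the code, here for division by repeated subtraction; the assertions record the precondition, the loop invariant, the decreasing variant, and the postcondition.

    def divmod_slow(a: int, b: int) -> tuple[int, int]:
        """Compute (q, r) with a == q*b + r and 0 <= r < b."""
        assert a >= 0 and b > 0                        # precondition
        q, r = 0, a
        while r >= b:
            assert a == q * b + r and r >= 0           # loop invariant
            variant_before = r                         # variant: r strictly decreases
            q, r = q + 1, r - b
            assert r < variant_before                  # guarantees termination
        assert a == q * b + r and 0 <= r < b           # postcondition
        return q, r

    print(divmod_slow(17, 5))   # (3, 2)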
Major in Machine Intelligence
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Learning objective: Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistical knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature: C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge of analysis, statistics, and numerical methods for CSE, as well as practical programming experience for solving the assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
263-3210-00L | Deep Learning | W | 8 credits | 3V + 2U + 2A | T. Hofmann, F. Perez Cruz, N. Perraudin
Restricted registration; number of participants limited to 320.
Abstract: Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Learning objective: In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
Prerequisites / Notice: This is an advanced-level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial of how to train deep networks with tools like Torch or TensorFlow, although that may be a side benefit.

Participation in the course is subject to the following condition:
- Students must have taken the exam in Advanced Machine Learning (252-0535-00) or have acquired equivalent knowledge, see exhaustive list below:

Advanced Machine Learning
https://ml2.inf.ethz.ch/courses/aml/

Computational Intelligence Lab
http://da.inf.ethz.ch/teaching/2019/CIL/

Introduction to Machine Learning
https://las.inf.ethz.ch/teaching/introml-S19

Statistical Learning Theory
http://ml2.inf.ethz.ch/courses/slt/

Computational Statistics
https://stat.ethz.ch/lectures/ss19/comp-stats.php

Probabilistic Artificial Intelligence
https://las.inf.ethz.ch/teaching/pai-f18
263-5210-00L | Probabilistic Artificial Intelligence | W | 8 credits | 3V + 2U + 2A | A. Krause
Restricted registration.
Abstract: This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics.
Learning objective: How can we build systems that perform well in uncertain environments? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as robotics. The course is designed for graduate students.
Content: Topics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
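A minimal, hedged sketch of the "probabilistic planning (MDPs)" topic above (illustration only, not course code; the two-state MDP, its rewards, and the discount factor are invented for the example):

    # Tiny MDP: states 0 and 1, actions "stay" and "go".
    # P[s][a] is a list of (probability, next_state, reward) triples.
    P = {
        0: {"stay": [(1.0, 0, 0.0)],
            "go":   [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
        1: {"stay": [(1.0, 1, 2.0)],
            "go":   [(1.0, 0, 0.0)]},
    }
    gamma = 0.9

    def q_value(s, a, V):
        # Expected one-step return of action a in state s under value estimate V.
        return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

    V = {s: 0.0 for s in P}
    for _ in range(200):   # value iteration: repeatedly apply the Bellman optimality operator
        V = {s: max(q_value(s, a, V) for a in P[s]) for s in P}

    policy = {s: max(P[s], key=lambda a: q_value(s, a, V)) for s in P}
    print(V, policy)   # state 0 should prefer "go", state 1 should prefer "stay"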
Prerequisites / Notice: Solid basic knowledge of statistics, algorithms, and programming.
The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite.
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-3005-00L | Natural Language Processing | W | 7 credits | 3V + 3U + 1A | R. Cotterell
Restricted registration; number of participants limited to 400.
Abstract: This course presents topics in natural language processing with an emphasis on modern techniques, primarily focusing on statistical and deep learning approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.
Learning objective: The objective of the course is to learn the basic concepts in the statistical processing of natural languages. The course will be project-oriented so that the students can also gain hands-on experience with state-of-the-art tools and techniques.
Content: This course presents an introduction to general topics and techniques used in natural language processing today, primarily focusing on statistical approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.
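A hedged toy sketch of the statistical approach mentioned above (illustration only; the corpus is invented): a bigram language model estimated by maximum likelihood from raw counts.

    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def p(next_word, prev_word):
        # Maximum-likelihood estimate: count(prev, next) / count(prev).
        return bigrams[(prev_word, next_word)] / unigrams[prev_word]

    print(p("cat", "the"), p("sat", "cat"))   # 0.25 1.0 on this toy corpus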
Literature: Lectures will make use of textbooks such as the one by Jurafsky and Martin where appropriate, but will also make use of original research and survey papers.
263-2400-00L | Reliable and Trustworthy Artificial Intelligence | W | 6 credits | 2V + 2U + 1A | M. Vechev
Abstract: Creating reliable, secure, robust, and fair machine learning models is a core challenge in artificial intelligence and one of fundamental importance. The goal of the course is to teach both the mathematical foundations of this new and emerging area as well as to introduce students to the latest and most exciting research in the space.
Learning objective: Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of engineering and research problems. To facilitate deeper understanding, the course includes a group coding project where students will build a system based on the learned material.
Content: The course is split into three parts:

Robustness in Deep Learning
---------------------------------------

- Adversarial attacks and defenses on deep learning models.
- Automated certification of deep learning models (covering the major trends: convex relaxations and branch-and-bound methods as well as randomized smoothing).
- Certified training of deep neural networks to satisfy given properties (combining symbolic and continuous methods).

Privacy of Machine Learning
-------------------------------------

- Threat models (e.g., stealing data, poisoning, membership inference, etc.).
- Attacking federated machine learning (across modalities such as vision, natural language, and tabular data).
- Differential privacy for defending machine learning.
- Enforcing regulations with guarantees (e.g., via provable data minimization).

Fairness of Machine Learning
---------------------------------------

- Introduction to fairness (motivation, definitions).
- Enforcing individual fairness with guarantees (e.g., for both vision or tabular data).
- Enforcing group fairness with guarantees.

More information here: https://www.sri.inf.ethz.ch/teaching/rtai22.
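A hedged toy sketch of the "adversarial attacks" topic listed above (not part of the course project; the weights, the input, and the perturbation budget are invented): for a linear classifier the gradient of the margin with respect to the input is simply the weight vector, so an FGSM-style L-infinity perturbation can be written in closed form.

    import numpy as np

    w = np.array([2.0, -1.0, 0.5])          # invented weights of a linear classifier
    b = 0.1
    x = np.array([0.5, -0.5, 1.0])          # an input correctly classified as +1
    y = 1                                   # its true label

    def predict(z):
        return 1 if w @ z + b > 0 else -1

    # FGSM-style attack: moving every coordinate by eps in the direction -y*sign(w)
    # decreases the margin y*(w@x + b) as fast as possible within an L-infinity ball.
    eps = 0.8
    x_adv = x - eps * y * np.sign(w)

    print("clean:", predict(x),     "margin:", float(y * (w @ x + b)))       # +1, 2.1
    print("adv.: ", predict(x_adv), "margin:", float(y * (w @ x_adv + b)))   # -1, -0.7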
Prerequisites / Notice: While not a formal requirement, the course assumes familiarity with the basics of machine learning (especially linear algebra, gradient descent, and neural networks, as well as basic probability theory). These topics are usually covered in “Intro to ML” classes at most institutions (e.g., “Introduction to Machine Learning” at ETH).

For solving assignments, some programming experience in Python is expected.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed), Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed), Problem-solving (assessed)
Personal Competencies: Creative Thinking (assessed), Critical Thinking (assessed)
263-5005-00L | Artificial Intelligence in Education | W | 3 credits | 1V + 0.5U | M. Sachan, T. Sinha
Abstract: Artificial Intelligence (AI) methods have been shown to have a profound impact on educational technologies, where the great variety of tasks and data types enables us to benefit from AI techniques in many different ways. We will review relevant methods and applications of AI in various educational technologies, and work on problem sets and projects to solve problems in education with the help of AI.
Learning objective: The course will be centered around exploring methodological and system-focused perspectives on designing AI systems for education and analyzing educational data using AI methods. Students will be expected to a) engage in presentations and active in-class and asynchronous discussion, and b) work on problem sets exemplifying the use of educational data mining techniques.
Content: The course will start with an introduction to data mining techniques (e.g., prediction, structured discovery, visualization, and relationship mining) relevant to analyzing educational data. We will then continue with topics on personalization in AI in educational technologies (e.g., learner modeling and knowledge tracing, self-improving AIED systems) while showcasing exemplary applications in areas such as content curation and dialog-based tutoring. Finally, we will cover ethical challenges associated with using AI in student-facing settings. Face-to-face meetings will be held every fortnight, although students will be expected to work individually on weekly tasks (e.g., discussing relevant literature, working on problems, preparing seminar presentations).
Lecture notes: Lecture slides will be made available on the course website.
Literature: No textbook is required, but there will be regularly assigned readings from research literature, linked to the course website.
Prerequisites / Notice: There are no prerequisites for this class. However, it will help if the student has taken an undergraduate- or graduate-level class in statistics, data science or machine learning. This class is appropriate for advanced undergraduates and master's students in Computer Science as well as PhD students in other departments.
263-5255-00L | Foundations of Reinforcement Learning | W | 5 credits | 2V + 2A | N. He
Restricted registration; number of participants limited to 190.
Does not take place this semester; the course will be offered again in FS23.
Abstract: Reinforcement learning (RL) has been at the heart of many recent breakthroughs in artificial intelligence. This course focuses on theoretical and algorithmic foundations of reinforcement learning, through the lens of optimization, modern approximation, and learning theory. The course targets M.S. students with strong research interests in reinforcement learning, optimization, and control.
Learning objective: This course aims to provide students with an advanced introduction to RL theory and algorithms, as well as to bring them near the frontier of this active research field.

By the end of the course, students will be able to
- Identify the strengths and limitations of various reinforcement learning algorithms;
- Formulate and solve sequential decision-making problems by applying relevant reinforcement learning tools;
- Generalize or discover “new” applications, algorithms, or theories of reinforcement learning towards conducting independent research on the topic.
Content: Basic topics include fundamentals of Markov decision processes, approximate dynamic programming, linear programming and primal-dual perspectives of RL, model-based and model-free RL, policy gradient and actor-critic algorithms, Markov games and multi-agent RL. If time allows, we will also discuss advanced topics such as batch RL, inverse RL, causal RL, etc. The course places a strong emphasis on an in-depth understanding of the mathematical modeling and theoretical properties of RL algorithms.
Lecture notes: Lecture notes will be posted on Moodle.
Literature: Dynamic Programming and Optimal Control, Vol. I & II, Dimitri Bertsekas.
Reinforcement Learning: An Introduction, Second Edition, Richard Sutton and Andrew Barto.
Algorithms for Reinforcement Learning, Csaba Szepesvári.
Reinforcement Learning: Theory and Algorithms, Alekh Agarwal, Nan Jiang, Sham M. Kakade.
Prerequisites / Notice: Students are expected to have a strong mathematical background in linear algebra, probability theory, optimization, and machine learning.
263-5300-00L | Guarantees for Machine Learning | W | 7 credits | 3V + 1U + 2A | F. Yang, A. Sanyal
Restricted registration; number of participants limited to 30.
Abstract: This course is aimed at advanced master's and doctoral students who want to conduct independent research on theory for modern machine learning (ML). It teaches standard methods in statistical learning theory commonly used to prove theoretical guarantees for ML algorithms. The knowledge is then applied in independent project work to understand and follow up on recent theoretical ML results.
Learning objective: By the end of the semester, students should be able to

- understand a good fraction of theory papers published in the typical ML venues. For this purpose, students will learn common mathematical techniques from statistical learning in the first part of the course and apply this knowledge in the project work

- critically examine recently published work in terms of relevance and find impactful (novel) research problems. This will be an integral part of the project work and involves experimental as well as theoretical questions

- outline a possible approach to prove a conjectured theorem by, e.g., reducing it to more tractable subproblems. This will be practiced in in-person exercises, homework, and potentially in the final project

- effectively communicate and present the problem motivation, new insights and results to a technical audience. This will be primarily learned via the final presentation and report as well as during peer-grading of peer talks.
Content: This course covers foundational methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms. It touches on the following topics:
- concentration bounds
- uniform convergence and empirical process theory
- regularization for non-parametric statistics (e.g. in RKHS, neural networks)
- high-dimensional learning
- computational and statistical learnability (information-theoretic, PAC, SQ)
- overparameterized models, implicit bias and regularization

The project work focuses on current theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to
- how overparameterized models generalize (statistically) and converge (computationally)
- complexity measures and approximation theoretic properties of randomly initialized and trained neural networks
- generalization of robust learning (adversarial or distribution-shift robustness)
- private and fair learning
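For illustration only (not course material; all parameters are arbitrary): the "concentration bounds" topic can be made concrete with a quick Monte Carlo check of Hoeffding's inequality, P(|mean - p| >= t) <= 2*exp(-2*n*t^2), for the mean of n Bernoulli(p) samples.

    import math, random

    def empirical_tail(n, p, t, trials=20000, seed=1):
        rng = random.Random(seed)
        bad = 0
        for _ in range(trials):
            mean = sum(rng.random() < p for _ in range(n)) / n
            bad += abs(mean - p) >= t
        return bad / trials

    n, p, t = 200, 0.3, 0.1
    bound = 2 * math.exp(-2 * n * t ** 2)
    print("empirical tail probability:", empirical_tail(n, p, t))   # roughly 0.002
    print("Hoeffding upper bound:     ", round(bound, 4))           # about 0.0366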
Prerequisites / Notice: Students should have a very strong mathematical background (real analysis, probability theory, linear algebra) and solid knowledge of core concepts in machine learning taught in courses such as “Introduction to Machine Learning”, “Regression” / “Statistical Modelling”. In addition to these prerequisites, this class requires a high degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs.

Students have usually taken a subset of Fundamentals of Mathematical Statistics, Probabilistic AI, Neural Network Theory, Optimization for Data Science, Advanced ML, Statistical Learning Theory, Probability Theory (D-MATH)
Competencies
Subject-specific Competencies: Concepts and Theories (assessed)
Method-specific Competencies: Analytical Competencies (assessed), Problem-solving (assessed)
Social Competencies: Communication (assessed), Cooperation and Teamwork (assessed)
Personal Competencies: Creative Thinking (assessed), Critical Thinking (assessed)
263-5353-00L | Philosophy of Language and Computation | W | 5 credits | 2V + 1U + 1A | R. Cotterell, J. L. Gastaldi
Abstract: Understand the philosophical underpinnings of language-based artificial intelligence.
Learning objective: This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data.
Content: This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data. The course is a year-long journey, but the second half (Spring 2023) does not depend on the first (Fall 2022), and thus either half may be taken independently. In each semester, we divide the class time into three modules. Each module is centered around a philosophical topic. In the first semester we will discuss structuralism, recursive structure and logic, and in the second semester we will focus on language games, information and pragmatics. The modules will be four weeks long. During the first two weeks of a module, we will read and discuss original texts and supplementary criticism. During the second two weeks, we will read recent NLP papers and discuss how the authors of those works are building on philosophical insights into our conception of language, perhaps implicitly or unwittingly.
Literature: The literature will be provided by the instructors on the class website.
263-5902-00L | Computer Vision | W | 8 credits | 3V + 1U + 3A | M. Pollefeys, S. Tang, F. Yu
Abstract: The goal of this course is to provide students with a good understanding of computer vision and image analysis techniques. The main concepts and techniques will be studied in depth and practical algorithms and approaches will be discussed and explored through the exercises.
Learning objective: The objectives of this course are:
1. To introduce the fundamental problems of computer vision.
2. To introduce the main concepts and techniques used to solve those.
3. To enable participants to implement solutions for reasonably complex problems.
4. To enable participants to make sense of the computer vision literature.
Content: Camera models and calibration, invariant features, multiple-view geometry, model fitting, stereo matching, segmentation, 2D shape matching, shape from silhouettes, optical flow, structure from motion, tracking, object recognition, object category recognition.
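For illustration only (not course code; the synthetic data, the inlier threshold, and the iteration count are arbitrary choices): the "model fitting" topic can be made concrete with a few lines of RANSAC for robust 2D line fitting in numpy.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic data: points near the line y = 2x + 1, plus 20 gross outliers.
    x = rng.uniform(0, 10, 100)
    y = 2 * x + 1 + rng.normal(0, 0.2, 100)
    y[:20] = rng.uniform(-20, 20, 20)
    pts = np.column_stack([x, y])

    best_inliers, best_model = 0, None
    for _ in range(200):                                 # RANSAC iterations
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]              # minimal sample: two points
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)                        # candidate line y = a*x + b
        b = y1 - a * x1
        residuals = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = int(np.sum(residuals < 0.5))           # points consistent with the candidate
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)

    print("estimated (a, b):", best_model, "inliers:", best_inliers)
    # The estimate should be close to the true (2, 1) despite the outliers.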
Prerequisites / Notice: It is recommended that students have taken the Visual Computing lecture or a similar course introducing basic image processing concepts before taking this course.
Major in Secure and Reliable Systems
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0237-00L | Concepts of Object-Oriented Programming | W | 8 credits | 3V + 2U + 2A | P. Müller
Abstract: This course focuses on an in-depth understanding of object-oriented programming and compares the designs of object-oriented programming languages. Topics include different flavors of type systems, inheritance models, encapsulation in the presence of aliasing, object and class initialization, program correctness, and reflection.
Learning objective: After this course, students will:
- have a deep understanding of advanced concepts of object-oriented programming and their support through various language features;
- be able to understand language concepts on a semantic level and to compare and evaluate language designs;
- be able to learn new languages more rapidly;
- be aware of many subtle problems of object-oriented programming and know how to avoid them.
Content: The main goal of this course is to convey a deep understanding of the key concepts of sequential object-oriented programming and their support in different programming languages. This is achieved by studying how important challenges are addressed through language features and programming idioms. In particular, the course discusses alternative language designs by contrasting solutions in languages such as C++, C#, Eiffel, Java, Python, and Scala. The course also introduces novel ideas from research languages that may influence the design of future mainstream languages.

The topics discussed in the course include, among others:
The pros and cons of different flavors of type systems (for instance, static vs. dynamic typing, nominal vs. structural, syntactic vs. behavioral typing)
The key problems of single and multiple inheritance and how different languages address them
Generic type systems, in particular, Java generics, C# generics, and C++ templates
The situations in which object-oriented programming does not provide encapsulation, and how to avoid them
The pitfalls of object initialization, exemplified by a research type system that prevents null pointer dereferencing
How to maintain the consistency of data structures
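A minimal, hedged illustration of the "nominal vs. structural typing" contrast listed above, phrased in Python rather than in the languages compared in the course (the class names are invented): an abstract base class is a nominal interface, while typing.Protocol checks conformance structurally.

    from abc import ABC, abstractmethod
    from typing import Protocol, runtime_checkable

    class NamedShape(ABC):                 # nominal: a type conforms only by inheriting
        @abstractmethod
        def area(self) -> float: ...

    @runtime_checkable
    class HasArea(Protocol):               # structural: any object with a matching area() conforms
        def area(self) -> float: ...

    class Square(NamedShape):              # explicitly declares its supertype
        def __init__(self, side): self.side = side
        def area(self): return self.side ** 2

    class Circle:                          # never mentions NamedShape or HasArea
        def __init__(self, r): self.r = r
        def area(self): return 3.14159 * self.r ** 2

    print(isinstance(Square(2), NamedShape), isinstance(Square(2), HasArea))   # True True
    print(isinstance(Circle(1), NamedShape), isinstance(Circle(1), HasArea))   # False True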
Literature: Will be announced in the lecture.
Prerequisites / Notice: Mastery of at least one object-oriented programming language (this course will NOT provide an introduction to object-oriented programming); programming experience.
252-0463-00L | Security Engineering | W | 7 credits | 2V + 2U + 2A | D. Basin, M. Ochoa Ronderos
Abstract: The subject of the class is engineering techniques for developing secure systems. We examine concepts, methods and tools applied within the different activities of the software development process to improve the security of the resulting system. Topics: security requirements & risk analysis, system modeling & model-based development methods, implementation-level security, and evaluation criteria for secure systems.
Learning objective: Security engineering is an evolving discipline that unifies two important areas: software engineering and security. Software engineering addresses the development and application of methods for systematically developing, operating, and maintaining complex, high-quality software.
Security, on the other hand, is concerned with assuring and verifying properties of a system that relate to confidentiality, integrity, and availability of data.

The goal of this class is to survey engineering techniques for developing secure systems. We will examine concepts, methods, and tools that can be applied within the different activities of the software development process, in order to improve the security of the resulting systems.

Topics covered include

* security requirements & risk analysis,
* system modeling and model-based development methods,
* implementation-level security, and
* evaluation criteria for the development of secure systems
Content: Security engineering is an evolving discipline that unifies two important areas: software engineering and security. Software engineering addresses the development and application of methods for systematically developing, operating, and maintaining complex, high-quality software.
Security, on the other hand, is concerned with assuring and verifying properties of a system that relate to confidentiality, integrity, and availability of data.

The goal of this class is to survey engineering techniques for developing secure systems. We will examine concepts, methods, and tools that can be applied within the different activities of the software development process, in order to improve the security of the resulting systems.

Topics covered include

* security requirements & risk analysis,
* system modeling and model-based development methods,
* implementation-level security, and
* evaluation criteria for the development of secure systems

Modules taught:

1. Introduction
- Introduction of Infsec group and speakers
- Security meets SW engineering: an introduction
- The activities of SW engineering, and where security fits in
- Overview of this class
2. Requirements Engineering: Security Requirements and some Analysis
- Overview: functional and non-functional requirements
- Use cases, misuse cases, sequence diagrams
- Safety and security
3. Modeling in the design activities
- Structure, behavior, and data flow
- Class diagrams, statecharts
4. Model-driven security for access control (Part I)
- SecureUML as a language for access control
- Combining Design Modeling Languages with SecureUML
- Semantics, i.e., what it all means
- Generation
- Examples and experience
5. Model-driven security (Part II)
- Continuation of above topics
6. Security patterns (design and implementation)
7. Implementation-level security
- Buffer overflows
- Input checking
- Injection attacks
8. Code scanning
- Static code analysis basics
- Theoretical and practical challenges
- Analysis algorithms
- Common bug pattern search and specification
- Dataflow analysis
9. Testing
- Overview and basics
- Model-based testing
- Testing security properties
10. Risk analysis and management
- "Risk": assets, threats, vulnerabilities, risk
- Risk assessment: quantitative and qualitative
- Safeguards
- Generic risk analysis procedure
- The OCTAVE approach
- Example of qualitative risk assessment
11. Threat modeling
- Overview
- Safety engineering basics: FMEA and FTA
- Security impact analysis in the design phase
- Modeling security threats: attack trees
- Examples and experience
12. Evaluation criteria
- NIST Special Publications
- ISO/IEC 27000
- Common criteria
- BSI baseline protection
13. Guest lecture
- TBA
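A small, hedged illustration of module 7's "injection attacks" and "input checking" (not part of the lecture slides; the table and the malicious input are invented), using Python's built-in sqlite3: concatenating user input into the SQL text lets the input change the query, while a parameterized query keeps it as data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: the input becomes part of the SQL statement itself.
    unsafe = conn.execute(
        "SELECT name FROM users WHERE name = '" + attacker_input + "'").fetchall()

    # Safe: the placeholder passes the input as data, never as SQL.
    safe = conn.execute(
        "SELECT name FROM users WHERE name = ?", (attacker_input,)).fetchall()

    print("concatenated query returns:", unsafe)   # every row leaks
    print("parameterized query returns:", safe)    # no rows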
Literature:
- Ross Anderson: Security Engineering, Wiley, 2001.
- Matt Bishop: Computer Security, Pearson Education, 2003.
- Ian Sommerville: Software Engineering, 6th ed., Addison-Wesley, 2001.
- John Viega, Gary McGraw: Building Secure Software, Addison-Wesley, 2002.
- Further relevant books and journal/conference articles will be announced in the lecture.
Prerequisites / Notice: Prerequisite: the Information Security class.
252-1414-00L | System Security | W | 7 credits | 2V + 2U + 2A | S. Capkun, S. Shinde
Abstract: The first part of the course covers general security concepts and hardware-based support for security.
In the second part, the focus is on system design and methodologies for building secure systems.
Learning objective: In this lecture, students learn about the security requirements and capabilities that are expected from modern hardware, operating systems, and other software environments. An overview of available technologies, algorithms and standards is given, with which these requirements can be met.
Content: The first part of the lecture covers hardware-based security concepts. Topics include the concept of physical and software-based side channel attacks on hardware resources, architectural support for security (e.g., memory management and permissions, disk encryption), and trusted execution environments (Intel SGX, ARM TrustZone, AMD SEV, and RISC-V Keystone).

In the second part, the focus is on system design and methodologies for building secure systems. Topics include: common software faults (e.g., buffer overflows, etc.), bug detection, writing secure software (design, architecture, QA, testing), compiler-supported security (e.g., control-flow integrity), and language-supported security (e.g., memory safety).

Alongside the lectures, model cases will be elaborated and evaluated in the exercises.