Catalogue data in Autumn Semester 2018

DAS in Data Science
Specialisation Track
Machine Learning and Artificial Intelligence
Number | Title | Type | ECTS | Hours | Lecturers
227-0689-00L | System Identification | W | 4 credits | 2V + 1U | R. Smith
Abstract: Theory and techniques for the identification of dynamic models from experimentally obtained system input-output data.
Learning objective: To provide a series of practical techniques for the development of dynamical models from experimental data, with the emphasis on models suitable for feedback control design purposes. To provide sufficient theory to enable the practitioner to understand the trade-offs between model accuracy, data quality and data quantity.
Content: Introduction to modeling: Black-box and grey-box models; Parametric and non-parametric models; ARX, ARMAX (etc.) models.

Predictive, open-loop, black-box identification methods. Time and frequency domain methods. Subspace identification methods.

Optimal experimental design, Cramér-Rao bounds, input signal design.

Parametric identification methods. On-line and batch approaches.

Closed-loop identification strategies. Trade-off between controller performance and information available for identification.
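
As a small illustration of the parametric identification methods listed above, here is a minimal least-squares ARX fit (a sketch assuming data arrays `u` and `y`; the course does not prescribe this implementation):

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb] + e[k].
    """
    n = max(na, nb)
    # Build the regressor matrix row by row from past outputs and inputs.
    rows = [np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]  # (a-coefficients, b-coefficients)
```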
Literature"System Identification; Theory for the User" Lennart Ljung, Prentice Hall (2nd Ed), 1999.

"Dynamic system identification: Experimental design and data analysis", GC Goodwin and RL Payne, Academic Press, 1977.
Prerequisites / Notice: Control systems (227-0216-00L) or equivalent.
252-0535-00L | Advanced Machine Learning | W | 8 credits | 3V + 2U + 2A | J. M. Buhmann
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Learning objective: Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistical knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, in which they implement and apply well-known algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models (a minimal EM sketch follows this list)
Non-parametric density estimation
Learning Dynamical Systems
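
To make one of the unsupervised topics concrete, below is a minimal EM sketch for a one-dimensional Gaussian mixture (the names, initialization, and fixed iteration count are illustrative choices, not the course's):

```python
import numpy as np

def em_gmm(x, k=2, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture: alternate soft assignments (E-step)
    and closed-form weighted maximum-likelihood updates (M-step)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)   # initial means from the data
    var = np.full(k, x.var())                   # initial variances
    pi = np.full(k, 1.0 / k)                    # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```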
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature: C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge of analysis, statistics and numerical methods for CSE, as well as practical programming experience for solving the assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.
263-2400-00L | Reliable and Interpretable Artificial Intelligence | W | 4 credits | 2V + 1U | M. Vechev
Abstract: Creating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.
Learning objective: The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.

To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
Content: The course covers the following interconnected directions.

Part I: Robust and Explainable Deep Learning
-------------------------------------------------------------

Deep learning technology has made impressive advances in recent years. Despite this progress, however, the fundamental challenge with deep learning remains that of understanding what a trained neural network has actually learned, and how stable that solution is. For example: is the network stable to slight perturbations of the input (e.g., an image)? How easy is it to fool the network into mis-classifying obvious inputs? Can we guide the network in a manner beyond simple labeled data?

Topics:
- Attacks: Finding adversarial examples via state-of-the-art attacks (e.g., FGSM, PGD); a minimal FGSM sketch follows this list.
- Defenses: Automated methods and tools which guarantee robustness of deep nets (e.g., using abstract domains, mixed-integer solvers).
- Combining differentiable logic with gradient-based methods to train networks to satisfy richer properties.
- Frameworks: AI2, DiffAI, Reluplex, DQL, DeepPoly, etc.
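
To make the attack topic concrete, here is a minimal FGSM sketch in PyTorch (the model, labels, and epsilon are placeholder assumptions; the course does not mandate this framework):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb the input in the direction of the sign of the
    loss gradient, which locally increases the classification loss most."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by eps along the gradient sign; keep a valid image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```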

Part II: Program Synthesis/Induction
------------------------------------------------

Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide massive productivity increases (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from few examples, partial sketches and other forms of supervision (a toy enumerative sketch follows the topic list below).

Topics:
- Theory of program synthesis: version spaces, counter-example guided inductive synthesis (CEGIS) with SAT/SMT, lower bounds on learning.
- Applications of techniques: synthesis for end users (e.g., spreadsheets) and data analytics.
- Combining synthesis with learning: application to learning from code.
- Frameworks: PHOG, DeepCode.
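
As a toy illustration of learning functions from examples, here is a brute-force enumerator over an invented three-operation DSL (far simpler than the CEGIS machinery covered in the course; all names are made up for this sketch):

```python
import itertools

# Hypothetical DSL: sequences of small integer-to-integer operations.
OPS = {"inc": lambda v: v + 1, "dbl": lambda v: 2 * v, "neg": lambda v: -v}

def synthesize(examples, max_depth=3):
    """Enumerate op sequences by increasing length until one fits all examples."""
    for depth in range(1, max_depth + 1):
        for prog in itertools.product(OPS, repeat=depth):
            def run(v, prog=prog):
                for op in prog:
                    v = OPS[op](v)
                return v
            if all(run(x) == y for x, y in examples):
                return prog  # first consistent program found
    return None

# e.g., learn f(x) = 2x + 2 from three input-output examples
print(synthesize([(1, 4), (2, 6), (5, 12)]))  # ('inc', 'dbl')
```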

Part III: Probabilistic Programming
----------------------------------------------

Probabilistic programming is an emerging direction, recently also pushed by various companies (e.g., Facebook, Uber, Google), whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand and reason about probabilistic models (a minimal sampling-based sketch follows the topic list below).

Topics:

- Probabilistic Inference: sampling based, exact symbolic inference, semantics
- Applications of probabilistic programming: bias in deep learning, differential privacy (connects to Part I).
- Frameworks: PSI, Edward2, Venture.
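
As a minimal illustration of the specify-a-model-and-let-the-solver-infer idea, here is a rejection-sampling posterior for a coin-bias model in plain Python (the model and numbers are invented; frameworks such as PSI, Edward2, or Venture express this declaratively):

```python
import numpy as np

rng = np.random.default_rng(0)

def coin_bias_posterior(n_flips, observed_heads, n_samples=100_000):
    """Posterior over coin bias p via rejection sampling:
    prior p ~ Uniform(0, 1); likelihood heads ~ Binomial(n_flips, p)."""
    p = rng.uniform(0.0, 1.0, size=n_samples)   # sample from the prior
    heads = rng.binomial(n_flips, p)            # simulate data for each sample
    return p[heads == observed_heads]           # keep samples matching the data

post = coin_bias_posterior(n_flips=10, observed_heads=8)
print(f"posterior mean ~ {post.mean():.3f}")    # ~ (8+1)/(10+2) = 0.75
```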
Prerequisites / Notice: The course material is self-contained: the needed background is covered in the lectures and exercises, and additional pointers are provided.
263-3210-00L | Deep Learning | W | 4 credits | 2V + 1U | F. Perez Cruz
Restricted registration: number of participants limited to 300.
Abstract: Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Learning objective: In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
Prerequisites / Notice: This is an advanced-level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial on how to train deep networks with tools like Torch or TensorFlow, although that may be a side benefit.

The participation in the course is subject to the following conditions:
1) The number of participants is limited to 300 students (MSc and PhDs).
2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge; see the exhaustive list below:

Machine Learning
https://ml2.inf.ethz.ch/courses/ml/

Computational Intelligence Lab
http://da.inf.ethz.ch/teaching/2018/CIL/

Learning and Intelligent Systems/Introduction to Machine Learning
https://las.inf.ethz.ch/teaching/introml-S18

Statistical Learning Theory
http://ml2.inf.ethz.ch/courses/slt/

Computational Statistics
https://stat.ethz.ch/lectures/ss18/comp-stats.php

Probabilistic Artificial Intelligence
https://las.inf.ethz.ch/teaching/pai-f17

Data Mining: Learning from Large Data Sets
https://las.inf.ethz.ch/teaching/dm-f17
263-5210-00L | Probabilistic Artificial Intelligence | W | 4 credits | 2V + 1U | A. Krause
Abstract: This course introduces core modeling techniques and algorithms from statistics, optimization, planning, and control, and studies applications in areas such as sensor networks, robotics, and the Internet.
Learning objective: How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for upper-level undergraduate and graduate students.
Content: Topics covered:
- Search (BFS, DFS, A*), constraint satisfaction and optimization
- Tutorial in logic (propositional, first-order)
- Probability
- Bayesian Networks (models, exact and approximate inference, learning)
- Temporal models (Hidden Markov Models, Dynamic Bayesian Networks)
- Probabilistic planning (MDPs, POMDPs); a minimal value-iteration sketch follows this list
- Reinforcement learning
- Combining logic and probability
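
As a small illustration of the probabilistic planning topic, here is a value-iteration sketch for a finite MDP (the array layout for `P` and `R` is our assumption, not the course's notation):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """P[a][s, s'] are transition probabilities per action, R[s, a] rewards.
    Returns the optimal state values and a greedy policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a][s, s'] * V[s']
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```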
Prerequisites / Notice: Solid basic knowledge of statistics, algorithms, and programming.