Catalogue data in Spring Semester 2022
Electrical Engineering and Information Technology Master
Master Studies (Programme Regulations 2018)
Signal Processing and Machine Learning The core courses and specialization courses below are a selection for students who wish to specialize in the area of "Signal Processing and Machine Learning", see Link. The individual study plan is subject to the tutor's approval.
Specialization Courses These specialization courses are particularly recommended for the area of "Signal Processing and Machine Learning", but you are free to choose courses from any other field in agreement with your tutor. A minimum of 40 credits must be obtained from specialization courses during the MSc EEIT.
Number | Title | Type | ECTS | Hours | Lecturers | |
---|---|---|---|---|---|---|
227-0120-00L | Communication Networks | W | 6 credits | 4G | L. Vanbever | |
Abstract | At the end of this course, you will understand the fundamental concepts behind communication networks and the Internet. Specifically, you will be able to: - understand how the Internet works; - build and operate Internet-like infrastructures; - identify the right set of metrics to evaluate the performance of a network and propose ways to improve it. | |||||
Objective | At the end of the course, the students will understand the fundamental concepts of communication networks and Internet-based communications. Specifically, students will be able to: - understand how the Internet works; - build and operate Internet-like network infrastructures; - identify the right set of metrics to evaluate the performance or the adequacy of a network and propose ways to improve it (if any). The course will introduce the relevant mechanisms used in today's networks from both an abstract and a practical perspective, by presenting many real-world examples and through multiple hands-on projects. For more information about the lecture, please visit: Link | |||||
Lecture notes | Lecture notes and material for the course will be available before each course on: Link | |||||
Literature | Most of the course follows the textbook "Computer Networking: A Top-Down Approach (6th Edition)" by Kurose and Ross. | |||||
Prerequisites / Notice | No prior networking background is needed. The course will include some programming assignments (in Python) for which the material covered in Technische Informatik 1 (227-0013-00L) will be useful. | |||||
227-0147-00L | VLSI 2: From Netlist to Complete System on Chip | W | 6 credits | 5G | F. K. Gürkaynak, L. Benini | |
Abstract | This second course in our VLSI series is concerned with how to turn digital circuit netlists into safe, testable and manufacturable mask layout, taking into account various parasitic effects. Low-power circuit design is another important topic. Economic aspects and management issues of VLSI projects round off the course. | |||||
Objective | Know how to design digital VLSI circuits that are safe, testable, durable, and make economic sense. | |||||
Content | The second course begins with a thorough discussion of various technical aspects at the circuit and layout level before moving on to economic issues of VLSI. Topics include: - The difficulties of finding fabrication defects in large VLSI chips. - How to make integrated circuits testable (design for test). - Synchronous clocking disciplines compared, clock skew, clock distribution, input/output timing. - Synchronization and metastability. - CMOS transistor-level circuits of gates, flip-flops and random access memories. - Sinks of energy in CMOS circuits. - Power estimation and low-power design. - Current research in low-energy computing. - Layout parasitics, interconnect delay, static timing analysis. - Switching currents, ground bounce, IR-drop, power distribution. - Floorplanning, chip assembly, packaging. - Layout design at the mask level, physical design verification. - Electromigration, electrostatic discharge, and latch-up. - Models of industrial cooperation in microelectronics. - The caveats of virtual components. - The cost structures of ASIC development and manufacturing. - Market requirements, decision criteria, and case studies. - Yield models. - Avenues to low-volume fabrication. - Marketing considerations and case studies. - Management of VLSI projects. Exercises are concerned with back-end design (floorplanning, placement, routing, clock and power distribution, layout verification). Industrial CAD tools are used. | |||||
Lecture notes | H. Kaeslin: "Top-Down Digital VLSI Design, from Gate-Level Circuits to CMOS Fabrication", Lecture Notes Vol. 2, 2015. All written documents in English. | |||||
Literature | H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303. | |||||
Prerequisites / Notice | Highlight: Students are offered the opportunity to design a circuit of their own which then gets actually fabricated as a microchip! Students who elect to participate in this program register for a term project at the Integrated Systems Laboratory in parallel to attending the VLSI II course. Prerequisites: "VLSI I: from Architectures to Very Large Scale Integration Circuits and FPGAs" or equivalent knowledge. Further details: Link | |||||
227-0418-00L | Algebra and Error Correcting Codes | W | 6 credits | 4G | H.‑A. Loeliger | |
Abstract | The course is an introduction to error correcting codes covering both classical algebraic codes and modern iterative decoding. The course includes a self-contained introduction of the pertinent basics of "abstract" algebra. | |||||
Objective | The course is an introduction to error correcting codes covering both classical algebraic codes and modern iterative decoding. The course includes a self-contained introduction of the pertinent basics of "abstract" algebra. | |||||
Content | Error correcting codes: coding and modulation, linear codes, Hamming space codes, Euclidean space codes, trellises and Viterbi decoding, convolutional codes, factor graphs and message passing algorithms, low-density parity check codes, turbo codes, polar codes, Reed-Solomon codes. Algebra: groups, rings, homomorphisms, quotient groups, ideals, finite fields, vector spaces, polynomials. | |||||
Lecture notes | Lecture notes (English) | |||||
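As a small illustration of the "linear codes" and "Hamming space codes" topics listed under Content above, the following is a minimal sketch (not course material) of syndrome decoding for the Hamming(7,4) code; the matrices are the standard systematic ones, and the helper names are our own.

```python
# Hypothetical illustration: Hamming(7,4) syndrome decoding in pure Python.
# All arithmetic is modulo 2.

G = [  # generator matrix in systematic form [I | P]: 4 data bits -> 7 code bits
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [  # parity-check matrix [P^T | I]: H * codeword = 0 for every valid codeword
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    # codeword = data * G (mod 2)
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def syndrome(word):
    # s = H * word (mod 2); the all-zero syndrome means "no detected error"
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    # a non-zero syndrome equals the column of H at the flipped position
    s = syndrome(word)
    word = list(word)
    if any(s):
        for i in range(7):
            if [H[r][i] for r in range(3)] == s:
                word[i] ^= 1
                break
    return word
```

With minimum distance 3, any single flipped bit is located by matching the syndrome against a column of H and corrected.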
227-0150-00L | Systems-on-Chip for Data Analytics and Machine Learning Previously "Energy-Efficient Parallel Computing Systems for Data Analytics" | W | 6 credits | 4G | L. Benini | |
Abstract | Systems-on-chip architecture and related design issues with a focus on machine learning and data analytics applications. It will cover multi-cores, many-cores, vector engines, GP-GPUs, application-specific processors and heterogeneous compute accelerators. Special emphasis is given to energy-efficiency issues and hardware-software techniques for power and energy minimization. | |||||
Objective | To give an in-depth understanding of the links and dependencies between architectures and their energy-efficient implementation, and to provide comprehensive exposure to state-of-the-art systems-on-chip platforms for machine learning and data analytics. Practical experience will be gained through exercises and mini-projects (hardware and software) assigned on specific topics. | |||||
Content | The course will cover advanced system-on-chip architectures, with an in-depth view on design challenges related to advanced silicon technology and state-of-the-art system integration options (nanometer silicon technology, novel storage devices, three-dimensional integration, advanced system packaging). The emphasis will be on programmable parallel architectures with application focus on machine learning and data analytics. The main SoC architectural families will be covered: namely, multi- and many-cores, GPUs, vector accelerators, application-specific processors, heterogeneous platforms. The course will cover the complex design choices required to achieve scalability and energy proportionality. The course will also delve into system design, touching on hardware-software tradeoffs and full-system analysis and optimization taking into account non-functional constraints and quality metrics, such as power consumption, thermal dissipation, reliability and variability. The application focus will be on machine learning both in the cloud and at the edges (near-sensor analytics). | |||||
Lecture notes | Slides will be provided to accompany lectures. Pointers to scientific literature will be given. Exercise scripts and tutorials will be provided. | |||||
Literature | John L. Hennessy, David A. Patterson, Computer Architecture: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design) 6th Edition, 2017. | |||||
Prerequisites / Notice | Knowledge of digital design at the level of "Design of Digital Circuits SS12" is required. Knowledge of basic VLSI design at the level of "VLSI I: Architectures of VLSI Circuits" is required. | |||||
227-0155-00L | Machine Learning on Microcontrollers Number of participants limited to 45. Registration in this class requires the permission of the instructors. | W | 6 credits | 3G | M. Magno, L. Benini | |
Abstract | Machine Learning (ML) and artificial intelligence are pervading the digital society. Today, even low power embedded systems are incorporating ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet-of-things”, using low-power microcontrollers (ARM-Cortex-M; RISC-V). | |||||
Objective | Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras…). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers, becoming tiny machine learning (TinyML) algorithms. | |||||
Content | The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include: - Sensors and sensor data acquisition with low power embedded systems - Machine Learning: Overview of supervised and unsupervised learning, and in particular supervised learning (Decision Trees, Random Forests, Support Vector Machines, Artificial Neural Networks, Deep Learning, and Convolutional Networks) - Low-power embedded systems and their architecture. Low Power microcontrollers (ARM-Cortex M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip. - Low power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems. The laboratory exercises will show how to address concrete design problems, like motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards. Presentations from Ph.D. students and a visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects. | |||||
Lecture notes | Script and exercise sheets. Books will be suggested during the course. | |||||
Prerequisites / Notice | Prerequisites: Good experience in C language programming. Microprocessors and computer architecture. Basics of Digital Signal Processing. Some exposure to machine learning concepts is also desirable. | |||||
227-0424-00L | Model- and Learning-Based Inverse Problems in Imaging | W | 4 credits | 2V + 1P | V. Vishnevskiy | |
Abstract | Reconstruction is an inverse problem which estimates images from noisy measurements. Model-based reconstructions use analytical models of the imaging process and priors. Data-based methods directly approximate inversion using training data. Combining these two approaches yields physics-aware neural nets and state-of-the-art imaging accuracy (MRI, US, CT, microscopy, non-destructive imaging). | |||||
Objective | The goal of this course is to introduce the mathematical models of imaging experiments and practice implementation of numerical methods to solve the corresponding inverse problem. Students will learn how to improve reconstruction accuracy by introducing prior knowledge in the form of regularization models and training data. Furthermore, students will practice incorporating imaging model knowledge into deep neural networks. | |||||
Content | The course is based on the following fundamental fields: (i) numerical linear algebra, (ii) mathematical statistics and learning theory, (iii) convex optimization and (iv) signal processing. The first part of the course introduces classical linear and nonlinear methods for image reconstruction. The second part considers data-based regularization and covers modern deep learning approaches to inverse problems in imaging. Finally, we introduce advances in the actively developing field of experimental design in biomedical imaging (i.e. how to conduct an experiment in a way that enables the most accurate reconstruction). 1. Introduction: Examples of inverse problems, general introduction. Refresh prerequisites. 2. Linear algebra in imaging: Refresh prerequisites. Demonstrate properties of operators employed in imaging. 3. Linear inverse problems and regularization: Classical theory of inverse problems. Introduce the notions of ill-posedness and regularization. 4. Compressed sensing: Sparsity, basis-CS, TV-CS. Notion of analysis and synthesis forms of reconstruction problems. Application of PGD and ADMM to reconstruction. 5. Advanced priors and model selection: Total generalized variation, GMM priors, vectorial TV, low-rank, and tensor models. Stein's unbiased risk estimator. 6. Dictionary and prior learning: Classical dictionary learning. Gentle intro to machine learning. A lot of technical details about patch models. 7. Deep learning in image reconstruction: Generic convolutional-NN models (AUTOMAP, residual filtering, U-nets). The data generation process. Characterize the difference between model- and data-based reconstruction methods. Mode averaging. 8. Loop unrolling and physics-aware networks for reconstruction: Autograd, variational networks, with many examples and intuition. How to use them efficiently, e.g. adding preconditioners, attention, etc. 9. Generative models and uncertainty quantification: Amortized posterior, variational autoencoders, adversarial learning. Estimation uncertainty quantification. 10. Invertible networks for estimation: Gradient flows in networks, invertible neural networks for estimation problems. 11. Experimental design in imaging: Acquisition optimization for continuous models. How far can we exploit autograd? 12. Signal sampling optimization in MRI; reinforcement learning: Acquisition optimization for discrete models. REINFORCE and policy gradients, variance minimization for discrete variables (RELAX, REBAR). Cartesian under-sampling pattern design. 13. Summary and exam preparation. | |||||
Lecture notes | Lecture slides with references will be provided during the course. | |||||
Prerequisites / Notice | Students are expected to know the basics of (i) numerical linear algebra, (ii) applied methods of convex optimization, (iii) computational statistics, (iv) Matlab and Python. | |||||
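As a small illustration of the "linear inverse problems and regularization" topic listed under Content above, here is a minimal sketch (not course material) of Tikhonov-regularized reconstruction, minimizing ||Ax − b||² + λ||x||² by plain gradient descent; the tiny 2×2 operator and all constants are made-up toy values.

```python
# Hypothetical illustration: Tikhonov-regularized least squares
#   minimize ||A x - b||^2 + lam * ||x||^2
# solved by plain gradient descent. A, b and lam are made-up toy values.

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [3.0, 5.0]
lam = 0.1          # regularization weight; lam -> 0 recovers plain least squares

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def grad(x):
    # gradient of the objective: 2 A^T (A x - b) + 2 lam x
    r = [yi - bi for yi, bi in zip(matvec(A, x), b)]   # residual A x - b
    g = matvec(list(zip(*A)), r)                       # A^T r
    return [2.0 * (gi + lam * xi) for gi, xi in zip(g, x)]

x = [0.0, 0.0]
step = 0.05        # small enough for the largest eigenvalue of A^T A + lam I
for _ in range(2000):
    x = [xi - step * gi for xi, gi in zip(x, grad(x))]
# x now approximates the solution of the normal equations
# (A^T A + lam I) x = A^T b
```

Increasing λ biases the solution toward small norm, trading data fidelity for stability; this is the simplest instance of the regularization idea the course then extends to learned priors.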
227-0432-00L | Learning, Classification and Compression | W | 4 credits | 2V + 1U | E. Riegler | |
Abstract | The course takes a theoretical approach to learning theory and classification, and gives an introduction to lossy and lossless compression for general sets and measures. We will mainly focus on a probabilistic approach, where an underlying distribution must be learned/compressed. The concepts acquired in the course are of broad and general interest in the data sciences. | |||||
Objective | After attending this lecture and participating in the exercise sessions, students will have acquired a working knowledge of learning theory, classification, and compression. | |||||
Content | 1. Learning Theory (a) Framework of Learning (b) Hypothesis Spaces and Target Functions (c) Reproducing Kernel Hilbert Spaces (d) Bias-Variance Tradeoff (e) Estimation of Sample and Approximation Error 2. Classification (a) Binary Classifier (b) Support Vector Machines (separable case) (c) Support Vector Machines (nonseparable case) (d) Kernel Trick 3. Lossy and Lossless Compression (a) Basics of Compression (b) Compressed Sensing for General Sets and Measures (c) Quantization and Rate Distortion Theory for General Sets and Measures | |||||
Lecture notes | Detailed lecture notes will be provided. | |||||
Prerequisites / Notice | This course is aimed at students with a solid background in measure theory and linear algebra and basic knowledge in functional analysis. | |||||
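As a small illustration of the "kernel trick" topic listed under Content above, here is a minimal sketch (not course material) of a kernel perceptron with an RBF kernel on a toy 1-D dataset that is not linearly separable; the data, kernel width, and epoch count are illustrative assumptions.

```python
# Hypothetical illustration: kernel perceptron with an RBF kernel.
# The toy dataset ("outer" points labeled +1, "inner" points -1) is not
# linearly separable in 1-D, but is separable in the kernel feature space.

import math

X = [-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0]
y = [1, 1, -1, -1, -1, 1, 1]

def k(a, b):
    return math.exp(-(a - b) ** 2)       # RBF kernel

alpha = [0] * len(X)                     # dual coefficients (mistake counts)
for _ in range(500):                     # perceptron epochs
    errors = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        s = sum(alpha[j] * y[j] * k(X[j], xi) for j in range(len(X)))
        if yi * s <= 0:                  # misclassified: bump dual weight
            alpha[i] += 1
            errors += 1
    if errors == 0:                      # converged: all points correct
        break

def predict(x):
    s = sum(alpha[j] * y[j] * k(X[j], x) for j in range(len(X)))
    return 1 if s > 0 else -1
```

The decision function depends on the data only through kernel evaluations, which is exactly the observation that underlies both the kernel trick and reproducing kernel Hilbert spaces treated in the course.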
227-0436-00L | Digital Communication and Signal Processing Does not take place this semester. | W | 6 credits | 2V + 2U | A. Wittneben | |
Abstract | A comprehensive presentation of modern digital modulation, detection and synchronization schemes and relevant aspects of signal processing enables the student to analyze, simulate, implement and research the physical layer of advanced digital communication schemes. The course both covers the underlying theory and provides problem solving and hands-on experience. | |||||
Objective | Digital communication systems are characterized by ever increasing requirements on data rate, spectral efficiency and reliability. Due to the huge advances in very large scale integration (VLSI) we are now able to implement extremely complex digital signal processing algorithms to meet these challenges. As a result the physical layer (PHY) of digital communication systems has become the dominant function in most state-of-the-art system designs. In this course we discuss the major elements of PHY implementations in a rigorous theoretical fashion and present important practical examples to illustrate the application of the theory. In Part I we treat discrete time linear adaptive filters, which are a core component to handle multiuser and intersymbol interference in time-variant channels. Part II is a seminar block, in which the students develop their analytical and experimental (simulation) problem solving skills. After a review of major aspects of wireless communication we discuss, simulate and present the performance of novel cooperative and adaptive multiuser wireless communication systems. As part of this seminar each student has to give a 15-minute presentation and actively attends the presentations of their classmates. In Part III we cover parameter estimation and synchronization. Based on the classical discrete detection and estimation theory we develop maximum likelihood inspired digital algorithms for symbol timing and frequency synchronization. | |||||
Content | Part I: Linear adaptive filters for digital communication • Finite impulse response (FIR) filter for temporal and spectral shaping • Wiener filters • Method of steepest descent • Least mean square adaptive filters Part II: Seminar block on cooperative wireless communication • review of the basic concepts of wireless communication • multiuser amplify&forward relaying • performance evaluation of adaptive A&F relaying schemes and student presentations Part III: Parameter estimation and synchronization • Discrete detection theory • Discrete estimation theory • Synthesis of synchronization algorithms • Frequency estimation • Timing adjustment by interpolation | |||||
Lecture notes | Lecture notes. | |||||
Literature | [1] Oppenheim, A. V., Schafer, R. W., "Discrete-Time Signal Processing", Prentice-Hall, ISBN 0-13-754920-2. [2] Haykin, S., "Adaptive Filter Theory", Prentice-Hall, ISBN 0-13-090126-1. [3] Van Trees, H. L., "Detection, Estimation and Modulation Theory", John Wiley & Sons, ISBN 0-471-09517-6. [4] Meyr, H., Moeneclaey, M., Fechtel, S. A., "Digital Communication Receivers: Synchronization, Channel Estimation and Signal Processing", John Wiley & Sons, ISBN 0-471-50275-8. | |||||
Prerequisites / Notice | Formal prerequisites: none. Recommended: Communication Systems or equivalent. | |||||
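As a small illustration of the "least mean square adaptive filters" topic in Part I above, the following sketch (not course material) adapts a 2-tap FIR filter to identify an unknown system; the unknown taps, step size, and input statistics are illustrative assumptions.

```python
# Hypothetical illustration: LMS adaptive filter identifying an unknown
# 2-tap FIR system from its input/output samples.

import random

unknown = [0.5, -0.3]     # system to identify (assumed, for the demo)
w = [0.0, 0.0]            # adaptive filter weights
mu = 0.05                 # LMS step size

random.seed(0)
x_prev = 0.0
for _ in range(5000):
    x = random.gauss(0.0, 1.0)                   # white input sample
    d = unknown[0] * x + unknown[1] * x_prev     # desired response
    y = w[0] * x + w[1] * x_prev                 # adaptive filter output
    e = d - y                                    # estimation error
    # LMS update: w <- w + mu * e * [x, x_prev]
    w[0] += mu * e * x
    w[1] += mu * e * x_prev
    x_prev = x
# w now approximates the unknown taps
```

The same stochastic-gradient update is what approximates the Wiener solution discussed in Part I, with the step size trading convergence speed against misadjustment.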
227-0478-00L | Acoustics II | W | 6 credits | 4G | K. Heutschi, R. Pieren | |
Abstract | Advanced knowledge of the functioning and application of electro-acoustic transducers. | |||||
Objective | Advanced knowledge of the functioning and application of electro-acoustic transducers. | |||||
Content | Electrical, mechanical and acoustical analogies. Transducers, microphones and loudspeakers, acoustics of musical instruments, sound recording, sound reproduction, digital audio. | |||||
Lecture notes | available | |||||
227-0449-00L | Seminar in Biomedical Image Computing | W | 1 credit | 2S | E. Konukoglu, B. Menze, M. A. Reyes Aguirre | |
Abstract | This is a seminar lecture focusing on recent research topics in biomedical image computing, machine learning techniques related to interpreting biomedical images and medical data in general. Every week a different topic will be presented and discussed. | |||||
Objective | The goal of this lecture is to provide a glimpse of the current research landscape to graduate students who are interested in working on biomedical image computing and related areas. Different topics will be covered by different speakers every week to provide a broad perspective and highlight current challenges. Every week students will be asked to read a paper, prepare discussion questions and participate in the discussion. Upon completion of this course, students will have a broad overview of the recent developments in biomedical image computing and the ability to critically discuss a scientific article. | |||||
Prerequisites / Notice | Knowledge in computer vision, machine learning and biomedical image analysis would be essential. | |||||
227-0558-00L | Principles of Distributed Computing | W | 7 credits | 2V + 2U + 2A | R. Wattenhofer, M. Dory, G. Zuzic | |
Abstract | We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques. | |||||
Objective | Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week. | |||||
Content | Distributed computing models and paradigms, e.g. message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems. Distributed algorithms, e.g. leader election, coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds. | |||||
Lecture notes | Available. Our course script is used at dozens of other universities around the world. | |||||
Literature | Lecture Notes by Roger Wattenhofer. These lecture notes are used at about a dozen universities around the world. Distributed Computing: Fundamentals, Simulations and Advanced Topics, Hagit Attiya, Jennifer Welch. McGraw-Hill Publishing, 1998, ISBN 0-07-709352-6. Introduction to Algorithms, Thomas Cormen, Charles Leiserson, Ronald Rivest. The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8. Dissemination of Information in Communication Networks, Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger. Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Frank Thomson Leighton. Morgan Kaufmann Publishers Inc., San Francisco, CA, 1991, ISBN 1-55860-117-1. Distributed Computing: A Locality-Sensitive Approach, David Peleg. Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8. | |||||
Prerequisites / Notice | Course pre-requisites: Interest in algorithmic problems. (No particular course needed.) | |||||
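As a small illustration of the "leader election" topic listed under Content above, here is a minimal sketch (not course material) of the classic Chang-Roberts algorithm on a unidirectional ring, simulated round by round; the ring size and node IDs are made-up illustrative choices.

```python
# Hypothetical illustration: Chang-Roberts leader election on a ring.
# Each node forwards IDs larger than its own, swallows smaller ones, and
# declares itself leader when its own ID travels all the way around.

ids = [7, 3, 9, 1, 5]                    # unique node IDs around the ring
n = len(ids)

# msgs[i] holds messages in flight from node i toward node (i + 1) % n;
# initially every node sends its own ID clockwise
msgs = {i: [ids[i]] for i in range(n)}
leader = None
rounds = 0
while leader is None:
    rounds += 1
    new_msgs = {i: [] for i in range(n)}
    for i in range(n):
        dst = (i + 1) % n
        for m in msgs[i]:
            if m == ids[dst]:
                leader = m               # own ID returned: dst wins
            elif m > ids[dst]:
                new_msgs[dst].append(m)  # forward larger IDs
            # smaller IDs are swallowed
    msgs = new_msgs
```

Only the maximum ID survives a full trip around the ring, so the node holding it is elected; with n nodes this takes n rounds and O(n²) messages in the worst case.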
227-0560-00L | Deep Learning for Autonomous Driving Number of participants limited to 80. | W | 6 credits | 3V + 2P | D. Dai, A. Liniger | |
Abstract | Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments of deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, especially the practical use of deep learning through this theme. | |||||
Objective | Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control. After attending this course, students will: 1) understand the core technologies of building a self-driving car; 2) have a good overview over the current state of the art in self-driving cars; 3) be able to critically analyze and evaluate current research in this area; 4) be able to implement basic systems for multiple autonomous driving tasks. | |||||
Content | We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control. The course covers the following main areas: I) Foundation a) Fundamentals of a self-driving car b) Fundamentals of deep-learning II) Perception a) Semantic segmentation and lane detection b) Depth estimation with images and sparse LiDAR data c) 3D object detection with images and LiDAR data d) Object tracking and Lane Detection III) Localization a) GPS-based and Vision-based Localization b) Visual Odometry and Lidar Odometry IV) Path Planning and Control a) Path planning for autonomous driving b) Motion planning and vehicle control c) Imitation learning and reinforcement learning for self driving cars The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems: - Sensor calibration and synchronization to obtain multimodal driving data; - Semantic segmentation and depth estimation with deep neural networks ; - 3D object detection and tracking in LiDAR point clouds | |||||
Lecture notes | The lecture slides will be provided as a PDF. | |||||
Prerequisites / Notice | This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image. | |||||
227-0562-00L | Robot Learning | W | 6 credits | 2V + 2U | F. Yu | |
Abstract | Learning robots presents both significant research challenges and great commercial opportunities. This course explores the research frontiers of robot learning and dives into building practical systems such as autonomous driving. The lectures will cover advanced topics in perception, control, planning, prediction, mapping, reinforcement learning, imitation learning, and human-robot collaboration. | |||||
Objective | Students will learn the advanced topics in perception and robotics to understand research frontiers and engineering practices in building learning robot systems. The lectures will cover the foundations of robot learning systems, including dynamic scene understanding, high-level reasoning, and decision making. Despite the immense scope of those areas, we will focus on the advanced topics directly related to robot learning. The course will equip the students with the knowledge and experience to start research work immediately in those areas. At the same time, students will learn how to apply those ideas and methods in practical systems and applications, so those interested in engineering careers can understand the boundaries between research explorations and practical solutions and how real-world robot systems work behind the scenes. Students will have a solid grasp of the main ideas and theories for robot learning. Besides, through a series of projects, students will gain hands-on experience building and running state-of-the-art models in dynamic scene understanding and reinforcement learning. Also, students will learn how to experiment with their robot systems in simulation environments. | |||||
Content | The course assumes you have taken lectures in computer vision and machine learning, and you are familiar with conducting deep learning experiments. We aim to cover advanced CV and ML topics closely related to robot learning, and get you prepared for research study and advanced engineering solutions. We will cover the following areas and topics: 1) Dynamic 3D scene perception - 2D and 3D object detection and tracking - Multi-task learning - Geometry Processing - Visual localization - Visual mapping 2) Learning and reasoning - Meta-learning - Few-shot learning - Domain adaptation - Interactive learning - Causal reasoning - Lifelong learning 3) Decision making - Imitation learning - Model-free reinforcement learning - Model-based reinforcement learning - Inverse reinforcement learning - Hierarchical reinforcement learning - Learning to predict - Learning to plan 4) Applications - Autonomous driving - Object grasping - Object manipulation - Autonomous exploration | |||||
Literature | The course doesn't use a particular textbook, but each lecture will have a reading list. | |||||
Prerequisites / Notice | Those studies complement the existing ETH courses in computer vision, machine learning, and robotics because we will mainly focus on the advanced study of the covered topics. The students are expected to grasp those subjects in graduate studies before taking the course. Please take note of the following conditions: 1) The number of participants is limited to 32 students (MSc and PhDs). 2) Students must have taken the exams in at least one computer vision course and one machine learning course at ETH. 3) Students are expected to be familiar with Python and PyTorch/Tensorflow to build deep learning models and conduct experiments. The following courses are strongly recommended as prerequisites in each category: 1) Computer vision: "Visual Computing" or "Computer Vision" or "Image Analysis and Computer Vision" or "Machine Perception" or "3D Vision" 2) Machine learning: "Advanced Machine Learning" or "Probabilistic Artificial Intelligence" or "Statistical Learning Theory" or "Computational Intelligence Lab" or "Deep Learning" or "Computational Statistics". "Introduction to Autonomous Mobile Robots" is recommended as the robotics background study. | |||||
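As a small illustration of the "model-free reinforcement learning" topic listed under Content above, here is a minimal sketch (not course material) of tabular Q-learning on a hypothetical 5-state chain environment; the environment and all constants are our own illustrative choices.

```python
# Hypothetical illustration: tabular Q-learning on a 5-state chain.
# Reaching the rightmost state yields reward 1 and ends the episode.

import random

N = 5                                 # states 0..4, state 4 is terminal
Q = [[0.0, 0.0] for _ in range(N)]    # Q[state][action], actions: 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

random.seed(0)
for _ in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: bootstrap from the greedy value of s2
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
# the greedy policy now moves right in every non-terminal state
```

The learned values approach the discounted optimum gamma^(3-s) for the "right" action, so the greedy policy heads straight for the goal; deep RL methods covered in the course replace the table with a neural network.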
227-0707-00L | Optimization Methods for Engineers | W | 3 credits | 2G | J. Smajic | |
Abstract | First half of the semester: introduction to the main methods of numerical optimization, with a focus on stochastic methods such as genetic algorithms and evolution strategies. Second half of the semester: each participant implements a selected optimizer and applies it to a problem of practical interest. | |||||
Objective | Numerical optimization is of increasing importance for the development of devices and for the design of numerical methods. The students shall learn to select, improve, and combine appropriate procedures for efficiently solving practical problems. | |||||
Content | Typical optimization problems and their difficulties are outlined. Well-known deterministic search strategies, combinatorial minimization, and evolutionary algorithms are presented and compared. In engineering, optimization problems are often very complex. Therefore, new techniques based on the generalization and combination of known methods are discussed. To illustrate the procedure, various problems of practical interest are presented and solved with different optimization codes. | |||||
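As a concrete illustration of the stochastic search strategies covered in the first half, a minimal (1+1) evolution strategy with a 1/5-success-rule-style step-size adaptation can be sketched as follows. This is not code from the course; the objective function, step-size multipliers, and iteration budget are all illustrative:

```python
import random

# Minimal (1+1) evolution strategy with a 1/5-style success rule.
def evolution_strategy(f, x0, sigma=0.5, iters=500, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Mutate: add Gaussian noise to every coordinate
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:            # selection: keep the better of parent and child
            x, fx = y, fy
            sigma *= 1.22       # success: widen the search
        else:
            sigma *= 0.95       # failure: narrow the search
    return x, fx

sphere = lambda v: sum(t * t for t in v)   # toy objective, minimum at the origin
best, val = evolution_strategy(sphere, [3.0, -2.0])
```

The asymmetric multipliers keep the empirical success rate near one fifth, which is the classical heuristic for efficient progress on smooth problems.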
Lecture notes | A PDF of short lecture notes (39 pages) and the lecture slides are provided | |||||
Prerequisites / Notice | Lectures take place only in the first half of the semester; exercises in the form of small projects follow in the second half, with presentation of the results in the last week of the semester. | |||||
227-0948-00L | Magnetic Resonance Imaging in Medicine | W | 4 credits | 3G | S. Kozerke, M. Weiger Senften | |
Abstract | Introduction to magnetic resonance imaging and spectroscopy, encoding and contrast mechanisms and their application in medicine. | |||||
Objective | Understand the basic principles of signal generation, image encoding and decoding, and contrast manipulation, and their application to assess anatomical and functional information in vivo. | |||||
Content | Introduction to magnetic resonance imaging including basic phenomena of nuclear magnetic resonance; 2- and 3-dimensional imaging procedures; fast and parallel imaging techniques; image reconstruction; pulse sequences and image contrast manipulation; equipment; advanced techniques for identifying activated brain areas; perfusion and flow; diffusion tensor imaging and fiber tracking; contrast agents; localized magnetic resonance spectroscopy and spectroscopic imaging; diagnostic applications and applications in research. | |||||
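To illustrate the encoding and reconstruction principle listed above: with full Cartesian sampling, the acquired k-space data is the 2D spatial Fourier transform of the object, so reconstruction reduces to an inverse 2D FFT. The sketch below uses a synthetic square phantom and ignores relaxation, noise, and coil sensitivities:

```python
import numpy as np

# Cartesian MR image reconstruction as an inverse 2D FFT (idealised sketch).
def reconstruct(kspace):
    """Inverse 2D FFT, with shifts so the centre of k-space sits in the middle."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Synthetic object: a square "phantom" on a 64x64 grid
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0

# Forward model: the scanner samples the 2D Fourier transform of the object
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))

image = np.abs(reconstruct(kspace))   # magnitude image recovers the phantom
```

The fast and parallel imaging techniques in the course amount to sampling only part of this k-space and compensating in reconstruction.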
Lecture notes | D. Meier, P. Boesiger, S. Kozerke: Magnetic Resonance Imaging and Spectroscopy | |||||
227-0973-00L | Translational Neuromodeling | W | 8 credits | 3V + 2U + 1A | K. Stephan | |
Abstract | This course provides a systematic introduction to Translational Neuromodeling (the development of mathematical models for diagnostics of brain diseases) and their application to concrete clinical questions (Computational Psychiatry/Psychosomatics). It focuses on a generative modeling strategy and teaches (hierarchical) Bayesian models of neuroimaging data and behaviour, incl. exercises. | |||||
Objective | To obtain an understanding of the goals, concepts and methods of Translational Neuromodeling and Computational Psychiatry/Psychosomatics, particularly with regard to Bayesian models of neuroimaging (fMRI, EEG) and behavioural data. | |||||
Content | This course provides a systematic introduction to Translational Neuromodeling (the development of mathematical models for inferring mechanisms of brain diseases from neuroimaging and behavioural data) and their application to concrete clinical questions (Computational Psychiatry/Psychosomatics). The first part of the course will introduce disease concepts from psychiatry and psychosomatics, their history, and clinical priority problems. The second part of the course concerns computational modeling of neuronal and cognitive processes for clinical applications. A particular focus is on Bayesian methods and generative models, for example, dynamic causal models for inferring neuronal processes from neuroimaging data, and hierarchical Bayesian models for inference on cognitive processes from behavioural data. The course discusses the mathematical and statistical principles behind these models, illustrates their application to various psychiatric diseases, and outlines a general research strategy based on generative models. Lecture topics include: 1. Introduction to Translational Neuromodeling and Computational Psychiatry/Psychosomatics 2. Psychiatric nosology 3. Pathophysiology of psychiatric disease mechanisms 4. Principles of Bayesian inference and generative modeling 5. Variational Bayes (VB) 6. Bayesian model selection 7. Markov Chain Monte Carlo techniques (MCMC) 8. Bayesian frameworks for understanding psychiatric and psychosomatic diseases 9. Generative models of fMRI data 10. Generative models of electrophysiological data 11. Generative models of behavioural data 12. Computational concepts of schizophrenia, depression and autism 13. Generative embedding: Model-based predictions about individual patients Practical exercises include mathematical derivations and the implementation of specific models and inference methods. In additional project work, students are required to either develop a novel generative model (and demonstrate its properties in simulations) or devise novel applications of an existing model to empirical data in order to address a clinical question. Group work (up to 3 students) is required. | |||||
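As a toy illustration of the Bayesian model selection taught in the course (not one of its neuroimaging models), two hypotheses about binary behavioural data can be compared via their marginal likelihoods. All numbers here are made up for illustration:

```python
import math

# Toy Bayesian model selection on binary behavioural data (illustrative only).
def bernoulli_evidence(k, n, theta_grid):
    """Marginal likelihood of k successes in n trials: the binomial
    likelihood averaged over a uniform prior on the grid of rates."""
    lik = [math.comb(n, k) * t ** k * (1 - t) ** (n - k) for t in theta_grid]
    return sum(lik) / len(lik)

n, k = 20, 16                                 # e.g. 16 correct responses in 20 trials
grid = [(i + 0.5) / 100 for i in range(100)]  # midpoint grid on (0, 1)

evidence_m0 = math.comb(n, k) * 0.5 ** n      # M0: chance performance, rate fixed at 0.5
evidence_m1 = bernoulli_evidence(k, n, grid)  # M1: unknown rate, flat prior
bayes_factor = evidence_m1 / evidence_m0      # > 1 favours M1
```

The same logic, with far richer generative models, underlies the course's comparison of competing mechanistic hypotheses about a patient's data.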
Literature | See TNU website: Link | |||||
Prerequisites / Notice | Good knowledge of the principles of statistics and good programming skills (the majority of the open-source software tools used are written in MATLAB; for project work, Julia or Python can also be used) | |||||
227-1032-00L | Neuromorphic Engineering II Information for UZH students: Enrolment to this course unit only possible at ETH. No enrolment to module INI405 at UZH. Please mind the ETH enrolment deadlines for UZH students: Link | W | 6 credits | 5G | T. Delbrück, G. Indiveri, S.‑C. Liu | |
Abstract | This course teaches the basics of analog chip design and layout with an emphasis on neuromorphic circuits, which are introduced in the fall semester course "Neuromorphic Engineering I". | |||||
Objective | Design of a neuromorphic circuit for implementation with CMOS technology. | |||||
Content | This course teaches the basics of analog chip design and layout with an emphasis on neuromorphic circuits, which are introduced in the autumn semester course "Neuromorphic Engineering I". The principles of CMOS processing technology are presented. Using a set of inexpensive software tools for simulation, layout, and verification suitable for neuromorphic circuits, participants learn to simulate circuits at the transistor level and to create their layouts at the mask level. Important issues in the layout of neuromorphic circuits are explained and illustrated with examples. In the latter part of the semester, students simulate and lay out a neuromorphic chip. Schematics of basic building blocks will be provided. The layout will then be fabricated, and students will test the resulting chips during the following fall semester. | |||||
Literature | S.-C. Liu et al.: Analog VLSI Circuits and Principles; software documentation. | |||||
Prerequisites / Notice | Prerequisites: "Neuromorphic Engineering I" is strongly recommended. | |||||
151-0566-00L | Recursive Estimation | W | 4 credits | 2V + 1U | R. D'Andrea | |
Abstract | Estimation of the state of a dynamic system based on a model and observations in a computationally efficient way. | |||||
Objective | Learn the basic recursive estimation methods and their underlying principles. | |||||
Content | Introduction to state estimation; probability review; Bayes' theorem; Bayesian tracking; extracting estimates from probability distributions; Kalman filter; extended Kalman filter; particle filter; observer-based control and the separation principle. | |||||
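The predict/update recursion at the heart of the Kalman filter in the outline above can be sketched in one dimension for a scalar random-walk model. This is a minimal illustration, not course material; the noise variances are arbitrary:

```python
import random

# Scalar Kalman filter for the random-walk model
#   x_k = x_{k-1} + w_k,  w_k ~ N(0, Q);   z_k = x_k + v_k,  v_k ~ N(0, R)
def kalman_step(x, P, z, Q, R):
    # Predict: random-walk dynamics keep the mean and inflate the variance by Q
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement via the Kalman gain
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

rng = random.Random(1)
Q, R = 0.01, 1.0               # process and measurement noise variances
true_x, x, P = 0.0, 0.0, 1.0   # true state, estimate, estimate variance
for _ in range(200):
    true_x += rng.gauss(0.0, Q ** 0.5)      # simulate the random walk
    z = true_x + rng.gauss(0.0, R ** 0.5)   # noisy measurement
    x, P = kalman_step(x, P, z, Q, R)
```

Note how the gain K trades off prior confidence against measurement noise; the variance P settles at a steady state well below both Q-driven growth and R.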
Lecture notes | Lecture notes available on course website: Link | |||||
Prerequisites / Notice | Requirements: Introductory probability theory and matrix-vector algebra. | |||||
252-0526-00L | Statistical Learning Theory Does not take place this semester. | W | 8 credits | 3V + 2U + 2A | J. M. Buhmann | |
Abstract | The course covers advanced methods of statistical learning: - Variational methods and optimization. - Deterministic annealing. - Clustering for diverse types of data. - Model validation by information theory. | |||||
Objective | The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning. | |||||
Content | - Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing. - Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures. - Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation. - Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models. | |||||
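A stripped-down sketch of the deterministic annealing idea applied to clustering on the real line: soft (Gibbs) assignments are computed at a temperature T, centroids are re-estimated as weighted means, and T is gradually lowered until the assignments become effectively hard. All parameter values below are illustrative, not from the course:

```python
import math

# Deterministic-annealing clustering on the line (illustrative sketch).
def anneal_cluster(xs, k=2, T0=5.0, Tmin=0.01, cool=0.9):
    lo, hi = min(xs), max(xs)
    mus = [lo + i * (hi - lo) / (k - 1) for i in range(k)]  # spread initial centroids
    T = T0
    while T > Tmin:
        for _ in range(20):  # alternate E/M steps at fixed temperature
            # E-step: softmax (Gibbs) assignment probabilities p(c|x)
            resp = []
            for x in xs:
                w = [math.exp(-(x - m) ** 2 / T) for m in mus]
                s = sum(w)
                resp.append([wi / s for wi in w])
            # M-step: centroids are responsibility-weighted means
            for c in range(k):
                tot = sum(r[c] for r in resp)
                mus[c] = sum(r[c] * x for r, x in zip(resp, xs)) / tot
        T *= cool  # geometric cooling schedule
    return sorted(mus)

data = [0.0, 0.2, -0.1, 5.0, 5.1, 4.9]  # two well-separated groups
centers = anneal_cluster(data)
```

At high T the assignments are nearly uniform (maximum entropy); as T falls the procedure approaches hard k-means, which is the free-energy-minimization view developed in the course.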
Lecture notes | A draft of a script will be provided. Lecture slides will be made available. | |||||
Literature | Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001. L. Devroye, L. Gyorfi, and G. Lugosi: A probabilistic theory of pattern recognition. Springer, New York, 1996 | |||||
Prerequisites / Notice | Knowledge of machine learning (introduction to machine learning and/or advanced machine learning) Basic knowledge of statistics. | |||||
252-0579-00L | 3D Vision | W | 5 credits | 3G + 1A | M. Pollefeys, D. B. Baráth | |
Abstract | The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization. | |||||
Objective | After attending this course, students will: 1. understand the core concepts for recovering the 3D shape of objects and scenes from images and video. 2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications. 3. have a good overview of the current state of the art in 3D vision. 4. be able to critically analyze and assess current research in this area. | |||||
Content | The goal of this course is to teach the core techniques required for robotic and augmented reality applications: how to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D Vision in the form of short lectures, followed by student presentations discussing the current state of the art. The main focus of this course is student projects on 3D Vision topics, with an emphasis on robotic vision and virtual and augmented reality applications. |
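The pinhole camera model underlying these techniques maps a world point X to pixel coordinates via x ~ K [R | t] X, with intrinsics K and pose (R, t). A minimal sketch, with made-up intrinsics and point coordinates:

```python
import numpy as np

# Pinhole projection: world point -> camera frame -> pixels (illustrative sketch).
def project(K, R, t, X):
    """Project 3-D points (N,3) to pixel coordinates (N,2)."""
    Xc = X @ R.T + t            # world -> camera coordinates
    x = Xc @ K.T                # apply intrinsics
    return x[:, :2] / x[:, 2:]  # perspective division

K = np.array([[500.0, 0.0, 320.0],   # focal length 500 px,
              [0.0, 500.0, 240.0],   # principal point at image centre (640x480)
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)        # camera at the world origin, looking along +z
X = np.array([[0.0, 0.0, 2.0],       # point straight ahead, 2 m away
              [0.5, 0.0, 2.0]])      # half a metre to the right
pixels = project(K, R, t, X)
```

Camera motion estimation, SLAM, and structure-from-motion all amount to inverting this map: recovering R, t, and X from observed pixels.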