# Search result: Catalogue data in Autumn Semester 2020

Electrical Engineering and Information Technology Master | ||||||

Master Studies (Programme Regulations 2008) | ||||||

Major Courses A total of 42 CP must be achieved during the Master Programme. The individual study plan is subject to the tutor's approval. | ||||||

Signal Processing and Machine Learning | ||||||

Recommended Subjects | ||||||

Number | Title | Type | ECTS | Hours | Lecturers | |
---|---|---|---|---|---|---|

227-0101-00L | Discrete-Time and Statistical Signal Processing | W | 6 credits | 4G | H.‑A. Loeliger | |

Abstract | The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications: discrete-time linear filters, inverse filters and equalization, DFT, discrete-time stochastic processes, elements of detection theory and estimation theory, LMMSE estimation and LMMSE filtering, LMS algorithm, Viterbi algorithm. | |||||

Objective | The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications. The two main themes are linearity and probability. In the first part of the course, we deepen our understanding of discrete-time linear filters. In the second part of the course, we review the basics of probability theory and discrete-time stochastic processes. We then discuss some basic concepts of detection theory and estimation theory, as well as some practical methods including LMMSE estimation and LMMSE filtering, the LMS algorithm, and the Viterbi algorithm. A recurrent theme throughout the course is the stable and robust "inversion" of a linear filter. | |||||

Content | 1. Discrete-time linear systems and filters: state-space realizations, z-transform and spectrum, decimation and interpolation, digital filter design, stable realizations and robust inversion. 2. The discrete Fourier transform and its use for digital filtering. 3. The statistical perspective: probability, random variables, discrete-time stochastic processes; detection and estimation: MAP, ML, Bayesian MMSE, LMMSE; Wiener filter, LMS adaptive filter, Viterbi algorithm. | |||||
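As a pocket illustration of the LMS adaptive filter listed in the content, the sketch below identifies an unknown 3-tap FIR filter from noiseless input/output data. The coefficients, step size, and signal length are invented for the demo and are not course material:

```python
import random

def lms_identify(x, d, num_taps, mu):
    """Adapt an FIR filter so that filtering x approximates d (LMS algorithm)."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]             # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window))        # filter output
        e = d[n] - y                                         # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # stochastic gradient step
    return w

random.seed(0)
h = [0.5, -0.3, 0.2]                                  # "unknown" system (made up)
x = [random.gauss(0, 1) for _ in range(5000)]         # white excitation
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = lms_identify(x, d, num_taps=3, mu=0.01)
print([round(wi, 2) for wi in w])                     # ≈ [0.5, -0.3, 0.2]
```

With a small step size and white input the coefficient vector converges to the unknown response; with noisy data it would fluctuate around it, which is the usual LMS trade-off between tracking speed and misadjustment.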

Lecture notes | Lecture Notes | |||||

227-0116-00L | VLSI I: From Architectures to VLSI Circuits and FPGAs | W | 6 credits | 5G | F. K. Gürkaynak, L. Benini | |

Abstract | This first course in a series that extends over three consecutive terms is concerned with tailoring algorithms and with devising high performance hardware architectures for their implementation as ASICs or with FPGAs. The focus is on front end design using HDLs and automatic synthesis for producing industrial-quality circuits. | |||||

Objective | Understand Very-Large-Scale Integrated Circuits (VLSI chips), Application-Specific Integrated Circuits (ASICs), and Field-Programmable Gate Arrays (FPGAs). Know their organization and be able to identify suitable application areas. Become fluent in front-end design from architectural conception to gate-level netlists: how to model digital circuits with SystemVerilog, how to ensure they behave as expected with the aid of simulation, testbenches, and assertions, and how to take advantage of automatic synthesis tools to produce industrial-quality VLSI and FPGA circuits. Gain practical experience with the hardware description language SystemVerilog and with industrial Electronic Design Automation (EDA) tools. | |||||

Content | This course is concerned with system-level issues of VLSI design and FPGA implementations. Topics include: - Overview on design methodologies and fabrication depths. - Levels of abstraction for circuit modeling. - Organization and configuration of commercial field-programmable components. - FPGA design flows. - Dedicated and general purpose architectures compared. - How to obtain an architecture for a given processing algorithm. - Meeting throughput, area, and power goals by way of architectural transformations. - Hardware Description Languages (HDL) and the underlying concepts. - SystemVerilog - Register Transfer Level (RTL) synthesis and its limitations. - Building blocks of digital VLSI circuits. - Functional verification techniques and their limitations. - Modular and largely reusable testbenches. - Assertion-based verification. - Synchronous versus asynchronous circuits. - The case for synchronous circuits. - Periodic events and the Anceau diagram. - Case studies, ASICs compared to microprocessors, DSPs, and FPGAs. During the exercises, students learn how to model circuits for FPGAs with SystemVerilog. They write testbenches for simulation purposes and synthesize gate-level netlists for FPGAs. Commercial EDA software by leading vendors is used throughout. | |||||

Lecture notes | Textbook and all further documents in English. | |||||

Literature | H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303. | |||||

Prerequisites / Notice | Prerequisites: Basics of digital circuits. Examination: In written form following the course semester (spring term). Problems are given in English; answers will be accepted in either English or German. Further details: Link | |||||

227-0155-00L | Machine Learning on Microcontrollers (Registration in this class requires the permission of the instructors. Class size will be limited to 16. Preference is given to students in the MSc EEIT.) | W | 6 credits | 3G | M. Magno, L. Benini | |

Abstract | Machine Learning (ML) and artificial intelligence are pervading the digital society. Today, even low power embedded systems are incorporating ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet of things”, using low-power microcontrollers/processors (ARM Cortex-M; RISC-V). | |||||

Objective | Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras…). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers. | |||||

Content | The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include: - Sensors and sensor data acquisition with low power embedded systems - Machine Learning: Overview of supervised and unsupervised learning, and in particular supervised learning (Bayes Decision Theory, Decision Trees, Random Forests, kNN-Methods, Support Vector Machines, Convolutional Networks and Deep Learning) - Low-power embedded systems and their architecture. Low Power microcontrollers (ARM Cortex-M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip. - Low power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems. The laboratory exercises will show how to address concrete design problems, such as motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards. Presentations from Ph.D. students and a visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects. | |||||
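As a taste of the kNN method listed among the supervised-learning topics, here is a classifier compact enough in spirit for a microcontroller. The 2-D "accelerometer features" and activity labels below are invented purely for illustration:

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs; query: feature vector.
    """
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical accelerometer features: (mean magnitude, variance)
train = [((0.1, 0.02), "rest"), ((0.2, 0.03), "rest"), ((0.15, 0.01), "rest"),
         ((1.1, 0.9), "walk"), ((1.3, 1.1), "walk"), ((0.9, 0.8), "walk")]
print(knn_predict(train, (1.0, 0.85)))  # "walk"
```

On a real MCU one would typically replace the floating-point Euclidean distance with a fixed-point or squared-distance variant, but the voting logic stays the same.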

Lecture notes | Script and exercise sheets. Books will be suggested during the course. | |||||

Prerequisites / Notice | Prerequisites: C language programming. Basics of Digital Signal Processing. Basics of processor and computer architecture. Some exposure to machine learning concepts is also desirable. | |||||

227-0225-00L | Linear System Theory | W | 6 credits | 5G | M. Colombino | |

Abstract | The class is intended to provide a comprehensive overview of the theory of linear dynamical systems, stability analysis, and their use in control and estimation. The focus is on the mathematics behind the physical properties of these systems and on understanding and constructing proofs of properties of linear control systems. | |||||

Objective | Students should be able to apply the fundamental results in linear system theory to analyze and control linear dynamical systems. | |||||

Content | - Proof techniques and practices. - Linear spaces, normed linear spaces and Hilbert spaces. - Ordinary differential equations, existence and uniqueness of solutions. - Continuous and discrete-time, time-varying linear systems. Time domain solutions. Time invariant systems treated as a special case. - Controllability and observability, duality. Time invariant systems treated as a special case. - Stability and stabilization, observers, state and output feedback, separation principle. | |||||
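The controllability topic listed above can be made concrete with the Kalman rank test: (A, B) is controllable iff the matrix [B AB ... A^(n-1)B] has full rank n. A pure-Python sketch; the double-integrator example is a standard textbook illustration, not taken from the course:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(M, eps=1e-9):
    """Rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < eps:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def is_controllable(A, B):
    """Kalman rank test: stack [B, AB, ..., A^(n-1)B] and check full rank."""
    n = len(A)
    C = [row[:] for row in B]
    P = B
    for _ in range(n - 1):
        P = mat_mul(A, P)
        C = [c_row + p_row for c_row, p_row in zip(C, P)]
    return rank(C) == n

# Double integrator x1' = x2, x2' = u: controllable from the single input
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
print(is_controllable(A, B))  # True
```

By duality (also listed above), the same test applied to (Aᵀ, Cᵀ) checks observability of (A, C).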

Lecture notes | Available on the course Moodle platform. | |||||

Prerequisites / Notice | Sufficient mathematical maturity, in particular in linear algebra and analysis. | |||||

227-0417-00L | Information Theory I | W | 6 credits | 4G | A. Lapidoth | |

Abstract | This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equi-partition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity. | |||||

Objective | The fundamentals of Information Theory including Shannon's source coding and channel coding theorems | |||||

Content | The entropy rate of a source, typical sequences, the asymptotic equi-partition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity. | |||||
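The source coding theorem listed above implies that the expected length L of an optimal binary prefix code satisfies H(X) ≤ L < H(X) + 1. A small sketch computing entropy and Huffman codeword lengths; the pmf is an arbitrary dyadic example, for which the bound holds with equality:

```python
import heapq
import math

def entropy(p):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code for the pmf p.

    Repeatedly merges the two least probable groups; each merge adds one
    bit to the codewords of every symbol in the merged groups.
    """
    heap = [(pi, [i]) for i, pi in enumerate(p)]
    depth = [0] * len(p)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            depth[s] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return depth

pmf = [0.5, 0.25, 0.125, 0.125]
print(entropy(pmf))           # 1.75 bits
print(huffman_lengths(pmf))   # [1, 2, 3, 3]
```

Here the expected code length 0.5·1 + 0.25·2 + 0.125·3 + 0.125·3 = 1.75 bits matches the entropy exactly, as it must whenever all probabilities are powers of 1/2.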

Literature | T.M. Cover and J. Thomas, Elements of Information Theory (second edition) | |||||

227-0421-00L | Learning in Deep Artificial and Biological Neuronal Networks | W | 4 credits | 3G | B. Grewe | |

Abstract | Deep Learning (DL), a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. However, DL is far from being understood, and investigating learning in biological networks might once again serve as a compelling inspiration to think differently about state-of-the-art ANN training methods. | |||||

Objective | The main goal of this lecture is to provide a comprehensive overview of the learning principles of neuronal networks, as well as to introduce the diverse skill set (e.g. simulating a spiking neuronal network) that is required to understand learning in large, hierarchical neuronal networks. To achieve this, the lectures and exercises will merge ideas, concepts and methods from machine learning and neuroscience. These will include training basic ANNs, simulating spiking neuronal networks, and reading and understanding the main ideas presented in today's neuroscience papers. After this course students will be able to: - read and understand the main ideas and methods that are presented in today's neuroscience papers - explain the basic ideas and concepts of plasticity in the mammalian brain - implement ANN learning algorithms alternative to 'error backpropagation' in order to train deep neuronal networks - use a diverse set of ANN regularization methods to improve learning - simulate spiking neuronal networks that learn simple tasks (e.g. digit classification) in a supervised manner. | |||||

Content | Deep learning, a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. The origins of deep hierarchical learning can be traced back to early neuroscience research by Hubel and Wiesel in the 1960s, who first described the neuronal processing of visual inputs in the mammalian neocortex. Similar to their neocortical counterparts, ANNs seem to learn by interpreting and structuring the data provided by the external world. However, while on specific tasks such as playing (video) games deep ANNs outperform humans (Mnih et al., 2015; Silver et al., 2018), ANNs are still not performing on par when it comes to recognizing actions in movie data, and their ability to act as generalizable problem solvers is still far behind what the human brain seems to achieve effortlessly. Moreover, biological neuronal networks can learn far more effectively with fewer training examples; they achieve a much higher performance in recognizing complex patterns in time series data (e.g. recognizing actions in movies); they dynamically adapt to new tasks without losing performance; and they achieve unmatched performance in detecting and integrating out-of-domain data examples (data they have not been trained with). In other words, many of the big challenges and unknowns that have emerged in the field of deep learning over the last years are already mastered exceptionally well by biological neuronal networks in our brain. On the other hand, many facets of typical ANN design and training algorithms seem biologically implausible, such as non-local weight updates, discrete processing of time, and scalar communication between neurons. Recent evidence suggests that learning in biological systems is the result of the complex interplay of diverse error feedback signaling processes acting at multiple scales, ranging from single synapses to entire networks. | |||||
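Simulating spiking neuronal networks is one of the skills named in the objectives. The usual starting point is a single leaky integrate-and-fire (LIF) neuron, sketched below; all membrane parameters are generic textbook-style values chosen for the demo, not taken from the course:

```python
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, r_m=1e7):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + R * I.

    current: input current (A) per time step of length dt (s).
    Returns (membrane voltage trace, list of spike time indices).
    """
    v, trace, spikes = v_rest, [], []
    for t, i_in in enumerate(current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau  # forward-Euler step
        if v >= v_thresh:                             # threshold crossing
            spikes.append(t)
            v = v_reset                               # reset after spike
        trace.append(v)
    return trace, spikes

# 100 ms of constant 3 nA input drives the neuron above threshold repeatedly
trace, spikes = simulate_lif([3e-9] * 1000)
print(len(spikes))   # a handful of regularly spaced spikes
```

With these numbers the steady-state voltage v_rest + R·I = -0.04 V lies above the -0.05 V threshold, so the neuron fires tonically; halving the input current would leave it silent.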

Lecture notes | The lecture slides will be provided as a PDF after each lecture. | |||||

Prerequisites / Notice | This advanced level lecture requires some basic background in machine/deep learning. Thus, students are expected to have a basic mathematical foundation, including linear algebra, multivariate calculus, and probability. The course is not meant to be an extended tutorial on how to train deep networks in PyTorch or TensorFlow, although these tools are used. The participation in the course is subject to the following conditions: 1) The number of participants is limited to 120 students (MSc and PhDs). 2) Students must have taken the exam in Deep Learning (263-3210-00L) or have acquired equivalent knowledge. | |||||

227-0445-10L | Mathematical Methods of Signal Processing | W | 6 credits | 4G | H. G. Feichtinger | |

Abstract | This course offers a mathematically correct but still non-technical description of key objects relevant to signal processing, such as Dirac measures, Dirac combs, various function spaces (like L^2), impulse response, transfer function, Gabor expansion, and so on. The approach is based on properties of "Feichtinger's algebra". MATLAB routines will serve as illustrations. | |||||

Objective | The aim of the class is to familiarize the participants with the idea of generalized functions (usually called distributions), and to provide a novel approach to the theory of mild distributions, which cannot be found in books so far (the course will contribute to the development of such a book). From the physical point of view, such an object is something that can be measured or captured by (linear) measurements, such as an audio signal. The Harmonic Analysis perspective is that the Fourier transform and time-frequency transforms are possible over any locally compact group. Engineers talk about discrete or continuous, periodic and non-periodic signals. Hence, a unified approach to these settings and a discussion of their interconnections (e.g. approximately computing the Fourier transform of a function using the DFT) is at the heart of this course. | |||||

Content | Mathematical Foundations of Signal Processing: 0. Recalling concepts from linear algebra (e.g. linear mappings) and introducing concepts from basic linear functional analysis (Hilbert spaces, Banach spaces); 1. Translation-invariant systems and convolution, elementary functional-analytic approach; 2. Pure frequencies and the Fourier transform, convolution theorem; 3. The subalgebra L1(Rd) of integrable functions (without Lebesgue integration), Riemann-Lebesgue lemma; 4. Plancherel's theorem, L2(Rd) and basic Hilbert space theory, unitary mappings; 5. Short-time Fourier transform, the Feichtinger algebra S0(Rd) as an algebra of test functions; 6. The dual space of mild distributions and its relationship to tempered distributions (for those familiar with them); various characterizations; 7. Gabor expansions of signals, characterization of smoothness and decay, Gabor frames and Riesz bases; 8. Transition from continuous to discrete variables, and from the periodic to the non-periodic case; 9. The kernel theorem as the continuous analogue of matrix representations; 10. Sobolev spaces (describing smoothness) and weighted spaces; 11. Spreading representation and Kohn-Nirenberg representation of operators; 12. Gabor multipliers and approximation of slowly varying systems; 13. As time permits: the idea of generalized stochastic processes; 14. Further subjects can be covered as demanded by the audience. Detailed lecture notes will be provided. This material will become part of an ongoing book project, which has many facets. | |||||
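Point 8 above, and the objective's remark about approximately computing the Fourier transform via the DFT, can be illustrated with a plain Riemann-sum version of F(w) = ∫ f(t) e^(-2πiwt) dt. The Gaussian exp(-π t²), which is its own Fourier transform, serves as a built-in correctness check; the window length and sample count below are arbitrary choices:

```python
import cmath
import math

def approx_ft(f, T=16.0, n=256):
    """Approximate F(w) = integral of f(t) * exp(-2*pi*i*w*t) dt.

    Samples f at n points on [-T/2, T/2) and evaluates the Riemann sum at
    the DFT frequency grid m/T.  Returns (frequencies, complex values).
    """
    dt = T / n
    ts = [-T / 2 + k * dt for k in range(n)]
    samples = [f(t) for t in ts]
    freqs = [m / T for m in range(-n // 2, n // 2)]
    vals = [dt * sum(s * cmath.exp(-2j * math.pi * w * t)
                     for s, t in zip(samples, ts))
            for w in freqs]
    return freqs, vals

# exp(-pi * t^2) is its own Fourier transform; in particular F(0) = 1
freqs, vals = approx_ft(lambda t: math.exp(-math.pi * t * t))
print(abs(vals[len(vals) // 2]))   # ≈ 1.0, since the Gaussian integrates to 1
```

This direct O(n²) sum is written for transparency; replacing the inner loop by an FFT (with the appropriate index shift) gives the same values and is the point of the continuous-to-discrete transition discussed in the course.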

Lecture notes | This material will be regularly updated and posted on the lecturer's homepage at Link. There will also be a dedicated web page at Link (to be installed in the near future). | |||||

Prerequisites / Notice | We encourage students who are interested in mathematics, but also students of physics or mathematics who want to learn about applications of modern methods from functional analysis to their sciences, especially those who are interested in understanding the connections between the continuous and the discrete world (from continuous functions or images to samples or pixels, and back). Hans G. Feichtinger (Link). For any kind of question concerning this course, please contact the lecturer. He will be in Zurich most of the time, even if the course has to be held online. It will start only on October 1, 2020. | |||||

227-0477-00L | Acoustics I | W | 6 credits | 4G | K. Heutschi | |

Abstract | Introduction to the fundamentals of acoustics in the area of sound field calculations, measurement of acoustical events, outdoor sound propagation and room acoustics of large and small enclosures. | |||||

Objective | Introduction to acoustics. Understanding of basic acoustical mechanisms. Survey of the technical literature. Illustration of measurement techniques in the laboratory. | |||||

Content | Fundamentals of acoustics, measuring and analyzing of acoustical events, anatomy and properties of the ear. Outdoor sound propagation, absorption and transmission of sound, room acoustics of large and small enclosures, architectural acoustics, noise and noise control, calculation of sound fields. | |||||
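Two of the simplest sound-field calculations touched on above can be sketched under standard assumptions (20 µPa reference pressure, point source radiating into the free field); the numbers are only illustrative:

```python
import math

def spl_db(p_rms, p_ref=20e-6):
    """Sound pressure level in dB re 20 micro-Pa (the standard reference)."""
    return 20 * math.log10(p_rms / p_ref)

def spl_at_distance(spl_1m, r):
    """Free-field point source: level drops 6 dB per doubling of distance."""
    return spl_1m - 20 * math.log10(r)

print(round(spl_db(1.0), 1))                 # 94.0  (1 Pa RMS ≈ 94 dB SPL)
print(round(spl_at_distance(94.0, 2.0), 1))  # 88.0  (at 2 m from the source)
```

The 94 dB figure for 1 Pa is why acoustic calibrators are commonly specified at 94 dB SPL; indoors, room reflections make the measured decay slower than the free-field 6 dB per doubling.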

Lecture notes | yes | |||||

263-5210-00L | Probabilistic Artificial Intelligence | W | 8 credits | 3V + 2U + 2A | A. Krause | |

Abstract | This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics and the Internet. | |||||

Objective | How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for graduate students. | |||||

Content | Topics covered: - Probability - Probabilistic inference (variational inference, MCMC) - Bayesian learning (Gaussian processes, Bayesian deep learning) - Probabilistic planning (MDPs, POMDPs) - Multi-armed bandits and Bayesian optimization - Reinforcement learning | |||||
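As a flavour of the multi-armed bandit topic listed above, here is a sketch of the classic UCB1 rule for Bernoulli arms, which balances exploration and exploitation via an optimism bonus; the arm means and horizon are invented for the demo:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms: pick the arm maximizing
    empirical mean + sqrt(2 * ln(t) / pulls).  Returns pull counts per arm."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                       # initialize: pull each arm once
        else:
            a = max(range(k), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
print(counts)   # the 0.8 arm dominates the pull counts
```

The shrinking confidence bonus guarantees that suboptimal arms are pulled only O(log T) times, which is the key result behind the logarithmic regret of UCB-style algorithms.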

Prerequisites / Notice | Solid basic knowledge in statistics, algorithms and programming. The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite. | |||||

401-0647-00L | Introduction to Mathematical Optimization | W | 5 credits | 2V + 1U | D. Adjiashvili | |

Abstract | Introduction to basic techniques and problems in mathematical optimization, and their applications to a variety of problems in engineering. | |||||

Objective | The goal of the course is to obtain a good understanding of some of the most fundamental mathematical optimization techniques used to solve linear programs and basic combinatorial optimization problems. The students will also practice applying the learned models to problems in engineering. | |||||

Content | Topics covered in this course include: - Linear programming (simplex method, duality theory, shadow prices, ...). - Basic combinatorial optimization problems (spanning trees, shortest paths, network flows, ...). - Modelling with mathematical optimization: applications of mathematical programming in engineering. | |||||
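One of the basic combinatorial optimization problems listed above, shortest paths, can be sketched with Dijkstra's algorithm; the toy road network below is made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` under non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"a": [("b", 4), ("c", 2)],
         "b": [("d", 5)],
         "c": [("b", 1), ("d", 8)],
         "d": []}
print(dijkstra(graph, "a"))   # {'a': 0, 'b': 3, 'c': 2, 'd': 8}
```

Note the greedy label-setting structure: once a node is popped with its final distance, it is never improved again, which is exactly what non-negative weights guarantee.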

Literature | Information about relevant literature will be given in the lecture. | |||||

Prerequisites / Notice | This course is meant for students who did not already attend the course "Mathematical Optimization", which is a more advanced lecture covering similar topics. Compared to "Mathematical Optimization", this course has a stronger focus on modeling and applications. | |||||

401-3054-14L | Probabilistic Methods in Combinatorics | W | 6 credits | 2V + 1U | B. Sudakov | |

Abstract | This course provides a gentle introduction to the Probabilistic Method, with an emphasis on methodology. We will try to illustrate the main ideas by showing the application of probabilistic reasoning to various combinatorial problems. | |||||

Objective | ||||||

Content | The topics covered in the class will include (but are not limited to): linearity of expectation, the second moment method, the local lemma, correlation inequalities, martingales, large deviation inequalities, Janson and Talagrand inequalities and pseudo-randomness. | |||||
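Linearity of expectation, the first topic listed, is nicely illustrated by fixed points of a uniformly random permutation: E[# fixed points] = n · (1/n) = 1 for every n, even though the indicator variables are dependent. A quick simulation check (trial count and seed chosen arbitrarily):

```python
import random

def fixed_points(perm):
    """Number of indices i with perm[i] == i."""
    return sum(1 for i, x in enumerate(perm) if i == x)

def average_fixed_points(n, trials, seed=0):
    """Empirical mean of fixed points over uniformly random permutations."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += fixed_points(perm)
    return total / trials

print(average_fixed_points(10, 20000))   # ≈ 1.0, independent of n
```

The argument needs no independence at all: each index is fixed with probability 1/n, and expectations add. That indifference to dependence is what makes the technique so widely applicable.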

Literature | - The Probabilistic Method, by N. Alon and J. H. Spencer, 3rd Edition, Wiley, 2008. - Random Graphs, by B. Bollobás, 2nd Edition, Cambridge University Press, 2001. - Random Graphs, by S. Janson, T. Luczak and A. Rucinski, Wiley, 2000. - Graph Coloring and the Probabilistic Method, by M. Molloy and B. Reed, Springer, 2002. | |||||

401-3621-00L | Fundamentals of Mathematical Statistics | W | 10 credits | 4V + 1U | S. van de Geer | |

Abstract | The course covers the basics of inferential statistics. | |||||

Objective | ||||||

401-3901-00L | Mathematical Optimization | W | 11 credits | 4V + 2U | R. Zenklusen | |

Abstract | Mathematical treatment of diverse optimization techniques. | |||||

Objective | The goal of this course is to get a thorough understanding of various classical mathematical optimization techniques with an emphasis on polyhedral approaches. In particular, we want students to develop a good understanding of some important problem classes in the field, of structural mathematical results linked to these problems, and of solution approaches based on this structural understanding. | |||||

Content | Key topics include: - Linear programming and polyhedra; - Flows and cuts; - Combinatorial optimization problems and techniques; - Equivalence between optimization and separation; - Brief introduction to Integer Programming. | |||||
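The "flows and cuts" topic above can be illustrated with the Edmonds-Karp variant of the augmenting-path method, which repeatedly sends flow along shortest residual paths; the toy network and capacities are invented:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow.  capacity: dict u -> dict v -> capacity.

    Works on a private copy of the residual graph; the input is not modified.
    """
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)  # zero-capacity reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                    # no augmenting path: flow is maximal
        # collect the path edges, find the bottleneck, update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(cap, "s", "t"))   # 5
```

The returned value 5 equals the capacity of the cut separating t from the rest (edges a→t and b→t), a small instance of the max-flow min-cut theorem and of the optimization-equals-separation theme above.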

Literature | - Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018. - Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. This work has 3 volumes. - Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993. - Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986. | |||||

Prerequisites / Notice | Solid background in linear algebra. | |||||

401-4619-67L | Advanced Topics in Computational Statistics (does not take place this semester) | W | 4 credits | 2V | not available | |

Abstract | This lecture covers selected advanced topics in computational statistics. This year the focus will be on graphical modelling. | |||||

Objective | Students learn the theoretical foundations of the selected methods, as well as practical skills to apply these methods and to interpret their outcomes. | |||||

Content | The main focus will be on graphical models in various forms: Markov properties of undirected graphs; belief propagation; hidden Markov models; structure estimation and parameter estimation; inference for high-dimensional data; causal graphical models. | |||||
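One of the listed topics, hidden Markov models, admits a compact illustration via the forward algorithm, which computes the probability of an observation sequence by dynamic programming over the hidden states; the toy weather model below is entirely invented:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """HMM forward algorithm: P(observation sequence).

    alpha[s] holds P(obs so far, current hidden state = s); each step
    propagates it through the transition and emission probabilities.
    """
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

states = ["rain", "sun"]
start_p = {"rain": 0.5, "sun": 0.5}
trans_p = {"rain": {"rain": 0.7, "sun": 0.3},
           "sun": {"rain": 0.3, "sun": 0.7}}
emit_p = {"rain": {"umbrella": 0.9, "none": 0.1},
          "sun": {"umbrella": 0.2, "none": 0.8}}

print(forward(["umbrella"], states, start_p, trans_p, emit_p))   # 0.55
p = forward(["umbrella", "umbrella", "none"], states, start_p, trans_p, emit_p)
```

The recursion runs in O(T · |states|²) time instead of summing over all |states|^T hidden paths, which is the same dynamic-programming idea behind belief propagation on chain-structured graphs.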

Prerequisites / Notice | We assume a solid background in mathematics, an introductory lecture in probability and statistics, and at least one more advanced course in statistics. |
