Search result: Catalogue data in Autumn Semester 2020
Electrical Engineering and Information Technology Master
Master's Degree Programme (Programme Regulations 2018)
Signal Processing and Machine Learning
The core courses and specialisation courses below are a selection for students who wish to specialise in the area of "Signal Processing and Machine Learning"; see https://www.ee.ethz.ch/studies/main-master/areas-of-specialisation.html. The individual study plan is subject to the tutor's approval.
Core Courses
These core courses are particularly recommended for the field of "Signal Processing and Machine Learning". You may choose core courses from other fields in agreement with your tutor. A minimum of 24 credits must be obtained from core courses during the MSc EEIT.
Foundation Core Courses
Fundamentals at bachelor level, for master students who need to strengthen or refresh their background in the area.
Number | Title | Type | ECTS | Hours | Lecturers |
---|---|---|---|---|---|---|
227-0101-00L | Discrete-Time and Statistical Signal Processing | W | 6 KP | 4G | H.‑A. Loeliger | |
Abstract | The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications: discrete-time linear filters, inverse filters and equalization, DFT, discrete-time stochastic processes, elements of detection theory and estimation theory, LMMSE estimation and LMMSE filtering, the LMS algorithm, and the Viterbi algorithm.
Objective | The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications. The two main themes are linearity and probability. In the first part of the course, we deepen our understanding of discrete-time linear filters. In the second part of the course, we review the basics of probability theory and discrete-time stochastic processes. We then discuss some basic concepts of detection theory and estimation theory, as well as some practical methods including LMMSE estimation and LMMSE filtering, the LMS algorithm, and the Viterbi algorithm. A recurrent theme throughout the course is the stable and robust "inversion" of a linear filter.
Content | 1. Discrete-time linear systems and filters: state-space realizations, z-transform and spectrum, decimation and interpolation, digital filter design, stable realizations and robust inversion. 2. The discrete Fourier transform and its use for digital filtering. 3. The statistical perspective: probability, random variables, discrete-time stochastic processes; detection and estimation: MAP, ML, Bayesian MMSE, LMMSE; Wiener filter, LMS adaptive filter, Viterbi algorithm.
Lecture notes | Lecture Notes
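For orientation, a minimal numpy sketch of the LMS adaptive filter listed in the content above; the step size mu, the filter length, and the toy identification task are illustrative choices, not course material:

```python
import numpy as np

def lms_filter(x, d, n_taps, mu):
    """Adapt an FIR filter so that its output y tracks the desired signal d."""
    w = np.zeros(n_taps)                   # filter coefficients
    y = np.zeros(len(x))                   # filter output
    e = np.zeros(len(x))                   # error signal
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent n_taps input samples
        y[n] = w @ u                       # current filter output
        e[n] = d[n] - y[n]                 # estimation error
        w += mu * e[n] * u                 # LMS coefficient update
    return y, e, w

# Toy system identification: recover an unknown FIR channel h from input/output data.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1])             # "unknown" system
d = np.convolve(x, h)[:len(x)]             # desired signal
_, _, w = lms_filter(x, d, n_taps=3, mu=0.05)
print(w)                                   # should approach h
```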
227-0105-00L | Introduction to Estimation and Machine Learning | W | 6 KP | 4G | H.‑A. Loeliger | |
Abstract | Mathematical basics of estimation and machine learning, with a view towards applications in signal processing.
Objective | Students master the basic mathematical concepts and algorithms of estimation and machine learning.
Content | Review of probability theory; basics of statistical estimation; least squares and linear learning; Hilbert spaces; Gaussian random variables; singular-value decomposition; kernel methods, neural networks, and more.
Lecture notes | Lecture notes will be handed out as the course progresses.
Prerequisites / Notice | Solid basics in linear algebra and probability theory.
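As a hedged illustration of the least-squares and singular-value-decomposition topics above (the design matrix, the noise level, and the regularization weight lam are made up for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))                  # design matrix
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.1 * rng.standard_normal(100)    # noisy observations

# Ordinary least squares via the pseudo-inverse (computed from the SVD).
x_ls = np.linalg.pinv(A) @ y

# L2-regularized (ridge) least squares written out explicitly via the SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = 0.1
x_ridge = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

print(x_ls)      # close to x_true
print(x_ridge)   # slightly shrunk towards zero
```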
Advanced Core Courses
Advanced core courses enable students to gain in-depth knowledge of the chosen specialisation. They are offered at MSc level only.
Number | Title | Type | ECTS | Hours | Lecturers |
227-0423-00L | Neural Network Theory | W | 4 KP | 2V + 1U | H. Bölcskei | |
Abstract | The class focuses on fundamental mathematical aspects of neural networks with an emphasis on deep networks: universal approximation theorems, basics of approximation theory, fundamental limits of deep neural network learning, geometry of decision surfaces, capacity of separating surfaces, dimension measures relevant for generalization, VC dimension of neural networks.
Objective | After attending this lecture, participating in the exercise sessions, and working on the homework problem sets, students will have acquired a working knowledge of the mathematical foundations of (deep) neural networks.
Content | 1. Universal approximation with single- and multi-layer networks
2. Introduction to approximation theory: fundamental limits on compressibility of signal classes, Kolmogorov epsilon-entropy of signal classes, non-linear approximation theory
3. Fundamental limits of deep neural network learning
4. Geometry of decision surfaces
5. Separating capacity of nonlinear decision surfaces
6. Dimension measures: pseudo-dimension, fat-shattering dimension, Vapnik-Chervonenkis (VC) dimension
7. Dimensions of neural networks
8. Generalization error in neural network learning
Lecture notes | Detailed lecture notes will be provided.
Prerequisites / Notice | This course is aimed at students with a strong mathematical background in general, and in linear algebra, analysis, and probability theory in particular.
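For context, one standard form of the universal approximation result covered in item 1 above; this is the classical statement for non-polynomial activations, not necessarily the exact formulation used in class:

```latex
Let $\sigma:\mathbb{R}\to\mathbb{R}$ be continuous and not a polynomial. Then for every
continuous $f:[0,1]^d\to\mathbb{R}$ and every $\varepsilon>0$ there exist $N\in\mathbb{N}$
and parameters $a_k, b_k \in \mathbb{R}$, $w_k\in\mathbb{R}^d$ such that
\[
  \sup_{x\in[0,1]^d}\Bigl|\, f(x) - \sum_{k=1}^{N} a_k\,\sigma\bigl(w_k^{\top}x + b_k\bigr) \Bigr| < \varepsilon .
\]
```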
227-0427-00L | Signal Analysis, Models, and Machine Learning Does not take place this semester. This course has been replaced by "Introduction to Estimation and Machine Learning" (autumn semester) and "Advanced Signal Analysis, Modeling, and Machine Learning" (spring semester). | W | 6 KP | 4G | H.‑A. Loeliger |
Abstract | Mathematical methods in signal processing and machine learning. I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity. II. Learning linear and nonlinear functions and filters: neural networks, kernel methods. III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, Gaussian models with sparse events.
Objective | The course is an introduction to some basic topics in signal processing and machine learning.
Content | Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis.
Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods.
Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximization, linear Gaussian models with sparse events.
Lecture notes | Lecture notes.
Prerequisites / Notice | Prerequisites:
- local bachelors: course "Discrete-Time and Statistical Signal Processing" (5th semester)
- others: solid basics in linear algebra and probability theory
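A minimal numpy sketch of the Kalman filter named in Part III, on a scalar constant-velocity tracking problem; all matrices and noise levels are illustrative assumptions, not course material:

```python
import numpy as np

# Constant-velocity model: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.zeros(2)                          # state estimate
P = np.eye(2)                            # estimate covariance

rng = np.random.default_rng(2)
true_pos = 0.1 * np.arange(50)           # object moving at speed 0.1
for z in true_pos + 0.5 * rng.standard_normal(50):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P

print(x)   # position estimate should be near 4.9, velocity near 0.1
```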
227-0447-00L | Image Analysis and Computer Vision | W | 6 KP | 3V + 1U | L. Van Gool, E. Konukoglu, F. Yu | |
Abstract | Light and perception. Digital image formation. Image enhancement and feature extraction. Unitary transformations. Color and texture. Image segmentation. Motion extraction and tracking. 3D data extraction. Invariant features. Specific object recognition and object class recognition. Deep learning and convolutional neural networks.
Objective | Overview of the most important concepts of image formation, perception and analysis, and computer vision. Gaining hands-on experience through practical computer and programming exercises.
Content | This course aims at offering a self-contained account of computer vision and its underlying concepts, including the recent use of deep learning. The first part starts with an overview of existing and emerging applications that need computer vision. It shows that the realm of image processing is no longer restricted to the factory floor, but is entering several fields of our daily life. First the interaction of light with matter is considered. The most important hardware components such as cameras and illumination sources are also discussed. The course then turns to image discretization, necessary to process images by computer. The next part describes necessary pre-processing steps that enhance image quality and/or detect specific features. Linear and non-linear filters are introduced for that purpose. The course continues by analyzing procedures that allow the extraction of additional types of basic information from multiple images, with motion and 3D shape as two important examples. Finally, approaches for the recognition of specific objects as well as object classes are discussed and analyzed. A major part at the end is devoted to deep learning and AI-based approaches to image analysis. Its main focus is on object recognition, but other examples of image processing using deep neural nets are given as well.
Lecture notes | Course material: script, computer demonstrations, exercises and problem solutions
Prerequisites / Notice | Prerequisites: Basic concepts of mathematical analysis and linear algebra. The computer exercises are based on Python and Linux. The course language is English.
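To make the linear filtering step mentioned in the content concrete, a small self-contained sketch of Sobel edge detection; this is a generic textbook filter, not material taken from the course:

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    k = kernel[::-1, ::-1]               # flip kernel for true convolution
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

img = np.zeros((32, 32)); img[:, 16:] = 1.0   # synthetic vertical step edge
gx = convolve2d(img, sobel_x)
gy = convolve2d(img, sobel_y)
edges = np.hypot(gx, gy)                      # gradient magnitude
print(edges.max())                            # strongest response at the edge
```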
252-0535-00L | Advanced Machine Learning | W | 10 KP | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez | |
Abstract | Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Objective | Students will become familiar with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content | The theory of fundamental machine learning concepts is presented in the lecture and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data. Topics covered in the lecture include:
Fundamentals: What is data? Bayesian learning; computational learning theory.
Supervised learning: Ensembles (bagging and boosting); max-margin methods; neural networks.
Unsupervised learning: Dimensionality reduction techniques; clustering; mixture models; non-parametric density estimation; learning dynamical systems.
Lecture notes | No lecture notes, but slides will be made available on the course webpage.
Literature | C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007. R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2001. L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004.
Prerequisites / Notice | The course requires solid basic knowledge in analysis, statistics and numerical methods for CSE, as well as practical programming experience for solving assignments. Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution. PhD students are required to obtain a passing grade in the course (4.0 or higher, based on project and exam) to gain credit points.
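A hedged numpy sketch of the clustering topic in the list above: plain k-means with random initialization. The number of clusters, iteration count, and toy data are arbitrary choices for illustration:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Basic k-means (assumes no cluster becomes empty during iteration)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]       # random init
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
print(centers)   # should land near (0, 0) and (2, 2)
```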
263-3210-00L | Deep Learning | W | 8 KP | 3V + 2U + 2A | T. Hofmann | |
Abstract | Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Objective | In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
Prerequisites / Notice | This is an advanced level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial of how to train deep networks with tools like Torch or Tensorflow, although that may be a side benefit. Participation in the course is subject to the following condition: students must have taken the exam in Advanced Machine Learning (252-0535-00) or have acquired equivalent knowledge; see the exhaustive list below:
- Advanced Machine Learning: https://ml2.inf.ethz.ch/courses/aml/
- Computational Intelligence Lab: http://da.inf.ethz.ch/teaching/2019/CIL/
- Introduction to Machine Learning: https://las.inf.ethz.ch/teaching/introml-S19
- Statistical Learning Theory: http://ml2.inf.ethz.ch/courses/slt/
- Computational Statistics: https://stat.ethz.ch/lectures/ss19/comp-stats.php
- Probabilistic Artificial Intelligence: https://las.inf.ethz.ch/teaching/pai-f18
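To make the "mathematical foundations" flavour concrete, a self-contained numpy sketch of gradient descent with hand-derived backpropagation on a tiny two-layer network; the architecture, learning rate, and toy task are illustrative assumptions, not course material:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]    # XOR-like labels

# Two-layer network 2 -> 16 -> 1: tanh hidden layer, sigmoid output.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # predicted probability
    # Backward pass (binary cross-entropy loss)
    dz2 = (p - y) / len(X)                            # dL/d(output logits)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1.0 - h**2)                           # derivative of tanh
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(((p > 0.5) == y).mean())   # training accuracy, should be high
```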
Specialisation Courses
These specialisation courses are particularly recommended for the area of "Signal Processing and Machine Learning", but you are free to choose courses from any other field in agreement with your tutor. A minimum of 40 credits must be obtained from specialisation courses during the MSc EEIT.
Number | Title | Type | ECTS | Hours | Lecturers |
227-0116-00L | VLSI I: From Architectures to VLSI Circuits and FPGAs | W | 6 KP | 5G | F. K. Gürkaynak, L. Benini | |
Abstract | This first course in a series that extends over three consecutive terms is concerned with tailoring algorithms and devising high-performance hardware architectures for their implementation as ASICs or with FPGAs. The focus is on front-end design using HDLs and automatic synthesis for producing industrial-quality circuits.
Objective | Understand Very-Large-Scale Integrated circuits (VLSI chips), Application-Specific Integrated Circuits (ASICs), and Field-Programmable Gate Arrays (FPGAs). Know their organization and be able to identify suitable application areas. Become fluent in front-end design from architectural conception to gate-level netlists: how to model digital circuits with SystemVerilog; how to ensure they behave as expected with the aid of simulation, testbenches, and assertions; how to take advantage of automatic synthesis tools to produce industrial-quality VLSI and FPGA circuits. Gain practical experience with the hardware description language SystemVerilog and with industrial Electronic Design Automation (EDA) tools.
Content | This course is concerned with system-level issues of VLSI design and FPGA implementations. Topics include:
- Overview of design methodologies and fabrication depths.
- Levels of abstraction for circuit modeling.
- Organization and configuration of commercial field-programmable components.
- FPGA design flows.
- Dedicated and general-purpose architectures compared.
- How to obtain an architecture for a given processing algorithm.
- Meeting throughput, area, and power goals by way of architectural transformations.
- Hardware Description Languages (HDL) and the underlying concepts.
- SystemVerilog.
- Register Transfer Level (RTL) synthesis and its limitations.
- Building blocks of digital VLSI circuits.
- Functional verification techniques and their limitations.
- Modular and largely reusable testbenches.
- Assertion-based verification.
- Synchronous versus asynchronous circuits.
- The case for synchronous circuits.
- Periodic events and the Anceau diagram.
- Case studies: ASICs compared to microprocessors, DSPs, and FPGAs.
During the exercises, students learn how to model FPGAs with SystemVerilog. They write testbenches for simulation purposes and synthesize gate-level netlists for FPGAs. Commercial EDA software by leading vendors is used throughout.
Lecture notes | Textbook and all further documents in English.
Literature | H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303.
Prerequisites / Notice | Prerequisites: Basics of digital circuits. Examination: In written form following the course semester (spring term). Problems are given in English; answers will be accepted in either English or German. Further details: https://iis-students.ee.ethz.ch/lectures/vlsi-i/
227-0155-00L | Machine Learning on Microcontrollers Registration in this class requires the permission of the instructors. Class size will be limited to 16. Preference is given to students in the MSc EEIT. | W | 6 KP | 3G | M. Magno, L. Benini | |
Abstract | Machine learning (ML) and artificial intelligence are pervading the digital society. Today, even low-power embedded systems incorporate ML, becoming increasingly "smart". This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the "internet of things", using low-power microcontrollers/processors (ARM Cortex-M; RISC-V).
Objective | Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras, ...). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers.
Content | The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include:
- Sensors and sensor data acquisition with low-power embedded systems
- Machine learning: overview of supervised and unsupervised learning, and in particular supervised learning (Bayes decision theory, decision trees, random forests, kNN methods, support vector machines, convolutional networks and deep learning)
- Low-power embedded systems and their architecture: low-power microcontrollers (ARM Cortex-M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip
- Low-power smart sensor system design: hardware-software tradeoffs, analysis, and optimization; implementation and performance evaluation of ML in battery-operated embedded systems
The laboratory exercises will show how to address concrete design problems, like motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards. Presentations from Ph.D. students and a visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects.
Lecture notes | Script and exercise sheets. Books will be suggested during the course.
Prerequisites / Notice | Prerequisites: C language programming. Basics of digital signal processing. Basics of processor and computer architecture. Some exposure to machine learning concepts is also desirable.
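A hedged numpy sketch of the kind of 8-bit weight quantization commonly used when deploying ML models on microcontrollers; the symmetric per-tensor scheme shown here is generic, not a method prescribed by this course:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest weight to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(5)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"scale={scale:.5f}, max abs error={err:.5f}")   # error <= scale / 2
```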
227-0121-00L | Communication Systems | W | 6 KP | 2V + 2U | A. Wittneben |
Abstract | Information theory, signal space analysis, baseband transmission, passband transmission, system example and channel, data link layer, medium access control (MAC), examples of layer 2 and layer 3, Internet
Objective | The aim of the lecture is to introduce the most important concepts and techniques used in modern digital communication systems, together with an overview of existing and future systems.
Content | The lowest three layers of the OSI reference model are covered: the physical layer, the data link layer with access to the transmission medium, and the network layer. The most important notions of information theory are introduced. The course then concentrates on point-to-point transmission techniques, which can be treated elegantly and coherently by means of the signal space representation. Methods of error detection and correction, as well as protocols for retransmitting corrupted data, are covered, and multiple access to a shared transmission medium is discussed. The course concludes with algorithms for routing and flow control in communication networks. The application of the fundamental techniques is illustrated in detail using existing and future wireless and wired systems.
Lecture notes | Lecture slides
Literature | [1] Simon Haykin, Communication Systems, 4th edition, John Wiley & Sons, 2001 [2] Andrew S. Tanenbaum, Computernetzwerke, 3rd edition, Pearson Studium, 2003 [3] M. Bossert and M. Breitbach, Digitale Netze, 1st edition, Teubner, 1999
227-0225-00L | Linear System Theory | W | 6 KP | 5G | M. Colombino | |
Abstract | The class is intended to provide a comprehensive overview of the theory of linear dynamical systems, stability analysis, and their use in control and estimation. The focus is on the mathematics behind the physical properties of these systems and on understanding and constructing proofs of properties of linear control systems.
Objective | Students should be able to apply the fundamental results in linear system theory to analyze and control linear dynamical systems.
Content | - Proof techniques and practices.
- Linear spaces, normed linear spaces and Hilbert spaces.
- Ordinary differential equations, existence and uniqueness of solutions.
- Continuous and discrete-time, time-varying linear systems. Time domain solutions. Time-invariant systems treated as a special case.
- Controllability and observability, duality. Time-invariant systems treated as a special case.
- Stability and stabilization, observers, state and output feedback, separation principle.
Lecture notes | Available on the course Moodle platform.
Prerequisites / Notice | Sufficient mathematical maturity, in particular in linear algebra and analysis.
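As a hedged illustration of the controllability topic above, a numpy check of the Kalman rank condition for a discrete-time LTI system x⁺ = Ax + Bu; the double-integrator matrices are a made-up example:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator sampled with unit step: controllable from a single input.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])

C = controllability_matrix(A, B)
print(C)
print("controllable:", np.linalg.matrix_rank(C) == A.shape[0])   # True
```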
227-0417-00L | Information Theory I | W | 6 KP | 4G | A. Lapidoth | |
Abstract | This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equipartition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.
Objective | The fundamentals of information theory, including Shannon's source coding and channel coding theorems.
Content | The entropy rate of a source, typical sequences, the asymptotic equipartition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, feedback capacity
Literature | T. M. Cover and J. Thomas, Elements of Information Theory (second edition)
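For a feel of the source coding theorem listed above, a small numpy sketch comparing the entropy of a toy source with the expected length of a matching prefix code; the dyadic source (where the bound is met exactly) is an illustrative choice:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

# Source with four symbols and dyadic probabilities.
p = np.array([0.5, 0.25, 0.125, 0.125])
print(entropy(p))                       # 1.75 bits per symbol

# A Huffman code for this source has codeword lengths 1, 2, 3, 3.
lengths = np.array([1, 2, 3, 3])
print(np.sum(p * lengths))              # expected length 1.75: meets the entropy bound
```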
227-0421-00L | Learning in Deep Artificial and Biological Neuronal Networks | W | 4 KP | 3G | B. Grewe | |
Abstract | Deep learning (DL), a brain-inspired weak form of AI, allows the training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. However, DL is far from being understood, and investigating learning in biological networks might again serve as a compelling inspiration to think differently about state-of-the-art ANN training methods.
Objective | The main goal of this lecture is to provide a comprehensive overview of the learning principles of neuronal networks, and to introduce the diverse skill set (e.g. simulating a spiking neuronal network) that is required to understand learning in large, hierarchical neuronal networks. To achieve this, the lectures and exercises will merge ideas, concepts and methods from machine learning and neuroscience. These will include training basic ANNs, simulating spiking neuronal networks, and being able to read and understand the main ideas presented in today's neuroscience papers. After this course students will be able to:
- read and understand the main ideas and methods that are presented in today's neuroscience papers
- explain the basic ideas and concepts of plasticity in the mammalian brain
- implement alternative ANN learning algorithms to 'error backpropagation' in order to train deep neuronal networks
- use a diverse set of ANN regularization methods to improve learning
- simulate spiking neuronal networks that learn simple tasks (e.g. digit classification) in a supervised manner
Content | Deep learning, a brain-inspired weak form of AI, allows the training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. The origins of deep hierarchical learning can be traced back to early neuroscience research by Hubel and Wiesel in the 1960s, who first described the neuronal processing of visual inputs in the mammalian neocortex. Similar to their neocortical counterparts, ANNs seem to learn by interpreting and structuring the data provided by the external world. However, while on specific tasks such as playing (video) games deep ANNs outperform humans (Mnih et al., 2015; Silver et al., 2018), ANNs still do not perform on par when it comes to recognizing actions in movie data, and their ability to act as generalizable problem solvers is still far behind what the human brain seems to achieve effortlessly. Moreover, biological neuronal networks can learn far more effectively with fewer training examples, they achieve a much higher performance in recognizing complex patterns in time series data (e.g. recognizing actions in movies), they dynamically adapt to new tasks without losing performance, and they achieve unmatched performance in detecting and integrating out-of-domain data examples (data they have not been trained with). In other words, many of the big challenges and unknowns that have emerged in the field of deep learning over the last years are already mastered exceptionally well by biological neuronal networks in our brain. On the other hand, many facets of typical ANN design and training algorithms seem biologically implausible, such as non-local weight updates, discrete processing of time, and scalar communication between neurons. Recent evidence suggests that learning in biological systems is the result of the complex interplay of diverse error feedback signaling processes acting at multiple scales, ranging from single synapses to entire networks.
Lecture notes | The lecture slides will be provided as a PDF after each lecture.
Prerequisites / Notice | This advanced-level lecture requires some basic background in machine/deep learning. Thus, students are expected to have a basic mathematical foundation, including linear algebra, multivariate calculus, and probability. The course is not meant as an extended tutorial on how to train deep networks in PyTorch or TensorFlow, although these tools will be used. Participation in the course is subject to the following conditions: 1) The number of participants is limited to 120 students (MSc and PhD). 2) Students must have taken the exam in Deep Learning (263-3210-00L) or have acquired equivalent knowledge.
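A hedged numpy sketch of the "simulate a spiking neuronal network" skill named in the objective: a single leaky integrate-and-fire neuron driven by a constant current. All constants are generic textbook values, not taken from the course:

```python
import numpy as np

# Leaky integrate-and-fire neuron, forward-Euler integration.
dt      = 1e-4        # time step [s]
tau_m   = 20e-3       # membrane time constant [s]
v_rest  = -70e-3      # resting potential [V]
v_th    = -50e-3      # spike threshold [V]
v_reset = -65e-3      # reset potential [V]
r_m     = 1e7         # membrane resistance [Ohm]
i_in    = 2.5e-9      # constant input current [A]

T = int(0.5 / dt)     # simulate 500 ms
v = np.full(T, v_rest)
spikes = []
for t in range(1, T):
    dv = (-(v[t - 1] - v_rest) + r_m * i_in) / tau_m
    v[t] = v[t - 1] + dt * dv
    if v[t] >= v_th:              # threshold crossing: emit spike and reset
        spikes.append(t * dt)
        v[t] = v_reset

print(f"{len(spikes)} spikes in 0.5 s")   # regular firing for this drive
```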
227-0445-10L | Mathematical Methods of Signal Processing | W | 6 KP | 4G | H. G. Feichtinger | |
Abstract | This course offers a mathematically correct but still non-technical description of key objects relevant for signal processing, such as Dirac measures, Dirac combs, various function spaces (like L^2), impulse response, transfer function, Gabor expansion, and so on. The approach is based on properties of "Feichtinger's algebra". MATLAB routines will serve as illustration.
Objective | The aim of the class is to familiarize the participants with the idea of generalized functions (usually called distributions) and to provide a novel approach to the theory of mild distributions, which so far cannot be found in books (the course will contribute to the development of such a book). From the physical point of view, such an object is something which can be measured or captured by (linear) measurements, such as an audio signal. The harmonic analysis perspective is that the Fourier transform and time-frequency transforms are possible over any locally compact group. Engineers talk about discrete or continuous, periodic and non-periodic signals. Hence, a unified approach to these settings and a discussion of their interconnection (e.g. approximately computing the Fourier transform of a function using the DFT) is at the heart of this course.
Content | Mathematical foundations of signal processing:
0. Recalling (on and off) concepts from linear algebra (e.g. linear mappings) and introducing concepts from basic linear functional analysis (Hilbert spaces, Banach spaces)
1. Translation-invariant systems and convolution, an elementary functional-analytic approach
2. Pure frequencies and the Fourier transform, convolution theorem
3. The subalgebra L1(Rd) of integrable functions (without Lebesgue integration), Riemann-Lebesgue lemma
4. Plancherel's theorem, L2(Rd) and basic Hilbert space theory, unitary mappings
5. Short-time Fourier transform, the Feichtinger algebra S0(Rd) as an algebra of test functions
6. The dual space of mild distributions, relationship to tempered distributions (for those familiar with them); various characterizations
7. Gabor expansions of signals, characterization of smoothness and decay, Gabor frames and Riesz bases
8. Transition from continuous to discrete variables, from the periodic to the non-periodic case
9. The kernel theorem, as the continuous analogue of matrix representations
10. Sobolev spaces (describing smoothness) and weighted spaces
11. Spreading representation and Kohn-Nirenberg representation of operators
12. Gabor multipliers and approximation of slowly varying systems
13. As time permits: the idea of generalized stochastic processes
14. Further subjects can be covered on demand by the audience
Detailed lecture notes will be provided. This material will become part of an ongoing book project, which has many facets.
Lecture notes | This material will be regularly updated and posted on the lecturer's homepage at https://www.univie.ac.at/nuhag-php/home/skripten.php. There will also be a dedicated web page at www.nuhag.eu/ETH20 (to be installed in the near future).
Prerequisites / Notice | We encourage students who are interested in mathematics, but also students of physics or mathematics who want to learn about applications of modern methods from functional analysis to their sciences, especially those who are interested in understanding the connections between the continuous and the discrete world (from continuous functions or images to samples or pixels, and back). For any questions concerning this course please contact the lecturer, Hans G. Feichtinger (hans.feichtinger@univie.ac.at). He will be in Zurich most of the time, even if the course has to be held offline. The course will only start on October 1, 2020.
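A small numpy illustration of the continuous-to-discrete bridge mentioned in the objective: approximating the Fourier transform of a Gaussian with the DFT. The sampling step and truncation interval are arbitrary choices, and the convention with kernel e^{-2πitξ} (under which exp(-πt²) is its own transform) is assumed:

```python
import numpy as np

dt = 0.01
t = np.arange(-10, 10, dt)              # truncated, sampled time axis
f = np.exp(-np.pi * t**2)               # Gaussian, its own Fourier transform

# Riemann-sum approximation of the continuous FT via the DFT.
F = dt * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
xi = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))

# Compare with the exact transform on the computed frequency grid.
err = np.abs(F - np.exp(-np.pi * xi**2)).max()
print(err)   # small: the DFT approximates the continuous FT well here
```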
227-0477-00L | Acoustics I | W | 6 KP | 4G | K. Heutschi | |
Abstract | Introduction to the fundamentals of acoustics in the areas of sound field calculations, measurement of acoustical events, outdoor sound propagation, and room acoustics of large and small enclosures.
Objective | Introduction to acoustics. Understanding of basic acoustical mechanisms. Survey of the technical literature. Illustration of measurement techniques in the laboratory.
Content | Fundamentals of acoustics; measuring and analyzing acoustical events; anatomy and properties of the ear. Outdoor sound propagation, absorption and transmission of sound, room acoustics of large and small enclosures, architectural acoustics, noise and noise control, calculation of sound fields.
Lecture notes | yes
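As a pointer to the kind of sound field quantity the course computes, the standard definition of the sound pressure level (a textbook formula, not necessarily in the course's notation):

```latex
L_p = 20 \log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_0}\right)\,\mathrm{dB},
\qquad p_0 = 20\,\mu\mathrm{Pa},
```

so that doubling the rms sound pressure raises the level by about 6 dB.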
263-5210-00L | Probabilistic Artificial Intelligence | W | 8 KP | 3V + 2U + 2A | A. Krause | |
Abstract | This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics and the Internet.
Objective | How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control, and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for graduate students.
Content | Topics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
Prerequisites / Notice | Solid basic knowledge in statistics, algorithms and programming. The material covered in the course "Introduction to Machine Learning" is considered a prerequisite.
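A hedged numpy sketch of the multi-armed bandit topic in the list above, using epsilon-greedy action selection; the Bernoulli arm means and epsilon are made up for this example:

```python
import numpy as np

rng = np.random.default_rng(6)
true_means = np.array([0.2, 0.5, 0.8])        # unknown to the agent
eps, n_steps = 0.1, 5000

counts = np.zeros(3)
values = np.zeros(3)                          # running mean reward per arm
for _ in range(n_steps):
    if rng.random() < eps:
        a = int(rng.integers(3))              # explore: random arm
    else:
        a = int(values.argmax())              # exploit: best estimate so far
    r = float(rng.random() < true_means[a])   # Bernoulli reward
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update

print(values, counts)   # arm 2 should collect most of the pulls
```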
401-0647-00L | Introduction to Mathematical Optimization | W | 5 KP | 2V + 1U | D. Adjiashvili | |
Abstract | Introduction to basic techniques and problems in mathematical optimization, and their applications to a variety of problems in engineering.
Objective | The goal of the course is to obtain a good understanding of some of the most fundamental mathematical optimization techniques used to solve linear programs and basic combinatorial optimization problems. The students will also practice applying the learned models to problems in engineering.
Content | Topics covered in this course include:
- Linear programming (simplex method, duality theory, shadow prices, ...).
- Basic combinatorial optimization problems (spanning trees, shortest paths, network flows, ...).
- Modelling with mathematical optimization: applications of mathematical programming in engineering.
Literature | Information about relevant literature will be given in the lecture.
Prerequisites / Notice | This course is meant for students who did not already attend the course "Mathematical Optimization", which is a more advanced lecture covering similar topics. Compared to "Mathematical Optimization", this course has a stronger focus on modeling and applications.
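A hedged sketch of solving a small linear program of the kind treated in the course, using scipy; the LP itself is a toy example:

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective.
c = np.array([-3.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at (4, 0) with objective value 12
```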
401-3054-14L | Probabilistic Methods in Combinatorics | W | 6 KP | 2V + 1U | B. Sudakov | |
Abstract | This course provides a gentle introduction to the Probabilistic Method, with an emphasis on methodology. We will try to illustrate the main ideas by showing the application of probabilistic reasoning to various combinatorial problems.
Objective |
Content | The topics covered in the class will include (but are not limited to): linearity of expectation, the second moment method, the local lemma, correlation inequalities, martingales, large deviation inequalities, Janson and Talagrand inequalities, and pseudo-randomness.
Literature | - The Probabilistic Method, by N. Alon and J. H. Spencer, 3rd edition, Wiley, 2008. - Random Graphs, by B. Bollobás, 2nd edition, Cambridge University Press, 2001. - Random Graphs, by S. Janson, T. Luczak and A. Rucinski, Wiley, 2000. - Graph Coloring and the Probabilistic Method, by M. Molloy and B. Reed, Springer, 2002.
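As a one-line taste of the linearity-of-expectation technique listed in the content, the expected number of fixed points of a random permutation (a standard first example, not necessarily from the lecture):

```latex
For a uniformly random permutation $\pi$ of $\{1,\dots,n\}$, let $X_i = \mathbf{1}[\pi(i)=i]$. Then
\[
  \mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big]
  = \sum_{i=1}^{n} \Pr[\pi(i)=i]
  = n \cdot \frac{1}{n} = 1,
\]
even though the indicators $X_i$ are not independent.
```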
401-3621-00L | Fundamentals of Mathematical Statistics | W | 10 KP | 4V + 1U | S. van de Geer | |
Abstract | The course covers the basics of inferential statistics.
Objective |
401-3901-00L | Mathematical Optimization | W | 11 KP | 4V + 2U | R. Zenklusen | |
Abstract | Mathematical treatment of diverse optimization techniques.
Objective | The goal of this course is to get a thorough understanding of various classical mathematical optimization techniques, with an emphasis on polyhedral approaches. In particular, we want students to develop a good understanding of some important problem classes in the field, of structural mathematical results linked to these problems, and of solution approaches based on this structural understanding.
Content | Key topics include:
- Linear programming and polyhedra;
- Flows and cuts;
- Combinatorial optimization problems and techniques;
- Equivalence between optimization and separation;
- Brief introduction to integer programming.
Literature | - Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018. - Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. This work has 3 volumes. - Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin: Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993. - Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986.
Prerequisites / Notice | Solid background in linear algebra.
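For context on the linear programming and duality material above, the canonical primal-dual pair and the weak duality bound, in standard textbook form:

```latex
\[
\text{(P)}\quad \min\{\, c^{\top}x : Ax \ge b,\ x \ge 0 \,\}
\qquad
\text{(D)}\quad \max\{\, b^{\top}y : A^{\top}y \le c,\ y \ge 0 \,\}
\]
Weak duality: for any primal-feasible $x$ and dual-feasible $y$,
\[
  b^{\top}y \;\le\; (Ax)^{\top}y \;=\; x^{\top}(A^{\top}y) \;\le\; c^{\top}x .
\]
```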