Search result: Catalogue data in Autumn Semester 2020

Electrical Engineering and Information Technology Master Information
Master Studies (Programme Regulations 2008)
Major Courses
A total of 42 CP must be achieved during the Master Programme. The individual study plan is subject to the tutor's approval.
Signal Processing and Machine Learning
Core Subjects
Number | Title | Type | ECTS | Hours | Lecturers
227-0105-00L | Introduction to Estimation and Machine Learning (Restricted registration) | W | 6 credits | 4G | H.-A. Loeliger
Abstract: Mathematical basics of estimation and machine learning, with a view towards applications in signal processing.
Objective: Students master the basic mathematical concepts and algorithms of estimation and machine learning.
Content: Review of probability theory;
basics of statistical estimation;
least squares and linear learning;
Hilbert spaces;
Gaussian random variables;
singular-value decomposition;
kernel methods, neural networks, and more
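
To make the "least squares and linear learning" item above concrete, here is a minimal sketch (not course material; the data and parameters are invented for illustration, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))                  # feature matrix
w_true = np.array([1.0, -2.0, 0.5])              # "unknown" parameters, chosen for the demo
y = X @ w_true + 0.1 * rng.standard_normal(n)    # noisy linear observations

# Least-squares estimate; lstsq solves min ||Xw - y||^2 via the SVD,
# connecting the least-squares and singular-value-decomposition topics above.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)                                     # close to w_true
```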
Lecture notes: Lecture notes will be handed out as the course progresses.
Prerequisites / Notice: Solid basics in linear algebra and probability theory.
227-0423-00L | Neural Network Theory | W | 4 credits | 2V + 1U | H. Bölcskei
Abstract: The class focuses on fundamental mathematical aspects of neural networks with an emphasis on deep networks: universal approximation theorems, basics of approximation theory, fundamental limits of deep neural network learning, geometry of decision surfaces, capacity of separating surfaces, dimension measures relevant for generalization, VC dimension of neural networks.
Objective: After attending this lecture, participating in the exercise sessions, and working on the homework problem sets, students will have acquired a working knowledge of the mathematical foundations of (deep) neural networks.
Content:
1. Universal approximation with single- and multi-layer networks

2. Introduction to approximation theory: Fundamental limits on compressibility of signal classes, Kolmogorov epsilon-entropy of signal classes, non-linear approximation theory

3. Fundamental limits of deep neural network learning

4. Geometry of decision surfaces

5. Separating capacity of nonlinear decision surfaces

6. Dimension measures: Pseudo-dimension, fat-shattering dimension, Vapnik-Chervonenkis (VC) dimension

7. Dimensions of neural networks

8. Generalization error in neural network learning
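
As a toy numerical companion to item 1 (universal approximation), the following hedged sketch fits a one-hidden-layer ReLU network to sin(x) by fixing random hidden weights and solving for the output layer by least squares; NumPy is assumed, and all sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)                              # target function to approximate

m = 200                                    # number of hidden ReLU units
a = rng.standard_normal(m)                 # random hidden weights (kept fixed)
b = rng.uniform(-np.pi, np.pi, m)          # random hidden biases (kept fixed)
H = np.maximum(a * x[:, None] + b, 0.0)    # hidden-layer activations, shape (400, m)

c, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
print(f"sup-norm error: {np.max(np.abs(H @ c - y)):.4f}")  # shrinks as m grows
```

The error decreasing with the width m is exactly the qualitative behavior that universal approximation theorems make precise.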
Lecture notes: Detailed lecture notes will be provided.
Prerequisites / Notice: This course is aimed at students with a strong mathematical background in general, and in linear algebra, analysis, and probability theory in particular.
227-0427-00L | Signal Analysis, Models, and Machine Learning | W | 6 credits | 4G | H.-A. Loeliger
Does not take place this semester.
This course has been replaced by "Introduction to Estimation and Machine Learning" (autumn semester) and "Advanced Signal Analysis, Modeling, and Machine Learning" (spring semester).
Abstract: Mathematical methods in signal processing and machine learning.
I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity.
II. Learning linear and nonlinear functions and filters: neural networks, kernel methods.
III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, Gaussian models with sparse events.
Objective: The course is an introduction to some basic topics in signal processing and machine learning.
Content: Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis.
Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods.
Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximization, linear Gaussian models with sparse events.
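
Since Part III lists the Kalman filter, a minimal scalar sketch may help fix ideas (illustrative only; the random-walk model and all noise variances are invented, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
q, r = 0.01, 0.25                        # process and measurement noise variances

x = np.cumsum(np.sqrt(q) * rng.standard_normal(T))   # random-walk state
y = x + np.sqrt(r) * rng.standard_normal(T)          # noisy observations

# Scalar Kalman filter: predict, then correct with the Kalman gain.
x_hat, P = 0.0, 1.0
for yt in y:
    P = P + q                            # predict (model: x_t = x_{t-1} + w_t)
    K = P / (P + r)                      # Kalman gain
    x_hat = x_hat + K * (yt - x_hat)     # measurement update
    P = (1 - K) * P                      # updated error variance
print(f"final estimate {x_hat:.3f} vs true state {x[-1]:.3f}")
```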
Lecture notes: Lecture notes.
Prerequisites / Notice:
- local bachelor students: course "Discrete-Time and Statistical Signal Processing" (5th semester)
- others: solid basics in linear algebra and probability theory
227-0447-00L | Image Analysis and Computer Vision | W | 6 credits | 3V + 1U | L. Van Gool, E. Konukoglu, F. Yu
Abstract: Light and perception. Digital image formation. Image enhancement and feature extraction. Unitary transformations. Color and texture. Image segmentation. Motion extraction and tracking. 3D data extraction. Invariant features. Specific object recognition and object class recognition. Deep learning and convolutional neural networks.
Objective: Overview of the most important concepts of image formation, perception and analysis, and computer vision. Students gain hands-on experience through practical computer and programming exercises.
Content: This course aims at offering a self-contained account of computer vision and its underlying concepts, including the recent use of deep learning.
The first part starts with an overview of existing and emerging applications that need computer vision. It shows that the realm of image processing is no longer restricted to the factory floor, but is entering several fields of our daily life. First the interaction of light with matter is considered. The most important hardware components such as cameras and illumination sources are also discussed. The course then turns to image discretization, necessary to process images by computer.
The next part describes necessary pre-processing steps that enhance image quality and/or detect specific features. Linear and non-linear filters are introduced for that purpose. The course then analyzes procedures for extracting additional types of basic information from multiple images, with motion and 3D shape as two important examples. Finally, approaches for the recognition of specific objects as well as object classes are discussed and analyzed. A major part at the end is devoted to deep learning and AI-based approaches to image analysis; its main focus is on object recognition, but other examples of image processing using deep neural nets are also given.
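
To illustrate the linear filtering mentioned above, here is a hedged sketch (the toy image and kernels are invented for the demo; NumPy and SciPy are assumed, in line with the Python-based exercises):

```python
import numpy as np
from scipy.signal import convolve2d

# Toy grayscale "image": a bright square on a dark background, plus noise.
rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
img += 0.1 * rng.standard_normal(img.shape)

# Linear filtering: a 5x5 box blur suppresses noise ...
box = np.ones((5, 5)) / 25.0
smoothed = convolve2d(img, box, mode="same", boundary="symm")

# ... while a Sobel kernel responds strongly at vertical edges.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
edges = convolve2d(smoothed, sobel_x, mode="same", boundary="symm")
print(np.abs(edges).max())   # largest response near the square's vertical borders
```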
Lecture notes: Course material: script, computer demonstrations, exercises, and problem solutions.
Prerequisites / Notice:
Basic concepts of mathematical analysis and linear algebra. The computer exercises are based on Python and Linux.
The course language is English.
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting, and clustering, with applications in image and speech analysis, bioinformatics, and exploratory data analysis. This course is accompanied by practical machine learning projects.
Objective: Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistical knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
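
To give the "Mixture Models" topic above a concrete shape, here is a hedged sketch of expectation maximization for a two-component 1-D Gaussian mixture; the data and initialization are invented, and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(4)
# Data drawn from two Gaussians -- the structure EM must recover.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

w, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibility of component 0 for each point.
    p0 = w * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
    p1 = (1 - w) * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
    g = p0 / (p0 + p1)
    # M-step: re-estimate weight, means, and variances from responsibilities.
    w = g.mean()
    mu = np.array([(g * x).sum() / g.sum(), ((1 - g) * x).sum() / (1 - g).sum()])
    var = np.array([(g * (x - mu[0])**2).sum() / g.sum(),
                    ((1 - g) * (x - mu[1])**2).sum() / (1 - g).sum()])
print(w, mu, var)   # approaches the generating parameters
```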
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature: C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge in analysis, statistics, and numerical methods for CSE, as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
Recommended Subjects
Number | Title | Type | ECTS | Hours | Lecturers
227-0101-00L | Discrete-Time and Statistical Signal Processing | W | 6 credits | 4G | H.-A. Loeliger
Abstract: The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications: discrete-time linear filters, inverse filters and equalization, DFT, discrete-time stochastic processes, elements of detection theory and estimation theory, LMMSE estimation and LMMSE filtering, the LMS algorithm, and the Viterbi algorithm.
Objective: The course introduces some fundamental topics of digital signal processing with a bias towards applications in communications. The two main themes are linearity and probability. In the first part of the course, we deepen our understanding of discrete-time linear filters. In the second part of the course, we review the basics of probability theory and discrete-time stochastic processes. We then discuss some basic concepts of detection theory and estimation theory, as well as some practical methods including LMMSE estimation and LMMSE filtering, the LMS algorithm, and the Viterbi algorithm. A recurrent theme throughout the course is the stable and robust "inversion" of a linear filter.
Content:
1. Discrete-time linear systems and filters:
state-space realizations, z-transform and spectrum,
decimation and interpolation, digital filter design,
stable realizations and robust inversion.

2. The discrete Fourier transform and its use for digital filtering.

3. The statistical perspective:
probability, random variables, discrete-time stochastic processes;
detection and estimation: MAP, ML, Bayesian MMSE, LMMSE;
Wiener filter, LMS adaptive filter, Viterbi algorithm.
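
As a pocket-sized companion to the LMS adaptive filter listed above, a hedged sketch (the unknown channel, step size, and signal lengths are invented; NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5000
x = rng.standard_normal(N)                  # input signal
h = np.array([0.8, -0.4, 0.2])              # unknown 3-tap channel to identify
d = np.convolve(x, h, mode="full")[:N]      # desired signal (channel output)

mu = 0.01                                   # step size
w = np.zeros(3)
for n in range(3, N):
    u = x[n - 2:n + 1][::-1]                # regressor [x_n, x_{n-1}, x_{n-2}]
    e = d[n] - w @ u                        # a-priori error
    w = w + mu * e * u                      # LMS update: stochastic gradient step
print(w)                                    # converges towards h
```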
Lecture notes: Lecture notes.
227-0116-00L | VLSI I: From Architectures to VLSI Circuits and FPGAs | W | 6 credits | 5G | F. K. Gürkaynak, L. Benini
Abstract: This first course in a series that extends over three consecutive terms is concerned with tailoring algorithms and devising high-performance hardware architectures for their implementation as ASICs or on FPGAs. The focus is on front-end design using HDLs and automatic synthesis for producing industrial-quality circuits.
Objective: Understand Very-Large-Scale Integrated circuits (VLSI chips), Application-Specific Integrated Circuits (ASICs), and Field-Programmable Gate Arrays (FPGAs). Know their organization and be able to identify suitable application areas. Become fluent in front-end design from architectural conception to gate-level netlists: how to model digital circuits with SystemVerilog; how to ensure they behave as expected with the aid of simulation, testbenches, and assertions; how to take advantage of automatic synthesis tools to produce industrial-quality VLSI and FPGA circuits. Gain practical experience with the hardware description language SystemVerilog and with industrial Electronic Design Automation (EDA) tools.
Content: This course is concerned with system-level issues of VLSI design and FPGA implementations. Topics include:
- Overview on design methodologies and fabrication depths.
- Levels of abstraction for circuit modeling.
- Organization and configuration of commercial field-programmable components.
- FPGA design flows.
- Dedicated and general purpose architectures compared.
- How to obtain an architecture for a given processing algorithm.
- Meeting throughput, area, and power goals by way of architectural transformations.
- Hardware Description Languages (HDL) and the underlying concepts.
- SystemVerilog
- Register Transfer Level (RTL) synthesis and its limitations.
- Building blocks of digital VLSI circuits.
- Functional verification techniques and their limitations.
- Modular and largely reusable testbenches.
- Assertion-based verification.
- Synchronous versus asynchronous circuits.
- The case for synchronous circuits.
- Periodic events and the Anceau diagram.
- Case studies, ASICs compared to microprocessors, DSPs, and FPGAs.

During the exercises, students learn how to model FPGAs with SystemVerilog. They write testbenches for simulation purposes and synthesize gate-level netlists for FPGAs. Commercial EDA software by leading vendors is being used throughout.
Lecture notes: Textbook and all further documents in English.
Literature: H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303.
Prerequisites / Notice:
Basics of digital circuits.

Examination:
In written form following the course semester (spring term). Problems are given in English; answers will be accepted in either English or German.

Further details:
Link
227-0155-00L | Machine Learning on Microcontrollers (Restricted registration) | W | 6 credits | 3G | M. Magno, L. Benini
Registration in this class requires the permission of the instructors. Class size will be limited to 16.
Preference is given to students in the MSc EEIT.
Abstract: Machine learning (ML) and artificial intelligence are pervading the digital society. Today, even low-power embedded systems incorporate ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet of things”, using low-power microcontrollers/processors (ARM Cortex-M; RISC-V).
Objective: Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras, ...). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers.
Content: The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include:

- Sensors and sensor data acquisition with low power embedded systems

- Machine Learning: Overview of supervised and unsupervised learning and in particular supervised learning (Bayes Decision Theory, Decision Trees, Random Forests, kNN-Methods, Support Vector Machines, Convolutional Networks and Deep Learning)

- Low-power embedded systems and their architecture. Low Power microcontrollers (ARM-Cortex M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip.

- Low power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems.

The laboratory exercises will show how to address concrete design problems, such as motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards.

Presentations from Ph.D. students and the visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects.
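
As one tangible instance of adapting ML to limited resources (see the content list above), the following hedged sketch shows post-training 8-bit quantization of a weight matrix, the kind of memory/precision trade-off met on microcontrollers; the tensor and scaling scheme are invented for illustration, and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(6)
w = rng.standard_normal((64, 64)).astype(np.float32)    # float32 weights

scale = np.abs(w).max() / 127.0                         # symmetric per-tensor scale
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

w_deq = w_q.astype(np.float32) * scale                  # dequantized view
print("memory: %d -> %d bytes" % (w.nbytes, w_q.nbytes))   # 4x smaller
print("max abs error:", np.abs(w - w_deq).max())           # about scale / 2
```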
Lecture notes: Script and exercise sheets. Books will be suggested during the course.
Prerequisites / Notice: C language programming. Basics of digital signal processing. Basics of processor and computer architecture. Some exposure to machine learning concepts is also desirable.
227-0225-00L | Linear System Theory | W | 6 credits | 5G | M. Colombino
Abstract: The class is intended to provide a comprehensive overview of the theory of linear dynamical systems, stability analysis, and their use in control and estimation. The focus is on the mathematics behind the physical properties of these systems and on understanding and constructing proofs of properties of linear control systems.
Objective: Students should be able to apply the fundamental results in linear system theory to analyze and control linear dynamical systems.
Content:
- Proof techniques and practices.
- Linear spaces, normed linear spaces and Hilbert spaces.
- Ordinary differential equations, existence and uniqueness of solutions.
- Continuous and discrete-time, time-varying linear systems. Time domain solutions. Time invariant systems treated as a special case.
- Controllability and observability, duality. Time invariant systems treated as a special case.
- Stability and stabilization, observers, state and output feedback, separation principle.
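
To connect the controllability topic above to a computation, a hedged sketch (the system matrices are an arbitrary example; NumPy is assumed): controllability of x' = Ax + Bu can be checked via the rank of the Kalman controllability matrix [B, AB, ..., A^(n-1)B].

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
# Kalman controllability matrix [B, AB, ..., A^(n-1) B]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print("controllable:", np.linalg.matrix_rank(C) == n)   # True for this pair
```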
Lecture notes: Available on the course Moodle platform.
Prerequisites / Notice: Sufficient mathematical maturity, in particular in linear algebra and analysis.
227-0417-00L | Information Theory I | W | 6 credits | 4G | A. Lapidoth
Abstract: This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equipartition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.
Objective: The fundamentals of information theory, including Shannon's source coding and channel coding theorems.
Content: The entropy rate of a source, typical sequences, the asymptotic equipartition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, feedback capacity.
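
Since Huffman coding appears in both the abstract and the content, a hedged sketch of the greedy construction may be useful (the symbol frequencies are invented; plain Python only):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    # Heap entries: (weight, tiebreak, partial code table for that subtree).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # merge the two least likely subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))
# The most frequent symbol gets the shortest codeword.
```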
Literature: T. M. Cover and J. Thomas, Elements of Information Theory (second edition).
227-0421-00L | Learning in Deep Artificial and Biological Neuronal Networks | W | 4 credits | 3G | B. Grewe
Abstract: Deep learning (DL), a brain-inspired weak form of AI, allows the training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. However, DL is far from being understood, and investigating learning in biological networks might again serve as a compelling inspiration to think differently about state-of-the-art ANN training methods.
Objective: The main goal of this lecture is to provide a comprehensive overview of the learning principles of neuronal networks and to introduce the diverse skill set (e.g. simulating a spiking neuronal network) that is required to understand learning in large, hierarchical neuronal networks. To achieve this, the lectures and exercises will merge ideas, concepts and methods from machine learning and neuroscience. These include training basic ANNs, simulating spiking neuronal networks, and reading and understanding the main ideas presented in today's neuroscience papers.
After this course students will be able to:
- read and understand the main ideas and methods that are presented in today’s neuroscience papers
- explain the basic ideas and concepts of plasticity in the mammalian brain
- implement ANN learning algorithms alternative to error backpropagation in order to train deep neuronal networks.
- use a diverse set of ANN regularization methods to improve learning
- simulate spiking neuronal networks that learn simple (e.g. digit classification) tasks in a supervised manner.
Content: Deep learning, a brain-inspired weak form of AI, allows the training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. The origins of deep hierarchical learning can be traced back to early neuroscience research by Hubel and Wiesel in the 1960s, who first described the neuronal processing of visual inputs in the mammalian neocortex. Similar to their neocortical counterparts, ANNs seem to learn by interpreting and structuring the data provided by the external world. However, while on specific tasks such as playing (video) games deep ANNs outperform humans (Mnih et al., 2015; Silver et al., 2018), ANNs are still not performing on par when it comes to recognizing actions in movie data, and their ability to act as generalizable problem solvers is still far behind what the human brain seems to achieve effortlessly. Moreover, biological neuronal networks can learn far more effectively with fewer training examples, they achieve a much higher performance in recognizing complex patterns in time series data (e.g. recognizing actions in movies), they dynamically adapt to new tasks without losing performance, and they achieve unmatched performance in detecting and integrating out-of-domain data examples (data they have not been trained with). In other words, many of the big challenges and unknowns that have emerged in the field of deep learning over the last years are already mastered exceptionally well by biological neuronal networks in our brain. On the other hand, many facets of typical ANN design and training algorithms seem biologically implausible, such as non-local weight updates, discrete processing of time, and scalar communication between neurons. Recent evidence suggests that learning in biological systems is the result of the complex interplay of diverse error feedback signaling processes acting at multiple scales, ranging from single synapses to entire networks.
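
Because the objectives include simulating spiking neuronal networks, here is a hedged single-neuron sketch: Euler integration of a leaky integrate-and-fire (LIF) model (all constants are textbook-style invented values; plain Python only):

```python
# Leaky integrate-and-fire (LIF) neuron, the simplest spiking model.
dt, T = 1e-3, 0.5                      # 1 ms steps, 0.5 s of simulated time
tau, v_rest, v_th, v_reset = 20e-3, -70e-3, -50e-3, -70e-3   # seconds and volts
R = 1e8                                # membrane resistance (ohm)
I = 250e-12                            # constant input current (ampere)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # Euler step of tau * dv/dt = -(v - v_rest) + R * I
    v += dt / tau * (-(v - v_rest) + R * I)
    if v >= v_th:                      # threshold crossing emits a spike
        spikes.append(step * dt)
        v = v_reset                    # reset after the spike
print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")
```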
Lecture notes: The lecture slides will be provided as a PDF after each lecture.
Prerequisites / Notice: This advanced-level lecture requires some basic background in machine/deep learning. Thus, students are expected to have a basic mathematical foundation, including linear algebra, multivariate calculus, and probability. The course is not meant to be an extended tutorial on how to train deep networks in PyTorch or TensorFlow, although these tools are used.
Participation in the course is subject to the following conditions:

1) The number of participants is limited to 120 students (MSc and PhDs).

2) Students must have taken the exam in Deep Learning (263-3210-00L) or have acquired equivalent knowledge.
227-0445-10L | Mathematical Methods of Signal Processing | W | 6 credits | 4G | H. G. Feichtinger
Abstract: This course offers a mathematically correct but still non-technical description of key objects relevant for signal processing, such as Dirac measures, Dirac combs, various function spaces (like L^2), impulse response, transfer function, Gabor expansion, and so on. The approach is based on properties of "Feichtinger's algebra". MATLAB routines will serve as illustration.
Objective: The aim of the class is to familiarize the participants with the idea of generalized functions (usually called distributions) and to provide a novel approach to a theory of mild distributions, which cannot be found in books so far (the course will contribute to the development of such a book). From the physical point of view, such an object is something which can be measured or captured by (linear) measurements, such as an audio signal. The harmonic analysis perspective is that the Fourier transform and time-frequency transforms are possible over any locally compact group. Engineers talk about discrete or continuous, periodic and non-periodic signals. Hence, a unified approach to these settings and a discussion of their interconnections (e.g. approximately computing the Fourier transform of a function using the DFT) is at the heart of this course.
Content: Mathematical foundations of signal processing:

0. Recalling (on and off) concepts from linear algebra (e.g. linear mappings, etc.) and introducing concepts from basic linear functional analysis (Hilbert spaces, Banach spaces)

1. Translation invariant systems and convolution, elementary functional analytic approach;

2. Pure frequencies and the Fourier transform, convolution theorem

3. The subalgebra L1(Rd) of integrable functions (without Lebesgue integration), Riemann Lebesgue Lemma

4. Plancherel's theorem, L2(Rd) and basic Hilbert space theory, unitary mappings

5. Short-time Fourier transform, the Feichtinger algebra S0(Rd) as algebra of test functions

6. The dual space of mild distributions, relationship to tempered distributions (for those familiar with them); various characterizations

7. Gabor expansions of signals, characterization of smoothness and decay, Gabor frames and Riesz bases;

8. Transition from continuous to discrete variables, from periodic to the non-periodic case;

9. The kernel theorem, as the continuous analogue of matrix representations;

10. Sobolev spaces (describing smoothness) and weighted spaces;

11. Spreading representation and Kohn-Nirenberg representation of operators;

12. Gabor multipliers and approximation of slowly varying systems;

13. As time permits: the idea of generalized stochastic processes

14. Further subjects can be covered on demand, as requested by the audience.


Detailed lecture notes will be provided. This material will become part of an on-going book-project, which has many facets.
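
Point 8 (the continuous-to-discrete transition) can be previewed numerically: a hedged NumPy sketch approximating the continuous Fourier transform of the Gaussian g(t) = exp(-pi t^2), whose transform is exp(-pi f^2), by a Riemann sum evaluated with the FFT (the grid sizes are arbitrary choices):

```python
import numpy as np

N, T = 1024, 32.0
dt = T / N
t = (np.arange(N) - N // 2) * dt           # samples of t on [-T/2, T/2)
g = np.exp(-np.pi * t**2)

freqs = np.fft.fftfreq(N, d=dt)            # the DFT's frequency grid
# Riemann sum of the FT integral; the phase factor accounts for t starting at -T/2.
G = dt * np.fft.fft(g) * np.exp(2j * np.pi * freqs * (T / 2))
print(np.max(np.abs(G.real - np.exp(-np.pi * freqs**2))))   # near machine precision
```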
Lecture notes: This material will be regularly updated and posted on the lecturer's homepage, at Link.

There will also be a dedicated web page at Link (to be installed in the near future).
Prerequisites / Notice: We encourage students who are interested in mathematics, as well as students of physics or mathematics who want to learn about applications of modern methods from functional analysis to their sciences, especially those interested in understanding the connections between the continuous and the discrete world (from continuous functions or images to samples or pixels, and back).

Hans G. Feichtinger (Link)

For any kind of questions concerning this course please contact the lecturer. He will be in Zurich most of the time, even if the course has to be held online. It will not start until October 1, 2020.
227-0477-00L | Acoustics I | W | 6 credits | 4G | K. Heutschi
Abstract: Introduction to the fundamentals of acoustics in the area of sound field calculations, measurement of acoustical events, outdoor sound propagation, and room acoustics of large and small enclosures.
Objective: Introduction to acoustics. Understanding of basic acoustical mechanisms. Survey of the technical literature. Illustration of measurement techniques in the laboratory.
Content: Fundamentals of acoustics, measurement and analysis of acoustical events, anatomy and properties of the ear. Outdoor sound propagation, absorption and transmission of sound, room acoustics of large and small enclosures, architectural acoustics, noise and noise control, calculation of sound fields.
Lecture notes: Yes.
263-5210-00L | Probabilistic Artificial Intelligence (Restricted registration) | W | 8 credits | 3V + 2U + 2A | A. Krause
Abstract: This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics and the Internet.
Objective: How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control, and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for graduate students.
Content: Topics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
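
As a pocket illustration of the Gaussian-process topic above (a hedged sketch: the RBF kernel, noise level, and data are invented for the demo; NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(0, 5, 12)                          # training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(12)      # noisy observations of sin

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

sigma2 = 0.01                                      # observation noise variance
Xs = np.linspace(0, 5, 100)                        # test inputs
K = rbf(X, X) + sigma2 * np.eye(len(X))
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                  # GP posterior mean
var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))  # posterior variance
print(float(np.abs(mean - np.sin(Xs)).mean()))     # small away from data gaps
```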
Prerequisites / Notice: Solid basic knowledge in statistics, algorithms, and programming.
The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite.
401-0647-00L | Introduction to Mathematical Optimization (Restricted registration) | W | 5 credits | 2V + 1U | D. Adjiashvili
Abstract: Introduction to basic techniques and problems in mathematical optimization, and their applications to a variety of problems in engineering.
Objective: The goal of the course is to obtain a good understanding of some of the most fundamental mathematical optimization techniques used to solve linear programs and basic combinatorial optimization problems. The students will also practice applying the learned models to problems in engineering.
Content: Topics covered in this course include:
- Linear programming (simplex method, duality theory, shadow prices, ...).
- Basic combinatorial optimization problems (spanning trees, shortest paths, network flows, ...).
- Modelling with mathematical optimization: applications of mathematical programming in engineering.
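
To show what modelling with mathematical optimization can look like in practice, a hedged sketch with an invented toy LP (SciPy is assumed):

```python
from scipy.optimize import linprog

# maximize 3x + 5y  s.t.  x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x, y >= 0.
# linprog minimizes, so the objective is negated; ">=" rows are flipped to "<=".
c = [-3, -5]
A_ub = [[1, 2], [-3, 1], [1, -1]]
b_ub = [14, 0, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal vertex (6, 4) with objective value 38
```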
Literature: Information about relevant literature will be given in the lecture.
Prerequisites / Notice: This course is meant for students who did not already attend the course "Mathematical Optimization", which is a more advanced lecture covering similar topics. Compared to "Mathematical Optimization", this course has a stronger focus on modeling and applications.
401-3054-14L | Probabilistic Methods in Combinatorics | W | 6 credits | 2V + 1U | B. Sudakov
Abstract: This course provides a gentle introduction to the Probabilistic Method, with an emphasis on methodology. We will try to illustrate the main ideas by showing the application of probabilistic reasoning to various combinatorial problems.
Objective
Content: The topics covered in the class will include (but are not limited to): linearity of expectation, the second moment method, the local lemma, correlation inequalities, martingales, large deviation inequalities, Janson and Talagrand inequalities and pseudo-randomness.
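
As a one-step taste of the first listed topic, linearity of expectation, here is a standard worked example (not taken from the course): the expected number of fixed points of a uniformly random permutation pi of {1, ..., n} is exactly 1.

```latex
% Indicator decomposition; no independence among the X_i is needed.
\[
  X = \sum_{i=1}^{n} X_i, \qquad X_i = \mathbf{1}\{\pi(i) = i\}, \qquad
  \mathbb{E}[X] = \sum_{i=1}^{n} \mathbb{E}[X_i] = \sum_{i=1}^{n} \frac{1}{n} = 1 .
\]
```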
Literature:
- The Probabilistic Method, by N. Alon and J. H. Spencer, 3rd edition, Wiley, 2008.
- Random Graphs, by B. Bollobás, 2nd Edition, Cambridge University Press, 2001.
- Random Graphs, by S. Janson, T. Luczak and A. Rucinski, Wiley, 2000.
- Graph Coloring and the Probabilistic Method, by M. Molloy and B. Reed, Springer, 2002.
401-3621-00L | Fundamentals of Mathematical Statistics | W | 10 credits | 4V + 1U | S. van de Geer
Abstract: The course covers the basics of inferential statistics.
Objective
401-3901-00L | Mathematical Optimization | W | 11 credits | 4V + 2U | R. Zenklusen
Abstract: Mathematical treatment of diverse optimization techniques.
Objective: The goal of this course is to get a thorough understanding of various classical mathematical optimization techniques, with an emphasis on polyhedral approaches. In particular, we want students to develop a good understanding of some important problem classes in the field, of structural mathematical results linked to these problems, and of solution approaches based on this structural understanding.
Content: Key topics include:
- Linear programming and polyhedra;
- Flows and cuts;
- Combinatorial optimization problems and techniques;
- Equivalence between optimization and separation;
- Brief introduction to Integer Programming.
Literature:
- Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018.
- Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. This work has 3 volumes.
- Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.
- Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986.
Prerequisites / Notice: Solid background in linear algebra.
401-4619-67L | Advanced Topics in Computational Statistics | W | 4 credits | 2V | not available
Does not take place this semester.
Abstract: This lecture covers selected advanced topics in computational statistics. This year the focus will be on graphical modelling.
Objective: Students learn the theoretical foundations of the selected methods, as well as practical skills to apply these methods and to interpret their outcomes.
Content: The main focus will be on graphical models in various forms:
Markov properties of undirected graphs; Belief propagation; Hidden Markov Models; Structure estimation and parameter estimation; inference for high-dimensional data; causal graphical models
Prerequisites / Notice: We assume a solid background in mathematics, an introductory lecture in probability and statistics, and at least one more advanced course in statistics.