Catalogue data in Spring Semester 2021

Computer Science Master Information
Master Studies (Programme Regulations 2009)
Focus Courses
Focus Courses in Visual Computing
Focus Elective Courses Visual Computing
Number | Title | Type | ECTS | Hours | Lecturers
252-0526-00L | Statistical Learning Theory | W | 8 credits | 3V + 2U + 2A | J. M. Buhmann, C. Cotrini Jimenez
Abstract: The course covers advanced methods of statistical learning:

- Variational methods and optimization.
- Deterministic annealing.
- Clustering for diverse types of data.
- Model validation by information theory.
Objective: The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning.
Content:
- Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing (see the sketch after this list).

- Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures.

- Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation.

- Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models.
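
As an illustration of how the maximum-entropy and deterministic-annealing ideas above fit together, the sketch below clusters synthetic 2-D data with temperature-controlled soft assignments. This is not course material; the function name, cooling schedule, and all constants are arbitrary choices for the example.

```python
# Minimal sketch of deterministic-annealing clustering: maximum-entropy soft
# assignments at temperature T, alternated with centroid updates, while T is
# slowly lowered. Squared Euclidean distortion and synthetic data are assumed.
import numpy as np

def deterministic_annealing(X, k=3, T0=10.0, T_min=0.01, cooling=0.9, n_iter=30):
    rng = np.random.default_rng(0)
    # Start all centroids near the data mean; they split apart as T decreases.
    mu = X.mean(axis=0) + 0.01 * rng.standard_normal((k, X.shape[1]))
    T = T0
    while T > T_min:
        for _ in range(n_iter):
            # E-step: Gibbs/softmax assignments p(cluster | point) at temperature T.
            d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (n, k) distances
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)           # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # M-step: centroids become assignment-weighted means.
            mu = (p.T @ X) / p.sum(axis=0)[:, None]
        T *= cooling                                               # anneal
    return mu, p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
    centers, assignments = deterministic_annealing(X)
    print(np.round(centers, 2))
```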
Lecture notes: A draft of a script will be provided. Lecture slides will be made available.
Literature: Hastie, Tibshirani, Friedman: The Elements of Statistical Learning. Springer, 2001.

L. Devroye, L. Györfi, and G. Lugosi: A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
Prerequisites / Notice: Knowledge of machine learning ("Introduction to Machine Learning" and/or "Advanced Machine Learning").
Basic knowledge of statistics.
252-0570-00L | Game Programming Laboratory | W | 10 credits | 9P | B. Sumner
In the Master Programme a maximum of 10 credits can be accounted for by Labs on top of the Interfocus Courses. Additional Labs will be listed on the Addendum.
Abstract: The goal of this course is the in-depth understanding of the technology and programming underlying computer games. Students gradually design and develop a computer game in small groups and get acquainted with the art of game programming.
Objective: The goal of this course is to acquaint students with the technology and art of programming modern three-dimensional computer games.
Content: This course addresses modern three-dimensional computer game technology. During the course, small groups of students will design and develop a computer game. The focus will be on technical aspects of game development, such as rendering, cinematography, interaction, physics, animation, and AI. In addition, we will cultivate creative thinking for advanced gameplay and visual effects.

The "laboratory" format involves a practical, hands-on approach with traditional lectures. We will meet once a week to discuss technical issues and to track progress. For development we use MonoGames, which is a collection of libraries and tools that facilitate game development. While development will take place on PCs, we will ultimately deployour games on the Xbox One console.

At the end of the course we will present our results to the public.
Lecture notes: Game Design Workshop: A Playcentric Approach to Creating Innovative Games, by Tracy Fullerton.
Prerequisites / Notice: The number of participants is limited.

Prerequisites include:

- Good programming skills (Java, C++, C#, etc.)

- CG experience: Students should have taken, at a minimum, Visual Computing. Higher-level courses are recommended, such as Introduction to Computer Graphics, Surface Representations and Geometric Modeling, and Physically-based Simulation in Computer Graphics.
252-0579-00L | 3D Vision | W | 5 credits | 3G + 1A | M. Pollefeys, V. Larsson
Abstract: The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization.
Objective: After attending this course, students will:
1. understand the core concepts for recovering 3D shape of objects and scenes from images and video.
2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications.
3. have a good overview of the current state of the art in 3D vision.
4. be able to critically analyze and assess current research in this area.
Content: The goal of this course is to teach the core techniques required for robotic and augmented reality applications: how to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D vision in the form of short lectures, followed by student presentations discussing the current state of the art. The main focus of this course is student projects on 3D vision topics, with an emphasis on robotic vision and virtual and augmented reality applications.
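
To make the "camera motion estimation" and "epipolar geometry" items above concrete, here is a hedged sketch that recovers the relative pose between two views from point correspondences via the essential matrix, using OpenCV. The synthetic scene, intrinsics, and variable names are assumptions made for the example, not material from the course.

```python
# Sketch: estimate the essential matrix from matched points and decompose it
# into a relative rotation and translation direction (epipolar geometry).
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],         # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))     # 3-D points in front of camera 1
R_true = cv2.Rodrigues(np.array([0.0, np.deg2rad(5.0), 0.0]))[0]   # small yaw between views
t_true = np.array([[0.5], [0.0], [0.0]])                   # baseline along x

def project(points, R, t):
    """Project 3-D points into pixel coordinates for a camera with pose (R, t)."""
    cam = (R @ points.T + t).T
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

# RANSAC-based essential-matrix estimation, then decomposition into R, t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K, mask)
print("estimated rotation:\n", np.round(R_est, 3))
print("estimated translation direction:", np.round(t_est.ravel(), 3))
```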
252-5706-00L | Mathematical Foundations of Computer Graphics and Vision | W | 5 credits | 2V + 1U + 1A | T. Aydin, A. Djelouah
Abstract: This course presents the fundamental mathematical tools and concepts used in computer graphics and vision. Each theoretical topic is introduced in the context of practical vision or graphics problems, showcasing its importance in real-world applications.
Objective: The main goal is to equip the students with the key mathematical tools necessary to understand state-of-the-art algorithms in vision and graphics. In addition to the theoretical part, the students will learn how to use these mathematical tools to solve a wide range of practical problems in visual computing. After successfully completing this course, the students will be able to apply these mathematical concepts and tools to practical industrial and academic projects in visual computing.
Content: The theory behind various mathematical concepts and tools will be introduced, and their practical utility will be showcased in diverse applications in computer graphics and vision. The course will cover topics in sampling, reconstruction, approximation, optimization, robust fitting, differentiation, quadrature and spectral methods. Applications will include 3D surface reconstruction, camera pose estimation, image editing, data projection, character animation, structure-aware geometry processing, and rendering.
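
As a small, hedged example of the "robust fitting" topic listed above, the sketch below fits a 2-D line with RANSAC and refits on the detected inliers with total least squares. The data, tolerances, and function name are invented for illustration.

```python
# Robust line fitting: sample minimal 2-point models, score them by inlier
# count, then refit the best model on all inliers via the smallest singular vector.
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, seed=0):
    """Fit a 2-D line n . x + c = 0 (with |n| = 1) robustly against outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        normal = np.array([-(q - p)[1], (q - p)[0]])           # perpendicular to the segment
        if np.linalg.norm(normal) < 1e-12:
            continue
        normal /= np.linalg.norm(normal)
        c = -normal @ p
        inliers = np.abs(points @ normal + c) < inlier_tol      # point-to-line distance test
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Total-least-squares refit on the inliers.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    normal = np.linalg.svd(inlier_pts - centroid)[2][-1]
    return normal, -normal @ centroid, best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 200)
    pts = np.c_[x, 0.5 * x + 0.1 + rng.normal(0, 0.01, 200)]    # noisy points on a line
    pts[:40] = rng.uniform(0, 1, (40, 2))                       # 20% gross outliers
    n, c, inl = ransac_line(pts)
    print("normal:", np.round(n, 3), "offset:", round(float(c), 3), "inliers:", int(inl.sum()))
```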
263-3710-00L | Machine Perception | W | 8 credits | 3V + 2U + 2A | O. Hilliges, S. Tang
Restricted registration. Number of participants limited to 200.
Abstract: Recent developments in neural networks (a.k.a. "deep learning") have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and intelligent UIs. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual tasks.
Objective: Students will learn about fundamental aspects of modern deep learning approaches for perception. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics and HCI. The final project assignment will involve training a complex neural network architecture and applying it on a real-world dataset of human activity.

The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human input into computing systems. In particular, students should be able to develop systems that deal with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others.
Content: We will focus on how to set up the problem of machine perception, learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models.

The course covers the following main areas:
I) Foundations of deep learning.
II) Probabilistic deep learning for generative modelling of data (latent variable models, generative adversarial networks and auto-regressive models).
III) Deep learning in computer vision, human-computer interaction and robotics.

Specific topics include: 
I) Deep learning basics:
a) Neural Networks and training (i.e., backpropagation)
b) Feedforward Networks
c) Timeseries modelling (RNN, GRU, LSTM)
d) Convolutional Neural Networks for classification (see the sketch after this list)
II) Probabilistic Deep Learning:
a) Latent variable models (VAEs)
b) Generative adversarial networks (GANs)
c) Autoregressive models (PixelCNN, PixelRNN, TCNs)
III) Deep Learning techniques for machine perception:
a) Fully Convolutional architectures for dense per-pixel tasks (e.g., instance segmentation)
b) Pose estimation and other tasks involving human activity
c) Deep reinforcement learning
IV) Case studies from research in computer vision, HCI, robotics and signal processing
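
A minimal, hedged sketch of items I(a) and I(d) above: a tiny convolutional classifier and a single backpropagation step in PyTorch. The architecture, tensor shapes, and hyperparameters are arbitrary choices for the example and not the course's exercise code.

```python
# One forward/backward/update step of a small CNN classifier on a random batch.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)                # stand-in for a real image batch
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()                                   # backpropagation
optimizer.step()
print("loss:", float(loss))
```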
Literature: Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (MIT Press).
Prerequisites / Notice: ***
In accordance with the ETH Covid-19 master plan, the lecture will be fully virtual. Details on the course website.
***

This is an advanced graduate-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning.

Please take note of the following conditions:
1) The number of participants is limited to 200 students (MSc and PhDs).
2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
3) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image. We will provide introductions to PyTorch and other libraries that are needed but will not provide introductions to basic programming or Python.

The following courses are strongly recommended as prerequisites:
* "Visual Computing" or "Computer Vision"

The course will be assessed by a final written examination in English. No course materials or electronic devices can be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials and the exercises.
263-5701-00L | Visualization | W | 5 credits | 2V + 1U + 1A | M. Gross, T. Günther
Abstract: This lecture provides an introduction to the visualization of scientific and abstract data.
Objective: This lecture provides an introduction to the visualization of scientific and abstract data. It introduces the two main branches of visualization: scientific visualization and information visualization. The focus is on scientific data, demonstrating the usefulness and necessity of computer graphics in fields other than the entertainment industry. The exercises contain theoretical tasks on the mathematical foundations, such as numerical integration, differential vector calculus, and flow field analysis, while programming exercises familiarize students with the Visualization Toolkit (VTK). In a course project, the learned methods are applied to visualize a real scientific data set. The provided data sets contain measurements of volcanic eruptions, galaxy simulations, fluid simulations, meteorological cloud simulations, and asteroid impact simulations.
Content: This lecture opens with basics of human cognition and scalar and vector calculus. Afterwards, these are applied to the visualization of air and fluid flows, including geometry-based, topology-based, and feature-based methods. Further, the direct and indirect visualization of volume data is discussed. The lecture ends with the visualization of abstract, non-spatial, and multi-dimensional data by means of information visualization.
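
As a small illustration of the numerical integration and geometry-based flow visualization mentioned above, the sketch below traces a streamline through a 2-D vector field with classic fourth-order Runge-Kutta. The analytic field and step sizes are assumptions chosen for the example; the actual exercises use VTK and the provided data sets.

```python
# Streamline tracing: integrate dx/dt = v(x) through a steady 2-D vector field.
import numpy as np

def velocity(p):
    """Illustrative steady vector field: rotation around the origin."""
    x, y = p
    return np.array([-y, x])

def trace_streamline(seed, step=0.05, n_steps=200):
    """Classic RK4 integration starting from `seed`."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * step * k1)
        k3 = velocity(p + 0.5 * step * k2)
        k4 = velocity(p + step * k3)
        pts.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

if __name__ == "__main__":
    line = trace_streamline(seed=(1.0, 0.0))
    print("first point:", line[0], "last point:", np.round(line[-1], 3))
```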
Prerequisites / Notice: Fundamentals of differential calculus. Knowledge of numerical mathematics, computer algebra systems, and ordinary and partial differential equations is an asset, but not required.
263-5806-00L | Computational Models of Motion | W | 8 credits | 2V + 2U + 3A | S. Coros, M. Bächer, B. Thomaszewski
Abstract: This course covers fundamentals of physics-based modelling and numerical optimization from the perspective of character animation and robotics applications. The methods discussed in class derive their theoretical underpinnings from applied mathematics, control theory, and computational mechanics, and they will be richly illustrated using examples such as locomotion controllers and crowd simulation.
Objective: Students will learn how to represent, model and algorithmically control the behavior of animated characters and real-life robots. The lectures are accompanied by programming assignments (written in C++) and a capstone project.
Content: Optimal control and trajectory optimization; multibody systems; kinematics; forward and inverse dynamics; constrained and unconstrained numerical optimization; mass-spring models for crowd simulation; FEM; compliant systems; sim-to-real; robotic manipulation of elastically deforming objects.
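
To illustrate the mass-spring and forward-dynamics topics listed above, here is a minimal sketch of a pinned particle chain integrated with symplectic Euler. The course assignments are written in C++; this Python version, with made-up material constants, only shows the structure of one simulation step.

```python
# Forward dynamics of a hanging mass-spring chain with symplectic Euler.
import numpy as np

n = 5                                       # particles in the chain
rest_len, stiffness, mass, dt = 0.2, 200.0, 0.1, 1e-3
gravity = np.array([0.0, -9.81])

pos = np.stack([np.zeros(n), -rest_len * np.arange(n)], axis=1)   # initial positions
vel = np.zeros_like(pos)

def spring_forces(pos):
    """Hookean forces between consecutive particles."""
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        force = stiffness * (length - rest_len) * d / length      # force on particle i
        f[i] += force
        f[i + 1] -= force
    return f

for _ in range(1000):                        # simulate one second
    acc = spring_forces(pos) / mass + gravity
    vel += dt * acc                          # symplectic Euler: velocity first...
    pos += dt * vel                          # ...then position
    vel[0] = 0.0                             # pin the first particle in place
    pos[0] = np.array([0.0, 0.0])

print("tip position after 1 s:", np.round(pos[-1], 3))
```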
Prerequisites / Notice: Experience with C++ programming, numerical linear algebra, and multivariate calculus. Some background in physics-based modeling, kinematics, and dynamics is helpful but not necessary.
227-0560-00L | Deep Learning for Autonomous Driving | W | 6 credits | 3V + 2P | D. Dai, A. Liniger
Restricted registration. Registration in this class requires the permission of the instructors. Class size will be limited to 80 students. Please send an email to Dengxin Dai <Link> about your courses/projects that are related to machine learning, computer vision, and robotics.
Abstract: Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with an emphasis on the practical use of deep learning throughout.
Objective: Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train, and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization, and control.

After attending this course, students will:
1) understand the core technologies of building a self-driving car;
2) have a good overview of the current state of the art in self-driving cars;
3) be able to critically analyze and evaluate current research in this area;
4) be able to implement basic systems for multiple autonomous driving tasks.
Content: We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.

The course covers the following main areas:

I) Foundation
a) Fundamentals of a self-driving car
b) Fundamentals of deep learning


II) Perception
a) Semantic segmentation and lane detection
b) Depth estimation with images and sparse LiDAR data
c) 3D object detection with images and LiDAR data
d) Object tracking and lane detection

III) Localization
a) GPS-based and vision-based localization
b) Visual odometry and LiDAR odometry

IV) Path Planning and Control
a) Path planning for autonomous driving
b) Motion planning and vehicle control
c) Imitation learning and reinforcement learning for self-driving cars

The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:
- Sensor calibration and synchronization to obtain multimodal driving data (see the sketch after this list);
- Semantic segmentation and depth estimation with deep neural networks;
- 3D object detection and tracking in LiDAR point clouds.
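
As a hedged sketch of the sensor-calibration theme in the exercise list above, the code below projects a LiDAR point cloud into a camera image given intrinsic and extrinsic calibration. The calibration values and the synthetic point cloud are placeholders, not parameters of any dataset used in the course.

```python
# Project LiDAR points into the image plane: rigid transform into the camera
# frame, perspective projection with the intrinsics, then visibility checks.
import numpy as np

K = np.array([[700.0, 0.0, 620.0],          # assumed camera intrinsics
              [0.0, 700.0, 190.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                # LiDAR-to-camera rotation (assumed identity)
t = np.array([0.0, -0.08, -0.27])            # LiDAR-to-camera translation (made up)

def lidar_to_image(points, image_size=(1240, 376)):
    """Return pixel coordinates and depths of the LiDAR points visible in the image."""
    cam = points @ R.T + t                   # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]               # keep points in front of the camera
    pix = cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]           # perspective division
    w, h = image_size
    visible = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    return pix[visible], cam[visible, 2]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-10, -2, 2], [10, 2, 40], size=(1000, 3))   # fake sweep, z forward
    uv, depth = lidar_to_image(cloud)
    print(f"{len(uv)} of {len(cloud)} points project into the image")
```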
Lecture notes: The lecture slides will be provided as a PDF.
Prerequisites / Notice: This is an advanced graduate-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image.
227-1034-00L | Computational Vision (University of Zurich) | W | 6 credits | 2V + 1U | D. Kiper
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH.
UZH Module Code: INI402
Mind the enrolment deadlines at UZH: Link
Abstract: This course focuses on neural computations that underlie visual perception. We study how visual signals are processed in the retina, LGN, and visual cortex. We study the morphology and functional architecture of cortical circuits responsible for pattern, motion, color, and three-dimensional vision.
Objective: This course considers the operation of circuits in the process of neural computations. The evolution of neural systems will be considered to demonstrate how neural structures and mechanisms are optimised for energy capture, transduction, transmission and representation of information. Canonical brain circuits will be described as models for the analysis of sensory information. The concept of receptive fields will be introduced and their role in coding spatial and temporal information will be considered. The constraints of the bandwidth of neural channels and the mechanisms of normalization by neural circuits will be discussed.
The visual system will form the basis of case studies in the computation of form, depth, and motion. The role of multiple channels and collective computations for object recognition will be considered. Coordinate transformations of space and time by cortical and subcortical mechanisms will be analysed. The means by which sensory and motor systems are integrated to allow for adaptive behaviour will be considered.
Content: This course considers the operation of circuits in the process of neural computations. The evolution of neural systems will be considered to demonstrate how neural structures and mechanisms are optimised for energy capture, transduction, transmission and representation of information. Canonical brain circuits will be described as models for the analysis of sensory information. The concept of receptive fields will be introduced and their role in coding spatial and temporal information will be considered. The constraints of the bandwidth of neural channels and the mechanisms of normalization by neural circuits will be discussed.
The visual system will form the basis of case studies in the computation of form, depth, and motion. The role of multiple channels and collective computations for object recognition will be considered. Coordinate transformations of space and time by cortical and subcortical mechanisms will be analysed. The means by which sensory and motor systems are integrated to allow for adaptive behaviour will be considered.
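
As an illustrative aside (not part of the UZH course material), the receptive-field concept discussed above is commonly modelled with a Gabor function, a Gaussian-windowed sinusoidal grating that behaves like an orientation-tuned linear filter in primary visual cortex. The sketch below builds such a receptive field and compares its rectified response to a preferred and an orthogonal stimulus; all parameter values are arbitrary.

```python
# Gabor model of an orientation-tuned receptive field and its linear response.
import numpy as np

def gabor_receptive_field(size=31, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
    """Return a size x size Gabor patch (Gaussian envelope times a sinusoidal carrier)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)        # coordinate along orientation theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_theta / wavelength + phase)
    return envelope * carrier

def neuron_response(stimulus, rf):
    """Linear receptive-field response followed by half-wave rectification."""
    return max(float((stimulus * rf).sum()), 0.0)

if __name__ == "__main__":
    rf = gabor_receptive_field(theta=np.pi / 4)
    preferred = gabor_receptive_field(theta=np.pi / 4)      # grating at the preferred orientation
    orthogonal = gabor_receptive_field(theta=3 * np.pi / 4) # grating at the orthogonal orientation
    print("response to preferred orientation:", round(neuron_response(preferred, rf), 2))
    print("response to orthogonal orientation:", round(neuron_response(orthogonal, rf), 2))
```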
Literature: Books (recommended references, not required):
1. An Introduction to Natural Computation, D. Ballard (Bradford Books, MIT Press) 1997.
2. The Handbook of Brain Theory and Neural Networks, M. Arbib (editor) (MIT Press), 1995.