Search result: Catalogue data in Spring Semester 2022

Computer Science Master Information
Master Studies (Programme Regulations 2020)
Majors
Major in Visual and Interactive Computing
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0312-00L  Mobile Health and Activity Monitoring
Previously Ubiquitous Computing, now with a focused and technical scope.
W  6 credits  2V + 3A  C. Holz
Abstract: Health and activity monitoring has become a key purpose of mobile & wearable devices, e.g., phones, watches, and rings. We will cover the phenomena they capture, i.e., user behavior and actions, basic human physiology, as well as the sensors, signals, and methods for processing and analysis.

For the exercise, students will receive a wristband to stream and analyze activity and health signals.
Objective: The course comprises a series of introductions to the cross-disciplinary area of mobile health, with technical follow-up lectures.

* Introduction to the basic (digital) health ecosystem
* Introduction to basic cardiovascular function and processes
* Overview of sensors and signal modalities (PPG, ECG, camera-based/remote PPG, BCG, PTT)
* Introduction to affective computing, psychological states, basic personalities, emotions
* Overview of motion sensors, signals, sampling, filters
* Overview of basic signal processing specific to the metrics related to mobile health
* Introduction to user studies: controlled in-lab vs. outside the lab
* Introduction to sleep physiology and neurological conditions
* Overview of device platforms: components of wearables, design, communication


The course will combine high-level concepts with low-level technical methods needed to sense, detect, and understand them.

High-level:
– sensing modalities for interactive systems
– "activities" and "events" (exercises and other mechanical activities such as movements and resulting vibrations)
– health monitoring (basic cardiovascular physiology)
– affective computing (emotions, mood, personality)

Lower-level:
– sampling and filtering, time and frequency domains
– cross-modal sensor systems, signal synchronization and correlation
– event detection, classification, prediction using basic signal processing as well as learning-based methods
– sensor types: optical, mechanical/acoustic, electromagnetic
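The sampling and frequency-domain analysis listed above can be sketched in a few lines. The signal, sampling rate, and step frequency below are illustrative assumptions, not course material:

```python
# Minimal sketch: estimating step frequency from a synthetic accelerometer
# signal via FFT. All parameters here are illustrative assumptions.
import numpy as np

fs = 50.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of data
# Synthetic walking signal: ~2 Hz step component plus noise
rng = np.random.default_rng(0)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)

# Frequency-domain analysis: locate the dominant frequency
spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
dominant_hz = freqs[np.argmax(spectrum)]

print(f"dominant frequency: {dominant_hz:.1f} Hz")  # ~2 Hz, i.e. ~120 steps/min
```

In practice a band-pass filter would precede this step to isolate the gait band, but the FFT peak alone already illustrates the time-to-frequency workflow.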

------------------------------------------------------------

The course was previously called "Ubiquitous Computing", but has been redesigned to focus solely on the technical aspects of Ubicomp, particularly those related to mobile health, activity monitoring, data analysis, interpretation and insights.
Content: Health and activity monitoring has become a key purpose of mobile and wearable devices, including phones, (smart) watches, (smart) rings, (smart) belts, and other trackers (e.g., shoe clips, pendants). In this course, we will cover the fundamental aspects that these devices observe, i.e., user behavior, actions, and physiological dynamics of the human body, as well as the sensors, signals, and methods to capture, process, and analyze them. We will then cover methods for pattern extraction and classification on such data. The course will therefore touch on aspects of human activities, cardiovascular and pulmonary physiology, affective computing (recognizing, interpreting, and processing emotions), corresponding lower-level sensing systems (e.g., inertial sensing, optical sensing, photoplethysmography, electrodermal activity, electrocardiograms) and higher-level computer vision-based sensing (facial expressions, motions, gestures), as well as processing methods for these types of data.

The course will be accompanied by a group exercise project in which students apply the concepts and methods taught in class. Students will receive a wearable wristband device that streams IMU data to a mobile phone (code will be provided for receiving, storing, and visualizing the data on the phone). Throughout the course and exercises, we will collect data on various human activities from the band, then annotate, analyze, classify, and interpret them. For this, existing and novel processing methods will be developed (plenty of related work exists), based on the collected data as well as existing datasets. We will also combine the band with signals obtained from the mobile phone to holistically capture and analyze health and activity data.
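As a rough illustration of the kind of processing such an exercise involves, a hypothetical sliding-window feature extractor for an IMU stream (window and hop sizes are arbitrary assumptions):

```python
# Hypothetical sketch of sliding-window feature extraction on an IMU stream,
# a typical first step before classifying activities.
import numpy as np

def window_features(signal, win, hop):
    """Split a 1-D signal into overlapping windows and compute
    simple per-window features (mean, std, peak-to-peak)."""
    starts = range(0, len(signal) - win + 1, hop)
    feats = []
    for s in starts:
        w = signal[s:s + win]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Toy stream: 100 samples of rest, then 100 samples of "activity"
stream = np.concatenate([np.zeros(100), np.ones(100)])
X = window_features(stream, win=50, hop=25)
print(X.shape)  # (7, 3): 7 windows, 3 features each
```

The resulting feature matrix would then feed a classifier (hand-crafted rules, classical ML, or a neural network), which is what the exercise builds toward.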

Full details: Link

Note: All lectures will be streamed live and recorded for later replay. Hybrid participation will be possible even if ETH returns to fully in-person teaching.
Lecture notes: Copies of slides will be made available.
Lectures will be streamed live as well as recorded and made available online.

More information on the course site: Link

Literature: Will be provided in the lecture.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (assessed); Media and Digital Technologies (assessed); Problem-solving (assessed)
Social Competencies: Cooperation and Teamwork (assessed); Sensitivity to Diversity (assessed)
Personal Competencies: Adaptability and Flexibility (assessed); Creative Thinking (assessed); Critical Thinking (assessed)
252-0579-00L  3D Vision  W  5 credits  3G + 1A  M. Pollefeys, D. B. Baráth
Abstract: The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization.
Objective: After attending this course, students will:
1. understand the core concepts for recovering 3D shape of objects and scenes from images and video.
2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications.
3. have a good overview of the current state of the art in 3D vision.
4. be able to critically analyze and assess current research in this area.
Content: The goal of this course is to teach the core techniques required for robotic and augmented reality applications: how to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D vision in the form of short lectures, followed by student presentations discussing the current state of the art. The main focus of this course is student projects on 3D vision topics, with an emphasis on robotic vision and virtual and augmented reality applications.
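As a minimal illustration of the camera models this material builds on, a pinhole projection sketch; the intrinsics and pose below are made-up values, not course code:

```python
# Illustrative sketch: projecting a 3D point with a pinhole camera model,
# the basic building block behind calibration and multi-view geometry.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # intrinsics: focal lengths, principal point
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                         # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 0.0])         # camera translation

X = np.array([1.0, 0.5, 5.0])         # 3D point in world coordinates
x_h = K @ (R @ X + t)                 # homogeneous image coordinates
u, v = x_h[:2] / x_h[2]               # perspective division

print(u, v)  # 420.0 290.0
```

Estimating R, t, and K from observed (u, v) correspondences is precisely the inverse problem that SLAM, visual odometry, and structure-from-motion solve.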
252-5706-00L  Mathematical Foundations of Computer Graphics and Vision  W  5 credits  2V + 1U + 1A  T. Aydin, A. Djelouah
Abstract: This course presents the fundamental mathematical tools and concepts used in computer graphics and vision. Each theoretical topic is introduced in the context of practical vision or graphics problems, showcasing its importance in real-world applications.
Objective: The main goal is to equip the students with the key mathematical tools necessary to understand state-of-the-art algorithms in vision and graphics. In addition to the theoretical part, the students will learn how to use these mathematical tools to solve a wide range of practical problems in visual computing. After successfully completing this course, the students will be able to apply these mathematical concepts and tools to practical industrial and academic projects in visual computing.
Content: The theory behind various mathematical concepts and tools will be introduced, and their practical utility will be showcased in diverse applications in computer graphics and vision. The course will cover topics in sampling, reconstruction, approximation, optimization, robust fitting, differentiation, quadrature and spectral methods. Applications will include 3D surface reconstruction, camera pose estimation, image editing, data projection, character animation, structure-aware geometry processing, and rendering.
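For instance, the fitting and optimization tools listed above include ordinary least squares, which can be sketched as follows (the line model and data are illustrative assumptions):

```python
# Minimal sketch: least-squares fitting of a line, one of the basic
# optimization tools behind fitting problems in vision and graphics.
import numpy as np

# Samples of y = 2x + 1 (noise-free for clarity)
x = np.linspace(0, 1, 20)
y = 2 * x + 1

# Solve min ||A p - y||^2 for line parameters p = (slope, intercept)
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(slope, 3), round(intercept, 3))  # 2.0 1.0
```

Robust fitting, also listed in the topics, replaces this quadratic loss with one that tolerates outliers (e.g., RANSAC or M-estimators), but the linear-system formulation is the common starting point.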
263-5052-00L  Interactive Machine Learning: Visualization & Explainability
Restricted registration; number of participants limited to 190.
W  5 credits  2V + 1U + 1A  M. El-Assady
Abstract: Visual Analytics supports the design of human-in-the-loop interfaces that enable human-machine collaboration. In this course, we will go through the fundamentals of designing interactive visualizations and later apply them to explain and interact with machine learning models.
Objective: The goal of the course is to introduce techniques for interactive information visualization and to apply them to understanding, diagnosing, and refining machine learning models.
Content: Interactive, mixed-initiative machine learning promises to combine the efficiency of automation with the effectiveness of humans for a collaborative decision-making and problem-solving process. This can be facilitated through co-adaptive visual interfaces.

This course will first introduce the foundations of information visualization design based on data characteristics, e.g., high-dimensional, geo-spatial, relational, temporal, and textual data.

Second, we will discuss interaction techniques and explanation strategies to enable explainable machine learning with the tasks of understanding, diagnosing, and refining machine learning models.

Tentative list of topics:
1. Visualization and Perception
2. Interaction and Explanation
3. Systems Overview
Lecture notes: Course material will be provided in the form of slides.
Literature: Will be provided during the course.
Prerequisites / Notice: Basic understanding of machine learning as taught at the Bachelor's level.
263-5701-00L  Scientific Visualization  W  5 credits  2V + 1U + 1A  M. Gross, T. Günther
Abstract: This lecture provides an introduction to the visualization of scientific and abstract data.
Objective: This lecture provides an introduction to the visualization of scientific and abstract data. The lecture introduces the two main branches of visualization: scientific visualization and information visualization. The focus is on scientific data, demonstrating the usefulness and necessity of computer graphics in fields other than the entertainment industry. The exercises contain theoretical tasks on the mathematical foundations, such as numerical integration, differential vector calculus, and flow field analysis, while programming exercises familiarize students with the Visualization Toolkit (VTK). In a course project, the learned methods are applied to visualize one real scientific data set. The provided data sets contain measurements of volcanic eruptions, galaxy simulations, fluid simulations, meteorological cloud simulations, and asteroid impact simulations.
Content: This lecture opens with the basics of human cognition and with scalar and vector calculus. This is then applied to the visualization of air and fluid flows, including geometry-based, topology-based, and feature-based methods. Further, the direct and indirect visualization of volume data is discussed. The lecture ends with the visualization of abstract, non-spatial, and multi-dimensional data by means of information visualization.
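The numerical integration underlying geometry-based flow visualization can be sketched as follows; the vector field and step size below are illustrative assumptions:

```python
# Minimal sketch: tracing a streamline through a rotational 2-D vector field
# with fourth-order Runge-Kutta integration, as used in flow visualization.
import numpy as np

def v(p):
    """Rigid rotation field: v(x, y) = (-y, x)."""
    return np.array([-p[1], p[0]])

def rk4_step(p, h):
    """One RK4 step of size h along the field."""
    k1 = v(p)
    k2 = v(p + 0.5 * h * k1)
    k3 = v(p + 0.5 * h * k2)
    k4 = v(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.0])
for _ in range(100):
    p = rk4_step(p, h=0.05)  # integrate 5 radians around the origin

# A rigid rotation preserves the distance to the origin, so a good
# integrator should keep the streamline on the unit circle.
print(round(float(np.linalg.norm(p)), 4))  # ~1.0
```

Chaining such steps yields the polylines rendered as streamlines; lower-order schemes (e.g., Euler) would visibly spiral off the circle, which is why RK4 is the common default.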
Prerequisites / Notice: Fundamentals of differential calculus. Knowledge of numerical mathematics, computer algebra systems, as well as ordinary and partial differential equations is an asset, but not required.
263-5906-00L  Virtual Humans  W  5 credits  2V + 1U + 1A  S. Tang
Abstract: Human digitalization is required in many applications, such as AR/VR, robotics, games, and social networking. The course covers core techniques and fundamental tools necessary for perceiving and modeling humans. The main topics include human body modeling, human appearance and motion modeling, and human-scene interaction capture and modeling.
Objective: After attending this course, students will be able to implement basic systems to estimate human pose, shape, and motion from videos; furthermore, students will be able to create basic human avatars from various visual inputs.
Content: We will focus on all aspects of 3D human capture, modelling, and synthesis, including:
⁃ basic concepts of 3D representations;
⁃ human body models;
⁃ human motion capture;
⁃ non-rigid surface tracking and reconstruction;
⁃ neural rendering.
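As a small illustration of how parametric body models pose a surface, a linear blend skinning sketch; the bones, weights, and values are assumptions for illustration, not the course's actual model:

```python
# Illustrative sketch of linear blend skinning (LBS), the classic
# formulation underlying parametric human body models.
import numpy as np

def skin(vertex, weights, transforms):
    """Blend rigid bone transforms (4x4 matrices) by per-vertex weights
    and apply the blended transform to a single vertex."""
    v_h = np.append(vertex, 1.0)                       # homogeneous coords
    blended = sum(w * T for w, T in zip(weights, transforms))
    return (blended @ v_h)[:3]

identity = np.eye(4)                 # a bone that stays in place
shift = np.eye(4)
shift[:3, 3] = [1.0, 0.0, 0.0]       # a bone translated by +1 along x

# A vertex influenced equally by both bones lands halfway between them
result = skin(np.array([0.0, 0.0, 0.0]), [0.5, 0.5], [identity, shift])
print(result)
```

Modern body models refine this idea with learned pose-dependent corrections, but the weighted blend of bone transforms is the common core.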
Lecture notes: Slides
Literature: Computer Vision: Algorithms and Applications by Richard Szeliski.
Deep Learning by Goodfellow, Bengio, and Courville.
Prerequisites / Notice: This is an advanced lecture on modeling and synthesizing 3D humans. We assume basic knowledge of computer vision, deep learning, and computer graphics, as well as a solid understanding of linear algebra, probability, and calculus.
The following courses are highly recommended as prerequisites: Visual Computing, Computer Vision, and Deep Learning.
227-0560-00L  Deep Learning for Autonomous Driving
Restricted registration; number of participants limited to 80.
W  6 credits  3V + 2P  D. Dai, A. Liniger
Abstract: Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with a particular focus on the practical use of deep learning.
Objective: Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train, and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization, and control.

After attending this course, students will:
1) understand the core technologies of building a self-driving car;
2) have a good overview of the current state of the art in self-driving cars;
3) be able to critically analyze and evaluate current research in this area;
4) be able to implement basic systems for multiple autonomous driving tasks.
Content: We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.

The course covers the following main areas:

I) Foundation
a) Fundamentals of a self-driving car
b) Fundamentals of deep learning


II) Perception
a) Semantic segmentation and lane detection
b) Depth estimation with images and sparse LiDAR data
c) 3D object detection with images and LiDAR data
d) Object tracking and lane detection

III) Localization
a) GPS-based and Vision-based Localization
b) Visual Odometry and LiDAR Odometry

IV) Path Planning and Control
a) Path planning for autonomous driving
b) Motion planning and vehicle control
c) Imitation learning and reinforcement learning for self-driving cars

The exercise projects will involve training complex neural networks and applying them to real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:
- Sensor calibration and synchronization to obtain multimodal driving data;
- Semantic segmentation and depth estimation with deep neural networks;
- 3D object detection and tracking in LiDAR point clouds.
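As one small example of the evaluation machinery behind detection and tracking, an intersection-over-union sketch for axis-aligned 2-D boxes (the standard formulation, not specific course code; 3D variants extend the same idea):

```python
# Sketch: intersection-over-union (IoU) for axis-aligned 2-D boxes,
# the standard matching criterion in object detection and tracking.
def iou(a, b):
    """Boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

Detections are typically matched to ground truth when IoU exceeds a threshold (often 0.5 or 0.7), which is how the per-task metrics in such exercises are computed.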
Lecture notes: The lecture slides will be provided as a PDF.
Prerequisites / Notice: This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.