Search result: Catalogue data in Spring Semester 2019

Computer Science Master
Focus Courses
Focus Courses in Computational Science
Focus Elective Courses Computational Science
Number | Title | Type | ECTS | Hours | Lecturers
252-0526-00L | Statistical Learning Theory | W | 7 credits | 3V + 2U + 1A | J. M. Buhmann
Abstract: The course covers advanced methods of statistical learning:
Statistical learning theory; variational methods and optimization, e.g., maximum entropy techniques, information bottleneck, deterministic and simulated annealing; clustering for vectorial, histogram and relational data; model selection; graphical models.
Learning objective: The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the course "Introduction to Machine Learning", are expanded; in particular, the theory of statistical learning is discussed.
Content:
# Theory of estimators: How can we measure the quality of a statistical estimator? We already discussed bias and variance of estimators very briefly, but the interesting part is yet to come.
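To make the estimator-quality question concrete, here is a minimal simulation sketch (not course material; sample size, repetition count and the true variance are arbitrary illustrative choices) that measures the bias of two common variance estimators:

```python
# Illustrative sketch: measure estimator quality (here, bias) by simulation.
# We compare the maximum-likelihood variance estimator (divide by n) with the
# sample variance (divide by n-1); all constants are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
true_var, n, repetitions = 4.0, 10, 100_000

ml_estimates, sample_estimates = [], []
for _ in range(repetitions):
    x = rng.normal(0.0, np.sqrt(true_var), n)
    ml_estimates.append(x.var(ddof=0))        # divides by n (biased)
    sample_estimates.append(x.var(ddof=1))    # divides by n-1 (unbiased)

print("bias of 1/n estimator:    ", np.mean(ml_estimates) - true_var)      # ~ -true_var/n
print("bias of 1/(n-1) estimator:", np.mean(sample_estimates) - true_var)  # ~ 0
```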

# Variational methods and optimization: We consider optimization approaches for problems where the optimizer is a probability distribution. Concepts we will discuss in this context include:

* Maximum Entropy (a short sketch follows this list)
* Information Bottleneck
* Deterministic Annealing
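As a small illustration of the maximum-entropy idea (a sketch on an assumed toy problem, not course code): among all distributions over a finite alphabet with a prescribed mean, the entropy-maximizing one has a Gibbs/exponential form, and its multiplier can be found numerically.

```python
# Sketch of the maximum-entropy principle on a toy problem: among all
# distributions over the faces of a die with E[X] = 4.5, the entropy maximizer
# has the Gibbs form p(x) ∝ exp(lam * x); we solve for lam numerically.
# The alphabet and the constraint value are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

values = np.arange(1, 7)          # faces of a die
target_mean = 4.5                 # assumed moment constraint

def gibbs(lam):
    w = np.exp(lam * values)
    return w / w.sum()

def constraint_gap(lam):
    return gibbs(lam) @ values - target_mean

lam_star = brentq(constraint_gap, -10.0, 10.0)   # root where the constraint holds
p = gibbs(lam_star)
print("multiplier:", round(lam_star, 4))
print("max-entropy distribution:", np.round(p, 3))
print("achieved mean:", round(float(p @ values), 4))
```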

# Clustering: The problem of sorting data into groups without using training samples. This requires a definition of "similarity" between data points and adequate optimization procedures.
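A minimal sketch of one such optimization procedure, combining the annealing idea above with clustering (soft k-means with a temperature schedule; the data, the number of clusters, and the schedule are invented for illustration):

```python
# Sketch of deterministic-annealing clustering (soft k-means): assignments are
# Gibbs distributions over centroids at temperature T, and lowering T gradually
# hardens them. Data, K, and the schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
K = 2
centroids = X[rng.choice(len(X), K, replace=False)]

for T in [5.0, 2.0, 1.0, 0.5, 0.1]:                   # annealing schedule
    for _ in range(20):                                # alternating updates at fixed T
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        logits = -d2 / T
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)        # soft assignments p(k | x)
        centroids = (resp.T @ X) / resp.sum(axis=0)[:, None]

print("final centroids:\n", np.round(centroids, 2))
```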

# Model selection: We have already discussed how to fit a model to a data set in ML I, which usually involves adjusting the parameters of a given type of model. Model selection refers to the question of how complex the chosen model should be. As we already know, simple and complex models each have their own advantages and drawbacks.
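One standard (though not the only) model-selection criterion is held-out performance. The following sketch, with invented synthetic data, compares polynomial models of increasing degree by cross-validation:

```python
# Sketch of model selection via cross-validation: compare polynomial models of
# increasing complexity on synthetic data and pick the degree with the best
# held-out score. The data and the degree range are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60)[:, None]
y = np.sin(x).ravel() + rng.normal(0, 0.3, 60)         # noisy nonlinear target

for degree in range(1, 8):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, x, y, cv=5).mean()   # mean R^2 over 5 folds
    print(f"degree {degree}: CV score {score:.3f}")
# Too-simple models underfit and too-complex ones overfit; the score peaks
# at an intermediate degree.
```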

# Statistical physics models: approaches for the approximate optimization of large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models); sampling methods based on these models.
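As an illustration of the sampling side, here is a generic Metropolis sampler on a tiny 1D Ising-style energy (a hedged sketch of the sampling idea, not the specific models treated in the lecture; system size, temperature and coupling are arbitrary):

```python
# Sketch of Metropolis sampling from a Boltzmann distribution p(s) ∝ exp(-E(s)/T)
# for a small 1D Ising-style energy. A generic illustration of the sampling idea;
# the system size, temperature, and coupling are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, J = 20, 1.0, 1.0                        # spins, temperature, coupling
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -J * np.sum(s[:-1] * s[1:])        # nearest-neighbour interaction

for _ in range(20_000):
    i = rng.integers(N)                       # propose flipping one spin
    proposal = spins.copy()
    proposal[i] *= -1
    dE = energy(proposal) - energy(spins)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins = proposal                      # Metropolis acceptance rule

print("final configuration:", spins)
print("final energy:", energy(spins))
```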
Lecture notes: A draft of the lecture notes will be provided; slides of the lectures will be made available.
Literature: Hastie, Tibshirani, Friedman: The Elements of Statistical Learning. Springer, 2001.

L. Devroye, L. Gyorfi, and G. Lugosi: A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
Prerequisites / Notice: Requirements:

* knowledge of the Machine Learning course
* basic knowledge of statistics and an interest in statistical methods

It is recommended to take Introduction to Machine Learning (ML I) first, but with a little extra effort Statistical Learning Theory can be followed without the introductory course.
261-5120-00L | Machine Learning for Health Care | W | 4 credits | 3P | G. Rätsch
Restricted registration; number of participants limited to 78.
Previously called Computational Biomedicine II.
Abstract: The course will review the most relevant methods and applications of Machine Learning in Biomedicine, discuss the main challenges they present, and examine their current technical solutions.
Learning objective: Over the last few years, we have observed rapid growth in the field of Machine Learning (ML), mainly due to improvements in ML algorithms, increased data availability, and reduced computing costs. This growth is having a profound impact on biomedical applications, where the great variety of tasks and data types enables us to benefit from ML algorithms in many different ways. In this course we will review the most relevant methods and applications of ML in biomedicine, discuss the main challenges they present, and examine their current technical solutions.
Content: The course will consist of four topic clusters that will cover the most relevant applications of ML in Biomedicine:
1) Structured time series: Temporal time series of structured data often appear in biomedical datasets, presenting challenges such as variables with different periodicities, conditioning on static data, etc.
2) Medical notes: A vast amount of medical observations is stored in the form of free text; we will analyze strategies for extracting knowledge from it.
3) Medical images: Images are a fundamental piece of information in many medical disciplines. We will study how to train ML algorithms with them.
4) Genomics data: ML in genomics is still an emerging subfield, but given that genomics data are arguably the most extensive and complex datasets that can be found in biomedicine, it is expected that many relevant ML applications will arise in the near future. We will review and discuss current applications and challenges.
Prerequisites / Notice: Data Structures & Algorithms, Introduction to Machine Learning, Statistics/Probability, Programming in Python, Unix Command Line

Relation to Course 261-5100-00 Computational Biomedicine: This course is a continuation of the previous course with new topics related to medical data and machine learning. The format of Computational Biomedicine II will also be different. It is helpful but not essential to attend Computational Biomedicine before attending Computational Biomedicine II.