Fan Yang: Catalogue data in Spring Semester 2021
Name | Prof. Dr. Fan Yang
Field of Study | Computer Science
Address | Professur für Informatik, ETH Zürich, CAB G 19.1, Universitätstrasse 6, 8092 Zürich, SWITZERLAND
E-mail | fan.yang@inf.ethz.ch
Department | Computer Science
Relationship | Assistant Professor (Tenure Track)
Number | Title | ECTS | Hours | Lecturers
---|---|---|---|---
252-0220-00L | Introduction to Machine Learning. Limited number of participants; preference is given to students in programmes in which the course is offered, and all other students will be waitlisted. Please do not contact Prof. Krause with questions in this regard; if necessary, contact studiensekretariat@inf.ethz.ch. | 8 credits | 4V + 2U + 1A | A. Krause, F. Yang
Abstract | The course introduces the foundations of learning and making predictions based on data.
Objective | The course will introduce the foundations of learning and making predictions from data. We will study basic concepts such as trading off goodness of fit and model complexity, discuss important machine learning algorithms used in practice, and provide hands-on experience in a course project.
Content |
- Linear regression (overfitting, cross-validation/bootstrap, model selection, regularization, [stochastic] gradient descent; see the sketch after this list)
- Linear classification: logistic regression (feature selection, sparsity, multi-class)
- Kernels and the kernel trick (properties of kernels; applications to linear and logistic regression); k-nearest neighbor
- Neural networks (backpropagation, regularization, convolutional neural networks)
- Unsupervised learning (k-means, PCA, neural network autoencoders)
- The statistical perspective (regularization as prior; loss as likelihood; learning as MAP inference)
- Statistical decision theory (decision making based on statistical models and utility functions)
- Discriminative vs. generative modeling (benefits and challenges in modeling joint vs. conditional distributions)
- Bayes' classifiers (Naive Bayes, Gaussian Bayes; MLE)
- Bayesian approaches to unsupervised learning (Gaussian mixtures, EM)
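To make the first item concrete, here is a minimal sketch of ridge-regularized linear regression trained by batch gradient descent. It is illustrative only, not course material; the function name `ridge_gd` and the toy data are invented for this example:

```python
import numpy as np

def ridge_gd(X, y, lam=0.1, lr=0.01, steps=1000):
    """Minimize (1/n) * ||X w - y||^2 + lam * ||w||^2 by batch gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Gradient of the average squared loss plus the L2 penalty.
        grad = (2.0 / n) * X.T @ (X @ w - y) + 2.0 * lam * w
        w -= lr * grad
    return w

# Toy usage: noisy linear data; the learned weights approach w_true,
# slightly shrunk toward zero by the regularizer.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
print(ridge_gd(X, y))
```

Increasing `lam` trades goodness of fit for lower model complexity, which is exactly the tradeoff named in the course objective.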
Literature | Textbook: Kevin Murphy, Machine Learning: A Probabilistic Perspective, MIT Press.
Prerequisites / Notice | Designed to provide a basis for the following courses:
- Advanced Machine Learning
- Deep Learning
- Probabilistic Artificial Intelligence
- Seminar "Advanced Topics in Machine Learning"
252-0220-10L | Introduction to Machine Learning (Only Project). Only for Ph.D. students! | 2 credits | 4A | A. Krause, F. Yang
Abstract |
Objective | The course will introduce the foundations of learning and making predictions from data. We will study basic concepts such as trading off goodness of fit and model complexity, discuss important machine learning algorithms used in practice, and provide hands-on experience in a course project.
263-3300-00L | Data Science Lab. Only for Data Science MSc. | 14 credits | 9P | C. Zhang, V. Boeva, R. Cotterell, J. Vogt, F. Yang
Abstract | In this class, we bring together data science applications provided by ETH researchers outside computer science and teams of computer science master's students. Teams of two to three students will work on data science/machine learning research topics provided by scientists in a diverse range of domains, such as astronomy, biology, and the social sciences.
Objective | The goal of this class is for students to gain experience with data science and machine learning applications "in the wild". Students are expected to go through the full process, from data cleaning and modeling through execution, debugging, error analysis, and quality/performance refinement.
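As a sketch of that end-to-end process (the toy data and every name below are invented for illustration and are not part of the course), a minimal cleaning-plus-modeling pipeline with scikit-learn might look like this:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Invented toy data: 200 samples, 5 features, ~5% missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.05] = np.nan  # simulate dirty data
y = (np.nan_to_num(X).sum(axis=1) > 0).astype(int)

# Cleaning (imputation, scaling) and modeling chained in one pipeline.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)

# Error analysis starts from held-out predictions.
print(classification_report(y_te, model.predict(X_te)))
```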
Prerequisites / Notice | Prerequisites: at least 8 credits must have been obtained under Data Analysis and at least 8 credits under Data Management and Processing.
263-5300-00L | Guarantees for Machine Learning. Number of participants limited to 30. Last cancellation/deregistration date for this graded semester performance: 17 March 2021! Please note that after this date no deregistration will be accepted and a "no show" will appear on your transcript. | 7 credits | 3G + 3A | F. Yang
Abstract | This course is aimed at advanced master's and doctoral students who want to conduct independent research on theory for modern machine learning (ML). It teaches classical and recent methods in statistical learning theory commonly used to prove theoretical guarantees for ML algorithms. This knowledge is then applied in independent project work that focuses on understanding modern ML phenomena.
Objective | Learning objectives:
- Acquire enough mathematical background to understand a good fraction of theory papers published in the typical ML venues. For this purpose, students will learn common mathematical techniques from statistics and optimization in the first part of the course and apply this knowledge in the project work.
- Critically examine recently published work in terms of relevance and determine impactful (novel) research problems. This will be an integral part of the project work and involves experimental as well as theoretical questions.
- Find and outline an approach (for some subproblem) to prove a conjectured theorem. This will be practiced in lectures/exercises and homework, and potentially in the final project.
- Effectively communicate and present the problem motivation, new insights, and results to a technical audience. This will be learned primarily via the final presentation and report, as well as during peer grading of peer talks.
Content | This course covers foundational methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms, touching on the following topics:
- concentration bounds (see the example after this list)
- uniform convergence and empirical process theory
- high-dimensional statistics (e.g. sparsity)
- regularization for non-parametric statistics (e.g. in RKHS, neural networks)
- implicit regularization via gradient descent (e.g. margins, early stopping)
- minimax lower bounds

The project work focuses on current theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to:
- how overparameterization could help generalization (RKHS, NN)
- how overparameterization could help optimization (non-convex optimization, loss landscape)
- complexity measures and approximation-theoretic properties of randomly initialized and trained NN
- generalization of robust learning (adversarial robustness, standard and robust error tradeoff, distribution shift)
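As an illustration of the first topic (a standard result, not taken from the course materials), Hoeffding's inequality is the prototypical concentration bound: for i.i.d. random variables $X_1, \dots, X_n$ with $X_i \in [a, b]$ and mean $\mu$, and any $t > 0$,

\[
\mathbb{P}\left( \left| \frac{1}{n} \sum_{i=1}^{n} X_i - \mu \right| \ge t \right) \le 2 \exp\left( -\frac{2 n t^2}{(b-a)^2} \right).
\]

Combined with a union bound over a finite function class $\mathcal{F}$, it yields a uniform convergence guarantee in which the gap between empirical and true risk scales as $\sqrt{\log |\mathcal{F}| / n}$, the template for the generalization bounds studied in this course.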
Prerequisites / Notice | It is absolutely necessary for students to have a strong mathematical background (basic real analysis, probability theory, linear algebra) and good knowledge of core concepts in machine learning taught in courses such as "Introduction to Machine Learning" and "Regression"/"Statistical Modelling". In addition to these prerequisites, this class requires a high degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs. Students have usually taken a subset of: Fundamentals of Mathematical Statistics, Probabilistic AI, Neural Network Theory, Optimization for Data Science, Advanced ML, Statistical Learning Theory, Probability Theory (D-MATH).
401-5680-00L | Foundations of Data Science Seminar | 0 credits | | P. L. Bühlmann, A. Bandeira, H. Bölcskei, J. M. Buhmann, T. Hofmann, A. Krause, A. Lapidoth, H.-A. Loeliger, M. H. Maathuis, N. Meinshausen, G. Rätsch, S. van de Geer, F. Yang
Abstract | Research colloquium
Objective |