Ryan Cotterell: Catalogue data in Spring Semester 2023
Name | Prof. Dr. Ryan Cotterell |
Field | Computer Science |
Address | Professur für Informatik, ETH Zürich, OAT W 13.2, Andreasstrasse 5, 8092 Zürich, SWITZERLAND |
E-Mail | ryan.cotterell@inf.ethz.ch |
Department | Computer Science |
Relationship | Assistant Professor (Tenure Track) |
Number | Title | ECTS | Hours | Lecturers | |
---|---|---|---|---|---|
252-0945-16L | Doctoral Seminar Machine Learning (FS23) Only for Computer Science PhD students. This doctoral seminar is intended for PhD students affiliated with the Institute for Machine Learning. Other PhD students who work on machine learning projects or related topics need approval by at least one of the organizers to register for the seminar. | 2 ECTS | 1S | N. He, V. Boeva, J. M. Buhmann, R. Cotterell, T. Hofmann, A. Krause, M. Sachan, J. Vogt, F. Yang | |
Abstract | An essential aspect of any research project is the dissemination of the findings arising from the study. Here we focus on oral communication, which includes the appropriate selection of material, the preparation of visual aids (slides and/or posters), and presentation skills. | ||||
Learning objective | Seminar participants should learn how to prepare and deliver scientific talks and how to handle technical questions. Participants are also expected to contribute actively to discussions during presentations by others, thereby learning and practicing critical-thinking skills. | ||||
Prerequisites / Notice | This doctoral seminar of the Machine Learning Laboratory of ETH is intended for PhD students who work on a machine learning project, i.e., the PhD students of the ML lab. | ||||
252-2310-00L | Understanding Context-Free Parsing Algorithms Does not take place this semester. The deadline for deregistering expires at the end of the second week of the semester. Students who are still registered after that date but do not attend the seminar will officially fail the seminar. | 2 ECTS | 2S | R. Cotterell | |
Abstract | Parsing context-free grammars is a fundamental problem in natural language processing and computer science more broadly. This seminar will explore a classic text that unifies many parsing algorithms in one framework. | ||||
Learning objective | Sikkel's notion of parsing schemata is explored in depth. Students should come away with an understanding of, and fluency with, these ideas. | ||||
Content | Sikkel, Parsing Schemata: A Framework for Specification and Analysis of Parsing Algorithms. (A sketch of one classic parsing algorithm appears after the table.) | ||||
263-3300-00L | Data Science Lab | 14 ECTS | 9P | A. Ilic, V. Boeva, R. Cotterell, J. Vogt, F. Yang | |
Abstract | In this class, we bring together data science applications provided by ETH researchers outside computer science and teams of computer science master's students. Two to three students will form a team working on data science/machine learning research topics provided by scientists in a diverse range of domains, such as astronomy, biology, and the social sciences. | ||||
Learning objective | The goal of this class is for students to gain experience dealing with data science and machine learning applications "in the wild". Students are expected to go through the full process, from data cleaning and modeling through execution, debugging, and error analysis to quality/performance refinement. | ||||
Prerequisites / Notice | Prerequisites: at least 8 ECTS credits must have been obtained under Data Analysis and at least 8 ECTS credits under Data Management and Processing. | ||||
263-5352-00L | Advanced Formal Language Theory | 6 ECTS | 4G + 1A | R. Cotterell | |
Abstract | This course serves as an introduction to various advanced topics in formal language theory. | ||||
Learning objective | The objective of the course is to learn and understand a variety of topics in advanced formal language theory. | ||||
Content | This course serves as an introduction to various advanced topics in formal language theory. The primary focus of the course is on weighted formalisms, which can easily be applied in machine learning. Topics include finite-state machines as well as the algorithms that are commonly used for their manipulation. We will also cover weighted context-free grammars, weighted tree automata, and weighted mildly context-sensitive formalisms. (A sketch of weighted finite-state scoring appears after the table.) | ||||
263-5353-10L | Philosophy of Language and Computation II (with Case Study) | 5 ECTS | 2V + 1U + 1A | R. Cotterell, J. L. Gastaldi | |
Abstract | Understand the philosophical underpinnings of language-based artificial intelligence. | ||||
Learning objective | This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data. | ||||
Content | This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data. The course is a year-long journey, but the second half (Spring 2023) does not depend on the first (Fall 2022), so either half may be taken independently. In each semester, we divide the class time into three modules, each centered around a philosophical topic. After discussing logical, structuralist, and generative approaches to language in the first semester, in the second semester we will focus on information, language games, and pragmatics. Each module is four weeks long. During the first two weeks of a module, we will read and discuss original texts and supplementary criticism. During the last two weeks, we will read recent NLP papers and discuss how their authors are building, perhaps implicitly or unwittingly, on philosophical insights into our conception of language. | ||||
Literature | The literature will be provided by the instructors on the class website. | ||||
263-5353-20L | Philosophy of Language and Computation II | 3 ECTS | 2V + 1U | R. Cotterell, J. L. Gastaldi | |
Abstract | Understand the philosophical underpinnings of language-based artificial intelligence. | ||||
Learning objective | This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data. | ||||
Content | This graduate class, taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which is centered around statistical machine learning applied to natural language data. The course is a year-long journey, but the second half (Spring 2023) does not depend on the first (Fall 2022), so either half may be taken independently. In each semester, we divide the class time into three modules, each centered around a philosophical topic. After discussing logical, structuralist, and generative approaches to language in the first semester, in the second semester we will focus on information, language games, and pragmatics. Each module is four weeks long. During the first two weeks of a module, we will read and discuss original texts and supplementary criticism. During the last two weeks, we will read recent NLP papers and discuss how their authors are building, perhaps implicitly or unwittingly, on philosophical insights into our conception of language. | ||||
Literature | The literature will be provided by the instructors on the class website. | ||||
263-5354-00L | Large Language Models | 8 ECTS | 3V + 2U + 2A | R. Cotterell, M. Sachan, F. Tramèr, C. Zhang | |
Abstract | Large language models have become one of the most commonly deployed NLP inventions. In the past half-decade, their integration into core natural language processing tools has dramatically increased the performance of such tools, and they have entered the public discourse surrounding artificial intelligence. | ||||
Learning objective | To understand the mathematical foundations of large language models as well as how to implement them. | ||||
Content | We start with the probabilistic foundations of language models, i.e., covering what constitutes a language model from a formal, theoretical perspective (a sketch of this probabilistic view appears after the table). We then discuss how to construct and curate training corpora, and introduce many of the neural-network architectures often used to instantiate language models at scale. The course covers aspects of systems programming, discussion of privacy and harms, as well as applications of language models in NLP and beyond. | ||||
Literature | The lecture notes will be supplemented with various readings from the literature. | ||||
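The following is a minimal sketch of CYK recognition, one of the classic context-free parsing algorithms that Sikkel's parsing schemata (252-2310-00L above) describe declaratively as deduction systems over chart items. The toy grammar, sentence, and all names in the code are invented for illustration and are not taken from the course materials.

```python
# Minimal sketch of CYK recognition for a CFG in Chomsky normal form.
# The toy grammar and sentence are invented for illustration. A parsing
# schema would present the same algorithm as deduction rules over chart
# items; this dynamic program is one possible realization.

from itertools import product

# Rules in Chomsky normal form: A -> B C (binary) or A -> "word" (lexical).
BINARY = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
LEXICAL = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def cyk_recognize(words: list[str], start: str = "S") -> bool:
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j].
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, set()))
    for width in range(2, n + 1):          # span length
        for i in range(n - width + 1):     # span start
            j = i + width
            for k in range(i + 1, j):      # split point
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in BINARY:
                        chart[i][j].add(BINARY[(b, c)])
    return start in chart[0][n]

print(cyk_recognize("the dog chased the cat".split()))  # True
```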
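Next, a minimal sketch of the weighted formalisms central to Advanced Formal Language Theory (263-5352-00L): a weighted finite-state acceptor scores a string in the real semiring by multiplying weights along each path and summing over accepting paths. The toy automaton and all names are invented for illustration.

```python
# Minimal sketch: scoring a string with a weighted finite-state acceptor
# in the real (probability) semiring. The automaton is invented for
# illustration. The weight of a string is the sum, over all accepting
# paths, of the product of the transition weights along the path.

from collections import defaultdict

# Transitions: (state, symbol) -> list of (next_state, weight).
TRANSITIONS = {
    (0, "a"): [(0, 0.5), (1, 0.5)],
    (1, "b"): [(1, 0.9)],
}
START, FINALS = 0, {1}

def string_weight(symbols: str) -> float:
    # forward[q] = total weight of all paths from START to q so far.
    forward = defaultdict(float)
    forward[START] = 1.0
    for sym in symbols:
        nxt = defaultdict(float)
        for state, w in forward.items():
            for succ, tw in TRANSITIONS.get((state, sym), []):
                # semiring: multiply along a path, sum across paths
                nxt[succ] += w * tw
        forward = nxt
    return sum(w for q, w in forward.items() if q in FINALS)

print(string_weight("aab"))  # 0.5 * 0.5 * 0.9 = 0.225
```

Swapping the (+, ×) operations for (max, ×) would turn this same forward pass into Viterbi scoring, which is one reason the weighted view is convenient for machine learning.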
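Finally, a minimal sketch of the probabilistic foundations mentioned in Large Language Models (263-5354-00L): a language model as an autoregressive distribution over strings, here instantiated with an invented toy bigram table rather than a neural network. The table and example sentence are illustrative assumptions, not material from the lecture notes.

```python
# Minimal sketch of the probabilistic view of a language model: the
# autoregressive factorization p(w_1..w_n) = prod_t p(w_t | w_<t),
# instantiated with an invented toy bigram table. Reserving probability
# for an end-of-string symbol (EOS) is what makes this a proper
# distribution over finite strings of all lengths.

import math

BOS, EOS = "<bos>", "<eos>"

# Toy bigram probabilities p(next | previous); each row sums to 1.
BIGRAM = {
    BOS:      {"the": 0.7, "a": 0.3},
    "the":    {"cat": 0.5, "dog": 0.5},
    "a":      {"cat": 0.6, "dog": 0.4},
    "cat":    {"sleeps": 0.8, EOS: 0.2},
    "dog":    {"sleeps": 0.9, EOS: 0.1},
    "sleeps": {EOS: 1.0},
}

def log_prob(sentence: list[str]) -> float:
    """Log-probability of a complete string, including the EOS event."""
    logp, prev = 0.0, BOS
    for w in sentence + [EOS]:
        logp += math.log(BIGRAM[prev][w])
        prev = w
    return logp

s = ["the", "cat", "sleeps"]
print(math.exp(log_prob(s)))  # 0.7 * 0.5 * 0.8 * 1.0 = 0.28
```

A neural language model replaces the lookup table with a learned function of the context, but the factorization and the role of EOS stay the same.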