263-2400-00L  Reliable and Interpretable Artificial Intelligence

Semester: Autumn Semester 2018
Lecturers: M. Vechev
Periodicity: yearly recurring course
Language of instruction: English



Courses

Number | Title | Hours | Lecturers
263-2400-00 V | Reliable and Interpretable Artificial Intelligence | 2 hrs | M. Vechev
  Mon 10:15-12:00, HG D 7.2
263-2400-00 U | Reliable and Interpretable Artificial Intelligence | 1 hr | M. Vechev
  Mon 13:15-14:00, LFW C 4
  Wed 11:15-12:00, CAB G 59

Catalogue data

Abstract: Creating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.
Objective: The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.

To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
Content: The course covers the following interconnected directions.

Part I: Robust and Explainable Deep Learning
-------------------------------------------------------------

Deep learning technology has made impressive advances in recent years. Despite this progress, however, the fundamental challenge with deep learning remains that of understanding what a trained neural network has actually learned, and how stable that solution is. For example: is the network stable to slight perturbations of the input (e.g., an image)? How easy is it to fool the network into misclassifying obvious inputs? Can we guide the network in a manner beyond simple labeled data?

Topics:
- Attacks: Finding adversarial examples via state-of-the-art attacks (e.g., FGSM, PGD attacks); a minimal FGSM sketch follows this list.
- Defenses: Automated methods and tools which guarantee robustness of deep nets (e.g., using abstract domains, mixed-integer solvers).
- Combining differentiable logic with gradient-based methods to train networks to satisfy richer properties.
- Frameworks: AI2, DiffAI, Reluplex, DQL, DeepPoly, etc.
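
As a rough illustration of the attacks topic above, here is a minimal FGSM sketch in Python/PyTorch. It assumes a differentiable classifier `model` returning logits and inputs scaled to [0, 1]; the function name and parameters are illustrative, not taken from the course material.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # One-step fast gradient sign method: perturb x in the direction that
        # increases the classification loss, bounded by eps in the L-infinity norm.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # loss of the current prediction
        loss.backward()                           # gradient of the loss w.r.t. the input
        x_adv = x_adv + eps * x_adv.grad.sign()   # signed gradient step of size eps
        return torch.clamp(x_adv, 0.0, 1.0).detach()  # stay in the valid input range

The PGD attack mentioned above iterates such steps and projects each iterate back into the eps-ball around the original input.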

Part II: Program Synthesis/Induction
------------------------------------------------

Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide massive productivity gains (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from a few examples, partial sketches and other forms of supervision.

Topics:
- Theory of program synthesis: version spaces, counterexample-guided inductive synthesis (CEGIS) with SAT/SMT, lower bounds on learning; a toy CEGIS loop is sketched after this list.
- Applications of techniques: synthesis for end users (e.g., spreadsheets) and data analytics.
- Combining synthesis with learning: application to learning from code.
- Frameworks: PHOG, DeepCode.
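
To make the CEGIS idea from the first topic above concrete, here is a toy, self-contained sketch: an enumerative synthesizer searches a tiny space of linear functions, and a brute-force verifier over a small finite domain plays the role of the SAT/SMT solver by returning counterexamples. The grammar, specification and bounds are illustrative assumptions, not part of the course material.

    # Toy CEGIS loop: synthesize f(x) = a*x + b that matches a hidden specification.

    def spec(x):
        # The specification the verifier checks against (unknown to the synthesizer).
        return 2 * x + 3

    def synthesize(examples):
        # Enumerative synthesis: first candidate consistent with all examples so far.
        for a in range(-5, 6):
            for b in range(-5, 6):
                if all(a * x + b == y for x, y in examples):
                    return (a, b)
        return None

    def verify(a, b):
        # Brute-force "solver" over a small finite domain; returns a counterexample or None.
        for x in range(0, 11):
            if a * x + b != spec(x):
                return x
        return None

    examples = [(0, spec(0))]                    # start from one input-output example
    while True:
        candidate = synthesize(examples)
        if candidate is None:
            print("no candidate in the search space")
            break
        cex = verify(*candidate)
        if cex is None:
            print("synthesized f(x) = %d*x + %d" % candidate)
            break
        examples.append((cex, spec(cex)))        # the counterexample refines the examples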

Part III: Probabilistic Programming
----------------------------------------------

Probabilistic programming is an emerging direction, recently also pushed by various companies (e.g., Facebook, Uber, Google), whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand and reason about probabilistic models.

Topics:

- Probabilistic inference: sampling-based and exact symbolic inference, semantics; a small sampling-based sketch follows this list.
- Applications of probabilistic programming: bias in deep learning, differential privacy (connects to Part I).
- Frameworks: PSI, Edward2, Venture.
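
To make the separation between model and inference concrete, here is a minimal sketch in plain Python: a small generative model is queried by rejection sampling, the simplest instance of the sampling-based inference listed above. Real systems such as PSI, Edward2 or Venture offer much richer modelling languages and solvers; the model and numbers below are illustrative assumptions only.

    import random

    def model():
        # Generative model: does it rain, is the sprinkler on, and is the grass wet?
        rain = random.random() < 0.2
        sprinkler = random.random() < 0.4
        grass_wet = rain or sprinkler or (random.random() < 0.05)
        return rain, grass_wet

    def infer_p_rain_given_wet(num_samples=100000):
        # Rejection sampling: keep only the runs consistent with the observation.
        accepted, rain_count = 0, 0
        for _ in range(num_samples):
            rain, grass_wet = model()
            if grass_wet:                        # condition on "the grass is wet"
                accepted += 1
                rain_count += rain
        return rain_count / accepted if accepted else float("nan")

    print("P(rain | grass wet) ~", infer_p_rain_given_wet())
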
Prerequisites / Notice: The course material is self-contained: the needed background is covered in the lectures and exercises, and additional pointers are provided.

Performance assessment

Information on the performance assessment (valid until the course unit is taught again)
Performance assessment as a semester course
ECTS credits: 4 credits
Examiners: M. Vechev
Type: session examination
Language of examination: English
Repetition: The performance assessment is offered only in the session after the course unit. Repetition is possible only after re-enrolment.
Mode of examination: written, 90 minutes
Additional information on the mode of examination: 30% of your grade is determined by mandatory project work and 70% by a written exam.
Written aids: none
This information may still be updated at the start of the semester; the information given on the examination timetable is binding.

Learning materials

 
Main link: Information
Only public learning materials are listed.

Groups

No information on groups available.

Restrictions

No additional registration restrictions.

Offered in

Programme | Section | Type
CAS in Computer Science | Focus Courses and Electives | W
DAS in Data Science | Machine Learning and Artificial Intelligence | W
Data Science Master | Elective Core Courses | W
Computer Science Master | Elective Courses of the Specialization in Software Engineering | W
Computer Science Master | Elective Courses of the Specialization General Studies | W
Computer Science Master | Elective Courses of the Specialization in Information Systems | W
Computer Science Master | Elective Courses of the Specialization in Visual Computing | W