Game Theory is the study of strategic decision making; it was used by John Nash (the subject of A Beautiful Mind) and others to solve problems in economics. We study concepts and methods from Game Theory and show how they can be used to solve control design problems. The course covers non-cooperative dynamic games and Nash equilibria, with an emphasis on their use in control applications.
Objective
Formulate an optimal control problem as a noncooperative dynamic game, and compute mixed and behavioural strategies for different equilibria.
Content
Introduction to game theory; mathematical tools including convex optimization and dynamic programming; zero-sum games in matrix and extensive form; pure and mixed strategies; the minimax theorem; nonzero-sum games in normal and extensive form; numerical computation of mixed equilibrium strategies; Nash and Stackelberg equilibria; potential games; infinite dynamic games; differential games; behavioural strategies and informational properties for dynamic games; aggregative games; the VCG mechanism.
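To give a flavour of the topic "numerical computation of mixed equilibrium strategies", the following sketch (not part of the official course material; the payoff matrix and the use of scipy are illustrative assumptions) computes the row player's optimal mixed strategy and the value of a small zero-sum matrix game via linear programming, which is one standard way the minimax theorem is turned into a computation.

# Minimal illustrative sketch, assuming a zero-sum matrix game with payoff matrix A
# (rows = our actions, columns = opponent's actions). We maximize the guaranteed
# payoff v subject to x lying on the probability simplex.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])          # example payoffs (matching pennies)
m, n = A.shape

# Decision variables z = (x_1, ..., x_m, v); maximize v  <=>  minimize -v
c = np.concatenate([np.zeros(m), [-1.0]])

# Constraints v <= (A^T x)_j for every opponent column j:  -A^T x + v <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# Probability simplex: sum_i x_i = 1, x_i >= 0; v is unbounded
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, value = res.x[:m], res.x[m]
print("optimal mixed strategy:", x_star)   # [0.5, 0.5] for matching pennies
print("game value:", value)                # 0.0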
Lecture notes
Will be made available via SPOD or the course webpage.
Literature
Basar, T. and Olsder, G. J., Dynamic Noncooperative Game Theory, 2nd Edition, Society for Industrial and Applied Mathematics, 1998. Available through the ETH Bibliothek at http://epubs.siam.org/doi/abs/10.1137/1.9781611971132.
Prerequisites / Notice
Control Systems I (or equivalent). Necessary methods and concepts from optimization will be covered in the course.