Stochastic Control

The first goal is to learn how to formulate models for the purposes of control, in applications ranging from finance to power systems to medicine. Linear and Markov models are chosen to capture essential dynamics and uncertainty.

The course presents several approaches to designing control laws based on these models, along with methods to approximate the performance of the controlled system. In parallel with these algorithmic objectives, students receive an introduction to basic dynamic programming theory and the closely related stability theory for Markovian and linear systems.
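
To give a flavor of the dynamic programming material, here is a minimal value-iteration sketch for a finite Markov decision process. The transition matrices, cost function, and discount factor below are illustrative placeholders, not data or code from the course.

```python
# Minimal value-iteration sketch for a finite MDP (illustrative only).
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# Random row-stochastic transition matrices, one per action, and a random cost.
P = [rng.random((n_states, n_states)) for _ in range(n_actions)]
P = [p / p.sum(axis=1, keepdims=True) for p in P]
c = rng.random((n_states, n_actions))   # per-stage cost c(s, a)
gamma = 0.95                            # discount factor (placeholder value)

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman operator: Q(s, a) = c(s, a) + gamma * sum_s' P_a(s, s') V(s')
    Q = np.stack([c[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
    V_new = Q.min(axis=1)               # minimize cost over actions
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=1)               # greedy policy w.r.t. the final Q
print("Value function:", V)
print("Greedy policy:", policy)
```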

What about model-free methods? The course also provides an introduction to reinforcement learning (based on Chapters 8 and 9 of the new monograph). Understanding RL is not difficult once you have a solid grasp of optimal control fundamentals.

Course information from Spring 2020

Topics include:

  1. Control and Stability Theory
  2. Optimal Control
  3. Monte-Carlo Methods
  4. ODE Methods for Algorithm Design
  5. TD and Q-learning (and Actor-Critic methods if time permits); a minimal Q-learning sketch appears after this list
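
As a taste of topic 5, here is a minimal tabular Q-learning sketch. The toy environment, exploration rate, and step-size schedule are placeholder choices for illustration; they are not taken from the course notes or the monograph.

```python
# Minimal tabular Q-learning sketch (illustrative only).
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(1)

# Toy environment: random transition kernel and cost, as in the value-iteration sketch.
P = [rng.random((n_states, n_states)) for _ in range(n_actions)]
P = [p / p.sum(axis=1, keepdims=True) for p in P]
c = rng.random((n_states, n_actions))
gamma = 0.95

Q = np.zeros((n_states, n_actions))
s = 0
for k in range(1, 100_001):
    # epsilon-greedy exploration (epsilon = 0.1, a placeholder choice)
    a = rng.integers(n_actions) if rng.random() < 0.1 else Q[s].argmin()
    s_next = rng.choice(n_states, p=P[a][s])          # sample the next state
    # TD error with respect to the Q-learning (Bellman optimality) target
    td_error = c[s, a] + gamma * Q[s_next].min() - Q[s, a]
    Q[s, a] += (1.0 / (1.0 + k / 1000)) * td_error    # diminishing step size
    s = s_next

print("Greedy policy from Q-learning:", Q.argmin(axis=1))
```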

Essential background: stochastic processes and prior exposure to control concepts. Experience with Matlab or Python is also required.

More Resources