Lecture

Dynamic Programming: Optimal Control

Description

This lecture covers the application of dynamic programming to optimal control problems. Building on the Bellman equation and the principle of optimality, the backward recursive solution leads to closed-form expressions for the optimal control policy. The lecture also discusses the stability of the uncontrolled system, the concept of a stationary policy, and convergence to a unique fixed point.
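The ideas above — the Bellman recursion, convergence to a unique fixed point, and a stationary policy — can be sketched with value iteration on a small toy problem. Everything below (the 3-state system, the transition matrices `P`, the stage costs `c`, and the discount factor `gamma`) is a hypothetical illustration, not material from the lecture itself:

```python
import numpy as np

# Hypothetical toy problem: a 3-state system with 2 controls.
# P[u] is the transition matrix under control u; c[u] the stage cost.
P = [np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5]])]
c = [np.array([1.0, 2.0, 3.0]),
     np.array([2.0, 1.0, 1.0])]
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman operator
#   (T V)(x) = min_u [ c(x, u) + gamma * sum_y P(y | x, u) V(y) ].
# For gamma < 1 this operator is a contraction, so V converges
# to a unique fixed point V* with T V* = V*.
V = np.zeros(3)
for _ in range(1000):
    Q = np.stack([c[u] + gamma * P[u] @ V for u in range(2)])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Stationary policy: the minimizing control at each state,
# applied identically at every time step.
policy = Q.argmin(axis=0)
```

At convergence, applying the Bellman operator to `V` once more leaves it essentially unchanged, which is exactly the fixed-point property the lecture refers to; the greedy `policy` extracted from the fixed point is stationary because it does not depend on time.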
