Lecture

Optimal Distributed Control: Projected GD for Locally Optimal Controllers

Description

This lecture covers optimal distributed control, focusing on the difficulty of implementing centralized LQR controllers in large-scale systems, where each local controller can only measure part of the state. To work around these measurement limitations, the instructor introduces projected Gradient Descent (GD) on the feedback gain as a way to obtain locally optimal distributed controllers. The lecture develops the theoretical foundations of this approach, including the computation of the cost function and convergence results for GD, and discusses the application of GD to stochastic LQR problems and the computation of the optimal policy. It concludes with a detailed explanation of how the controller is improved iteratively via GD while enforcing the sparsity pattern required for a distributed implementation.
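As a rough illustration of the idea, the sketch below runs projected gradient descent on a static feedback gain for a small discrete-time stochastic LQR problem, projecting the gain onto a diagonal sparsity pattern after each step. The system matrices, step size, and sparsity mask are assumptions chosen for the example, not taken from the lecture; the gradient expression is the standard policy-gradient formula for the LQR cost J(K) = tr(P_K Σ0) under u_t = -K x_t.

```python
import numpy as np

def dlyap(A, Q, iters=500):
    """Solve X = A X A' + Q by fixed-point iteration (requires A stable)."""
    X = Q.copy()
    for _ in range(iters):
        X = A @ X @ A.T + Q
    return X

# Assumed toy system (not from the lecture): two weakly coupled states.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)
Sigma0 = np.eye(2)          # covariance of the initial state / noise
mask = np.eye(2)            # sparsity pattern: each input sees only its own state

def cost_and_grad(K):
    """LQR cost J(K) = tr(P Sigma0) and its gradient in K for u = -Kx."""
    Acl = A - B @ K
    P = dlyap(Acl.T, Q + K.T @ R @ K)   # value matrix: P = Acl' P Acl + Q + K'RK
    Sig = dlyap(Acl, Sigma0)            # accumulated state covariance
    J = np.trace(P @ Sigma0)
    G = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sig
    return J, G

K = np.zeros((2, 2))        # K = 0 is stabilizing here since A itself is stable
eta = 0.01                  # step size (assumption; small enough to stay stable)
costs = []
for _ in range(100):
    J, G = cost_and_grad(K)
    costs.append(J)
    # Gradient step, then Euclidean projection onto the sparse subspace,
    # which for a fixed support is just entrywise masking.
    K = mask * (K - eta * G)

print(costs[0], costs[-1])  # cost before vs. after projected GD
```

The projection keeps K inside the set of implementable (distributed) gains, so the method converges to a locally optimal controller within that set rather than to the unconstrained centralized LQR gain.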
