This lecture covers optimal distributed control, focusing on the difficulty of implementing LQR controllers in large-scale systems where each local controller only has access to a subset of the measurements. The instructor introduces gradient descent (GD) as a way to overcome this limitation and obtain locally optimal distributed controllers. The lecture develops the theoretical foundations of GD for this problem, including the computation of the cost function and its gradient, and the associated convergence results. It also discusses the application of GD to stochastic LQR problems and the computation of the optimal policy. The lecture concludes with a detailed explanation of how the controller is improved through GD and of the sparsity constraints required to obtain an optimal distributed controller.
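As a rough illustration of the idea described above, the sketch below runs projected gradient descent on the discrete-time LQR cost over static state-feedback gains restricted to a sparsity pattern (each input only "sees" some states). This is not the lecturer's exact algorithm; the system matrices, weights, mask, and step size are all illustrative assumptions, and the gradient expression is the standard policy-gradient formula for LQR.

```python
# Minimal sketch: projected gradient descent on the LQR cost over sparse gains K.
# All matrices below (A, B, Q, R, Sigma0, mask) are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov


def lqr_cost_and_grad(K, A, B, Q, R, Sigma0):
    """Cost J(K) = tr(P_K Sigma0) for u = -K x, and its gradient in K."""
    A_K = A - B @ K
    # Value matrix: P = (Q + K' R K) + A_K' P A_K
    P = solve_discrete_lyapunov(A_K.T, Q + K.T @ R @ K)
    # Accumulated state covariance: Sigma = Sigma0 + A_K Sigma A_K'
    Sigma = solve_discrete_lyapunov(A_K, Sigma0)
    grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return np.trace(P @ Sigma0), grad


def projected_gd(A, B, Q, R, Sigma0, mask, step=1e-3, iters=2000):
    """Descend J(K), projecting K onto the sparsity pattern `mask` each step."""
    K = np.zeros((B.shape[1], A.shape[0]))  # K = 0 is feasible if A is stable
    for _ in range(iters):
        J, G = lqr_cost_and_grad(K, A, B, Q, R, Sigma0)
        K = (K - step * G) * mask            # projection = zero out forbidden entries
    return K, J


if __name__ == "__main__":
    n, m = 4, 2
    rng = np.random.default_rng(0)
    A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stable open loop
    B = rng.standard_normal((n, m))
    Q, R, Sigma0 = np.eye(n), np.eye(m), np.eye(n)
    mask = (rng.random((m, n)) < 0.5).astype(float)  # each input measures only some states
    K, J = projected_gd(A, B, Q, R, Sigma0, mask)
    print("locally optimal sparse gain:\n", K, "\ncost:", J)
```

Because the feasible set of sparse gains is non-convex, such a scheme can only be expected to reach a locally optimal distributed controller, which is exactly the point emphasized in the lecture.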