Lecture

Primal-dual Optimization III: Lagrangian Gradient Methods

Description

This lecture covers primal-dual optimization methods, focusing on Lagrangian gradient techniques. It delves into the mathematics of optimization for data science, including convex formulations, ε-accurate solutions, and various primal-dual methods. The instructor explains the quadratic penalty and Lagrangian formulations, augmented dual problems, and the linearized augmented Lagrangian method. Examples such as blind image deconvolution, basis pursuit, and neural networks illustrate the concepts. The lecture concludes with convergence guarantees, the augmented Lagrangian conditional gradient method (CGM), and applications such as k-means clustering and scalable semidefinite programming.
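For reference, a minimal sketch (based on standard textbook definitions, not taken from the lecture slides) of the formulations named above, for a convex problem $\min_x f(x)$ subject to $Ax = b$:

\[
\begin{aligned}
\text{quadratic penalty:} \quad & f(x) + \tfrac{\beta}{2}\|Ax - b\|^2,\\
\text{Lagrangian:} \quad & \mathcal{L}(x,\lambda) = f(x) + \langle \lambda, Ax - b\rangle,\\
\text{augmented Lagrangian:} \quad & \mathcal{L}_\beta(x,\lambda) = f(x) + \langle \lambda, Ax - b\rangle + \tfrac{\beta}{2}\|Ax - b\|^2,
\end{aligned}
\]

where $\lambda$ is the dual variable and $\beta > 0$ is a penalty parameter. Lagrangian gradient (primal-dual) methods alternate gradient-type updates on the primal variable $x$ and the dual variable $\lambda$.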
