This lecture covers primal-dual optimization methods, focusing on the quadratic penalty, Lagrangian, and augmented Lagrangian formulations. It discusses algorithms including proximal-based decomposition, the primal-dual hybrid gradient method, and alternating minimization, and examines their convergence properties and drawbacks along with enhancements such as inexact and linearized augmented Lagrangian methods. Examples include basis pursuit problems and nonconvex optimization with nonlinear constraints, such as blind image deconvolution.
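To make one of these methods concrete, below is a minimal sketch of the primal-dual hybrid gradient (Chambolle-Pock) method applied to basis pursuit, min ||x||_1 subject to Ax = b, one of the example problems mentioned above. The function names (`soft_threshold`, `pdhg_basis_pursuit`) and the step-size choices `tau`, `sigma` are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_basis_pursuit(A, b, n_iter=500):
    """PDHG sketch for min ||x||_1 s.t. Ax = b (illustrative, not from the lecture)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)      # spectral norm of A
    tau = sigma = 0.99 / L        # step sizes satisfying tau * sigma * L**2 < 1
    x = np.zeros(n)               # primal variable
    y = np.zeros(m)               # dual variable (multiplier for Ax = b)
    x_bar = x.copy()              # extrapolated primal point
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of the indicator of {b}
        y = y + sigma * (A @ x_bar - b)
        # Primal step: prox of tau * ||.||_1 via soft thresholding
        x_new = soft_threshold(x - tau * (A.T @ y), tau)
        # Extrapolation (over-relaxation) step
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Tiny usage example: recover a sparse vector from random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = pdhg_basis_pursuit(A, b, n_iter=2000)
print("residual ||Ax - b||:", np.linalg.norm(A @ x_hat - b))
```

The dual update follows from the fact that the conjugate of the indicator of {b} is the linear function ⟨y, b⟩, whose proximal map is a simple shift; the step-size condition tau * sigma * ||A||^2 ≤ 1 is the standard convergence requirement for this scheme.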