This lecture covers Projected Gradient Descent (PGD) and quadratic penalty methods, focusing on their underlying assumptions and convergence properties. It explains how PGD behaves under different regularity conditions, and how quadratic penalty methods recast a constrained optimization problem as a sequence of unconstrained ones. The lecture also discusses convergence guarantees for PGD and the optimality conditions associated with quadratic penalty methods.
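The two methods can be sketched as follows. This is a minimal illustration, not the lecture's own example: the box constraint, the linear equality constraint, the step size, and the penalty values are all illustrative assumptions.

```python
import numpy as np

# --- Projected Gradient Descent (PGD) ---
# Minimize f(x) over a constraint set C by alternating a gradient step
# with a Euclidean projection onto C. Here C = [0, 1]^n (a box), so the
# projection is coordinate-wise clipping.

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def pgd(grad_f, x0, step, n_iters=200):
    """PGD iteration: x_{k+1} = P_C(x_k - step * grad f(x_k))."""
    x = x0
    for _ in range(n_iters):
        x = project_box(x - step * grad_f(x))
    return x

# Example: f(x) = 0.5 * ||x - c||^2 with c partly outside the box; the
# constrained minimizer is the projection of c onto [0, 1]^2.
c = np.array([1.5, -0.3])
x_star = pgd(lambda x: x - c, x0=np.zeros(2), step=0.5)
# x_star is approximately [1.0, 0.0]

# --- Quadratic penalty method ---
# For min 0.5 * ||x||^2 subject to a^T x = b, replace the constraint by a
# quadratic penalty and minimize 0.5 * ||x||^2 + (mu / 2) * (a^T x - b)^2.
# This subproblem is an unconstrained quadratic with a closed-form solution.

def quadratic_penalty_minimizer(mu, a, b):
    """Solve the penalized subproblem: (I + mu * a a^T) x = mu * b * a."""
    n = len(a)
    return np.linalg.solve(np.eye(n) + mu * np.outer(a, a), mu * b * a)

# As mu grows, the penalized minimizers approach the constrained
# minimizer (0.5, 0.5) of min 0.5 * ||x||^2 s.t. x1 + x2 = 1.
a = np.ones(2)
for mu in [1.0, 10.0, 1000.0]:
    x_mu = quadratic_penalty_minimizer(mu, a, b=1.0)
```

The PGD fixed point satisfies x = P_C(x - step * grad f(x)), which is the standard first-order optimality condition for constrained minimization; for the penalty method, each finite mu slightly violates the constraint, and the violation shrinks as mu increases.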