This lecture covers quadratic penalty methods for optimization problems, focusing on nonconvex-concave settings. The instructor explains the challenges of nonsmooth, nonconvex optimization and introduces primal-dual algorithms with penalty functions. Various linearization techniques and their convergence properties are discussed, along with the role of the prox-operator in optimization algorithms.
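For reference, a minimal sketch of the two standard objects the summary alludes to, assuming the usual textbook definitions rather than the lecture's exact nonconvex-concave formulation: the quadratic penalty attached to an equality-constrained problem \(\min_x f(x)\) subject to \(c(x) = 0\) with penalty parameter \(\rho > 0\), and the prox-operator of a (possibly nonsmooth) function \(g\) with step size \(\lambda > 0\):

\[
  Q_\rho(x) \;=\; f(x) + \frac{\rho}{2}\,\lVert c(x)\rVert_2^2,
  \qquad
  \operatorname{prox}_{\lambda g}(x) \;=\; \operatorname*{arg\,min}_{y}\;\Big\{\, g(y) + \frac{1}{2\lambda}\,\lVert y - x\rVert_2^2 \,\Big\}.
\]

Increasing \(\rho\) drives the penalized iterates toward feasibility, while the prox-operator replaces the gradient step for the nonsmooth part of the objective in primal-dual and linearized schemes of the kind the lecture discusses.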