This lecture introduces primal-dual methods for composite minimization, focusing on minimax reformulations. The instructor discusses restricted minimax templates, the underlying assumptions, and properties of the primal and dual functions. The lecture covers the primal-dual hybrid gradient (PDHG) method, its convergence theorems, and the stochastic primal-dual hybrid gradient algorithm. It then turns to nonconvex-concave and nonconvex-nonconcave problems, analyzing gradient complexities and convergence rates. The lecture concludes with discussions of nonsmooth, nonconvex optimization, penalty methods, and quadratic penalty algorithms.
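Since the summary centers on the primal-dual hybrid gradient method, a minimal sketch of its standard iteration may help fix ideas. The LASSO instance, step-size choice, and function names below are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_lasso(K, b, lam, iters=5000):
    """Primal-dual hybrid gradient iteration for the composite problem
        min_x 0.5*||Kx - b||^2 + lam*||x||_1,
    written as the saddle-point problem
        min_x max_y <Kx, y> - f*(y) + g(x),
    with f(z) = 0.5*||z - b||^2 (so f*(y) = 0.5*||y||^2 + <b, y>)
    and g(x) = lam*||x||_1. Illustrative sketch, not the lecture's code."""
    m, n = K.shape
    x, y = np.zeros(n), np.zeros(m)
    L = np.linalg.norm(K, 2)      # operator norm of K
    tau = sigma = 0.99 / L        # step sizes satisfying tau*sigma*L^2 < 1
    for _ in range(iters):
        # Primal prox step on g.
        x_new = soft_threshold(x - tau * K.T @ y, tau * lam)
        # Extrapolation (over-relaxation) of the primal iterate.
        x_bar = 2 * x_new - x
        # Dual prox step: closed-form prox of sigma * f*.
        y = (y + sigma * K @ x_bar - sigma * b) / (1 + sigma)
        x = x_new
    return x
```

With `lam = 0` the penalty vanishes and the iterates should approach the least-squares solution, which gives a simple sanity check against `np.linalg.lstsq`.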