This lecture covers the standard form of semidefinite programming (SDP), including trace-constrained SDPs and the reasons they are of broad interest. It also explores SDP relaxations of non-convex problems, such as finding the maximum-weight cut of a graph, minimum sum-of-squares clustering, and estimating the Lipschitz constant of a neural network. Finally, it discusses optimization methods, namely CGM with quadratic penalty and Augmented Lagrangian CGM, along with their convergence guarantees.
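For concreteness, here is a minimal sketch of the two formulations named above, in generic notation that may differ from the lecture's (the data $C, A_i, b_i$ and the trace bound $\alpha$ are placeholders):

```latex
% Trace-constrained SDP in standard form:
\begin{aligned}
\min_{X \in \mathbb{S}^n} \quad & \langle C, X \rangle \\
\text{s.t.} \quad & \langle A_i, X \rangle = b_i, \quad i = 1, \dots, m, \\
& \operatorname{tr}(X) \le \alpha, \quad X \succeq 0.
\end{aligned}

% Example instance: the maximum-weight-cut relaxation, where L is the
% weighted graph Laplacian; the diagonal constraints force tr(X) = n.
\begin{aligned}
\max_{X \in \mathbb{S}^n} \quad & \tfrac{1}{4}\,\langle L, X \rangle \\
\text{s.t.} \quad & X_{ii} = 1, \quad i = 1, \dots, n, \quad X \succeq 0.
\end{aligned}
```

The following is a hedged sketch of CGM with quadratic penalty applied to this template: the equality constraints are moved into a quadratic penalty whose weight grows over iterations, and each step calls a linear minimization oracle over $\{X \succeq 0,\ \operatorname{tr}(X) \le \alpha\}$, which returns a rank-one matrix built from an extreme eigenvector. All names and the penalty schedule below are illustrative assumptions, not the lecture's code.

```python
import numpy as np

def cgm_quadratic_penalty(C, A_list, b, alpha, iters=500, beta0=1.0):
    """Illustrative sketch (not the lecture's implementation): CGM with
    quadratic penalty for
        min <C, X>  s.t.  <A_i, X> = b_i,  tr(X) <= alpha,  X >= 0.
    At iteration k we take a CGM step on the penalized objective
        <C, X> + (beta_k / 2) * ||A(X) - b||^2
    over the trace ball of PSD matrices, with beta_k growing."""
    n = C.shape[0]
    X = np.zeros((n, n))
    for k in range(1, iters + 1):
        beta = beta0 * np.sqrt(k + 1)  # growing penalty weight (assumed schedule)
        resid = np.array([np.trace(A @ X) for A in A_list]) - b
        grad = C + beta * sum(r * A for r, A in zip(resid, A_list))
        # Linear minimization oracle over {X >= 0, tr(X) <= alpha}:
        # alpha * v v^T for the minimum eigenvector v of the gradient,
        # or the zero matrix if the smallest eigenvalue is nonnegative.
        w, V = np.linalg.eigh(grad)
        v = V[:, [0]]
        H = alpha * (v @ v.T) if w[0] < 0 else np.zeros_like(X)
        gamma = 2.0 / (k + 1)  # classic CGM step size
        X = (1 - gamma) * X + gamma * H
    return X
```

The Augmented Lagrangian variant additionally maintains an explicit dual vector $y$, adding $\langle y, A(X) - b \rangle$ to the penalized objective and updating $y$ with the constraint residual after each primal step.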