Discusses Stochastic Gradient Descent (SGD) and its application to non-convex optimization, focusing on convergence rates and the challenges that arise in machine learning.
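As a companion to this summary, here is a minimal sketch of SGD on a non-convex objective. The objective (a squared sigmoid loss over synthetic data), step-size schedule, and iteration count are illustrative assumptions, not taken from the source; the O(1/sqrt(t)) step size matches the standard non-convex convergence analysis.

```python
import numpy as np

# Minimal SGD sketch on a non-convex finite-sum objective:
# f(w) = (1/n) * sum_i (sigmoid(x_i . w) - y_i)^2.
# Squared loss composed with a sigmoid is non-convex in w,
# which makes it a standard toy example for this setting.
# All data and hyperparameters below are illustrative.

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = 1.0 / (1.0 + np.exp(-X @ w_true))  # noiseless sigmoid targets

def grad_i(w, i):
    """Stochastic gradient of the squared sigmoid loss at example i."""
    s = 1.0 / (1.0 + np.exp(-(X[i] @ w)))
    return 2.0 * (s - y[i]) * s * (1.0 - s) * X[i]

w = np.zeros(d)
for t in range(1, 10001):
    i = rng.integers(n)          # sample one example uniformly at random
    eta = 0.5 / np.sqrt(t)       # diminishing O(1/sqrt(t)) step size
    w -= eta * grad_i(w, i)

loss = np.mean((1.0 / (1.0 + np.exp(-X @ w)) - y) ** 2)
print(f"final mean squared loss: {loss:.6f}")
```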
Explores the Karush-Kuhn-Tucker (KKT) conditions in convex optimization, covering dual problems, logarithmic constraints, least squares, matrix functions, and the suboptimality of covering ellipsoids.
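For reference alongside the KKT discussion, the standard first-order conditions for a convex problem of minimizing $f_0(x)$ subject to $f_i(x) \le 0$, $i = 1,\dots,m$, and $h_j(x) = 0$, $j = 1,\dots,p$, can be stated as follows; this is the standard textbook form, not quoted from the source.

```latex
\begin{align*}
  f_i(x^\star) &\le 0, \quad i = 1,\dots,m
    && \text{(primal feasibility)} \\
  h_j(x^\star) &= 0, \quad j = 1,\dots,p
    && \text{(primal feasibility)} \\
  \lambda_i^\star &\ge 0, \quad i = 1,\dots,m
    && \text{(dual feasibility)} \\
  \lambda_i^\star f_i(x^\star) &= 0, \quad i = 1,\dots,m
    && \text{(complementary slackness)} \\
  \nabla f_0(x^\star) + \sum_{i=1}^{m} \lambda_i^\star \nabla f_i(x^\star)
    + \sum_{j=1}^{p} \nu_j^\star \nabla h_j(x^\star) &= 0
    && \text{(stationarity)}
\end{align*}
```

For a convex problem satisfying a constraint qualification such as Slater's condition, these conditions are necessary and sufficient for $(x^\star, \lambda^\star, \nu^\star)$ to be primal and dual optimal.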