Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and common challenges in optimization, with worked examples such as maximum likelihood estimation and ridge regression.
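As a minimal sketch of the kind of method covered here, the snippet below runs plain fixed-step gradient descent on a ridge regression objective and checks the result against the closed-form solution. The function name `ridge_gradient_descent`, the step size, and the synthetic data are illustrative assumptions, not taken from the source material.

```python
import numpy as np

def ridge_gradient_descent(X, y, lam=0.1, step=0.1, iters=2000):
    """Minimize f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2 by gradient descent.

    Illustrative sketch: names and parameters are assumptions, not from the source.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n + lam * w   # gradient of the ridge objective
        w -= step * grad                          # fixed-step gradient update
    return w

# Small synthetic example: on a smooth, strongly convex problem the iterates
# converge to the closed-form ridge solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.normal(size=100)

w_gd = ridge_gradient_descent(X, y)
w_closed = np.linalg.solve(X.T @ X / 100 + 0.1 * np.eye(5), X.T @ y / 100)
print(np.allclose(w_gd, w_closed, atol=1e-4))
```

The fixed step size works here because the ridge objective is smooth and strongly convex; for non-convex problems, the same update rule only guarantees convergence to a stationary point rather than a global minimum.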