Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and the challenges that arise in each setting.
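A minimal sketch of the basic iteration this covers, assuming an illustrative smooth least-squares objective; the function name `gradient_descent`, the synthetic data, and the 1/L step size are assumptions for demonstration, with the step size taken from the standard L-smoothness analysis:

```python
import numpy as np

def gradient_descent(grad, x0, step_size, n_iters=500):
    """Basic gradient descent: x_{k+1} = x_k - eta * grad(x_k)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - step_size * grad(x)
    return x

# Example: minimize the smooth convex quadratic f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)

# For an L-smooth objective, eta = 1/L guarantees descent; here L = ||A^T A||_2.
L = np.linalg.norm(A.T @ A, 2)
x_star = gradient_descent(grad, np.zeros(5), step_size=1.0 / L)
print("residual norm:", np.linalg.norm(A @ x_star - b))
```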
Explores convex optimization, emphasizing the minimization of functions over a convex set and the role of continuous-time processes in the study of convergence rates.
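For the constrained setting, a minimal sketch of projected gradient descent, assuming a Euclidean-ball constraint and a simple quadratic for illustration (the helpers `project_ball` and `projected_gradient_descent` are not from the source); the discrete iteration can also be read as an Euler discretization of the continuous-time gradient flow mentioned above:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the convex set {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projected_gradient_descent(grad, project, x0, step_size, n_iters=200):
    """x_{k+1} = P_C(x_k - eta * grad(x_k)): minimization over a convex set C."""
    x = x0.copy()
    for _ in range(n_iters):
        x = project(x - step_size * grad(x))
    return x

# Example: minimize f(x) = 0.5 * ||x - c||^2 over the unit ball, with c outside it.
c = np.array([2.0, 1.0])
grad = lambda x: x - c
x_star = projected_gradient_descent(grad, project_ball, np.zeros(2), step_size=0.5)
print(x_star, "->", c / np.linalg.norm(c))  # optimum is the projection of c
```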
Explores Stochastic Gradient Descent with Averaging, comparing it with Gradient Descent and discussing challenges in non-convex optimization and sparse recovery techniques.
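A minimal sketch of SGD with iterate averaging (Polyak-Ruppert style), again assuming an illustrative least-squares objective; the data, step size, and iteration count are assumptions, and the full least-squares solution is computed only as a reference point for the comparison with (batch) Gradient Descent:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))       # synthetic least-squares data (assumed)
b = rng.standard_normal(200)

def grad_sample(x, i):
    """Stochastic gradient of f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

def sgd_with_averaging(x0, step_size, n_iters=20000, seed=0):
    """SGD with averaging: return the running mean of the iterates, not the last one."""
    rand = np.random.default_rng(seed)
    x = x0.copy()
    avg = np.zeros_like(x0)
    for k in range(n_iters):
        i = rand.integers(len(A))             # sample one data point per step
        x = x - step_size * grad_sample(x, i)
        avg += (x - avg) / (k + 1)            # incremental running average
    return avg

x_sgd = sgd_with_averaging(np.zeros(5), step_size=0.01)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # exact minimizer for reference
print("distance to least-squares solution:", np.linalg.norm(x_sgd - x_ls))
```

Averaging the iterates smooths out the noise of the single-sample gradients, which is what makes the comparison with full Gradient Descent meaningful at a fixed step size.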