Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and challenges in optimization.
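As a minimal sketch of the basic iteration described here, the update x_{k+1} = x_k - eta * grad f(x_k) can be written in a few lines; the quadratic objective and step size below are illustrative choices, not from the source:

```python
import numpy as np

def gradient_descent(grad, x0, step_size=0.1, n_iters=100):
    """Plain gradient descent: x_{k+1} = x_k - step_size * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step_size * grad(x)
    return x

# Example: minimize the smooth convex quadratic f(x) = ||x - 3||^2,
# whose gradient is 2 * (x - 3); the unique minimizer is x = 3.
x_star = gradient_descent(lambda x: 2 * (x - 3.0), x0=[0.0])
print(x_star)  # approaches [3.]
```

For a fixed step size on a smooth convex problem like this, the iterates contract toward the minimizer at a geometric rate, which is the kind of convergence behavior the section analyzes.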
Extends these ideas to subgradient methods for non-smooth objectives and to adaptive techniques such as Adam, which are widely used for training machine learning models.
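For comparison, here is a hedged sketch of the Adam update, which rescales each coordinate of the step using bias-corrected moving averages of the gradient and its elementwise square; the test problem and hyperparameters are illustrative assumptions:

```python
import numpy as np

def adam(grad, x0, step_size=0.1, beta1=0.9, beta2=0.999,
         eps=1e-8, n_iters=1000):
    """Adam: gradient steps with per-coordinate adaptive scaling built
    from exponential moving averages of the gradient (m) and of its
    elementwise square (v), both bias-corrected."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment average
        v = beta2 * v + (1 - beta2) * g * g      # second-moment average
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - step_size * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Same illustrative quadratic as above; Adam also converges to x = 3.
print(adam(lambda x: 2 * (x - 3.0), x0=[0.0]))
```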