Explores optimization methods for training machine learning models, from gradient descent and subgradient methods for non-smooth objectives to adaptive techniques such as Adam.
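As a rough illustration of the update rules named above, here is a minimal NumPy sketch of plain gradient descent and Adam on a toy quadratic; the objective, function names, step counts, and hyperparameters are illustrative choices, not taken from the source.

```python
import numpy as np

def grad_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: x_{t+1} = x_t - lr * grad(x_t)."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Adam: per-coordinate step sizes built from running moment estimates."""
    x = x0.copy()
    m = np.zeros_like(x)  # first moment (running mean of gradients)
    v = np.zeros_like(x)  # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)  # bias correction for zero initialization
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = ||x - 1||^2 with gradient 2(x - 1).
grad = lambda x: 2 * (x - np.ones_like(x))
print(grad_descent(grad, np.zeros(3)))  # both approach the minimizer (1, 1, 1)
print(adam(grad, np.zeros(3)))
```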
Covers convex optimization, emphasizing the minimization of convex functions over convex sets and the role of continuous-time processes in analyzing convergence rates.
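The canonical continuous-time process used in such analyses is gradient flow, the limit of gradient descent as the step size vanishes; taking this as the process the summary alludes to (an assumption on our part), a short convexity argument yields an $O(1/t)$ rate for a differentiable convex $f$ with minimizer $x^*$:

```latex
% Gradient flow: the continuous-time analogue of gradient descent.
\dot{x}(t) = -\nabla f\bigl(x(t)\bigr)
% Convexity gives
% \frac{d}{dt}\,\tfrac12\|x(t)-x^*\|^2
%   = -\langle \nabla f(x(t)),\, x(t)-x^* \rangle
%   \le -\bigl(f(x(t)) - f(x^*)\bigr),
% and integrating from 0 to t (using that f decreases along the flow) yields
f\bigl(x(t)\bigr) - f(x^*) \;\le\; \frac{\|x(0) - x^*\|^2}{2t}.
```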
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its behavior on constrained and non-convex problems.
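One standard way SGD is adapted to constrained problems is projected SGD: take a stochastic gradient step, then project back onto the feasible set. The sketch below assumes a least-squares objective and a Euclidean-ball constraint; both are illustrative choices, not details from the source.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_sgd(A, b, radius=1.0, lr=0.01, epochs=50, seed=0):
    """Projected SGD for min_{||x|| <= radius} (1/2n)||Ax - b||^2,
    sampling one row of A per step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (A[i] @ x - b[i]) * A[i]          # stochastic gradient from row i
            x = project_ball(x - lr * g, radius)  # gradient step, then project
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
x_hat = projected_sgd(A, b)
print(np.linalg.norm(x_hat))  # the iterate stays inside the feasible ball
```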