Discusses optimization techniques in machine learning, focusing on stochastic gradient descent (SGD) and its application to constrained and non-convex problems.
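As a concrete illustration of the constrained case, below is a minimal sketch of projected SGD on a least-squares problem restricted to the unit Euclidean ball. The data, step-size schedule, and the helper `project_unit_ball` are illustrative assumptions, not details taken from the section itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))                  # synthetic data matrix
x_true = rng.normal(size=d)
x_true /= 2.0 * np.linalg.norm(x_true)       # keep the target inside the unit ball
b = A @ x_true + 0.1 * rng.normal(size=n)    # noisy labels

def project_unit_ball(x):
    """Euclidean projection onto the feasible set {x : ||x||_2 <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

x = np.zeros(d)
for t in range(1, 20_001):
    i = rng.integers(n)                      # sample one example uniformly at random
    g = 2.0 * (A[i] @ x - b[i]) * A[i]       # stochastic gradient of (a_i^T x - b_i)^2
    x = project_unit_ball(x - 0.1 / np.sqrt(t) * g)  # gradient step, then project back

print("estimation error:", np.linalg.norm(x - x_true))
```

The projection step after each update is what distinguishes the constrained variant: the iterate always stays feasible, and the standard SGD guarantees carry over because Euclidean projection is non-expansive.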
Examines stochastic gradient descent in non-convex optimization, focusing on convergence rates (to stationary points rather than global minima) and the challenges this setting poses in machine learning.
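The sketch below shows the standard non-convex progress measure in action: for a smooth non-convex objective, SGD with a constant step size proportional to 1/sqrt(T) drives min_t ||∇f(x_t)||^2 toward zero at rate O(1/sqrt(T)). The specific objective and additive noise model are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20

def grad(x):
    # Gradient of f(x) = ||x||^2 / 2 + 2 * sum_i sin(x_i).
    # The Hessian diag(1 - 2 sin(x_i)) takes negative values, so f is non-convex.
    return x + 2.0 * np.cos(x)

T = 10_000
x = 3.0 * rng.normal(size=d)
best_sq_norm = np.inf
for t in range(T):
    best_sq_norm = min(best_sq_norm, float(grad(x) @ grad(x)))
    g = grad(x) + 0.1 * rng.normal(size=d)   # stochastic gradient: true gradient + noise
    x -= g / np.sqrt(T)                      # constant step size ~ 1/sqrt(T)

print("min_t ||grad f(x_t)||^2:", best_sq_norm)  # O(1/sqrt(T)) in theory
```

Note what is and is not guaranteed: the iterates approach a stationary point, which may be a saddle or a local minimum; nothing here certifies global optimality.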
Explores the trade-off between model complexity and risk, the benefits of overparametrization, and the implicit bias of optimization algorithms toward particular solutions.
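A small numerical illustration of implicit bias: on an underdetermined least-squares problem (more parameters than data points, so infinitely many interpolating solutions exist), gradient descent initialized at zero converges to the minimum-norm interpolator, i.e. the pseudoinverse solution. The problem sizes and step size below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 100                     # overparametrized: d >> n
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

x = np.zeros(d)                    # initialization at zero matters for this result
eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L for L = ||A||_2^2
for _ in range(2_000):
    x -= eta * A.T @ (A @ x - b)   # full-batch gradient step on ||Ax - b||^2 / 2

x_min_norm = np.linalg.pinv(A) @ b
print("training residual:", np.linalg.norm(A @ x - b))               # ~ 0: interpolation
print("distance to min-norm solution:", np.linalg.norm(x - x_min_norm))  # ~ 0
```

The mechanism is that every gradient lies in the row space of A, so the iterates never leave it, and the unique interpolator in that subspace is exactly the minimum-norm one: the algorithm, not an explicit regularizer, selects the solution.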
Covers optimization techniques in machine learning, focusing on convexity and the algorithms that exploit it to guarantee efficient convergence to global minima.
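To make the convex guarantee concrete, here is a sketch of plain gradient descent on a smooth, strongly convex quadratic: with step size 1/L it converges linearly to the unique global minimum. The synthetic quadratic below is an assumption used only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 30
M = rng.normal(size=(d, d))
H = M.T @ M + np.eye(d)            # positive definite Hessian: f is strongly convex
c = rng.normal(size=d)
x_star = np.linalg.solve(H, c)     # global minimizer of f(x) = x^T H x / 2 - c^T x

L = np.linalg.eigvalsh(H)[-1]      # smoothness constant = largest eigenvalue
x = np.zeros(d)
for _ in range(2_000):
    x -= (H @ x - c) / L           # gradient step with step size 1/L

print("distance to global minimum:", np.linalg.norm(x - x_star))  # shrinks geometrically
```

Convexity is doing all the work here: every stationary point of a convex function is a global minimum, so the local descent steps above suffice for a global guarantee, in contrast to the non-convex setting discussed earlier.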