This lecture covers the role of computation and the application of gradient descent to convex and nonconvex problems. Topics include subdifferentials, the subgradient method, smooth unconstrained convex minimization, maximum likelihood estimation, gradient descent methods, L-smooth and μ-strongly convex functions, least-squares estimation, the geometric interpretation of stationarity, and the convergence rate of gradient descent, with examples such as ridge regression, image classification using neural networks, and phase retrieval. The lecture also discusses the necessity of nonconvex optimization, notions of convergence, the assumptions behind the gradient method, convergence rates, and iteration complexity. It concludes with a proof of the convergence rate of gradient descent in the convex case and insights on mirror descent and Bregman divergences.
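For reference, the gradient step and the standard rates for the two function classes named above can be stated as follows; this is a textbook formulation under the usual step-size choice, not a verbatim excerpt from the lecture notes.

```latex
% Gradient descent update with step size \gamma = 1/L:
\[
  x_{k+1} = x_k - \gamma \nabla f(x_k), \qquad \gamma = \tfrac{1}{L}.
\]
% For convex, L-smooth f with minimizer x^\star:
\[
  f(x_k) - f(x^\star) \;\le\; \frac{L \,\|x_0 - x^\star\|^2}{2k},
\]
% and for \mu-strongly convex, L-smooth f (linear rate):
\[
  \|x_k - x^\star\|^2 \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)^{k} \|x_0 - x^\star\|^2 .
\]
```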
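A minimal sketch of gradient descent on the ridge regression example mentioned above, using synthetic data; the matrix `A`, vector `b`, weight `lam`, and iteration count are illustrative placeholders, and the step size 1/L uses the smoothness constant of this objective.

```python
import numpy as np

# Ridge regression: minimize f(x) = 0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2.
# f is L-smooth with L = lambda_max(A^T A) + lam and mu-strongly convex with
# mu = lambda_min(A^T A) + lam, so gradient descent converges linearly.

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))   # illustrative synthetic data (assumed)
b = rng.standard_normal(100)
lam = 0.1                            # regularisation weight (assumed)

L = np.linalg.eigvalsh(A.T @ A).max() + lam   # smoothness constant
gamma = 1.0 / L                               # standard step size 1/L

x = np.zeros(A.shape[1])
for k in range(500):
    grad = A.T @ (A @ x - b) + lam * x        # gradient of f at x
    x = x - gamma * grad                      # gradient step

# Sanity check against the closed-form ridge solution.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
print("distance to optimum:", np.linalg.norm(x - x_star))
```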