Lecture

Gradient Descent Methods

Description

This lecture covers the role of computation and the application of gradient descent to convex and nonconvex problems. Topics include subdifferentials, the subgradient method, smooth unconstrained convex minimization, maximum likelihood estimation, gradient descent methods, L-smooth and μ-strongly convex functions, least-squares estimation, the geometric interpretation of stationarity, and the convergence rate of gradient descent, with examples such as ridge regression, image classification using neural networks, and phase retrieval. The lecture also discusses the necessity of non-convex optimization, notions of convergence, the assumptions behind the gradient method, convergence rates, and iteration complexity. It concludes with a proof of the convergence rate of gradient descent in the convex case and insights on mirror descent and Bregman divergences.
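As a concrete illustration of several of the listed topics (gradient descent with a step size set by the smoothness constant, applied to a ridge-regression objective), here is a minimal NumPy sketch. The objective, step-size rule, and all names (`gradient_descent`, `lam`, the random data `A`, `b`) are illustrative assumptions, not material taken from the lecture itself.

```python
import numpy as np

# Sketch: gradient descent on the ridge-regression objective
#   f(x) = 0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2
# This quadratic is L-smooth with L = lambda_max(A^T A + lam I) and
# mu-strongly convex with mu = lambda_min(A^T A + lam I), so a constant
# step size of 1/L guarantees linear convergence to the unique minimizer.

def gradient_descent(A, b, lam=0.1, iters=500):
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A + lam * np.eye(n)       # Hessian of the quadratic objective
    L = np.linalg.eigvalsh(H).max()     # smoothness constant -> step size 1/L
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x   # gradient of f at x
        x = x - grad / L                      # gradient step with step size 1/L
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

x_gd = gradient_descent(A, b)
# The ridge problem also has a closed-form solution, useful as a sanity check.
x_star = np.linalg.solve(A.T @ A + 0.1 * np.eye(5), A.T @ b)
print(np.allclose(x_gd, x_star, atol=1e-6))
```

Because the objective is μ-strongly convex, the iterates contract toward `x_star` at a linear rate governed by the condition number L/μ, which is why a few hundred iterations suffice here.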
