Lecture

Gradient Descent Methods

Description

This lecture covers the role of computation and the application of gradient descent to convex and nonconvex problems. Topics include subdifferentials, the subgradient method, smooth unconstrained convex minimization, maximum likelihood estimation, gradient descent methods, L-smooth and μ-strongly convex functions, least-squares estimation, the geometric interpretation of stationarity, the convergence rate of gradient descent, and examples such as ridge regression, image classification with neural networks, and phase retrieval. The lecture also discusses the necessity of nonconvex optimization, notions of convergence, the assumptions underlying the gradient method, its convergence rate, and its iteration complexity. It concludes with a proof of the convergence rates of gradient descent in the convex case and insights into mirror descent and Bregman divergences.
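To make the convergence claims above concrete (the bounds below are the standard textbook guarantees, not statements taken from the lecture itself): for an L-smooth convex function f, gradient descent with the fixed step size 1/L satisfies f(x_k) − f(x⋆) ≤ L‖x_0 − x⋆‖² / (2k), and if f is additionally μ-strongly convex the iterates contract linearly, ‖x_k − x⋆‖² ≤ (1 − μ/L)^k ‖x_0 − x⋆‖².

The sketch below applies this to the ridge regression example mentioned in the description. It is a minimal illustration: the function names, the synthetic data, and the regularization parameter are our own assumptions, not material from the lecture.

```python
import numpy as np

def ridge_objective(w, A, b, lam):
    """Ridge regression loss: (1/2n)||Aw - b||^2 + (lam/2)||w||^2."""
    n = A.shape[0]
    r = A @ w - b
    return 0.5 * (r @ r) / n + 0.5 * lam * (w @ w)

def gradient_descent(A, b, lam, num_iters=500):
    """Gradient descent with the fixed step size 1/L."""
    n, d = A.shape
    # The objective is L-smooth with L = lambda_max(A^T A / n) + lam
    # and lam-strongly convex, so the step 1/L gives linear convergence
    # at rate (1 - lam/L) per iteration.
    L = np.linalg.eigvalsh(A.T @ A / n).max() + lam
    w = np.zeros(d)
    for _ in range(num_iters):
        grad = A.T @ (A @ w - b) / n + lam * w
        w = w - grad / L
    return w

# Illustrative usage on synthetic data (hypothetical, not from the lecture).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = A @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
w_hat = gradient_descent(A, b, lam=0.1)
print(ridge_objective(w_hat, A, b, lam=0.1))
```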
