Lecture

Optimality of Convergence Rates: Accelerated Gradient Descent

Description

This lecture covers the optimality of convergence rates in convex minimization, focusing on accelerated gradient descent (AGD) methods. It reviews the convergence rate of gradient descent, presents information-theoretic lower bounds, and introduces the AGD algorithm; the AGD-L variant is presented as a method that attains the optimal convergence rate. The lecture also examines the global convergence of AGD and compares gradient descent with AGD in terms of assumptions, step sizes, and convergence rates. Finally, adaptive first-order methods, the extra-gradient algorithm, and variable-metric gradient descent are discussed as approaches that improve convergence rates without requiring knowledge of the smoothness constant.
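
The lecture's own pseudocode and proofs are not reproduced on this page; as a rough illustration of the kind of method it discusses, the sketch below implements a standard Nesterov-style accelerated gradient step for an L-smooth convex objective, using the usual extrapolation-parameter update. The function `agd`, its arguments, and the least-squares test problem are illustrative choices, not material taken from the lecture.

```python
import numpy as np

def agd(grad, x0, L, iters=100):
    """Nesterov-style accelerated gradient descent for an L-smooth convex f.

    grad  -- callable returning the gradient of f at a point
    x0    -- starting point (NumPy array)
    L     -- smoothness constant (Lipschitz constant of the gradient)
    iters -- number of iterations
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                           # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # update of the extrapolation parameter
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum / extrapolation step
        x, t = x_next, t_next
    return x

# Illustrative use on least squares f(x) = 0.5 * ||A x - b||^2, gradient A^T (A x - b)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    L = np.linalg.norm(A, 2) ** 2  # smoothness constant = largest eigenvalue of A^T A
    x_hat = agd(lambda x: A.T @ (A @ x - b), np.zeros(20), L, iters=500)
    print(np.linalg.norm(A.T @ (A @ x_hat - b)))  # gradient norm should be close to zero
```

With the 1/L step size, plain gradient descent guarantees f(x_k) - f(x*) = O(L/k) for smooth convex problems, while the accelerated scheme above attains O(L/k^2), which matches the information-theoretic lower bound for first-order methods up to constants.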
