Lecture

Optimality of Convergence Rates: Accelerated Gradient Descent

Description

This lecture examines the optimality of convergence rates in convex optimization, with a focus on the accelerated gradient descent method. It covers the convergence rate of gradient descent, information-theoretic lower bounds, and the accelerated gradient descent algorithm, showing how first-order methods such as Nesterov's accelerated scheme can be designed with convergence rates that match the theoretical lower bounds. It also discusses the global convergence of accelerated gradient descent and the variable metric gradient descent algorithm, together with adaptive first-order methods, Newton's method, and the extra-gradient algorithm. The lecture concludes with remarks on the performance of optimization algorithms and on the gradient method for non-convex optimization.
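
To illustrate the kind of acceleration discussed here: for smooth convex functions, plain gradient descent attains an O(1/k) rate in function value, while Nesterov's accelerated scheme attains O(1/k^2), matching the information-theoretic lower bound for first-order methods. The following is a minimal sketch, not taken from the lecture, of a Nesterov-style accelerated gradient method; the step size 1/L and the momentum sequence are standard textbook choices, and the quadratic objective at the bottom is only a hypothetical example.

import numpy as np

def accelerated_gradient_descent(grad, x0, L, num_iters=100):
    """Nesterov-style accelerated gradient descent for an L-smooth convex f.

    grad      : function returning the gradient of f at a point
    x0        : initial iterate (NumPy array)
    L         : Lipschitz constant of the gradient (step size is 1/L)
    num_iters : number of iterations
    """
    x = x0.copy()
    y = x0.copy()          # extrapolated point
    t = 1.0                # momentum parameter t_k
    for _ in range(num_iters):
        x_next = y - (1.0 / L) * grad(y)                     # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0   # update momentum parameter
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)      # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Hypothetical example: minimize f(x) = 0.5 * x^T A x - b^T x with A positive definite.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20))
    A = A.T @ A + np.eye(20)                 # make A positive definite
    b = rng.standard_normal(20)
    L = np.linalg.eigvalsh(A).max()          # smoothness constant of the quadratic
    grad = lambda x: A @ x - b
    x_hat = accelerated_gradient_descent(grad, np.zeros(20), L, num_iters=200)
    print("distance to optimum:", np.linalg.norm(x_hat - np.linalg.solve(A, b)))

Compared with plain gradient descent, the only structural change is the extrapolation step computed from the previous two iterates, which is what yields the improved O(1/k^2) rate on smooth convex problems.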
