Lecture

Optimality of Convergence Rates: Accelerated Gradient Descent

Description

This lecture delves into the optimality of convergence rates in convex optimization, focusing on the accelerated gradient descent method. It covers the convergence rate of gradient descent, information-theoretic lower bounds, and the accelerated gradient descent algorithm. The lecture explores the design of first-order methods whose convergence rates match the theoretical lower bounds, such as Nesterov's accelerated scheme, and discusses the global convergence of accelerated gradient descent as well as the variable metric gradient descent algorithm. It also examines adaptive first-order methods, Newton's method, and the extra-gradient algorithm, and concludes with insights on the performance of optimization algorithms and on the gradient method for non-convex optimization.
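
For orientation, the description refers to Nesterov's accelerated scheme, the classic first-order method whose O(1/k^2) rate matches the information-theoretic lower bound for smooth convex problems (plain gradient descent only achieves O(1/k)). The sketch below shows one standard form of that iteration for an L-smooth convex function; the function names, the quadratic test problem, and the step-size choice 1/L are illustrative assumptions and are not taken from the lecture itself.

    import numpy as np

    def nesterov_agd(grad, x0, L, num_iters=100):
        """Minimal sketch of Nesterov's accelerated gradient method.

        grad      -- callable returning the gradient at a point (assumed oracle)
        x0        -- starting point (NumPy array)
        L         -- Lipschitz constant of the gradient (assumed known)
        num_iters -- number of iterations
        """
        x = np.asarray(x0, dtype=float).copy()
        y = x.copy()          # extrapolated point
        t = 1.0               # momentum parameter
        for _ in range(num_iters):
            x_next = y - grad(y) / L                           # gradient step at the extrapolated point
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # standard t-update
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum / extrapolation step
            x, t = x_next, t_next
        return x

    # Hypothetical usage on a toy quadratic f(x) = 0.5 x^T A x - b^T x,
    # whose minimizer solves A x = b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    L = np.linalg.eigvalsh(A).max()   # Lipschitz constant of the gradient
    x_hat = nesterov_agd(lambda x: A @ x - b, np.zeros(2), L, num_iters=200)

The key design choice, compared with plain gradient descent, is that the gradient is evaluated at the extrapolated point y rather than at the current iterate x, which is what yields the accelerated rate discussed in the lecture.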

