Lecture

Optimality of Convergence Rates: Accelerated Gradient Descent

Description

This lecture examines the optimality of convergence rates in convex optimization, focusing on the accelerated gradient descent method. It covers the convergence rate of gradient descent, information-theoretic lower bounds, and the accelerated gradient descent algorithm, showing how first-order methods such as Nesterov's accelerated scheme can be designed to match the theoretical lower bounds. It also discusses the global convergence of accelerated gradient descent and the variable-metric gradient descent algorithm, and examines adaptive first-order methods, Newton's method, and the extra-gradient algorithm. The lecture concludes with insights on the performance of optimization algorithms and the gradient method for non-convex optimization.
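For smooth convex problems, plain gradient descent attains an O(1/k) rate on function values, while the lower bound for first-order methods is of order 1/k²; Nesterov's accelerated scheme closes this gap with an O(1/k²) rate. The sketch below (Python/NumPy) illustrates one standard form of the accelerated update on a least-squares example; the step size 1/L, the momentum schedule, and the quadratic objective are illustrative assumptions, not necessarily the exact scheme presented in the lecture.

```python
import numpy as np

def accelerated_gradient_descent(grad, L, x0, iters=200):
    """Nesterov-style accelerated gradient method for an L-smooth convex objective.

    grad  -- callable returning the gradient at a point
    L     -- Lipschitz constant of the gradient (assumed known)
    x0    -- starting point
    """
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - (1.0 / L) * grad(y)                    # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0    # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)    # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Illustrative least-squares problem: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2                               # Lipschitz constant of the gradient
x_star = accelerated_gradient_descent(lambda x: A.T @ (A @ x - b), L, np.zeros(20))
```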

Related lectures
Primal-dual Optimization: Extra-Gradient Method
Explores the Extra-Gradient method for Primal-dual optimization, covering nonconvex-concave problems, convergence rates, and practical performance.
Gradient Descent Methods: Theory and Computation
Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and challenges in optimization.
Optimization Methods: Convergence and Trade-offs
Covers optimization methods, convergence guarantees, trade-offs, and variance reduction techniques in numerical optimization.
Composite Convex Minimization
Covers solution methods for composite convex minimization and explores examples like ℓ₁-regularized least squares and phase retrieval.
Optimality of Convergence Rate: Acceleration in Gradient Descent
Explores the optimality of convergence rate in gradient descent and acceleration techniques for convex and non-convex problems.
