This lecture examines the optimality of convergence rates in convex optimization, focusing on the accelerated gradient descent method. It covers the convergence rate of gradient descent, information-theoretic lower bounds, and the design of first-order methods whose convergence rates match those lower bounds, such as Nesterov's accelerated scheme. The lecture also discusses the global convergence of accelerated gradient descent, the variable metric gradient descent algorithm, adaptive first-order methods, Newton's method, and the extra-gradient algorithm, and it concludes with observations on the practical performance of these optimization algorithms and on the gradient method for non-convex optimization.
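As a point of reference for the rates discussed above: for L-smooth convex problems, plain gradient descent attains an O(1/k) rate in function value, while the information-theoretic lower bound for first-order methods is of order 1/k^2, which Nesterov's accelerated scheme matches. The following is a minimal sketch of the standard accelerated gradient iteration, assuming an L-smooth convex objective with gradient `grad_f`; the function names and the quadratic test problem are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def accelerated_gradient_descent(grad_f, x0, L, num_iters=100):
    """Sketch of Nesterov's accelerated gradient method for an L-smooth convex f.

    grad_f    : callable returning the gradient of f at a point
    x0        : starting point
    L         : Lipschitz constant of grad_f (step size 1/L)
    num_iters : number of iterations
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(num_iters):
        # Gradient step taken at the extrapolated point y
        x_next = y - (1.0 / L) * grad_f(y)
        # Standard momentum parameter update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        # Extrapolation (momentum) step combining the last two iterates
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Illustrative use on a hypothetical quadratic f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
L = np.linalg.eigvalsh(A).max()  # smoothness constant of the quadratic
x_approx = accelerated_gradient_descent(lambda x: A @ x - b, x0=np.zeros(2), L=L)
```

Compared with plain gradient descent, the only extra work per iteration is the extrapolation step, yet it is what improves the worst-case rate from O(1/k) to O(1/k^2).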