This lecture covers the optimality of convergence rates in convex minimization problems, focusing on accelerated gradient descent (AGD) methods. It discusses the convergence rate of gradient descent, information-theoretic lower bounds, and the AGD algorithm. The AGD-L algorithm is presented as a way to achieve the optimal convergence rate. The lecture also examines the global convergence of AGD and compares gradient descent with AGD in terms of assumptions, step sizes, and convergence rates. Additionally, adaptive first-order methods, the extra-gradient algorithm, and variable metric gradient descent are discussed as approaches to improving convergence rates without knowing the smoothness constant.
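For reference, a minimal sketch of the gradient descent vs. AGD comparison discussed above, assuming an L-smooth convex objective and a standard Nesterov-style momentum sequence; the function names and the least-squares example are illustrative choices, not taken from the lecture:

```python
import numpy as np

def gradient_descent(grad_f, x0, L, iters=100):
    """Plain gradient descent with step size 1/L (assumes f is L-smooth and convex)."""
    x = x0.copy()
    for _ in range(iters):
        x = x - grad_f(x) / L
    return x

def agd(grad_f, x0, L, iters=100):
    """Nesterov-style accelerated gradient descent for an L-smooth convex f.
    Attains an O(1/k^2) rate in objective error versus O(1/k) for plain gradient descent."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad_f(y) / L                        # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum parameter update
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Illustrative usage: least squares f(x) = 0.5 * ||Ax - b||^2,
# grad f(x) = A^T (A x - b), and L = largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x_gd = gradient_descent(grad_f, np.zeros(20), L, iters=200)
x_agd = agd(grad_f, np.zeros(20), L, iters=200)
```

Both methods use the same 1/L step size here; the difference is the extra extrapolation step in AGD, which is what yields the faster rate matching the information-theoretic lower bound for smooth convex problems.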