Lecture

Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent

Description

This lecture covers the optimality of convergence rates for accelerated and stochastic gradient descent methods in non-convex optimization. It motivates the need for non-convex optimization through examples such as phase retrieval (including its application to Fourier ptychography) and image classification with neural networks, and discusses the convergence rates of gradient descent algorithms together with the geometric interpretation of stationarity. Several optimization formulations are presented, including ridge regression and the formulation of binary classification problems. The lecture concludes with the performance of optimization algorithms, the convergence of stochastic gradient descent, and popular variants of stochastic gradient descent.
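As a rough illustration of the methods named above, here is a minimal NumPy sketch (not taken from the lecture) comparing plain gradient descent, Nesterov's accelerated gradient descent, and stochastic gradient descent on a ridge regression objective. All problem data, dimensions, step sizes, and iteration counts are assumptions chosen only for the demonstration; the rate comments state the standard guarantees for smooth (strongly) convex problems, which is the setting where ridge regression lives.

# Illustrative sketch: GD, Nesterov AGD, and SGD on ridge regression
# f(x) = (1/2n)||Ax - b||^2 + (lam/2)||x||^2. Data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 20, 0.1          # assumed problem size and regularizer
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad(x):
    # Full gradient of the ridge regression objective.
    return A.T @ (A @ x - b) / n + lam * x

# Smoothness constant L = largest eigenvalue of the Hessian A^T A / n + lam I.
L = np.linalg.eigvalsh(A.T @ A / n).max() + lam

def gd(x, steps):
    # Gradient descent with step 1/L: O(1/k) rate on smooth convex problems.
    for _ in range(steps):
        x = x - grad(x) / L
    return x

def agd(x, steps):
    # Nesterov's accelerated gradient descent: O(1/k^2) rate,
    # which is optimal for smooth convex first-order methods.
    y, t = x.copy(), 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                      # step at extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

def sgd(x, steps):
    # Stochastic gradient descent on one sampled row per iteration,
    # with decaying step size 1/(lam * k): O(1/k) in expectation
    # for strongly convex objectives.
    for k in range(steps):
        i = rng.integers(n)
        g = A[i] * (A[i] @ x - b[i]) + lam * x        # unbiased gradient estimate
        x = x - g / (lam * (k + 1))
    return x

x0 = np.zeros(d)
for name, sol in [("GD", gd(x0, 500)), ("AGD", agd(x0, 500)), ("SGD", sgd(x0, 5000))]:
    f = 0.5 * np.mean((A @ sol - b) ** 2) + 0.5 * lam * np.sum(sol ** 2)
    print(f"{name}: f = {f:.6f}")

Running the sketch shows all three methods reaching essentially the same objective value, with AGD needing fewer full-gradient iterations than GD and SGD trading per-iteration cost for more, noisier steps.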
