Lecture

Optimization Methods: Convergence and Trade-offs

Description

This lecture covers first-order optimization methods, in particular the Conditional Gradient Method (CGM, also known as Frank-Wolfe) and the Proximal Gradient method, with a focus on convergence guarantees and trade-offs. It presents CGM for strongly convex objectives, its convergence guarantees and the conditions for faster rates, and examples over the nuclear-norm ball such as phase retrieval. The lecture compares Proximal Gradient with Frank-Wolfe, introduces a basic constrained non-convex problem, and examines phase retrieval and matrix completion. It then discusses the role of convexity in optimization, casting phase retrieval as a convex matrix-completion problem, and the challenges of estimation and prediction. The lecture concludes with time-data trade-offs, the statistical dimension, and variance reduction techniques such as the Stochastic Variance Reduced Gradient (SVRG) method.
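To make the Frank-Wolfe/Proximal Gradient comparison concrete, here is a minimal sketch of CGM over a nuclear-norm ball applied to a matrix-completion objective. The function name, step-size rule, objective, and synthetic data below are illustrative assumptions and are not taken from the lecture material.

```python
import numpy as np

def frank_wolfe_nuclear_norm(A_obs, mask, tau, iters=200):
    """Conditional Gradient (Frank-Wolfe) sketch for matrix completion:
    minimize 0.5 * ||mask * (X - A_obs)||_F^2  s.t.  ||X||_* <= tau."""
    X = np.zeros_like(A_obs, dtype=float)        # X = 0 is feasible for any tau >= 0
    for k in range(iters):
        grad = mask * (X - A_obs)                # gradient of the smooth least-squares loss
        # Linear minimization oracle over the nuclear-norm ball:
        # argmin_{||S||_* <= tau} <grad, S> = -tau * u1 v1^T (top singular pair of grad)
        U, _, Vt = np.linalg.svd(grad, full_matrices=False)
        S = -tau * np.outer(U[:, 0], Vt[0, :])
        gamma = 2.0 / (k + 2.0)                  # classic step size; gives the O(1/k) guarantee
        X = (1.0 - gamma) * X + gamma * S        # convex combination keeps X feasible
    return X

# Illustrative usage on a synthetic low-rank completion problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank-5 ground truth
mask = (rng.random(A.shape) < 0.4).astype(float)                   # observe ~40% of entries
X_hat = frank_wolfe_nuclear_norm(mask * A, mask, tau=np.linalg.norm(A, 'nuc'))
```

The sketch highlights one trade-off discussed in the lecture: CGM only needs a rank-one linear minimization oracle (a top singular pair per iteration), whereas Proximal Gradient requires a full projection onto the nuclear-norm ball, i.e. a complete SVD with singular-value thresholding.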
