Lecture

Stochastic Gradient Descent: Optimization Techniques

Description

This lecture covers the transition from stochastic gradient descent to non-smooth optimization, touching on sparsity, compressive sensing, and atomic norms. It examines stochastic programming, synthetic least-squares problems, and the convergence of SGD for strongly convex objectives, with particular attention to how step-size selection and iterate averaging improve convergence.
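The step-size and averaging ideas can be made concrete on a small example. Below is a minimal sketch, not taken from the lecture itself: SGD on a synthetic Gaussian least-squares problem, using a decreasing step size of the kind standard for strongly convex objectives and Polyak-Ruppert averaging of the iterates. All names, constants, and the specific step-size rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: f(x) = (1/2n) * ||A x - b||^2,
# which is strongly convex when A has full column rank.
n, d = 1000, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

mu = np.linalg.eigvalsh(A.T @ A / n).min()   # strong-convexity constant
L = (A ** 2).sum(axis=1).max()               # largest per-sample smoothness

x = np.zeros(d)        # last iterate
x_avg = np.zeros(d)    # Polyak-Ruppert running average
T = 50_000
for t in range(1, T + 1):
    i = rng.integers(n)                       # draw one sample uniformly
    grad = (A[i] @ x - b[i]) * A[i]           # unbiased stochastic gradient
    gamma = 1.0 / (mu * t + L)                # decreasing O(1/t) step size
    x -= gamma * grad
    x_avg += (x - x_avg) / t                  # incremental average of iterates

print("last iterate error:    ", np.linalg.norm(x - x_true))
print("averaged iterate error:", np.linalg.norm(x_avg - x_true))
```

The design choice illustrated here is the one the lecture description highlights: for strongly convex problems, a step size decaying on the order of 1/(mu*t) yields the classical O(1/t) convergence rate, while averaging the iterates dampens the variance introduced by the stochastic gradients.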
