Lecture

Stochastic Gradient Descent: Optimization Techniques

Description

This lecture covers the transition from stochastic gradient descent to non-smooth optimization, touching on sparsity, compressive sensing, and atomic norms. It examines stochastic programming, synthetic least-squares problems, and the convergence of SGD for strongly convex objectives, and explains how step-size selection and iterate averaging improve optimization performance.
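
To make the step-size and averaging points concrete, below is a minimal sketch (not taken from the lecture material) of SGD on a synthetic least-squares problem. The decaying step-size schedule and the running (Polyak-Ruppert) average of the iterates are standard choices for strongly convex objectives; the problem sizes, noise level, iteration count, and variable names are illustrative assumptions.

```python
# Sketch: SGD on a synthetic least-squares problem with an O(1/t) step size
# and iterate averaging. All constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize f(x) = (1/2n) * ||A x - b||^2
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

mu = np.linalg.eigvalsh(A.T @ A / n).min()   # strong-convexity parameter of f
L = (A ** 2).sum(axis=1).max()               # bound on per-sample smoothness ||a_i||^2
gamma0 = 1.0 / L                             # initial step size, kept small for stability


def sgd(num_iters=50_000):
    x = np.zeros(d)
    x_avg = np.zeros(d)                      # running (Polyak-Ruppert) average of iterates
    for t in range(1, num_iters + 1):
        i = rng.integers(n)                          # sample one data point
        grad = (A[i] @ x - b[i]) * A[i]              # stochastic gradient of 0.5*(a_i^T x - b_i)^2
        step = gamma0 / (1.0 + gamma0 * mu * t)      # ~1/(mu t) asymptotically, capped early on
        x -= step * grad
        x_avg += (x - x_avg) / t                     # online average of all iterates so far
    return x, x_avg


x_last, x_avg = sgd()
obj = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
print(f"last iterate loss: {obj(x_last):.4f}, averaged iterate loss: {obj(x_avg):.4f}")
```

Under these assumptions, the averaged iterate is typically less sensitive to the noise in individual stochastic gradients than the last iterate, which is the practical motivation for the averaging techniques mentioned in the description.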

Related lectures (32)
From Stochastic Gradient Descent to Non-Smooth Optimization
Covers stochastic optimization, sparsity, and non-smooth minimization via subgradient descent.
Optimization for Machine Learning: Stochastic Gradient Descent
Explores Stochastic Gradient Descent and non-convex optimization in machine learning, covering algorithms, convergence rates, and optimization concepts.
Stochastic Gradient Descent
Covers stochastic gradient descent, convergence rates, and challenges in non-convex optimization.
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
Optimality of Convergence Rates: Accelerated Gradient Descent
Explores the optimality of convergence rates in convex optimization, focusing on accelerated gradient descent and adaptive methods.