Lecture

Convergence Analysis: Stochastic Gradient Algorithms

Description

This lecture covers the convergence analysis of stochastic gradient algorithms for smooth risks under several operational modes: updates with constant and with vanishing step-sizes, data sampling with and without replacement, and mini-batch gradient approximations. It details the conditions imposed on the risk and loss functions, establishes convergence in the mean-square-error sense, and shows how the choice of step-size sequence determines the convergence rate, with theorems and examples illustrating the rates obtained under different sequences.
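To make the operational modes concrete, the following is a minimal Python sketch of a stochastic gradient recursion on a quadratic (least-mean-squares) risk. It is not taken from the lecture: the data model, the names loss_grad and sgd, and all parameter values are illustrative assumptions. The comments reflect the standard results the description alludes to: with a constant step-size mu, the mean-square error settles in an O(mu) neighborhood of the minimizer, while a vanishing sequence mu(i) with sum_i mu(i) = infinity and sum_i mu(i)^2 < infinity drives the mean-square error to zero.

import numpy as np

# Illustrative sketch (assumed setup, not the lecture's notation):
# quadratic risk P(w) = E[(d - x^T w)^2], approximated from N pairs (x_n, d_n).
rng = np.random.default_rng(0)
N, M = 1000, 5
w_star = rng.normal(size=M)
X = rng.normal(size=(N, M))
d = X @ w_star + 0.1 * rng.normal(size=N)

def loss_grad(w, idx):
    # Mini-batch gradient of the quadratic loss over the samples in idx.
    Xb, db = X[idx], d[idx]
    return -2.0 * Xb.T @ (db - Xb @ w) / len(idx)

def sgd(mu, batch=1, replacement=True, iters=5000, vanishing=False):
    w = np.zeros(M)
    perm, pos = rng.permutation(N), 0
    for i in range(iters):
        if replacement:
            # Sampling with replacement: draw indices uniformly at random.
            idx = rng.integers(0, N, size=batch)
        else:
            # Sampling without replacement: sweep a random permutation,
            # reshuffling at the start of each epoch.
            if pos + batch > N:
                perm, pos = rng.permutation(N), 0
            idx = perm[pos:pos + batch]
            pos += batch
        # Vanishing step-size mu(i) = mu / (i + 1) satisfies the classical
        # conditions sum mu(i) = inf and sum mu(i)^2 < inf.
        step = mu / (i + 1) if vanishing else mu
        w = w - step * loss_grad(w, idx)
    return w

# Constant step-size: iterate fluctuates in an O(mu) neighborhood of w_star.
w_const = sgd(mu=0.01, batch=10, replacement=True)
# Vanishing step-size: mean-square error converges to zero.
w_vanish = sgd(mu=0.5, batch=10, replacement=False, vanishing=True)
print(np.linalg.norm(w_const - w_star), np.linalg.norm(w_vanish - w_star))

Running the sketch exhibits both regimes: the constant step-size iterate hovers near w_star at a floor set by mu, while the vanishing step-size iterate keeps improving as iterations accumulate.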

Related lectures (32)
Stochastic Optimization: Algorithms and Methods
Explores stochastic optimization algorithms and methods for convex problems with smooth and nonsmooth risks.
Adaptive Gradient Methods: Part 1
Explores adaptive gradient methods and their impact on optimization scenarios, including AdaGrad, ADAM, and RMSprop.
Recursive Least-Squares: Weighted Formulation
Covers the Recursive Least-Squares algorithm with weighted formulation for real-time data updating.
Optimization: Gradient Descent and Subgradients
Explores optimization methods like gradient descent and subgradients for training machine learning models, including advanced techniques like Adam optimization.
Boltzmann Machine
Introduces the Boltzmann Machine, covering expectation consistency, data clustering, and probability distribution functions.
