Lecture

Stochastic Gradient Descent: Optimization and Convergence Analysis

Description

This lecture covers Stochastic Gradient Descent (SGD) for minimizing non-strongly convex functions. It explains the convergence rate of the SGD iterates and how the choice of step size affects convergence. The lecture also discusses a modified Huber loss function and how to compute it, along with the optimal step size for running SGD to minimize this loss. Finally, several convergence behaviors are compared, and the differences observed between the scenarios are explained.
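
Since the description mentions running SGD with a chosen step size on a Huber-type loss, the following is a minimal sketch of such an experiment. It is not the lecture's setup: it uses the standard Huber loss (the lecture's modified variant is not specified here), a synthetic linear-regression problem, and the common decaying schedule eta_t = eta_0 / sqrt(t) for non-strongly convex objectives; all names and constants are illustrative assumptions.

```python
import numpy as np

def huber_loss_grad(w, x, y, delta=1.0):
    """Gradient of the standard Huber loss for a linear prediction x @ w.

    Quadratic for small residuals, linear for large ones. The lecture's
    "modified" Huber loss may differ; this is an assumed stand-in.
    """
    r = x @ w - y
    # d/dr Huber(r): r if |r| <= delta, else delta * sign(r)
    dr = np.where(np.abs(r) <= delta, r, delta * np.sign(r))
    return dr * x

def sgd(X, Y, step_size, n_iters=10_000, seed=0):
    """Plain SGD with step size eta_t = step_size / sqrt(t), a common
    choice for non-strongly convex objectives. Returns the averaged
    iterate, for which O(1/sqrt(T)) rates are typically stated."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    avg_w = np.zeros_like(w)
    for t in range(1, n_iters + 1):
        i = rng.integers(len(X))        # sample one data point uniformly
        eta = step_size / np.sqrt(t)    # decaying step size
        w -= eta * huber_loss_grad(w, X[i], Y[i])
        avg_w += (w - avg_w) / t        # running average of the iterates
    return avg_w

# Synthetic data to compare how different initial step sizes behave.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
Y = X @ w_true + 0.1 * rng.normal(size=1000)

for eta0 in (0.01, 0.1, 1.0):
    w_hat = sgd(X, Y, eta0)
    print(f"eta0={eta0}: distance to w_true = {np.linalg.norm(w_hat - w_true):.4f}")
```

Running the loop above for several values of eta_0 illustrates the step-size trade-off discussed in the lecture: too small a step size converges slowly, while too large a step size can stall at a higher error level.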
