Lecture

Variance Reduction in Deep Learning

Description

This lecture covers variance reduction techniques in deep learning, starting with a comparison of gradient descent and stochastic gradient descent. The instructor explains how the variance of stochastic gradients can be reduced while keeping a constant step size, and introduces mini-batch stochastic gradient descent. The lecture then presents the Stochastic Variance Reduced Gradient (SVRG) method, its convergence analysis, and a performance comparison of the different algorithms. It concludes with a discussion of variance reduction for non-convex problems and its application in deep learning.
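As a rough illustration of the SVRG method mentioned above, the following is a minimal sketch (not the lecture's own code): each epoch computes a full gradient at a snapshot point, then runs cheap inner steps that correct single-sample gradients with the snapshot. The function names and parameters here are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, w0, n, step_size=0.05, epochs=50, inner_steps=None, rng=None):
    """Sketch of Stochastic Variance Reduced Gradient (SVRG).

    grad_i(w, i) returns the gradient of the i-th component f_i at w;
    the objective is (1/n) * sum_i f_i(w).
    """
    rng = np.random.default_rng() if rng is None else rng
    m = n if inner_steps is None else inner_steps
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # shrinks as w and w_snap approach the optimum, which is
            # why a constant step size can still give convergence.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= step_size * g
    return w
```

For example, on a small consistent least-squares problem with `grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])`, the iterates converge to the exact solution because each component gradient vanishes there.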

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.