Lecture

Adaptive Optimization Methods: Theory and Applications

Description

This lecture covers stochastic adaptive first-order methods that converge without prior knowledge of the smoothness constant by exploiting information from stochastic gradients. It introduces variable metric stochastic gradient descent algorithms and adaptive gradient methods that adapt locally by setting a Hessian-like scaling matrix based on past stochastic gradient information. The lecture also discusses AdaGrad, AcceleGrad, RMSProp, and ADAM, highlighting their properties and convergence rates, and compares these adaptive algorithms in terms of optimization performance and generalization. It concludes with the implications of implicit regularization in adaptive methods and their generalization performance.
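
As a rough illustration (not part of the lecture materials), the Python sketch below implements single diagonal AdaGrad and ADAM steps and runs them on a toy quadratic; the step sizes, noise level, and objective are assumptions made for this example.

import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    # Diagonal AdaGrad: accumulate squared gradients and scale each
    # coordinate's step by the inverse square root of that accumulation.
    accum = accum + grad ** 2
    return x - lr * grad / (np.sqrt(accum) + eps), accum

def adam_step(x, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # ADAM: exponential moving averages of the gradient (m) and its square (v),
    # with bias correction for their zero initialization.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy problem (assumed for illustration): minimize f(x) = ||x||^2
# from noisy gradients 2x + noise.
rng = np.random.default_rng(0)
x_ada, accum = np.ones(5), np.zeros(5)
x_adam, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for t in range(1, 301):
    noise = 0.01 * rng.standard_normal(5)
    x_ada, accum = adagrad_step(x_ada, 2 * x_ada + noise, accum)
    x_adam, m, v = adam_step(x_adam, 2 * x_adam + noise, m, v, t)
print(x_ada, x_adam)  # both iterates move toward the minimizer at the origin

Both updates replace a single global step size with per-coordinate step sizes built from past gradient magnitudes, which is the sense in which these methods adapt without knowing the smoothness constant.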
