Lecture

Structures in Non-Convex Optimization

Description

This lecture covers scalable non-convex optimization with an emphasis on deep learning. It presents the optimization formulation of deep-learning training problems, barriers to training neural networks, and the convergence, avoidance, and speed of stochastic gradient descent. It then turns to stochastic adaptive first-order methods, variable-metric stochastic gradient descent, and adaptive gradient methods such as AdaGrad, RMSProp, and Adam, discussing the properties and convergence of AMSGrad and AcceleGrad and comparing them with traditional optimization algorithms. The lecture also examines the performance of optimization algorithms in non-convex settings, the implicit regularization of adaptive methods, and explicit regularization through e-stability, and concludes with a discussion of generalization performance and neural network architectures.
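To make the adaptive gradient methods mentioned above concrete, the following is a minimal NumPy sketch of the AdaGrad, RMSProp, and Adam update rules applied to a generic stochastic gradient. The step sizes, decay parameters, and epsilon values are illustrative defaults, not settings taken from the lecture.

```python
import numpy as np

def adagrad_step(w, g, state, lr=0.01, eps=1e-8):
    """AdaGrad: scale each coordinate by the root of the accumulated squared gradients."""
    state["G"] = state.get("G", np.zeros_like(w)) + g**2
    return w - lr * g / (np.sqrt(state["G"]) + eps)

def rmsprop_step(w, g, state, lr=0.001, beta=0.9, eps=1e-8):
    """RMSProp: replace AdaGrad's running sum with an exponential moving average."""
    state["v"] = beta * state.get("v", np.zeros_like(w)) + (1 - beta) * g**2
    return w - lr * g / (np.sqrt(state["v"]) + eps)

def adam_step(w, g, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: bias-corrected moving averages of the gradient and its elementwise square."""
    t = state["t"] = state.get("t", 0) + 1
    state["m"] = beta1 * state.get("m", np.zeros_like(w)) + (1 - beta1) * g
    state["v"] = beta2 * state.get("v", np.zeros_like(w)) + (1 - beta2) * g**2
    m_hat = state["m"] / (1 - beta1**t)
    v_hat = state["v"] / (1 - beta2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Illustrative usage: one update step on a toy quadratic loss 0.5 * ||w||^2.
w, state = np.array([1.0, -2.0]), {}
g = w  # gradient of the toy loss at w
w = adam_step(w, g, state)
```

Each of these methods is an instance of variable-metric stochastic gradient descent: the raw gradient is rescaled coordinate-wise by a statistic of past gradients, which is what distinguishes them from plain SGD with a fixed step size.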
