Lecture

Deep Learning III

Description

This lecture covers the optimization formulation of deep learning training problems, the challenges of training neural networks, and Stochastic Gradient Descent (SGD) and its variants. It discusses critical points, the strict saddle property, and the convergence of SGD to critical points, and explores the optimization landscape of overparametrized neural networks and the phenomenon of overparametrization. The lecture concludes with stochastic adaptive first-order methods, including a detailed treatment of the Variable Metric Stochastic Gradient Descent algorithm and Adaptive Gradient Methods.
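
To make the named update rules concrete, below is a minimal sketch contrasting plain SGD with an AdaGrad-style variable metric (adaptive) step on a toy least-squares problem. The objective, step size, batch size, and variable names are illustrative assumptions, not examples taken from the lecture.

```python
import numpy as np

# Minimal sketch of SGD vs. an AdaGrad-style adaptive update on a toy
# least-squares objective; the problem, step sizes, and names below are
# illustrative assumptions, not the lecture's actual examples.

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))        # toy data matrix (assumed)
b = A @ rng.normal(size=5)           # targets from a planted solution

def stochastic_grad(x, batch=10):
    """Mini-batch gradient of 0.5 * ||Ax - b||^2 averaged over the batch."""
    idx = rng.integers(0, A.shape[0], size=batch)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch

eta = 0.05

# Plain SGD: x_{t+1} = x_t - eta * g_t
x = np.zeros(5)
for _ in range(500):
    x -= eta * stochastic_grad(x)

# AdaGrad-style variable metric step: scale each coordinate by the
# inverse root of its accumulated squared gradients (diagonal preconditioner).
y = np.zeros(5)
acc = np.zeros(5)                    # running sum of squared gradients
eps = 1e-8
for _ in range(500):
    g = stochastic_grad(y)
    acc += g ** 2
    y -= eta * g / (np.sqrt(acc) + eps)

print("SGD residual:    ", np.linalg.norm(A @ x - b))
print("AdaGrad residual:", np.linalg.norm(A @ y - b))
```

Running the sketch, both iterates should approach the planted solution; the adaptive variant differs only in rescaling each coordinate by its own gradient history, which is the "variable metric" idea in its simplest diagonal form.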
