Lecture

Gradient Descent on Two-Layer ReLU Neural Networks

Description

This lecture covers the analysis of gradient descent on wide two-layer ReLU neural networks. Starting from supervised learning concepts, it turns to the gradient flow of the empirical risk and its dynamics in the infinite-width limit. The presentation explores global convergence in the regularized and unregularized cases, as well as implicit regularization. Through illustrations and numerical experiments, the lecture demonstrates implicit regularization for linear classification and for neural networks. Theoretical results on statistical efficiency and on two implicit regularizations arising in a single dynamics are discussed, leading to perspectives on mathematical models for deeper networks and on quantifying the speed of convergence.
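As a brief sketch of the standard setup this description refers to (the notation below is an assumption, not taken from the lecture itself): a two-layer ReLU network with m hidden units predicts

\[
h_m(x) = \frac{1}{m} \sum_{j=1}^{m} a_j \, \max(\langle w_j, x \rangle, 0),
\]

and gradient descent on its parameters is analyzed through the gradient flow of the empirical risk

\[
F(W, a) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(h_m(x_i), y_i\big), \qquad \frac{\mathrm{d}}{\mathrm{d}t}(W, a) = -\nabla F(W, a).
\]

In the infinite-width limit m → ∞, the hidden units are typically described by a probability measure over their parameters, and global convergence and implicit regularization are then studied for the resulting dynamics.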

This video is available exclusively on Mediaspace for a restricted audience. Please log in to Mediaspace to access it if you have the necessary permissions.
