Lecture

Gradient-Based Algorithms in High-Dimensional Learning

Description

This lecture surveys gradient-based algorithms for high-dimensional non-convex learning, with a focus on supervised learning, neural networks, and stochastic gradient descent (SGD). It discusses why the non-convex optimization problems at the core of machine learning are hard, and why deep learning theory remains largely mysterious. The lecture examines overfitting, underfitting, and the double-descent phenomenon, shedding light on how heavily overparametrized modern neural networks can still generalize. It also reviews what is currently understood about gradient descent and argues for analyzing models of data that make as few assumptions as possible. As a concrete case study, the talk presents toy models such as the spiked matrix-tensor model and uses them to probe the dynamics of deep learning, emphasizing the need to rethink generalization and the role of gradient descent.
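The description mentions gradient descent and the spiked matrix-tensor model; the Python sketch below is a minimal illustration of that toy model, not code from the lecture. It plants a signal x_star on the sphere of radius sqrt(N), generates noisy matrix and tensor observations, and runs projected (spherical) gradient descent on the joint least-squares loss, tracking the overlap with the planted signal. All parameters (N, the noise variances delta2 and delta3, the learning rate, and the step count) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative sizes and noise levels (assumed, not from the lecture) ---
N = 60          # dimension of the signal
delta2 = 0.5    # matrix-channel noise variance
delta3 = 1.0    # tensor-channel noise variance

# Planted signal x_star on the sphere of radius sqrt(N)
x_star = rng.standard_normal(N)
x_star *= np.sqrt(N) / np.linalg.norm(x_star)

# Noisy observations: a spiked matrix Y and a spiked 3-tensor T
Y = np.outer(x_star, x_star) / np.sqrt(N) \
    + np.sqrt(delta2) * rng.standard_normal((N, N))
Y = (Y + Y.T) / 2  # symmetrize the matrix channel
T = (np.sqrt(2) / N) * np.einsum('i,j,k->ijk', x_star, x_star, x_star) \
    + np.sqrt(delta3) * rng.standard_normal((N, N, N))
# (A faithful model would symmetrize the tensor noise as well; skipped here,
# so the tensor gradient below is exact only for the symmetrized model.)

def grad(x):
    """Gradient of the least-squares loss over both observation channels."""
    # Matrix channel: (1/(2*delta2)) * ||Y - x x^T / sqrt(N)||_F^2
    r2 = Y - np.outer(x, x) / np.sqrt(N)
    g2 = -(2.0 / (delta2 * np.sqrt(N))) * (r2 @ x)
    # Tensor channel: (1/(2*delta3)) * ||T - (sqrt(2)/N) x⊗x⊗x||^2
    r3 = T - (np.sqrt(2) / N) * np.einsum('i,j,k->ijk', x, x, x)
    g3 = -(3.0 * np.sqrt(2) / (delta3 * N)) * np.einsum('ijk,j,k->i', r3, x, x)
    return g2 + g3

# Spherical gradient descent: project the gradient onto the tangent
# space of the sphere |x|^2 = N, take a step, then renormalize.
x = rng.standard_normal(N)
x *= np.sqrt(N) / np.linalg.norm(x)
lr = 0.01
for t in range(500):
    g = grad(x)
    g -= (g @ x / N) * x                 # remove the radial component
    x -= lr * g
    x *= np.sqrt(N) / np.linalg.norm(x)  # project back onto the sphere
    if t % 100 == 0:
        overlap = abs(x @ x_star) / N    # 1 = perfect recovery, ~0 = uncorrelated
        print(f"step {t:4d}  overlap {overlap:.3f}")

Tracking the overlap along the trajectory is the kind of diagnostic such toy models make possible: because the signal is planted, one can see exactly when and whether gradient descent escapes the uninformative region and correlates with x_star.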
