Lecture

Learning Sparse Features: Overfitting in Neural Networks

Description

This lecture explores how learning sparse features can lead to overfitting in neural networks. Although theory would suggest that over-parameterized networks should overfit, empirical evidence shows that generalization is possible because the networks learn meaningful features. The presentation compares feature learning with lazy training in terms of how the generalization error scales, and discusses the smoothness of image datasets.
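
To make the two training regimes concrete, here is a minimal sketch (not code from the lecture; the architecture, target function, and hyper-parameters are illustrative choices) of the standard alpha-scaling trick that interpolates between lazy training and feature learning in a two-layer network: the centered output is multiplied by alpha and the learning rate rescaled by 1/alpha^2, so that for large alpha the weights stay near their initialization (lazy, kernel-like regime), while for alpha = 1 the hidden features move appreciably.

```python
"""
Illustrative sketch of lazy training vs. feature learning via alpha-scaling:

    f_alpha(x) = alpha * ( g(x; w) - g(x; w_0) )

Large alpha  -> weights barely move (lazy / kernel regime).
alpha = 1    -> hidden-layer weights move, i.e. features are learned.
All quantities below are arbitrary demo choices, not the lecture's setup.
"""
import torch

torch.manual_seed(0)

def make_net(width=512):
    # Simple two-layer ReLU network for 1D regression.
    return torch.nn.Sequential(
        torch.nn.Linear(1, width),
        torch.nn.ReLU(),
        torch.nn.Linear(width, 1),
    )

def train(alpha, steps=2000, lr=1e-2):
    # Toy 1D regression target (a stand-in for the random functions
    # considered in this line of work).
    x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
    y = torch.sin(3.0 * x)

    net = make_net()
    net0 = make_net()
    net0.load_state_dict(net.state_dict())      # frozen copy at initialization
    for p in net0.parameters():
        p.requires_grad_(False)

    w_init = net[0].weight.detach().clone()     # first-layer weights at init
    # Rescaling the learning rate by 1/alpha^2 keeps the function-space
    # dynamics comparable across values of alpha.
    opt = torch.optim.SGD(net.parameters(), lr=lr / alpha**2)

    for _ in range(steps):
        opt.zero_grad()
        pred = alpha * (net(x) - net0(x))       # centered, alpha-scaled output
        loss = torch.mean((pred - y) ** 2)
        loss.backward()
        opt.step()

    # Relative movement of the hidden-layer weights: small in the lazy
    # regime, order one when features are learned.
    drift = ((net[0].weight - w_init).norm() / w_init.norm()).item()
    return loss.item(), drift

for alpha in (1.0, 100.0):
    mse, drift = train(alpha)
    print(f"alpha={alpha:6.1f}  train_mse={mse:.4f}  relative_weight_drift={drift:.4f}")
```

Running the sketch, the large-alpha run should show a much smaller relative weight drift than the alpha = 1 run, which is the qualitative distinction between lazy training and feature learning that the lecture's comparison of generalization-error scaling builds on.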
