This lecture discusses the provable benefits of overparameterization in model compression, focusing on the efficiency of deep neural networks and the process of model pruning. It covers the motivation for efficient deep nets, the principles of model pruning, empirical investigations on CIFAR-10, the relationship between overparameterization and double descent, and the theoretical setup for model compression. The main contribution is a distributional characterization of the trained model, which enables a precise analysis of model compression and demonstrates the benefits of overparameterization. The lecture also works through examples of linear models and random feature pruning, emphasizing the importance of retraining for recovering performance after pruning.
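To make the pruning-and-retraining idea concrete, here is a minimal sketch (not taken from the lecture) of magnitude pruning in a random-features regression model: an overparameterized model is fit, the smallest coefficients are pruned away, and the surviving features are optionally refit. All names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples, d input dims, simple linear teacher (assumed setup).
n, d, k, s = 200, 10, 400, 40          # k random features, pruned down to s
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

X_test = rng.standard_normal((1000, d))
y_test = X_test @ beta_true

# Random ReLU features phi(x) = relu(W x), with W drawn once and kept fixed.
W = rng.standard_normal((k, d)) / np.sqrt(d)

def features(X, W):
    return np.maximum(X @ W.T, 0.0)

Phi = features(X, W)

# Fit the overparameterized model: min-norm least squares (k > n).
theta = np.linalg.pinv(Phi) @ y

# Magnitude pruning: keep the s features with the largest |theta|.
keep = np.argsort(np.abs(theta))[-s:]

# (a) Prune only: zero out the dropped coefficients.
theta_pruned = np.zeros_like(theta)
theta_pruned[keep] = theta[keep]

# (b) Prune + retrain: refit least squares on the surviving features.
theta_retrained = np.zeros_like(theta)
theta_retrained[keep] = np.linalg.pinv(Phi[:, keep]) @ y

Phi_test = features(X_test, W)
for name, t in [("pruned only", theta_pruned),
                ("pruned + retrained", theta_retrained)]:
    err = np.mean((Phi_test @ t - y_test) ** 2)
    print(f"{name:>20s}: test MSE = {err:.4f}")
```

Running this typically shows the retrained model achieving lower test error than the merely pruned one, which is the practical point the lecture makes about retraining after pruning.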