This lecture on model compression in modern NLP covers techniques such as pruning, quantization, weight factorization, weight sharing, knowledge distillation, and sub-quadratic transformers. It discusses the challenges posed by large language models, the benefits of compression, and its impact on memory footprint and inference time. The lecture also explores knowledge distillation, in which a student model is trained on the teacher's soft labels, and the role of attention mechanisms in transformer models.
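The soft-label idea behind knowledge distillation can be illustrated in a few lines. The sketch below is a minimal, hypothetical PyTorch example, not the lecture's own code: the function and parameter names (distillation_loss, temperature, alpha) are assumptions. A student is trained against the teacher's temperature-softened output distribution in addition to the usual hard labels.

```python
# Minimal sketch of soft-label knowledge distillation, assuming a PyTorch
# setup with a pre-trained teacher and a smaller student. All names here
# are illustrative, not taken from the lecture.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Combine a soft-label term (KL divergence to the teacher's softened
    distribution) with the usual hard-label cross-entropy."""
    # Soften both distributions with the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student, rescaled by T^2 so its
    # gradient magnitude stays comparable to the hard-label term.
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Example usage with random logits: a batch of 4 examples, 10 classes.
if __name__ == "__main__":
    student_logits = torch.randn(4, 10, requires_grad=True)
    teacher_logits = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())
```

Because the teacher's soft labels carry information about the relative likelihood of all classes rather than a single correct answer, the student can reach comparable accuracy with far fewer parameters, which is the motivation for compressing large models this way.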