Lecture

Model Compression: Techniques for Efficient NLP Models

Description

This lecture on model compression in modern NLP covers pruning, quantization, weight factorization, weight sharing, knowledge distillation, and sub-quadratic transformers. It discusses why large language models are challenging to deploy, the benefits of compression, and its impact on memory footprint and inference time. It also examines knowledge distillation in more depth, showing how a student model can be trained on a teacher's soft labels, and revisits the role of attention mechanisms in transformer models.
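
As a concrete illustration of one of these techniques, the sketch below shows a minimal knowledge-distillation loss in PyTorch: the student is trained on a temperature-softened version of the teacher's output distribution (the soft labels) blended with the usual cross-entropy on hard labels. The temperature `T` and mixing weight `alpha` here are illustrative defaults, not values taken from the lecture.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of a soft-label (KL) term and a hard-label (cross-entropy) term.

    T     -- temperature that softens both distributions (assumed value)
    alpha -- weight on the distillation term (assumed value)
    """
    # Soft targets: the teacher's probabilities at temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence, scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures (the standard correction from Hinton et al., 2015).
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the gold labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

Softening with a temperature above 1 exposes the teacher's relative confidence across wrong classes (the "dark knowledge"), which gives the student a richer training signal than one-hot labels alone.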
