This lecture on model compression in modern NLP covers techniques including pruning, quantization, weight factorization, weight sharing, knowledge distillation, and sub-quadratic transformers. It discusses the challenges posed by large language models, the benefits of compressing them, and the impact of compression on memory footprint and inference time. The lecture also examines knowledge distillation in more depth, showing how student models are trained on a teacher's soft labels, and discusses the role of attention mechanisms in transformer models.
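To make the soft-label idea concrete, here is a minimal sketch of a knowledge-distillation loss, assuming a PyTorch setup; the function name `distillation_loss` and the `temperature`/`alpha` values are illustrative choices, not the lecture's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label KL term with the usual cross-entropy on hard labels."""
    # Soften the teacher's distribution and the student's log-distribution
    # with the temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-class output space.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)          # teacher outputs, no gradients needed
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The temperature spreads probability mass over non-target classes so the student can learn from the teacher's relative confidences, and `alpha` trades off imitation of the teacher against fitting the hard labels.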