This lecture discusses model compression techniques essential for deploying large language models in production settings. It begins with the motivation for compression, highlighting the exponential growth in model parameter counts and the challenges of serving large models in practical applications. The instructor introduces several compression methods, including pruning, quantization, weight factorization, knowledge distillation, and weight sharing. Each method is explained in detail, with emphasis on its impact on model performance and inference time. The lecture also covers the trade-offs between structured and unstructured pruning and the benefits of training a large model first and only then applying compression. Case studies, such as applying pruning to sentiment analysis and using knowledge distillation to produce smaller, efficient models, illustrate the concepts. The discussion concludes with an overview of sub-quadratic transformers, which address the quadratic cost of standard attention on long sequences, thereby improving efficiency and performance in real-world applications.
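
To make the structured-versus-unstructured distinction concrete, the sketch below prunes a single weight matrix by magnitude in both styles. This is a minimal illustration under assumed settings (toy matrix shape, 50% sparsity, and helper names invented here), not the code used in the lecture.

```python
# Illustrative sketch (not the lecture's exact code): magnitude-based pruning
# of one weight matrix, in unstructured and structured variants.
import numpy as np

def unstructured_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude individual weights (unstructured pruning).

    The tensor keeps its shape, so real speedups require sparse-aware kernels.
    """
    k = int(weights.size * sparsity)                     # number of weights to remove
    threshold = np.sort(np.abs(weights), axis=None)[k]   # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask

def structured_prune_rows(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Drop entire output rows with the lowest L2 norm (structured pruning).

    The matrix actually shrinks, so dense hardware sees the speedup directly.
    """
    n_remove = int(weights.shape[0] * sparsity)
    row_norms = np.linalg.norm(weights, axis=1)
    keep = np.argsort(row_norms)[n_remove:]              # indices of rows to keep
    return weights[np.sort(keep)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 16))                         # toy weight matrix
    print(unstructured_prune(W, 0.5).shape)              # (8, 16), half the entries zeroed
    print(structured_prune_rows(W, 0.5).shape)           # (4, 16), half the rows removed
```

The contrast reflects the point made in the lecture: unstructured pruning removes individual weights and leaves the tensor shape unchanged, while structured pruning removes whole rows (or neurons, heads, or layers), which is what translates into wall-clock inference savings on standard hardware.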