This lecture discusses subquadratic attention mechanisms and state space models, focusing on their theoretical foundations and practical implementations. The instructor begins by explaining attention matrices and their polynomial variants, emphasizing the importance of keeping the variance of these approximations low. The discussion then transitions to locality-sensitive hashing and its role in efficiently approximating attention. The instructor introduces state space models, detailing how they evolve an input sequence in nearly linear time, which is crucial for handling long sequences. The lecture also covers the mathematical underpinnings of convolutions and Fourier transforms, illustrating how they are used to design efficient algorithms. The instructor highlights the role of learnable parameters in these models and contrasts them with standard transformer architectures. Throughout the lecture, theoretical guarantees and experimental design choices are examined, giving insight into the performance and optimization of these models. The session concludes with a discussion of the implications of these techniques for future research and applications in machine learning.
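
The mention of locality-sensitive hashing for approximating attention can be made concrete with a small sketch. The NumPy code below is not from the lecture; the function name `lsh_bucket_attention` and its parameters are illustrative, and the scheme is a deliberately simplified, single-round version of the idea. It hashes queries and keys with random hyperplanes and lets each query attend only to keys that land in the same bucket, which is the basic mechanism behind LSH-based subquadratic attention:

```python
import numpy as np

def lsh_bucket_attention(Q, K, V, n_hashes=4, seed=0):
    """Approximate softmax attention by letting each query attend only to keys
    that fall in the same random-hyperplane LSH bucket (simplified sketch)."""
    rng = np.random.default_rng(seed)
    L, d = Q.shape
    # Random hyperplanes: a bucket id is the bit pattern of projection signs.
    planes = rng.standard_normal((d, n_hashes))
    q_buckets = (Q @ planes > 0).astype(int) @ (1 << np.arange(n_hashes))
    k_buckets = (K @ planes > 0).astype(int) @ (1 << np.arange(n_hashes))

    out = np.zeros_like(V)
    for i in range(L):
        same = k_buckets == q_buckets[i]        # keys colliding with query i
        if not same.any():
            continue                            # no collisions: row stays zero
        scores = Q[i] @ K[same].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())  # softmax over the bucket only
        weights /= weights.sum()
        out[i] = weights @ V[same]
    return out

# Tiny example with random queries, keys, and values (shapes chosen arbitrarily).
rng = np.random.default_rng(1)
Q = rng.standard_normal((32, 8))
K = rng.standard_normal((32, 8))
V = rng.standard_normal((32, 8))
print(lsh_bucket_attention(Q, K, V).shape)  # (32, 8)
```

Because nearby vectors collide in the same bucket with higher probability, each query computes softmax weights over only a small subset of keys, which is where the subquadratic savings come from.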
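The claim that state space models process a length-L input in nearly linear time rests on the fact that a linear SSM unrolls into a convolution, which the FFT evaluates in O(L log L). The sketch below is again illustrative rather than the lecture's code; the matrices A, B, C, the helper names, and all values are placeholders. It materializes the kernel K_m = C A^m B, computes the output via FFT, and checks the result against the step-by-step recurrence:

```python
import numpy as np

def ssm_kernel(A, B, C, L):
    """Materialize the convolution kernel K_m = C A^m B for m = 0..L-1.

    The state space model x_k = A x_{k-1} + B u_k, y_k = C x_k (with x_{-1} = 0)
    applied to a length-L input u equals the causal convolution
    y_k = sum_{m<=k} K_m u_{k-m}.
    """
    K = np.empty(L)
    Am = np.eye(A.shape[0])            # A^0
    for m in range(L):
        K[m] = (C @ Am @ B).item()     # scalar because input/output are 1-D here
        Am = Am @ A
    return K

def causal_conv_fft(K, u):
    """Causal convolution of K and u via the FFT in O(L log L)."""
    L = len(u)
    n = 2 * L                          # zero-pad so circular = linear convolution
    y = np.fft.irfft(np.fft.rfft(K, n) * np.fft.rfft(u, n), n)
    return y[:L]

# Arbitrary 2-dimensional state matrices, purely for illustration.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
L = 16
u = np.random.default_rng(0).standard_normal(L)

y_fft = causal_conv_fft(ssm_kernel(A, B, C, L), u)

# Sanity check: running the recurrence step by step gives the same output.
x = np.zeros((2, 1))
y_rec = np.empty(L)
for k in range(L):
    x = A @ x + B * u[k]
    y_rec[k] = (C @ x).item()
assert np.allclose(y_fft, y_rec)
```

In trained models of this kind, A, B, and C are the learnable parameters, which is the contrast with attention-based transformer layers that the summary alludes to.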