This lecture examines how deep learning works in practice, exploring the benefits of deeper networks through experiments on ImageNet. It discusses why over-parameterized deep networks can still generalize, emphasizing the role of minimum-norm solutions. The lecture also covers interpolation points, transfer learning with CNNs, and a return to unsupervised learning with auto-encoders, GANs, and diffusion models.
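To make the minimum-norm point concrete, here is a minimal sketch (an illustration, not code from the lecture) of the classical linear case: for over-parameterized least squares initialized at zero, gradient descent converges to the minimum-norm solution that interpolates the data, matching the pseudoinverse solution. All names and values below are illustrative.

```python
# Minimal sketch (assumption: illustrative example, not from the lecture):
# gradient descent on over-parameterized linear regression, started at zero,
# converges to the minimum-norm interpolating solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # fewer samples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                      # start at zero, i.e. in the row space of X
lr = 1e-2
for _ in range(20_000):
    grad = X.T @ (X @ w - y) / n     # gradient of 0.5 * mean squared error
    w -= lr * grad

w_min_norm = np.linalg.pinv(X) @ y   # minimum-norm interpolating solution

print("training residual:", np.linalg.norm(X @ w - y))                   # ~0 (interpolation)
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0
```

The analogy the lecture draws on is that implicit regularization of this kind is one proposed explanation for why heavily over-parameterized networks generalize despite fitting the training data exactly.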