This lecture covers Variational Auto-Encoders (VAEs) and the Nonparametric Variational Information Bottleneck (NVIB). It explains the Bayesian approach to auto-encoders, the reparameterization trick, and the taxonomy of generative models. The lecture examines why the data likelihood is intractable in VAEs, the concept of variational inference, and the use of attention-based latent spaces in Transformers. It also discusses the importance of inducing compressed representations of text data and the benefits of NVIB in producing smooth representation spaces. The presentation concludes with insights into why Transformers are effective models for language processing.
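
The reparameterization trick and the variational objective mentioned above can be made concrete with a minimal PyTorch sketch. This is an illustrative example, not code from the lecture; the names and shapes (mu, log_var, recon_x) are assumptions, and a Gaussian encoder with a standard-normal prior is assumed.

    import torch
    import torch.nn.functional as F

    def reparameterize(mu, log_var):
        # z = mu + sigma * eps with eps ~ N(0, I); the sampling noise is
        # isolated in eps, so gradients can flow through mu and log_var.
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + eps * std

    def neg_elbo(recon_x, x, mu, log_var):
        # Negative ELBO = reconstruction error + KL(q(z|x) || p(z)),
        # with p(z) = N(0, I) and q(z|x) = N(mu, diag(exp(log_var))).
        recon = F.mse_loss(recon_x, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl

Because the data likelihood p(x) is intractable, training instead maximizes this evidence lower bound, i.e. minimizes the loss sketched above.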