Lecture

Variational Auto-Encoders and NVIB

Description

This lecture covers Variational Auto-Encoders (VAEs) and the Nonparametric Variational Information Bottleneck (NVIB). It explains the Bayesian view of auto-encoders, the reparameterization trick, and the taxonomy of generative models. The lecture examines why the data likelihood is intractable in VAEs, how variational inference addresses this, and how Transformers can use attention-based latent spaces (see the sketches below). It also discusses why inducing compressed representations of text matters and how NVIB yields smoother representation spaces. The presentation concludes with insights into why Transformers are effective models for language processing.
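As a sketch of the intractability point above: the marginal likelihood of a VAE requires integrating over all latent codes, which is intractable in general, so variational inference instead maximizes a lower bound (the ELBO). This is the standard formulation; it is not copied from the lecture slides.

```latex
% Marginal likelihood: the integral over z is intractable in general
\log p_\theta(x) = \log \int p_\theta(x \mid z)\, p(z)\, dz
% Variational inference maximizes the evidence lower bound (ELBO) instead
\log p_\theta(x) \ge
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\middle\|\,p(z)\right)
```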
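And a minimal PyTorch sketch of the reparameterization trick for a Gaussian encoder. All names and sizes here (VAEEncoder, input_dim, latent_dim) are illustrative assumptions, not taken from the lecture.

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Minimal Gaussian encoder illustrating the reparameterization trick."""

    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.mu = nn.Linear(input_dim, latent_dim)       # predicts the mean
        self.log_var = nn.Linear(input_dim, latent_dim)  # predicts log-variance

    def forward(self, x: torch.Tensor):
        mu, log_var = self.mu(x), self.log_var(x)
        # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
        # Drawing the noise outside the network keeps z differentiable
        # w.r.t. mu and log_var, so gradients flow through the sampling step.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps
        return z, mu, log_var

encoder = VAEEncoder()
z, mu, log_var = encoder(torch.randn(8, 784))  # batch of 8 dummy inputs
```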
