Lecture

Deep Generative Models: Part 2

Description

This lecture covers deep generative models, including mixtures of multinomials and latent Dirichlet allocation (LDA) for generating new documents, principal component analysis (PCA) for dimensionality reduction, deep autoencoders with nonlinear activation functions, convolutional autoencoders for image generation, and training with stochastic gradient descent. It also discusses using autoencoders as generative models, the need to define a distribution over the latent variables, and a visual interpretation of variational autoencoders (VAEs). The lecture concludes with a recap of generative adversarial networks (GANs), conditional GANs, and the limitations of bag-of-words models.
