Lecture

Generative Models: GANs

Description

This lecture covers Generative Adversarial Networks (GANs), which model probability distributions over random variables. A GAN consists of a generator and a discriminator playing a two-player game: the generator aims to produce realistic samples, while the discriminator tries to distinguish real samples from generated ones. The lecture formulates GAN training as a minimax game and introduces the concepts of Nash equilibrium and differential Nash equilibrium. It also discusses the drawbacks of the Jensen-Shannon divergence underlying the original GAN objective and presents the Wasserstein distance as an alternative. Additionally, it explores Conditional GANs (CGANs), which generate samples conditioned on additional information such as class labels. The lecture concludes with a discussion of diffusion models, an alternative to GANs that progressively add noise to input data and learn to reverse this noising process in order to generate samples.
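For reference, the two-player game can be written as the standard minimax objective of Goodfellow et al.; this formulation is given here for illustration and is not quoted from the lecture slides:

\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

With an optimal discriminator, this objective reduces (up to constants) to the Jensen-Shannon divergence between p_data and the generator distribution p_G. The Wasserstein alternative mentioned above replaces it with

W(p_{\mathrm{data}}, p_G) = \inf_{\gamma \in \Pi(p_{\mathrm{data}}, p_G)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big],

which in practice is estimated through its Kantorovich-Rubinstein dual using a 1-Lipschitz critic.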
