Summary
A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent approach to generative AI. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.

The core idea of a GAN is "indirect" training through the discriminator, another neural network that judges how "realistic" an input seems and is itself updated dynamically. The generator is therefore not trained to minimize the distance to a specific image, but to fool the discriminator. This enables the model to learn in an unsupervised manner. GANs are analogous to mimicry in evolutionary biology, with an evolutionary arms race between the two networks.

The original GAN is defined as the following game. Each probability space (Ω, μ_ref) defines a GAN game. There are two players: the generator and the discriminator. The generator's strategy set is P(Ω), the set of all probability measures μ_G on Ω. The discriminator's strategy set is the set of Markov kernels μ_D : Ω → P([0, 1]), where P([0, 1]) is the set of probability measures on [0, 1]. The GAN game is a zero-sum game with objective function

L(μ_G, μ_D) = E_{x∼μ_ref, y∼μ_D(x)}[ln y] + E_{x∼μ_G, y∼μ_D(x)}[ln(1 − y)].

The generator aims to minimize the objective, and the discriminator aims to maximize it. The generator's task is to approach μ_G ≈ μ_ref, that is, to match its own output distribution as closely as possible to the reference distribution.
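On a finite sample space the expectations in the objective become sums, which makes the game easy to evaluate directly. The sketch below (plain Python; the helper names `gan_value` and `optimal_discriminator` are illustrative, not from any library) computes the objective and the discriminator's best response D*(x) = μ_ref(x) / (μ_ref(x) + μ_G(x)); when the generator exactly matches the reference distribution, the optimal discriminator outputs 1/2 everywhere and the value of the game is −2 ln 2.

```python
import math

def gan_value(p_ref, p_gen, disc):
    # L(mu_G, mu_D) = E_{x~mu_ref}[ln D(x)] + E_{x~mu_G}[ln(1 - D(x))],
    # written as finite sums over a discrete sample space.
    real_term = sum(p_ref[x] * math.log(disc[x]) for x in p_ref)
    fake_term = sum(p_gen[x] * math.log(1.0 - disc[x]) for x in p_gen)
    return real_term + fake_term

def optimal_discriminator(p_ref, p_gen):
    # The discriminator's best response: D*(x) = p_ref(x) / (p_ref(x) + p_gen(x)).
    return {x: p_ref[x] / (p_ref[x] + p_gen[x]) for x in p_ref}

# Generator that has matched the reference distribution exactly:
p_ref = {"a": 0.5, "b": 0.5}
p_gen = {"a": 0.5, "b": 0.5}
d_star = optimal_discriminator(p_ref, p_gen)   # D*(x) = 1/2 everywhere
value_at_eq = gan_value(p_ref, p_gen, d_star)  # equals -2 * ln(2)

# A mismatched generator lets the best-response discriminator achieve a higher value:
p_bad = {"a": 0.9, "b": 0.1}
value_off_eq = gan_value(p_ref, p_bad, optimal_discriminator(p_ref, p_bad))
```

With the best-response discriminator plugged in, the value equals 2·JSD(μ_ref ‖ μ_G) − 2 ln 2 (Jensen–Shannon divergence), which is why the generator's unique optimum is exactly μ_G = μ_ref.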
Related courses (31)
DH-406: Machine learning for DH
This course aims to introduce the basic principles of machine learning in the context of the digital humanities. We will cover both supervised and unsupervised learning techniques, and study and implement ...
PHYS-467: Machine learning for physicists
Machine learning and data analysis are becoming increasingly central in sciences including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced ...
CIVIL-459: Deep learning for autonomous vehicles
Deep Learning (DL) is a subset of machine learning reshaping the future of transportation and mobility. In this class, we will show how DL can be used to teach autonomous vehicles to detect objects, ...
Related lectures (134)
Deep Learning: Convolutional Networks
Explores convolutional neural networks, backpropagation, and stochastic gradient descent in deep learning.
Generative Models: Trajectory Forecasting
Explores generative models for trajectory forecasting in autonomous vehicles, including discriminative vs generative models, VAEs, GANs, and case studies.
Learning Agents: Exploration-Exploitation Tradeoff
Explores the exploration-exploitation tradeoff in learning unknown effects of actions using multi-armed bandits and Q-learning.
Related publications (533)

Text-to-Microstructure Generation Using Generative Deep Learning

Jamie Paik, Xiaoyang Zheng

Designing novel materials is greatly dependent on understanding the design principles, physical mechanisms, and modeling methods of material microstructures, requiring experienced designers with expertise and several rounds of trial and error. Although rec ...
Wiley-VCH Verlag GmbH, 2024

Fashioning Creative Expertise with Generative AI: Graphical Interfaces for GAN-Based Design Space Exploration Better Support Ideation Than Text Prompts for Diffusion Models

Pierre Dillenbourg, Richard Lee Davis, Kevin Gonyop Kim, Thiemo Wambsganss, Wei Jiang

This paper investigates the potential impact of deep generative models on the work of creative professionals. We argue that current generative modeling tools lack critical features that would make them useful creativity support tools, and introduce our own ...
2024

Robust NAS under adversarial training: benchmark, theory, and beyond

Volkan Cevher, Grigorios Chrysos, Fanghui Liu, Yongtao Wu

Recent developments in neural architecture search (NAS) emphasize the significance of considering robust architectures against malicious data. However, there is a notable absence of benchmark evaluations and theoretical guarantees for searching these robus ...
2024
Related concepts (13)
Fake news
Fake news is false or misleading information presented as news. Fake news often has the aim of damaging the reputation of a person or entity, or making money through advertising revenue. Although false news has always been spread throughout history, the term "fake news" was first used in the 1890s when sensational reports in newspapers were common. Nevertheless, the term does not have a fixed definition and has been applied broadly to any type of false information.
Deepfake
Deepfakes (portmanteau of "deep learning" and "fake") are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another. Deepfakes are the manipulation of facial appearance through deep generative methods. While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive.
Variational autoencoder
In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. Variational autoencoders are often associated with the autoencoder model because of their architectural affinity, but they differ significantly in goal and mathematical formulation. Variational autoencoders are probabilistic generative models that require neural networks as only a part of their overall structure.