This lecture explores the convergence properties of Langevin Monte Carlo (LMC) algorithms, focusing on the interplay between the tail growth and smoothness of the target distribution. Topics include sampling from Gibbs measures, the passage from the discrete algorithm to the continuous Langevin diffusion, and the convergence analysis of LMC under different tail-growth rates. The lecture also covers theoretical guarantees for LMC, the role of degenerately convex potentials, and the modified log-Sobolev inequality. The main theorem establishes conditions under which LMC achieves fast convergence rates for a wide class of potentials, including non-convex and non-smooth ones.
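As a concrete illustration of the algorithm the lecture analyzes, the sketch below implements the unadjusted LMC update for sampling from a Gibbs measure π(x) ∝ exp(−U(x)). The quadratic potential U(x) = ‖x‖²/2 (a standard Gaussian target) and all step-size choices are illustrative assumptions, not the potential class studied in the lecture.

```python
import numpy as np

def grad_U(x):
    # Gradient of the assumed potential U(x) = ||x||^2 / 2,
    # so the Gibbs measure pi ∝ exp(-U) is the standard Gaussian N(0, 1).
    return x

def lmc(n_steps=50_000, step=0.01, seed=0):
    """Unadjusted Langevin Monte Carlo: Euler-Maruyama discretisation
    of the Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(1)
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # LMC update: x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * xi_k
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(1)
        samples[k] = x[0]
    return samples

samples = lmc()
# For this Gaussian target the empirical mean and variance should be
# close to 0 and 1 (up to discretisation bias and Monte Carlo error).
print(samples.mean(), samples.var())
```

With a small step size the chain's stationary distribution is close to, but not exactly, the target; the lecture's convergence analysis quantifies this bias and the mixing rate under much weaker assumptions on U than the strong convexity holding here.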