This lecture introduces normalizing flows applied to inverse problems, focusing on learning from few data points and the trade-off between expressiveness and robustness. The instructor discusses the use of Markov chains to improve sampling and reconstruction, showcasing applications of stochastic layers and variational autoencoders. The talk then turns to the challenges posed by multimodal distributions, which motivate stochastic normalizing flows. Additionally, the lecture explores continuous Markov chains and the implementation of Wasserstein gradient flows for high-dimensional problems. The presentation concludes with insights on using sliced Wasserstein distances for efficient computation and the practical implications of patch-based regularization in generative modeling.
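The efficiency of the sliced Wasserstein distance mentioned above comes from reducing high-dimensional optimal transport to many one-dimensional problems, each solvable by sorting. The sketch below is a generic Monte Carlo estimator of this kind, not code from the lecture; the function name, the number of projections, and the equal-sample-size assumption are choices made here for illustration.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance
    between two equal-size point clouds X, Y of shape (n, d).

    For each random direction, the 1D optimal transport between the
    projected empirical measures is obtained by sorting, so each
    projection costs only O(n log n).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        # Uniform random direction on the unit sphere in R^d.
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # Project both clouds onto the line and sort: in 1D, matching
        # sorted samples is the optimal coupling for equal weights.
        px = np.sort(X @ theta)
        py = np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)
```

For n points in d dimensions this costs O(n_proj · n log n), versus the much heavier linear-programming solve required by the full Wasserstein distance, which is why sliced variants scale to the high-dimensional settings discussed in the talk.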