This lecture covers the application of transformers to vision tasks, focusing on Vision Transformers (ViTs) and their adaptation to dense prediction. It explores the challenges transformers face in vision, such as the quadratic complexity of self-attention and the need for global receptive fields. The instructor compares the performance of ViTs with traditional ConvNets, emphasizing that ViTs require massive training data to be competitive. The lecture then introduces architectures such as the Swin Transformer, Dense Prediction Transformers, and Perceiver, highlighting their advantages in handling structured inputs and outputs. Finally, attention-free alternatives such as MLP-Mixer and ConvMixer are presented, showcasing the evolution of transformer-inspired architectures in computer vision.
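As a rough illustration of the quadratic-complexity point above, the sketch below (not from the lecture; the image size, patch size, and embedding dimension are illustrative assumptions) shows how a ViT-style model turns an image into patch tokens and why full self-attention over those tokens scales with the square of the token count:

```python
import torch
import torch.nn as nn

# Illustrative assumptions: 224x224 RGB input, 16x16 patches, embedding dim 768.
image_size, patch_size, embed_dim = 224, 16, 768
num_patches = (image_size // patch_size) ** 2  # 14 * 14 = 196 tokens

# A strided convolution is a standard way to split an image into
# non-overlapping patches and project each patch to embed_dim.
patchify = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

x = torch.randn(1, 3, image_size, image_size)    # dummy image
tokens = patchify(x).flatten(2).transpose(1, 2)  # shape: (1, 196, 768)

# Self-attention compares every token with every other token, so its cost
# grows with num_patches ** 2 -- the quadratic complexity that hierarchical
# designs like the Swin Transformer are built to work around.
attn = nn.MultiheadAttention(embed_dim, num_heads=12, batch_first=True)
out, _ = attn(tokens, tokens, tokens)
print(tokens.shape, out.shape)  # both torch.Size([1, 196, 768])
```

Doubling the image resolution quadruples the number of patches and thus increases the attention cost roughly sixteenfold, which is why dense prediction tasks motivate the hierarchical and cross-attention designs discussed in the lecture.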