Recent developments in deep learning cover a wide variety of tasks, such as image classification, text translation, playing Go, and folding proteins. All these successful methods rely on a gradient-based learning algorithm to train a model on massive amounts of data using significant computational power. Even though this optimization algorithm is shared, deep learning relies on different model architectures to process the training data depending on its modality: Multi-Layer Perceptrons for vectors, Convolutional Neural Networks for images, Recurrent Neural Networks for text and sequences, and Graph Neural Networks for graphs. A recent addition to this family of models is the transformer architecture, developed by Vaswani et al. (2017) for text translation. This fragmented landscape of architectures forces practitioners to select a model based on the data modality and to learn its specificities, which is detrimental when the problem at hand involves multiple modalities, as in image captioning. A more systematic approach would be to employ a single architecture that processes all modalities and learns the structure of the input directly from the training data.

This work takes a transversal approach between Natural Language Processing and Vision to show that transformers, which were primarily designed to process text, can also handle images. First, we show that a self-attention layer, the building block of transformers, can provably express convolution, and we demonstrate empirically that shallow transformer layers do learn localized, translated filters similar to those of CNNs. Our proof relies on the attention heads mimicking the receptive field of a convolutional kernel. We then study how these attention heads interact and propose a new multi-head mechanism that leverages the representation shared across heads. This thesis also presents two adaptations of transformer models to special kinds of images. We introduce a rotation-equivariant attention layer well suited to images whose orientation carries no information, such as satellite images or microscopic images of biological tissue. Finally, we adapt transformers to process large-resolution images by extracting salient patches and processing them with a smaller memory footprint. Our work and the follow-up developments it inspired have made transformers a standard architecture for image processing.
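For concreteness, the following is a minimal sketch of one way the "self-attention can express convolution" claim can be made precise, in the spirit of the construction the abstract alludes to. The notation is assumed for illustration (it is not fixed by the abstract): $X$ is the input feature map, $q$ a query pixel, $N_h$ the number of heads, $W^{(h)}_{\mathrm{val}}$ and $W^{(h)}_{\mathrm{out}}$ the per-head value and output projections, and the attention scores are taken to use a quadratic relative positional encoding.

A multi-head self-attention layer with $N_h = K^2$ heads can emulate a $K \times K$ convolution by dedicating one head $h$ to each relative shift $\delta_h$ in
\[
  \Delta_K = \left\{ -\left\lfloor K/2 \right\rfloor, \dots, \left\lfloor K/2 \right\rfloor \right\}^{2},
  \qquad \lvert \Delta_K \rvert = K^2 = N_h .
\]
With a relative positional score such as $A^{(h)}_{q,k} = -\alpha \,\lVert \delta_{q,k} - \delta_h \rVert^2$, the softmax concentrates all of head $h$'s attention on the single pixel $q + \delta_h$ as $\alpha \to \infty$, so the layer output at query pixel $q$ becomes
\[
  \operatorname{MHSA}(X)_q
  \;=\; \sum_{h=1}^{N_h} X_{q+\delta_h}\, W^{(h)}_{\mathrm{val}} W^{(h)}_{\mathrm{out}}
  \;=\; \sum_{\delta \in \Delta_K} X_{q+\delta}\, W_{\delta},
\]
which is exactly a $K \times K$ convolution with kernel weights $W_{\delta} = W^{(h(\delta))}_{\mathrm{val}} W^{(h(\delta))}_{\mathrm{out}}$. In this reading, each attention head plays the role of one position in the convolutional receptive field, matching the intuition stated above.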