This lecture covers the transition from attention mechanisms to transformers in modern NLP. Starting from encoder-decoder models, the instructor introduces the concept of attention and its extension into the transformer architecture. The lecture then turns to generating sequences by decoding from sequence-to-sequence models. The attentive encoder-decoder model is introduced, showing how attention reduces the temporal bottleneck of a fixed-length encoding by letting the decoder focus on different parts of the input at each step. Several attention functions are compared, including multiplicative, linear, and scaled dot-product attention. The lecture concludes with a detailed explanation of self-attention, masked multi-head attention, and cross-attention in transformers.
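
For concreteness, below is a minimal NumPy sketch of scaled dot-product attention with an optional causal mask, illustrating the mechanism behind the self-attention and masked self-attention mentioned above. The function name, toy dimensions, and mask handling are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    mask: optional boolean array of shape (n_queries, n_keys);
    positions where mask is False receive zero attention weight.
    """
    d_k = Q.shape[-1]
    # Similarity scores between every query and every key, scaled by sqrt(d_k)
    # so their magnitude does not grow with the key dimension.
    scores = Q @ np.swapaxes(K, -1, -2) / np.sqrt(d_k)
    if mask is not None:
        # Masked positions get -inf, so they vanish after the softmax
        # (this is how masked self-attention hides future tokens in a decoder).
        scores = np.where(mask, scores, -np.inf)
    # Softmax over the key dimension turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output is a weighted average of the value vectors.
    return weights @ V

# Toy usage: self-attention over 4 token vectors with a causal (lower-triangular) mask.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, model dimension 8
causal = np.tril(np.ones((4, 4), dtype=bool))   # token i may attend only to tokens <= i
out = scaled_dot_product_attention(X, X, X, mask=causal)
print(out.shape)  # (4, 8)
```

In this sketch the same matrix X plays the roles of queries, keys, and values, which is what makes it self-attention; cross-attention in a transformer decoder would instead take queries from the decoder and keys/values from the encoder output.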