Lecture

From Attention to Transformers

Description

This lecture covers the transition from attention mechanisms to transformers in modern NLP. Starting from encoder-decoder models, the instructor explains the concept of attention and how transformers build on it. The lecture then turns to generating sequences by decoding from sequence-to-sequence models. The attentive encoder-decoder model is introduced, highlighting how attention reduces the temporal bottleneck by allowing the model to focus on different parts of the input at each step. The attention function is discussed in its multiplicative, linear, and scaled dot-product forms. The lecture concludes with a detailed explanation of self-attention, masked multi-head attention, and cross-attention in transformers.
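For reference, the scaled dot-product attention named above is usually written as follows. This is the standard formulation rather than a formula taken from the lecture page itself; the symbols Q (queries), K (keys), V (values), and d_k (key dimension) are introduced here for illustration.

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

The division by \sqrt{d_k} rescales the dot products so that the softmax does not saturate as the key dimension grows.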
