This lecture covers the fundamentals of deep learning for Natural Language Processing (NLP), starting with Neural Word Embeddings, moving on to Recurrent Neural Networks for Sequence Modeling, and concluding with Attentive Neural Modeling with Transformers. The instructor explains the challenges of representing natural language sequences, the concept of word vectors, and the importance of choosing a vocabulary. The lecture also delves into the applications of NLP, such as machine translation, text generation, and sentiment analysis. The use of self-attention and multi-headed attention in Transformers is highlighted as a breakthrough in modeling long-range dependencies in sequences.
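To make the highlighted mechanism concrete, here is a minimal sketch (not from the lecture, and with illustrative names and dimensions) of scaled dot-product self-attention, the core operation inside the Transformer layers mentioned above: every token attends to every other token, which is what allows long-range dependencies to be modeled directly.

```python
# Minimal illustrative sketch of scaled dot-product self-attention.
# Names (self_attention, Wq, Wk, Wv) and dimensions are assumptions, not lecture code.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token affinities
    weights = softmax(scores, axis=-1)        # each row is a distribution over positions
    return weights @ V                        # context-mixed token representations

# Toy usage: 5 tokens with 8-dimensional embeddings, projected to d_k = 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 4)
```

Multi-headed attention, also mentioned in the lecture, simply runs several such attention operations in parallel with separate projection matrices and concatenates their outputs.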