Lecture

Neural Networks for NLP

Description

This lecture provides a broad overview of modern neural network approaches to Natural Language Processing (NLP). It traces the evolution of corpus-based linguistics, motivates word embeddings, and shows how neural networks are applied to NLP tasks. Key topics include learning word representations, using neural networks to transform input vectors into outputs, and embedding models such as word2vec, GloVe, and fastText. The lecture also weighs the advantages and drawbacks of neural networks in NLP, covering Multi-Layer Perceptrons (MLPs) and their training procedures, and concludes with an outlook on the future of NLP, including transfer learning techniques such as ULMFiT, ELMo, and BERT.
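The lecture itself does not come with code, but as a rough illustration of the "transform input vectors into outputs" idea mentioned above, the following is a minimal sketch of a one-hidden-layer MLP acting on averaged word embeddings. All names, dimensions, and values are illustrative assumptions, not material from the lecture.

# Illustrative sketch only: a one-hidden-layer MLP that maps an averaged
# bag of word embeddings to class probabilities. The tiny "embedding table"
# and all parameters are made-up placeholders, not lecture material.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained embeddings: 5 words, 8 dimensions each.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "boring": 4}
embeddings = rng.normal(size=(len(vocab), 8))

def mlp_forward(token_ids, W1, b1, W2, b2):
    """Average the word vectors, apply one hidden layer (tanh), return class probabilities."""
    x = embeddings[token_ids].mean(axis=0)        # sentence vector, shape (8,)
    h = np.tanh(x @ W1 + b1)                      # hidden layer, shape (16,)
    logits = h @ W2 + b2                          # output scores, shape (2,)
    e = np.exp(logits - logits.max())             # numerically stable softmax
    return e / e.sum()

# Randomly initialised parameters (in practice learned by backpropagation).
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

tokens = [vocab[w] for w in ["the", "movie", "was", "great"]]
print(mlp_forward(tokens, W1, b1, W2, b2))        # two class probabilities summing to 1

In practice the embedding table would be initialised from a pretrained model such as word2vec, GloVe, or fastText rather than at random, and the MLP weights would be trained on a labelled NLP task.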

This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.

Watch on MediaSpace