This lecture covers the fundamentals of text representation learning, focusing on word embeddings. It explains how numerical representations of words can be learned by factorizing a word co-occurrence matrix, and presents the GloVe model as an instance of this approach. It also discusses word analogies and the training procedures used to learn word embeddings, and shows how large models such as transformers predict the next word in a text. The lecture concludes with pointers to resources on word2vec, GloVe, FastText, and sent2vec.
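To make the pipeline concrete, here is a minimal sketch in Python (assuming only NumPy): it builds a word co-occurrence matrix from a toy corpus, factorizes its log counts with an SVD as a stand-in for GloVe's weighted least-squares training, and probes the resulting vectors with the classic "king - man + woman" analogy. The corpus, window size, and embedding dimension are illustrative choices; a corpus this small will not reliably recover "queen".

```python
import numpy as np

# Toy corpus and vocabulary (illustrative only; real systems use large corpora).
corpus = [
    "the king rules the kingdom",
    "the queen rules the kingdom",
    "a man walks in the city",
    "a woman walks in the city",
]
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a small context window.
window = 2
X = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i in range(len(sent)):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                X[idx[sent[i]], idx[sent[j]]] += 1.0

# Factorize the log co-occurrence counts. An SVD stands in here for the
# weighted least-squares objective that GloVe actually optimizes.
d = 4  # embedding dimension (assumption; GloVe typically uses 50-300)
U, S, Vt = np.linalg.svd(np.log1p(X))
embeddings = U[:, :d] * np.sqrt(S[:d])

# Word analogy by vector arithmetic: king - man + woman ~ queen.
def nearest(vec, exclude):
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(vec) + 1e-9
    sims = embeddings @ vec / norms  # cosine similarity to every vocab word
    for i in np.argsort(-sims):
        if vocab[i] not in exclude:
            return vocab[i]

query = embeddings[idx["king"]] - embeddings[idx["man"]] + embeddings[idx["woman"]]
print(nearest(query, exclude={"king", "man", "woman"}))
```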
This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.