Lecture

Word Embeddings: Modeling Word Context and Similarity

Description

This lecture introduces word embeddings, which model the likelihood of a word and its context occurring together by mapping both into a low-dimensional vector space, where the distance between vectors can be interpreted as a measure of how likely they are to co-occur. The instructor explains how the model is learned from data, including formulating an optimization problem and defining a loss function to be minimized. The lecture covers obtaining negative samples, stochastic gradient descent, and computing the required derivatives. Alternative approaches such as CBOW and GloVe are also discussed, along with the properties of word embeddings and their practical applications in document search, thesaurus construction, and document classification.
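As a rough illustration of the learning procedure described above, the following is a minimal sketch of one stochastic-gradient step of skip-gram with negative sampling. All names (`sgns_step`, the matrices `W` and `C`, the learning rate `lr`) are illustrative choices, not part of the lecture; the sketch assumes separate word and context embedding matrices and negative samples drawn elsewhere.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(W, C, w, c, neg, lr=0.1):
    """One SGD step for a single (word, context) pair with negatives.

    W: word-embedding matrix (vocab x dim), C: context-embedding matrix.
    w: index of the centre word, c: index of the observed context word,
    neg: indices of sampled negative context words.
    Minimises -log sigma(W[w].C[c]) - sum_n log sigma(-W[w].C[n]),
    so the word vector moves toward its true context and away from
    the negative samples.
    """
    grad_w = np.zeros_like(W[w])

    # Positive pair: derivative of -log sigma(x) w.r.t. x is sigma(x) - 1.
    g = sigmoid(W[w] @ C[c]) - 1.0
    grad_w += g * C[c]
    C[c] -= lr * g * W[w]

    # Negative samples: derivative of -log sigma(-x) w.r.t. x is sigma(x).
    for n in neg:
        g = sigmoid(W[w] @ C[n])
        grad_w += g * C[n]
        C[n] -= lr * g * W[w]

    W[w] -= lr * grad_w
```

After repeated steps over many sampled pairs, the dot product of a word with its observed contexts grows while dot products with negative samples shrink, which is exactly the geometric interpretation of co-occurrence likelihood mentioned in the description.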

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.