Lecture

Word Embeddings: Modeling Word Context and Similarity

Description

This lecture introduces word embeddings, which model the likelihood of a word and its context occurring together by mapping both into a low-dimensional vector space. In this space, the distance (or similarity) between a word vector and a context vector can be interpreted as a measure of how likely they are to co-occur. The instructor explains how the model is learned from data: formulating an optimization problem, defining a loss function to be minimized, obtaining negative samples, applying stochastic gradient descent, and computing the required derivatives. Alternative approaches such as CBOW and GloVe are also discussed, along with the properties of word embeddings and their practical applications in document search, thesaurus construction, and document classification.
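To make the learning procedure concrete, here is a minimal sketch of a skip-gram model with negative sampling trained by stochastic gradient descent, in the spirit of the approach described above. The toy corpus, hyperparameters (embedding dimension, window size, number of negatives, learning rate), and the NumPy implementation are illustrative assumptions, not the lecture's actual code.

```python
# Minimal sketch: skip-gram with negative sampling, trained by SGD.
# Corpus, vocabulary, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 8                      # vocabulary size, embedding dimension

W = rng.normal(scale=0.1, size=(V, d))    # target-word embeddings
C = rng.normal(scale=0.1, size=(V, d))    # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k, epochs = 0.05, 2, 3, 50    # learning rate, window, negatives, passes

for _ in range(epochs):
    for pos, word in enumerate(corpus):
        w = word2id[word]
        for off in range(-window, window + 1):
            if off == 0 or not (0 <= pos + off < len(corpus)):
                continue
            c = word2id[corpus[pos + off]]
            # negatives drawn uniformly here for simplicity
            negatives = rng.integers(0, V, size=k)

            # positive pair: push sigmoid(w . c) toward 1
            g = sigmoid(W[w] @ C[c]) - 1.0
            grad_w = g * C[c]
            C[c] -= lr * g * W[w]

            # negative pairs: push sigmoid(w . n) toward 0
            for n in negatives:
                gn = sigmoid(W[w] @ C[n])
                grad_w += gn * C[n]
                C[n] -= lr * gn * W[w]

            W[w] -= lr * grad_w

# word similarity as the cosine between learned embeddings
def cos(a, b):
    va, vb = W[word2id[a]], W[word2id[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(cos("cat", "dog"), cos("cat", "on"))
```

In practice the negatives are drawn from a smoothed unigram distribution (frequencies raised to the power 0.75) rather than uniformly, and very frequent words are subsampled; the gradient updates themselves are unchanged.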

Related lectures (42)
Vector Space Semantics (and Information Retrieval)
Explores the Vector Space model, Bag of Words, tf-idf, cosine similarity, Okapi BM25, and Precision and Recall in Information Retrieval.
Word Embeddings: Introduction and Applications
Introduces word embeddings, explaining how they capture word meanings based on context and their applications in natural language processing tasks.
Word Embedding Models: Optimization and Applications
Explores optimizing word embedding models, including loss function minimization and gradient descent, and introduces techniques like fastText and Byte Pair Encoding.
Optimization without Constraints: Gradient Method
Covers optimization without constraints using the gradient method to find the function's minimum.
Binary Sentiment Classifier Training
Covers the training of a binary sentiment classifier using an RNN.