Pretraining Sequence-to-Sequence Models: BART and T5
Related lectures (30)
Contextual Representations: ELMo and BERT Overview
Covers contextual representations in NLP, focusing on the ELMo and BERT architectures and their applications in various tasks.
Transformers: Revolutionizing Attention Mechanisms in NLP
Covers the development of transformers and their impact on attention mechanisms in NLP.
Data Annotation: Collection and Biases in NLP
Addresses data collection, annotation processes, and biases in natural language processing.
Neural Word Embeddings: Learning Representations for Natural Language
Covers neural word embeddings and methods for learning word representations in natural language processing.
Pre-Training: BiLSTM and Transformer
Delves into pre-training BiLSTM and Transformer models for NLP tasks, showcasing their effectiveness and applications.
Transformer: Pre-Training
Explores the Transformer model, from recurrent models to attention-based NLP, highlighting its key components and significant results in machine translation and document generation.
Modern NLP and Ethics in NLP
Delves into advancements and challenges in NLP, along with ethical considerations and potential harms.
BERT: Pretraining and Applications
Delves into BERT pretraining for transformers, discussing its applications in NLP tasks.
Modern NLP: Introduction
Antoine Bosselut introduces Natural Language Processing and its challenges, advancements in neural models, and the goals of the course.
Neural Networks for NLP
Covers modern Neural Network approaches to NLP, focusing on word embeddings, Neural Networks for NLP tasks, and a look ahead to Transfer Learning techniques.