Modern NLP: Data Collection, Annotation & Biases
Related lectures (31)
Pretraining Sequence-to-Sequence Models: BART and T5
Covers the pretraining of sequence-to-sequence models, focusing on BART and T5 architectures.
Transformers: Pretraining and Decoding Techniques
Covers advanced transformer concepts, focusing on pretraining and decoding techniques in NLP.
Data Annotation: Collection and Biases in NLP
Addresses data collection, annotation processes, and biases in natural language processing.
Modern NLP and Ethics in NLP
Delves into advancements and challenges in NLP, along with ethical considerations and potential harms.
Natural Language Processing: Understanding Transformers and Tokenization
Provides an overview of Natural Language Processing, focusing on transformers, tokenization, and self-attention mechanisms for effective language analysis and synthesis.
Model Analysis
Explores neural model analysis in NLP, covering evaluation, probing, and ablation studies to understand model behavior and interpretability.
BERT: Pretraining and Applications
Delves into pretraining the BERT transformer model, discussing its applications in NLP tasks.
Contextual Representations: ELMo and BERT Overview
Covers contextual representations in NLP, focusing on the ELMo and BERT architectures and their applications in various tasks.
Coreference Resolution
Delves into coreference resolution, discussing challenges, advancements, and evaluation methods.
Pretraining: Transformers & Models
Explores pretraining models like BERT, T5, and GPT, discussing their training objectives and applications in natural language processing.