Modern NLP: Data Collection, Annotation & Biases
Related lectures (31)
Sequence to Sequence Models: Overview and Applications
Covers sequence-to-sequence models, their architecture, applications, and the role of attention mechanisms in improving performance.
Transformers in Vision: Applications and Architectures
Covers the impact of transformers in computer vision, discussing their architecture, applications, and advancements in various tasks.
Machine Translation: Attention Mechanism
Explores the attention mechanism in machine translation, addressing the bottleneck problem and improving NMT performance significantly.
Prompting and Alignment
Explores prompting, alignment, and the capabilities of large language models for natural language processing tasks.
Transformers: Unifying Machine Learning Communities
Covers the role of Transformers in unifying various machine learning fields.
Ethics in NLP
Discusses the ethical implications of NLP systems, focusing on biases, toxicity, and privacy concerns in language models.
Ethical Considerations in Natural Language Processing
Explores ethical challenges in NLP systems, including biases, toxicity, privacy, and disinformation.
Deep Learning for NLP
Introduces deep learning concepts for NLP, covering word embeddings, RNNs, and Transformers, emphasizing self-attention and multi-headed attention.
Deep Learning for Question Answering
Explores deep learning for question answering, analyzing neural networks and model robustness to noise.
Transformer: Pre-Training
Explores the Transformer model, from recurrent models to attention-based NLP, highlighting its key components and significant results in machine translation and document generation.