Lecture
Decoding from Neural Models
Related lectures (30)
Deep Learning for NLP
Explores deep learning for NLP, covering word embeddings, context representations, learning techniques, and challenges like vanishing gradients and ethical considerations.
BERT: Pretraining and Applications
Delves into BERT pretraining for transformers, discussing its applications in NLP tasks.
Model Compression: Techniques for Efficient NLP Models
Explores model compression techniques in NLP, discussing pruning, quantization, weight factorization, knowledge distillation, and attention mechanisms.
Chemical Reactions: Transformer Architecture
Explores atom mapping in chemical reactions and the transition to reaction grammar using the transformer architecture.
Transformers: Revolutionizing Attention Mechanisms in NLP
Covers the development of transformers and their impact on attention mechanisms in NLP.
Pretraining Sequence-to-Sequence Models: BART and T5
Covers the pretraining of sequence-to-sequence models, focusing on BART and T5 architectures.
Data Annotation: Collection and Biases in NLP
Addresses data collection, annotation processes, and biases in natural language processing.
Natural Language Generation
Explores Natural Language Generation, covering neural models, biases, ethics, and evaluation challenges.
Transformers: Unifying Machine Learning Communities
Covers the role of Transformers in unifying various machine learning fields.
Language Models: Fixed-context and Recurrent Neural Networks
Discusses language models, focusing on fixed-context neural models and recurrent neural networks.