Lecture
Transformers: Pre-Training
Related lectures (30), page 3 of 3
Data Annotation: Collection and Biases in NLP
Addresses data collection, annotation processes, and biases in natural language processing.
Vision-Language-Action Models: Training and Applications
Delves into the training and applications of Vision-Language-Action models, emphasizing the role of large language models in robotic control and the transfer of web knowledge, and highlights experimental results and future research directions.
Sequence-to-Sequence Models: Overview and Applications
Covers sequence-to-sequence models, their architecture, applications, and the role of attention mechanisms in improving performance.
Model Compression: Techniques for Efficient NLP Models
Explores model compression techniques in NLP, discussing pruning, quantization, weight factorization, knowledge distillation, and attention mechanisms.
Transformer Architecture: The X Gomega
Delves into the Transformer architecture, self-attention, and training strategies for machine translation and image recognition.
Cognitive Maps in Rats and Men
Explores cognitive maps, reward systems, latent learning, attention mechanisms, and transformers in visual intelligence and machine learning.
Modern NLP: Data Collection, Annotation & Biases
Explores data annotation in NLP and the impact of biases on model fine-tuning.
Custom GPTs for Academic Writing
Delves into the use of custom GPTs for academic writing, highlighting the balance between AI assistance and traditional learning methods.
Transformers: Self-Attention and MLP
Explores transformers, emphasizing self-attention and MLP mechanisms for efficient sequence processing.
Chemical Reactions: Transformer Architecture
Explores atom mapping in chemical reactions and the transition to reaction grammar using the transformer architecture.