Seq2Seq Models: Attention vs. No Attention
Related lectures (32)
Natural Language Processing: Understanding Transformers and Tokenization
Provides an overview of Natural Language Processing, focusing on transformers, tokenization, and self-attention mechanisms for effective language analysis and synthesis.
Deep Learning: Convolutional Neural Networks
Covers Convolutional Neural Networks, standard architectures, training techniques, and adversarial examples in deep learning.
Transformers: Pretraining and Decoding Techniques
Covers advanced transformer concepts, focusing on pretraining and decoding techniques in NLP.
Language Modelling and Recurrent Neural Networks
Explores language modelling, RNNs, n-gram models, LSTMs, and bidirectional RNNs.
Non-Conceptual Knowledge Systems
Delves into the impact of deep learning on non-conceptual knowledge systems and the advancements in transformers and generative adversarial networks.
Sequence to Sequence Models: Overview and Attention Mechanisms
Explores sequence to sequence models, attention mechanisms, and their role in addressing model limitations and improving interpretability.
Pre-Training: BiLSTM and Transformer
Delves into pre-training BiLSTM and Transformer models for NLP tasks, showcasing their effectiveness and applications.
Non-Conceptual Knowledge Systems in Digital Humanities
Explores the impact of deep learning on Digital Humanities, focusing on non-conceptual knowledge systems and recent advancements in AI.
Transformers: Revolutionizing Attention Mechanisms in NLP
Covers the development of transformers and their impact on attention mechanisms in NLP.
Contextual Representations: ELMo & BERT
Explores the development of contextualized embeddings in NLP, focusing on ELMo and BERT's advancements and impact on NLP tasks.