Transformers: Pretraining and Decoding Techniques
Related lectures (32)
Non-Conceptual Knowledge Systems
Explores the impact of deep learning on Digital Humanities, focusing on non-conceptual knowledge systems and recent advancements in AI.
Contextual Representations: ELMo and BERT Overview
Covers contextual representations in NLP, focusing on the ELMo and BERT architectures and their applications in various tasks.
Transformers: Unifying Machine Learning Communities
Covers the role of Transformers in unifying various machine learning fields.
Foundations of Deep Learning: Transformer Architecture Overview
Covers the foundational concepts of deep learning and the Transformer architecture, focusing on neural networks, attention mechanisms, and their applications in sequence modeling tasks.
Recurrent Neural Networks: Training and Challenges
Discusses recurrent neural networks, their training challenges, and solutions like LSTMs and GRUs.
Neural Network Optimization
Explores neural networks optimization, including backpropagation, batch normalization, weight initialization, and hyperparameter search strategies.
Model Analysis
Explores neural model analysis in NLP, covering evaluation, probing, and ablation studies to understand model behavior and interpretability.
Pre-Training: BiLSTM and Transformer
Delves into pre-training BiLSTM and Transformer models for NLP tasks, showcasing their effectiveness and applications.
Deep Learning: Convolutional Neural Networks
Covers Convolutional Neural Networks, standard architectures, training techniques, and adversarial examples in deep learning.
Pretraining: Transformers & Models
Explores pretraining models like BERT, T5, and GPT, discussing their training objectives and applications in natural language processing.