Transformers: Revolutionizing Attention Mechanisms in NLP
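The attention mechanism named in this lecture title can be sketched minimally. Below is a hedged illustration (not material from the lecture itself) of scaled dot-product attention, the core Transformer operation: each query is compared against all keys, the similarities are normalized with a softmax, and the result weights a sum over the values. All names and the toy dimensions are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable row-wise softmax over the keys.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a convex combination of value rows

# Tiny example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one weighted combination of the values per query
```

Because the softmax weights are non-negative and sum to one, each output row stays inside the range spanned by the value vectors, which is why attention is often described as a soft lookup.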
Related lectures (32)
Language Models: Fixed-context and Recurrent Neural Networks
Discusses language models, focusing on fixed-context neural models and recurrent neural networks.
Non-conceptual Knowledge Systems
Explores the impact of deep learning on the Digital Humanities, focusing on non-conceptual knowledge systems and recent advancements in AI.
Classical Language Models: Foundations and Applications
Introduces classical language models, their applications, and foundational concepts like count-based modeling and evaluation metrics.
Foundations of Deep Learning: Transformer Architecture Overview
Covers the foundational concepts of deep learning and the Transformer architecture, focusing on neural networks, attention mechanisms, and their applications in sequence modeling tasks.
Coreference Resolution
Covers coreference resolution, models, applications, challenges, and advancements in natural language processing.
Deep Learning for Question Answering
Explores deep learning for question answering, analyzing neural networks and model robustness to noise.
Introduction to Modern Natural Language Processing
Introduces the course on Modern Natural Language Processing, covering its significance, applications, challenges, and advancements in technology.
Deep Learning: Convolutional Neural Networks
Covers Convolutional Neural Networks, standard architectures, training techniques, and adversarial examples in deep learning.
Model Analysis
Explores neural model analysis in NLP, covering evaluation, probing, and ablation studies to understand model behavior and interpretability.
Neural Networks: Two-Layer Neural Networks
Covers the basics of neural networks, focusing on the development from two-layer neural networks to deep neural networks.