Modern NLP: Data Collection, Annotation & Biases
Related lectures (31)
Language Models: Fixed-context and Recurrent Neural Networks
Discusses language models, focusing on fixed-context neural models and recurrent neural networks.
Transformers: Revolutionizing Attention Mechanisms in NLP
Covers the development of transformers and their impact on attention mechanisms in NLP (a minimal attention sketch appears after this list).
Question Answering: Deep Learning Insights
Explores question answering systems, reading comprehension models, and the challenges in achieving accurate responses.
Neuro-symbolic Representations: Commonsense Knowledge & Reasoning
Delves into neuro-symbolic representations for commonsense knowledge and reasoning in natural language processing applications.
Pre-Training: BiLSTM and Transformer
Delves into pre-training BiLSTM and Transformer models for NLP tasks, showcasing their effectiveness and applications.
Words and Tokens: Lexical Level Overview
Explores words, tokens, and language models in NLP, covering challenges in defining them, lexicon usage, n-grams, and probability estimation (a minimal n-gram sketch appears after this list).
Data-Driven Insights: NLP and AI Applications
Explores building operating systems for heterogeneous hardware, data-movement efficiency, AI advancements, and NLP challenges.
Deep Learning for NLP
Delves into Deep Learning for Natural Language Processing, exploring Neural Word Embeddings, Recurrent Neural Networks, and Attentive Neural Modeling with Transformers.
Transformer Architecture: Subquadratic Attention Mechanisms
Covers transformer architecture, focusing on encoder-decoder models and subquadratic attention mechanisms for efficient processing of input sequences.
Neural Networks for NLP
Covers modern Neural Network approaches to NLP, focusing on word embeddings, Neural Networks for NLP tasks, and future Transfer Learning techniques.
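Several of the entries above revolve around attention in transformers. As a minimal, illustrative NumPy sketch of standard scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, assuming nothing about the lectures' own notation (shapes and variable names here are made up for the example):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of value vectors

# Toy example: 3 query/key/value vectors of dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)       # (3, 4)
```

This quadratic-in-sequence-length computation is exactly what the subquadratic-attention lecture above is concerned with reducing.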
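The "Words and Tokens" entry mentions n-grams and probability estimation. As a minimal sketch of maximum-likelihood bigram estimation, not taken from the lecture itself (the corpus and function names are illustrative):

```python
from collections import Counter

def train_bigram_model(tokens):
    """Estimate bigram probabilities P(w_i | w_{i-1}) by maximum likelihood."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    def prob(prev, word):
        # MLE estimate: count(prev, word) / count(prev); 0.0 if prev was never seen.
        # Simplification: uses the unigram count of prev as its context count.
        return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return prob

corpus = "the cat sat on the mat the cat slept".split()
p = train_bigram_model(corpus)
print(p("the", "cat"))  # 2/3: 'the' occurs 3 times and is followed by 'cat' twice
```

In practice such raw counts are smoothed (e.g., add-one or backoff) so that unseen bigrams do not receive zero probability; the sketch above omits this for brevity.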