Transformers: Self-Attention and MLP
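A minimal sketch of the two building blocks named in the title: single-head scaled dot-product self-attention followed by a position-wise MLP, composed with residual connections. This is an illustrative NumPy implementation with hypothetical toy dimensions, not code from the lecture itself; layer normalization and multi-head projections are omitted for brevity.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention.
    # X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_model).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    return softmax(scores) @ V               # (seq_len, d_model)

def mlp(X, W1, b1, W2, b2):
    # Position-wise feed-forward block, applied to each token independently.
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2  # ReLU nonlinearity

# Toy dimensions, chosen only for illustration.
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 4, 8, 16
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

# One simplified transformer sub-block: attention, then MLP,
# each wrapped in a residual connection.
h = X + self_attention(X, Wq, Wk, Wv)
out = h + mlp(h, W1, b1, W2, b2)
print(out.shape)  # (4, 8): same shape as the input, as in a real block

Because the attention output has the same width as the input here, the residual additions type-check; in a full transformer each sub-block would also be preceded or followed by layer normalization.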
Related lectures (31)
Statistical Physics of Learning
Offers insights into the statistical physics of learning, exploring the relationship between neural network structure and disordered systems.
Mathematics of Data: From Theory to Computation
Covers key concepts in data mathematics, including automatic differentiation, linear layers, and attention layers.
Chemical Reactions: Transformer Architecture
Explores atom mapping in chemical reactions and the transition to reaction grammar using the transformer architecture.
Deep Learning Fundamentals
Introduces deep learning, from logistic regression to neural networks, emphasizing the need for handling non-linearly separable data.
Reinforcement Learning Concepts
Covers key concepts in reinforcement learning, neural networks, clustering, and unsupervised learning, emphasizing their applications and challenges.
Recurrent Neural Networks: Training and Challenges
Discusses recurrent neural networks, their training challenges, and solutions like LSTMs and GRUs.
Non-conceptual Knowledge Systems
Explores the impact of deep learning on the Digital Humanities, focusing on non-conceptual knowledge systems and recent advances in AI.
Dimensionality Reduction: PCA and Autoencoders
Introduces artificial neural networks, CNNs, and dimensionality reduction using PCA and autoencoders.
Sequence-to-Sequence Models: Overview and Attention Mechanisms
Explores sequence-to-sequence models and attention mechanisms, and their role in addressing model limitations and improving interpretability.
Deep Splines: Unifying Framework for Deep Neural Networks
Introduces a functional framework for deep neural networks with adaptive piecewise-linear splines, focusing on biomedical image reconstruction and the challenges of deep splines.