Lecture
Adaptive Gradient Methods: Theory and Applications
Related lectures (31)
Adaptive Optimization Methods: Theory and Applications
Explores adaptive optimization methods that adapt locally and converge without knowing the smoothness constant.
Feed-forward Networks
Introduces feed-forward networks, covering neural network structure, training, activation functions, and optimization, with applications in forecasting and finance.
Neural Networks Optimization
Explores neural networks optimization, including backpropagation, batch normalization, weight initialization, and hyperparameter search strategies.
Structures in Non-Convex Optimization
Covers non-convex optimization, deep learning training problems, stochastic gradient descent, adaptive methods, and neural network architectures.
Generalization in Deep Learning
Delves into the trade-off between model complexity and risk, generalization bounds, and the dangers of overfitting complex function classes.
The Hidden Convex Optimization Landscape of Deep Neural Networks
Explores the hidden convex optimization landscape of deep neural networks, showcasing the transition from non-convex to convex models.
Generalization in Deep Learning
Explores generalization in deep learning, covering model complexity, implicit bias, and the double descent phenomenon.
Neural Networks: Regularization & Optimization
Explores neural network regularization, optimization, and practical implementation tips.
Neural Networks: Training and Activation
Explores neural networks, activation functions, backpropagation, and PyTorch implementation.
Neural Networks: Training and Optimization
Explores the training and optimization of neural networks, addressing challenges like non-convex loss functions and local minima.
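The first related lecture above describes adaptive optimization methods that "adapt locally and converge without knowing the smoothness constant." As a minimal sketch of this idea, here is AdaGrad-style gradient descent in one dimension; the function name, step count, and test objective are illustrative assumptions, not material from any listed lecture:

```python
import math

def adagrad(grad, x0, lr=1.0, eps=1e-8, steps=500):
    """Minimize a scalar function with an AdaGrad-style update.

    The effective step size lr / sqrt(sum of squared gradients) shrinks
    automatically as gradients accumulate, so no smoothness constant
    needs to be supplied in advance. All parameters here are
    illustrative defaults.
    """
    x, g_sq = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        g_sq += g * g                           # accumulate squared gradients
        x -= lr * g / (math.sqrt(g_sq) + eps)   # adaptively scaled step
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = adagrad(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

With a fixed learning rate of 1.0 and no knowledge of the quadratic's curvature, the accumulated gradient norm tunes the step size on the fly and the iterate settles at the minimizer x = 3.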