Understanding Generalization: Implicit Bias & Optimization
Related lectures (28)
Generalization in Deep Learning
Explores generalization in deep learning, covering model complexity, implicit bias, and the double descent phenomenon.
Implicit Bias in Machine Learning
Explores implicit bias, gradient descent, stability in optimization algorithms, and generalization bounds in machine learning.
Gradient Descent on Two-Layer ReLU Neural Networks
Analyzes gradient descent on two-layer ReLU neural networks, exploring global convergence, regularization, implicit bias, and statistical efficiency.
Generalization in Deep Learning
Delves into the trade-off between model complexity and risk, generalization bounds, and the dangers of overfitting complex function classes.
Double Descent Curves: Overparametrization
Explores double descent curves and overparametrization in machine learning models, highlighting the risks and benefits.
Adaptive Gradient Methods: Theory and Applications
Explores adaptive gradient methods, their properties, convergence, and comparison with traditional optimization algorithms.
Deep Learning: Theory and Practice
A lecture by Prof. Volkan Cevher that delves into the mathematics of deep learning, exploring model complexity, risk trade-offs, and the generalization mystery.
Gradient Descent
Covers the concept of gradient descent, a universal algorithm used to find the minimum of a function.
Optimization Techniques: Stochastic Gradient Descent and Beyond
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its applications in constrained and non-convex problems.
Adaptive Optimization Methods: Theory and Applications
Explores adaptive optimization methods that adapt locally and converge without knowing the smoothness constant.
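Several of the related lectures above revolve around gradient descent and its variants. As a minimal sketch of the basic iteration x ← x − η ∇f(x) (the objective f(x) = (x − 3)², the step size, and the iteration count below are illustrative assumptions, not taken from any lecture):

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2.
# The objective, learning rate, and step count are illustrative
# choices for this example only.

def grad_f(x):
    # Gradient of f(x) = (x - 3)^2 is 2 * (x - 3).
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)  # update: x <- x - eta * grad f(x)
    return x

x_min = gradient_descent(x0=0.0)
print(round(x_min, 4))  # converges toward the minimizer x = 3
```

For a small enough step size, each update shrinks the distance to the minimizer by a constant factor, which is why the iterate approaches x = 3.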