Stochastic Gradient Descent: Optimization and Convergence
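As a rough illustration of the update rule the lecture title refers to, here is a minimal sketch of stochastic gradient descent on a least-squares objective. The objective, the 1/sqrt(t) step-size decay, and all function names are illustrative assumptions, not material taken from the lecture itself.

```python
import numpy as np

def sgd_least_squares(X, y, lr0=0.5, epochs=20, seed=0):
    """Minimise (1/2n) * ||Xw - y||^2 with single-sample stochastic gradient steps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):           # visit the samples in a random order each epoch
            t += 1
            lr = lr0 / np.sqrt(t)              # decaying step size, as in standard convergence analyses
            grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th squared-error term
            w -= lr * grad_i
    return w

# Tiny synthetic check: the estimate should approach w_true.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
print(sgd_least_squares(X, y))
```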
Related lectures (27)
The Hidden Convex Optimization Landscape of Deep Neural Networks
Explores the hidden convex optimization landscape of deep neural networks, showcasing the transition from non-convex to convex models.
Proximal and Subgradient Descent: Optimization Techniques
Discusses proximal and subgradient descent methods for optimization in machine learning (a minimal proximal-gradient sketch follows this list).
Optimization Techniques: Convexity in Machine Learning
Covers optimization techniques in machine learning, focusing on convexity and its implications for efficient problem-solving.
Convex Optimization: Gradient Algorithms
Covers convex optimization problems and gradient-based algorithms to find the global minimum.
Convex Sets: Mathematical Optimization
Introduces convex optimization, covering convex sets, solution concepts, and efficient numerical methods in mathematical optimization.
From Stochastic Gradient Descent to Non-Smooth Optimization
Covers stochastic optimization, sparsity, and non-smooth minimization via subgradient descent.
Optimization Techniques: Convexity and Algorithms in Machine Learning
Covers optimization techniques in machine learning, focusing on convexity, the associated algorithms, and how they ensure efficient convergence to global minima.
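Several of the entries above mention proximal, subgradient, and non-smooth methods. As an illustration only, not code from any of the listed lectures, the sketch below applies proximal gradient (ISTA-style) steps, via the soft-thresholding operator, to an l1-regularised least-squares problem; the step size derived from the spectral norm of X is an assumption made for this example.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(X, y, lam=0.1, iters=500):
    """Minimise (1/2n) * ||Xw - y||^2 + lam * ||w||_1 with proximal gradient steps."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the smooth part's gradient
    lr = 1.0 / L
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n             # gradient of the smooth least-squares term
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step handles the non-smooth l1 term
    return w
```

A plain subgradient method would instead step along grad + lam * np.sign(w) without the prox step, which also handles the non-smooth term but typically converges more slowly.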