Optimization Techniques: Stochastic Gradient Descent and Beyond
Related lectures (22)
Optimization Methods in Machine Learning
Explores optimization methods in machine learning, emphasizing gradients, costs, and computational effort for efficient model training.
Optimality of Convergence Rates: Accelerated Gradient Descent
Explores the optimality of convergence rates in convex optimization, focusing on accelerated gradient descent and adaptive methods.
Implicit Bias in Machine Learning
Explores implicit bias, gradient descent, stability in optimization algorithms, and generalization bounds in machine learning.
Stochastic Optimization: Algorithms and Methods
Explores stochastic optimization algorithms and methods for convex problems with smooth and nonsmooth risks.
Optimization Methods: Theory Discussion
Explores optimization methods, including unconstrained problems, linear programming, and heuristic approaches.
Optimization with Constraints: KKT Conditions
Covers the KKT conditions for optimization with constraints, essential for solving constrained optimization problems efficiently.
Gradient Descent
Covers the concept of gradient descent, a general-purpose iterative algorithm for finding a minimum of a function.
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
Convex Optimization: Gradient Descent
Explores VC dimension, gradient descent, convex sets, and Lipschitz functions in convex optimization.
Structures in Non-Convex Optimization
Covers non-convex optimization, deep learning training problems, stochastic gradient descent, adaptive methods, and neural network architectures.
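Several of the lectures above center on stochastic gradient descent, the method named in this page's title. As a minimal illustration, the sketch below applies SGD to a one-parameter least-squares problem; the data, step size, and epoch count are illustrative assumptions, not taken from any of the lectures.

```python
import random

# Minimal SGD sketch: minimize (1/n) * sum_i (w * x_i - y_i)^2
# over a single scalar weight w, taking one per-sample gradient
# step at a time. All values here are illustrative assumptions.

random.seed(0)
xs = [float(i) for i in range(1, 11)]
ys = [3.0 * x for x in xs]          # noiseless ground truth: y = 3x

w = 0.0                              # initial weight
lr = 0.001                           # step size (learning rate)

for epoch in range(200):
    # Visit samples in a fresh random order each epoch.
    for i in random.sample(range(len(xs)), len(xs)):
        grad = 2.0 * (w * xs[i] - ys[i]) * xs[i]  # per-sample gradient
        w -= lr * grad                             # stochastic step

print(round(w, 3))  # converges to 3.0 on this noiseless problem
```

Because the data are noiseless, every per-sample gradient points toward the same minimizer, so plain SGD with a fixed step size converges; on noisy data one would typically decay the learning rate, which is the setting the convergence-rate lectures above analyze.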