Gradient Descent: Optimization Techniques
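The method named in this lecture's title can be sketched in a few lines. Below is a minimal, illustrative gradient-descent loop (not the lecture's own material), minimizing the toy objective f(x) = (x - 3)^2:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeat the basic update x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy objective f(x) = (x - 3)^2, so grad f(x) = 2 * (x - 3);
# the unique minimizer is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With a step size below 2/L (here L = 2), the iterates contract geometrically toward the minimizer.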
Related lectures (28)
Optimization Methods in Machine Learning
Explores optimization methods in machine learning, emphasizing gradients, costs, and computational efforts for efficient model training.
Optimality of Convergence Rates: Accelerated Gradient Descent
Explores the optimality of convergence rates in convex optimization, focusing on accelerated gradient descent and adaptive methods.
Optimization Trade-offs: Variance Reduction and Statistical Dimension
Explores optimization trade-offs, variance reduction, statistical dimension, and convergence analysis in optimization algorithms.
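One standard variance-reduction scheme of the kind that lecture analyzes is SVRG: each stochastic step is corrected by a full gradient computed at a periodic snapshot. A minimal sketch on a toy finite sum (my example, not the lecture's):

```python
import random

def svrg(grads, x0, lr=0.1, outer=30, inner=10, seed=0):
    """SVRG sketch for f(x) = (1/n) * sum_i f_i(x): each inner step uses
    one component gradient, corrected by a full gradient taken at a
    periodic snapshot, which shrinks the variance of the update."""
    rng = random.Random(seed)
    n = len(grads)
    x = x0
    for _ in range(outer):
        snapshot = x
        full_grad = sum(g(snapshot) for g in grads) / n
        for _ in range(inner):
            i = rng.randrange(n)
            x -= lr * (grads[i](x) - grads[i](snapshot) + full_grad)
    return x

# Toy finite sum: f_i(x) = (x - d_i)^2 for d = [1.0, 2.0, 3.0];
# the minimizer of the average is the sample mean, x = 2.0.
component_grads = [lambda x, d=d: 2 * (x - d) for d in [1.0, 2.0, 3.0]]
x_vr = svrg(component_grads, x0=0.0)
```

For quadratic components the correction cancels the noise exactly, so the iterates converge linearly rather than hovering around the optimum.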
Truncated Conjugate Gradients for Trust-Region Subproblem
Explores truncated conjugate gradients for efficiently solving the trust-region subproblem in optimization on manifolds.
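The conjugate-gradient iteration at the core of that method can be sketched for a plain linear system A x = b with symmetric positive-definite A; the truncated (Steihaug) variant adds early stopping at the trust-region boundary, omitted here. A minimal pure-Python version, for illustration only:

```python
def conjugate_gradient(A, b, steps=None):
    """Solve A x = b for symmetric positive-definite A, given as a list
    of rows; converges in at most n iterations in exact arithmetic."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x, with x = 0 initially
    p = r[:]                 # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(steps or n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-16:   # residual negligible: stop early
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 example: exact solution is x = (1/11, 7/11).
sol = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In the trust-region setting, A is the Hessian (possibly indefinite) and the iteration is additionally truncated when the step leaves the trust region or negative curvature is detected.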
Newton Method: Convergence and Quadratic Case
Covers the Newton method and its convergence properties near the optimal point.
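A one-dimensional Newton iteration shows the local behaviour that lecture describes: near a minimizer with positive curvature, the error is roughly squared at every step. An illustrative sketch, not the lecture's material:

```python
def newton_minimize(grad, hess, x0, steps=20):
    """Newton's method for 1-D minimization: x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(steps):
        x = x - grad(x) / hess(x)
    return x

# Non-quadratic objective f(x) = x^4 - 2x^2, minimizers at x = -1 and x = +1.
# Starting at x0 = 2.0, the iterates converge quadratically to x = +1.
x_newton = newton_minimize(lambda x: 4 * x**3 - 4 * x,
                           lambda x: 12 * x**2 - 4,
                           x0=2.0)
```

On an exactly quadratic objective the same iteration reaches the minimizer in a single step, which is why the quadratic case is treated separately.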
Gradient Descent: Optimization and Constraints
Discusses gradient descent for optimization with equality constraints and iterative convergence criteria.
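One simple way to handle a linear equality constraint is to follow each gradient step with a projection back onto the constraint set. A toy sketch (my example, assuming projected gradient descent rather than the lecture's exact scheme), minimizing x^2 + y^2 subject to x + y = 1:

```python
def projected_gd(x0, y0, lr=0.1, steps=100):
    """Projected gradient descent for f(x, y) = x^2 + y^2
    subject to x + y = 1; the minimizer is (0.5, 0.5)."""
    x, y = x0, y0
    for _ in range(steps):
        # unconstrained gradient step: grad f = (2x, 2y)
        x, y = x - lr * 2 * x, y - lr * 2 * y
        # project back onto the line x + y = 1 (orthogonal projection)
        c = (x + y - 1.0) / 2.0
        x, y = x - c, y - c
    return x, y

xc, yc = projected_gd(1.0, 0.0)  # feasible starting point
```

The projection keeps every iterate feasible, and convergence can be checked by the size of the projected gradient rather than the raw gradient.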
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
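The stochastic variant replaces the full gradient with the gradient of one randomly chosen term of the finite sum. A minimal sketch on a toy least-squares objective (illustrative only; with a constant step size the iterates only reach a small band around the optimum):

```python
import random

def sgd(data, x0, lr=0.05, epochs=200, seed=0):
    """SGD for f(x) = mean over d in data of (x - d)^2:
    each update uses the gradient of a single sample, 2 * (x - d)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(epochs):
        for d in rng.sample(data, len(data)):  # reshuffle each epoch
            x -= lr * 2 * (x - d)
    return x

# Minimizer of the average loss is the sample mean, here 2.0; with a
# constant step size the iterates hover in a small band around it.
x_sgd = sgd([1.0, 2.0, 3.0], x0=0.0)
```

A decaying step-size schedule (e.g. lr_t proportional to 1/t) shrinks that band and gives convergence in expectation.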
Neural Networks: Training and Optimization
Explores the training and optimization of neural networks, addressing challenges like non-convex loss functions and local minima.
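The non-convexity issue can be seen even in one dimension: on a loss with two minima, plain gradient descent lands in whichever basin the initialization belongs to (a toy illustration, not an actual neural-network loss):

```python
def gd(grad, x0, lr=0.01, steps=2000):
    """Plain gradient descent; which minimum it finds depends on x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Non-convex loss f(x) = (x^2 - 1)^2 with two minima, at x = -1 and x = +1.
grad = lambda x: 4 * x * (x * x - 1)

x_right = gd(grad, x0=0.5)    # starts in the right basin, converges to +1
x_left = gd(grad, x0=-0.5)    # starts in the left basin, converges to -1
```

Neural-network losses behave analogously in high dimension, which is why initialization and the optimizer's dynamics matter so much in training.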