Gradient Descent: Optimization and Regularization
Related lectures (29)
Gradient Descent
Covers the concept of gradient descent, a universal algorithm used to find the minimum of a function.
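That basic idea can be sketched in a few lines (a made-up one-dimensional quadratic example, not code from the lecture): repeatedly step against the gradient with a fixed step size.

```python
# Minimal gradient descent on f(x) = (x - 3)^2, a hypothetical example.
# The gradient is f'(x) = 2(x - 3); the minimum is at x = 3.

def gradient_descent(grad, x0, step=0.1, n_iters=100):
    """Repeatedly move against the gradient with a fixed step size."""
    x = x0
    for _ in range(n_iters):
        x -= step * grad(x)
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# x_min ends up very close to 3
```

With a step size of 0.1 the iterates contract toward the minimizer geometrically; too large a step would diverge, which is why step-size selection recurs throughout the lectures below.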
Gradient Descent: Optimization Techniques
Explores gradient descent, loss functions, and optimization techniques in neural network training.
Proximal Gradient Descent: Optimization Techniques in Machine Learning
Discusses proximal gradient descent and its applications in optimizing machine learning algorithms.
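As a concrete instance (a sketch under the usual lasso setup, not the lecture's own code), proximal gradient descent alternates a gradient step on the smooth term with the proximal operator of the non-smooth term; for the l1 penalty that operator is soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iters=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 via proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        smooth_grad = A.T @ (A @ x - b)                         # gradient step on the smooth part
        x = soft_threshold(x - step * smooth_grad, step * lam)  # prox step on the l1 part
    return x

# Toy problem: with A = I the exact solution is soft_threshold(b, lam),
# so the small second coefficient is driven exactly to zero.
x = proximal_gradient(np.eye(2), np.array([3.0, 0.05]), lam=0.1, step=0.5)
```

The sparsity-inducing effect is visible even in this tiny example: the component of b below the threshold comes out exactly zero rather than merely small.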
Optimization Methods: Theory Discussion
Explores optimization methods, including unconstrained problems, linear programming, and heuristic approaches.
Gradient Descent
Covers the algorithm of gradient descent, aiming to minimize a function by iteratively moving in the direction of the steepest decrease.
Optimization Algorithms
Covers optimization algorithms, focusing on Proximal Gradient Descent and its variations.
Quasi-Newton Optimization
Covers gradient line search methods and optimization techniques with an emphasis on Wolfe conditions and positive definiteness.
Choosing a Step Size
Explores choosing a step size in optimization on manifolds, including backtracking line-search and the Armijo method.
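Backtracking with the Armijo sufficient-decrease test can be sketched as follows (a one-dimensional illustration with standard constants, not the lecture's implementation): start from a trial step and shrink it until the function decreases enough.

```python
def backtracking(f, grad, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    """Shrink alpha until f decreases by at least c * alpha * (directional slope)."""
    alpha = alpha0
    fx = f(x)
    slope = grad(x) * d  # directional derivative along d (scalar case)
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho  # Armijo test failed: shrink the step
    return alpha

# On f(x) = x^2 at x = 1 along the steepest-descent direction d = -2:
# alpha = 1 overshoots, so one halving yields the accepted step 0.5.
alpha = backtracking(lambda x: x * x, lambda x: 2 * x, x=1.0, d=-2.0)
```

Because the slope is negative along a descent direction, the loop is guaranteed to terminate for sufficiently small alpha, which is the finiteness property discussed in the last lecture below.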
Optimization Techniques: Stochastic Gradient Descent and Beyond
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its applications in constrained and non-convex problems.
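A bare-bones version of SGD (a toy least-squares example chosen for illustration, not the lecture's code) replaces the full gradient with the gradient of a single randomly chosen sample per update.

```python
import random

def sgd(grad_i, x0, n_samples, step=0.01, n_epochs=50, seed=0):
    """Update with the gradient of one randomly sampled data point at a time."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_epochs * n_samples):
        i = rng.randrange(n_samples)  # pick one sample uniformly at random
        x -= step * grad_i(x, i)
    return x

# Minimize the average of (x - a_i)^2 over a = [1, 2, 3]; the minimizer is the mean, 2.
a = [1.0, 2.0, 3.0]
x = sgd(lambda x, i: 2 * (x - a[i]), x0=0.0, n_samples=len(a))
```

With a fixed step size the iterates hover in a noise ball around the minimizer rather than converging exactly; decaying the step size is the usual remedy.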
Descent Methods and Line Search: Finiteness of the Line Search Algorithm
Explores the Wolfe conditions for line search algorithms and proves the finiteness of the line search parameter.
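The two Wolfe conditions themselves are easy to state in code (a scalar sketch with commonly used constants c1 and c2, offered as an illustration rather than the lecture's definition): sufficient decrease plus a curvature condition that rules out overly small steps.

```python
def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the sufficient-decrease (Armijo) and curvature conditions for step alpha."""
    slope = grad(x) * d  # directional derivative at x (scalar case)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * slope
    curvature = grad(x + alpha * d) * d >= c2 * slope
    return armijo and curvature

# On f(x) = x^2 at x = 1 with direction d = -2: alpha = 0.5 lands on the
# minimum and satisfies both conditions, while alpha = 1 overshoots and fails.
ok = satisfies_wolfe(lambda x: x * x, lambda x: 2 * x, x=1.0, d=-2.0, alpha=0.5)
```

Finiteness of a Wolfe line search amounts to showing that some interval of alpha values satisfies both inequalities, so a bracketing procedure must terminate.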