Strong Convexity and Convergence Rates
Related lectures (28)
Proximal Gradient Descent: Optimization Techniques in Machine Learning
Discusses proximal gradient descent and its applications in optimizing machine learning algorithms.
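To make the idea concrete, here is a minimal sketch of one common instance of proximal gradient descent (soft-thresholding applied to an l1-regularized least-squares problem, i.e. ISTA). The objective, matrix, penalty, and step size below are illustrative assumptions, not material from the lecture itself.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1:
    # alternate a gradient step on the smooth part with the prox of the l1 part.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

For `A = I` the method reduces to soft-thresholding `b` by `lam`, which gives a quick sanity check on an implementation.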
Optimality of Convergence Rates: Accelerated Gradient Descent
Explores the optimality of convergence rates in convex optimization, focusing on accelerated gradient descent and adaptive methods.
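As a rough illustration of acceleration, the sketch below uses a constant-momentum, look-ahead gradient step. This is a simplified Nesterov-style scheme with an assumed fixed momentum coefficient; the classical accelerated method covered in such lectures varies the momentum across iterations.

```python
def nesterov(grad, x0, step, momentum=0.9, iters=200):
    # Accelerated gradient sketch: evaluate the gradient at a look-ahead point,
    # then combine it with the accumulated velocity.
    x, v = x0, 0.0
    for _ in range(iters):
        lookahead = x + momentum * v      # extrapolate along the velocity
        v = momentum * v - step * grad(lookahead)
        x = x + v
    return x
```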
Truncated CG: Conjugate Gradients
Covers the truncated conjugate gradient algorithm for iteratively solving linear systems with positive definite operators.
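For reference, a textbook (untruncated) conjugate gradient loop for a symmetric positive definite system looks like the following; this is a generic sketch, not the lecture's truncated variant, which additionally stops when a trust-region-style bound is hit.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    # Solve A x = b for symmetric positive definite A by building
    # A-conjugate search directions from successive residuals.
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # next A-conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic the loop terminates in at most `n` iterations for an `n x n` system.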
Optimization Methods in Machine Learning
Explores optimization methods in machine learning, emphasizing gradients, costs, and computational efforts for efficient model training.
Optimization Techniques: Convexity in Machine Learning
Covers optimization techniques in machine learning, focusing on convexity and its implications for efficient problem-solving.
Quasi-Newton Optimization
Covers gradient line search methods and optimization techniques with an emphasis on Wolfe conditions and positive definiteness.
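A small sketch of the line-search ingredient: backtracking until the Armijo sufficient-decrease condition holds. Note this covers only the first of the two Wolfe conditions (the curvature condition is omitted), and the scalar setting and constants here are illustrative assumptions.

```python
def backtracking_line_search(f, grad_fx, x, d, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink alpha until the Armijo condition holds:
    # f(x + alpha * d) <= f(x) + c * alpha * <grad f(x), d>.
    fx = f(x)
    while f(x + alpha * d) > fx + c * alpha * (grad_fx * d):
        alpha *= rho              # backtrack geometrically
    return alpha
```

A full Wolfe line search would additionally require the directional derivative at the new point to be sufficiently less negative, which is what quasi-Newton updates like BFGS rely on to keep the Hessian approximation positive definite.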
Proximal and Subgradient Descent: Optimization Techniques
Discusses proximal and subgradient descent methods for optimization in machine learning.
Gradient Descent
Covers the concept of gradient descent in scalar cases, focusing on finding the minimum of a function by iteratively moving in the direction of the negative gradient.
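The scalar case described above can be sketched in a few lines; the quadratic test function and step size are illustrative choices, not taken from the lecture.

```python
def gradient_descent(grad, x0, step=0.1, iters=100):
    # Repeatedly move against the gradient: x <- x - step * grad(x).
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient 2 * (x - 3) and its minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

For this smooth convex example each iteration contracts the distance to the minimizer by a constant factor, which is the kind of rate the strong-convexity analysis makes precise.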
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
Conjugate Gradient Optimization
Explores Conjugate Gradient optimization, covering quadratic and nonlinear cases, Wolfe conditions, BFGS, CG algorithms, and matrix symmetry.