Optimality of Convergence Rate: Acceleration in Gradient Descent
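The lecture's topic is acceleration in gradient descent. As an illustrative sketch only (not the lecture's own material), the following compares plain gradient descent with Nesterov's accelerated variant on an assumed ill-conditioned quadratic, where acceleration's faster O(1/k²) rate shows up clearly:

```python
import numpy as np

def gradient_descent(grad, x0, lr, steps):
    # Plain gradient descent: x_{k+1} = x_k - lr * grad(x_k).
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def nesterov_agd(grad, x0, lr, steps):
    # Nesterov's accelerated gradient descent: the gradient is taken
    # at an extrapolated point y; t controls the momentum weight.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - lr * grad(y)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Assumed example problem: f(x) = 0.5 * x^T A x, minimum at the origin,
# with condition number 100 so plain gradient descent is slow.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
f = lambda x: 0.5 * x @ A @ x
x0 = np.array([1.0, 1.0])
lr = 1.0 / 100.0  # step size 1/L, L = largest eigenvalue of A

x_gd = gradient_descent(grad, x0, lr, 100)
x_agd = nesterov_agd(grad, x0, lr, 100)
# After the same 100 steps, the accelerated iterate has a markedly
# smaller objective value than the plain gradient-descent iterate.
```

The problem instance and step-size choice here are assumptions for illustration; the lecture's point is that this O(1/k²) rate is optimal for first-order methods on smooth convex problems.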
Related lectures (26)
Proximal and Subgradient Descent: Optimization Techniques
Discusses proximal and subgradient descent methods for optimization in machine learning.
Gradient Descent: Principles and Applications
Covers gradient descent, its principles, applications, and convergence rates in optimization for machine learning.
Optimization Techniques: Gradient Descent and Convex Functions
Provides an overview of optimization techniques, focusing on gradient descent and properties of convex functions in machine learning.
Proximal Operators and Constrained Optimization
Introduces proximal operators, gradient methods, and constrained optimization, exploring their convergence and practical applications.
Primal-Dual Optimization: Extra-Gradient Method
Explores the extra-gradient method for primal-dual optimization, covering nonconvex-concave problems, convergence rates, and practical performance.
Convex Optimization: Gradient Algorithms
Covers convex optimization problems and gradient-based algorithms to find the global minimum.
Gradient Descent Methods
Covers gradient descent methods for convex and nonconvex problems, including smooth unconstrained convex minimization, maximum likelihood estimation, and examples like ridge regression and image classification.
Proximal Gradient Descent: Optimization Techniques in Machine Learning
Discusses proximal gradient descent and its applications in optimizing machine learning algorithms.
Stochastic Optimization: Algorithms and Methods
Explores stochastic optimization algorithms and methods for convex problems with smooth and nonsmooth risks.
Convex Optimization
Introduces convex optimization, focusing on the importance of convexity in algorithms and optimization problems.
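Several of the lectures above concern proximal operators and proximal gradient descent. As a minimal sketch (an assumed example, not code from any of these lectures), proximal gradient descent on the lasso alternates a gradient step on the smooth term with the soft-thresholding proximal operator of the l1 term:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, lr, steps):
    # Proximal gradient descent (ISTA) for the lasso:
    #   min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1
    # Each iteration: gradient step on the smooth term,
    # then the prox of the nonsmooth l1 term.
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - lr * A.T @ (A @ x - b), lr * lam)
    return x

# Assumed toy instance: with A the identity, the lasso solution is
# exactly soft_threshold(b, lam), which the iteration reaches.
A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
x = proximal_gradient(A, b, lam=1.0, lr=1.0, steps=50)
# x recovers [2, 0, -1]: entries shrink toward zero by lam,
# and the small entry 0.5 is thresholded to exactly zero.
```

The identity design matrix is chosen so the solution is known in closed form; with a general A one would take lr = 1/L for L the largest eigenvalue of AᵀA.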