Lecture
Gradient Descent Methods: Theory and Computation
Related lectures (27)
Page 2 of 3
Gradient Descent: Principles and Applications
Covers gradient descent, its principles, applications, and convergence rates in optimization for machine learning.
Proximal Gradient Descent: Optimization Techniques in Machine Learning
Discusses proximal gradient descent and its applications in optimizing machine learning algorithms.
Role of Computation in Optimization
Explores the role of computation in optimization, focusing on gradient descent for convex and nonconvex problems.
Optimization Techniques: Gradient Descent and Convex Functions
Provides an overview of optimization techniques, focusing on gradient descent and properties of convex functions in machine learning.
Non-Convex Optimization: Techniques and Applications
Covers non-convex optimization techniques and their applications in machine learning.
Mathematics of Data: Computation Role
Explores the role of computation in the mathematics of data, focusing on iterative methods, optimization, estimators, and descent principles.
Convex Optimization: Gradient Algorithms
Covers convex optimization problems and gradient-based algorithms to find the global minimum.
Proximal Operators and Constrained Optimization
Introduces proximal operators, gradient methods, and constrained optimization, exploring their convergence and practical applications.
Stochastic Gradient Descent: Optimization and Convergence
Explores stochastic gradient descent, covering convergence rates, acceleration, and practical applications in optimization problems.
Optimization Methods in Machine Learning
Explores optimization methods in machine learning, emphasizing gradients, costs, and computational efforts for efficient model training.
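Several of the lectures above center on the same two update rules: plain gradient descent, x_{k+1} = x_k - γ∇f(x_k), and its proximal variant for composite objectives. As a minimal reference sketch (the quadratic objective, the L1 penalty, and all function names below are illustrative, not taken from any listed lecture):

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    # Plain gradient descent: x_{k+1} = x_k - step * grad(x_k).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad, prox, x0, step, iters):
    # Proximal gradient descent: x_{k+1} = prox(x_k - step * grad(x_k), step).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox(x - step * grad(x), step)
    return x

# Smooth part f(x) = 0.5 * ||x - b||^2, whose gradient is x - b.
b = np.array([1.0, -2.0, 0.2])

# Plain gradient descent converges geometrically to the minimizer b.
x_gd = gradient_descent(lambda x: x - b, x0=np.zeros(3), step=0.5, iters=60)
# x_gd ≈ [1.0, -2.0, 0.2]

# Proximal gradient on f(x) + lam * ||x||_1; the closed-form minimizer
# is soft_threshold(b, lam), which the iteration reaches here.
lam = 0.5
x_pg = proximal_gradient(
    lambda x: x - b,
    lambda v, t: soft_threshold(v, t * lam),  # prox of (step * lam) * ||.||_1
    x0=np.zeros(3), step=1.0, iters=20,
)
# x_pg = [0.5, -1.5, 0.0]
```

With step size 1 on this objective, the proximal iteration maps any point to soft_threshold(b, lam) in a single step, which makes the fixed point easy to verify by hand.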