Policy Gradient Methods: Convergence and Optimization
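The lecture's subject, policy gradient methods, can be illustrated with a minimal REINFORCE-style sketch: a softmax policy over the arms of a toy bandit, updated by ascending the reward-weighted score function. The environment, step size, and episode count below are illustrative assumptions, not material taken from the lecture itself.

```python
# Minimal policy gradient (REINFORCE-style) sketch on a toy multi-armed bandit.
# Assumptions: softmax policy with one logit per arm, Gaussian rewards, fixed step size.
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.8])  # hypothetical expected reward per arm
theta = np.zeros(3)                      # policy parameters (one logit per arm)
alpha = 0.1                              # step size (assumed)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for episode in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)                 # sample an action from the policy
    reward = rng.normal(true_means[a], 0.1)    # noisy reward for the chosen arm
    grad_log_pi = -probs                       # gradient of log pi(a | theta) w.r.t. theta
    grad_log_pi[a] += 1.0
    theta += alpha * reward * grad_log_pi      # REINFORCE ascent step

print("learned action probabilities:", softmax(theta))
```

Over the episodes the probability mass concentrates on the highest-reward arm, which is the basic convergence behaviour the lecture title refers to.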
Related lectures (26)
Proximal Gradient Descent: Optimization Techniques in Machine Learning
Discusses proximal gradient descent and its applications to optimization problems in machine learning.
Optimization Techniques: Convexity in Machine Learning
Covers optimization techniques in machine learning, focusing on convexity and its implications for efficient problem-solving.
Optimization Techniques: Convexity and Algorithms in Machine Learning
Covers optimization techniques in machine learning, focusing on convexity, algorithms, and their applications in ensuring efficient convergence to global minima.
Linear Programming Techniques in Reinforcement Learning
Covers the linear programming approach to reinforcement learning, focusing on its applications and advantages in solving Markov decision processes.
The Hidden Convex Optimization Landscape of Deep Neural Networks
Explores the hidden convex optimization landscape of deep neural networks, showcasing the transition from non-convex to convex models.
Faster Gradient Descent: Projected Optimization Techniques
Covers faster gradient descent methods and projected gradient descent for constrained optimization in machine learning.
Proximal and Subgradient Descent: Optimization Techniques
Discusses proximal and subgradient descent methods for optimization in machine learning.
Gradient Descent: Principles and Applications
Covers gradient descent, its principles, applications, and convergence rates in optimization for machine learning.
Optimization Techniques: Gradient Descent and Convex Functions
Provides an overview of optimization techniques, focusing on gradient descent and properties of convex functions in machine learning.
Optimization Methods in Machine Learning
Explores optimization methods in machine learning, emphasizing gradients, cost functions, and computational effort for efficient model training.
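Several of the lectures listed above cover plain gradient descent on smooth convex objectives. The sketch below shows that setting on a small least-squares problem; the objective, step size (1/L with L the gradient's Lipschitz constant), and iteration count are illustrative assumptions.

```python
# Minimal gradient descent sketch on a smooth convex objective,
# f(x) = 0.5 * ||Ax - b||^2, with the classical 1/L step size.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def grad(x):
    # gradient of f(x) = 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, where L = sigma_max(A)^2
for _ in range(500):
    x = x - step * grad(x)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to least-squares solution:", np.linalg.norm(x - x_star))
```

With this step size the iterates converge to the global minimizer at the standard O(1/k) rate for smooth convex functions, which is the kind of guarantee these lectures analyze.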