Stochastic Gradient Descent: Theory and Applications
Related lectures (27)
Non-Convex Optimization: Techniques and Applications
Covers non-convex optimization techniques and their applications in machine learning.
Exploration Bias
Explores regularization, learning algorithms, and subgaussian assumptions in machine learning.
Feed-forward Networks
Introduces feed-forward networks, covering neural network structure, training, activation functions, and optimization, with applications in forecasting and finance.
Optimization with Constraints: KKT Conditions Explained
Covers the KKT conditions for optimization with constraints, detailing their application and significance in solving constrained problems.
Proximal and Subgradient Descent: Optimization Techniques
Discusses proximal and subgradient descent methods for optimization in machine learning.
Neural Networks: Training and Optimization
Explores the training and optimization of neural networks, addressing challenges like non-convex loss functions and local minima.
Optimization Methods: Convergence and Trade-offs
Covers optimization methods, convergence guarantees, trade-offs, and variance reduction techniques in numerical optimization.