Explores optimization fundamentals, including convexity, gradient descent, and non-convex minimization, illustrated with examples such as maximum likelihood estimation and ridge regression.
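A minimal sketch of one of these examples: plain gradient descent on the convex, smooth ridge regression objective. The step size, regularization strength, and synthetic data below are illustrative assumptions, not values taken from the lecture.

```python
import numpy as np

def ridge_grad(w, X, y, lam):
    """Gradient of (1/2n)||Xw - y||^2 + (lam/2)||w||^2 (convex, smooth)."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n + lam * w

def gradient_descent(X, y, lam=0.1, step=0.1, iters=500):
    """Plain gradient descent with a fixed step size (hypothetical choices)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= step * ridge_grad(w, X, y, lam)
    return w

# Synthetic regression data, invented for this example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
w_hat = gradient_descent(X, y)
print(w_hat)  # should be close to the true coefficients above
```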
Explores gradient descent and subgradient methods for training machine learning models, along with adaptive techniques such as Adam.
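As a concrete illustration of the Adam update: exponential moving averages of the gradient and its square, with bias correction. A hedged NumPy sketch follows; the hyperparameters are the commonly cited defaults, and the quadratic test problem is invented for the example.

```python
import numpy as np

def adam(grad, w0, step=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, iters=1000):
    """Adam: per-coordinate steps scaled by moving averages of the
    gradient (m) and squared gradient (v), with bias correction."""
    w = w0.copy()
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, iters + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
        w -= step * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy problem: minimize ||w - 3||^2; a stochastic minibatch gradient
# would slot into grad() the same way.
w_star = adam(lambda w: 2 * (w - 3.0), np.zeros(4), step=0.1)
print(w_star)  # approaches [3, 3, 3, 3]
```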
Explores stochastic gradient descent with iterate averaging, compares it with full gradient descent, and discusses challenges in non-convex optimization and sparse recovery techniques.
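One plausible form of SGD with iterate (Polyak-Ruppert) averaging on least squares is sketched below: the running average of iterates is returned alongside the last iterate, smoothing the noise of single-sample gradients. The step size, epoch count, and data are assumptions made for illustration.

```python
import numpy as np

def sgd_with_averaging(X, y, step=0.05, epochs=5, seed=0):
    """SGD on least squares with a running average of the iterates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]   # single-sample stochastic gradient
            w -= step * g
            t += 1
            w_avg += (w - w_avg) / t       # incremental average of iterates
    return w, w_avg

# Synthetic data, invented for this example.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
w_last, w_bar = sgd_with_averaging(X, y)
print(w_last, w_bar)  # the averaged iterate is typically less noisy
```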
Covers gradient descent methods for convex and non-convex problems, including smooth unconstrained convex minimization and maximum likelihood estimation, with examples such as ridge regression and image classification.
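To connect maximum likelihood estimation with smooth convex minimization, a hedged sketch: logistic regression fit by gradient descent on the negative log-likelihood, with synthetic features standing in for image data. All parameter choices here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_grad(w, X, y):
    """Gradient of the logistic-regression negative log-likelihood, so
    descending it performs maximum likelihood estimation."""
    return X.T @ (sigmoid(X @ w) - y) / X.shape[0]

def fit_logistic(X, y, step=0.5, iters=2000):
    """Gradient descent on a smooth convex objective (the logistic NLL)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= step * nll_grad(w, X, y)
    return w

# Toy stand-in for a binary image-classification task: each row of X
# plays the role of a flattened image's features (synthetic data).
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X @ np.array([1.0, -1.0, 2.0, 0.0]) > 0).astype(float)
w_hat = fit_logistic(X, y)
print(np.mean((sigmoid(X @ w_hat) > 0.5) == y))  # training accuracy
```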