Explores optimization methods for training machine learning models, including gradient descent, subgradient methods, and advanced techniques such as the Adam optimizer.
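To make the named update rule concrete, here is a minimal NumPy sketch of the Adam optimizer on a toy quadratic objective; the objective, step size, and hyperparameters are illustrative assumptions, not taken from the material itself.

```python
import numpy as np

def adam(grad, x0, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Minimal Adam: bias-corrected first/second moment estimates of the gradient."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x)  # first moment (running mean of gradients)
    v = np.zeros_like(x)  # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)  # bias correction for zero initialization
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = ||x - 3||^2 with gradient 2(x - 3); minimum at x = 3.
x_star = adam(lambda x: 2 * (x - 3.0), np.zeros(2), lr=0.1, steps=500)
print(x_star)  # approaches [3. 3.]
```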
Explores adversarial machine learning, covering the generation of adversarial examples, robustness challenges, and techniques such as the Fast Gradient Sign Method (FGSM).
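A hedged sketch of FGSM against a logistic-regression classifier is shown below; the toy weights, input, and epsilon are assumptions chosen so the perturbation visibly flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: step of size eps in the sign of the input-gradient of the loss.

    For cross-entropy loss on a logistic model p = sigmoid(w.x + b),
    the gradient with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified as y = 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm(x, y, w, b, eps=0.6)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # ~0.82 before, ~0.43 after: label flips
```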
Covers optimization in machine learning, focusing on gradient descent for linear and logistic regression, stochastic gradient descent, and practical considerations.
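As a concrete instance of the topics listed, here is a short sketch of minibatch stochastic gradient descent for logistic regression; the synthetic data, learning rate, and batch size are illustrative choices, not drawn from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: labels from a known linear rule plus noise.
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr, epochs, batch = 0.1, 20, 32
for _ in range(epochs):
    idx = rng.permutation(n)          # reshuffle each epoch
    for start in range(0, n, batch):
        i = idx[start:start + batch]
        p = sigmoid(X[i] @ w)
        g = X[i].T @ (p - y[i]) / len(i)  # minibatch gradient of cross-entropy
        w -= lr * g

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(w, acc)  # recovered weights and training accuracy
```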
Covers gradient descent methods for convex and nonconvex problems, including smooth unconstrained convex minimization, maximum likelihood estimation, and examples like ridge regression and image classification.
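As a worked example of smooth unconstrained convex minimization, the sketch below runs gradient descent on the ridge-regression objective f(w) = ||Xw - y||^2 / (2n) + (lam/2)||w||^2 with the standard 1/L step size; the data and penalty strength are illustrative assumptions, and the closed-form solution is included only as a check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1  # ridge penalty strength (assumed for illustration)

def grad(w):
    """Gradient of f(w) = ||Xw - y||^2 / (2n) + (lam/2) ||w||^2."""
    return X.T @ (X @ w - y) / n + lam * w

# Constant step 1/L, where L = lambda_max(X^T X / n) + lam is the smoothness constant.
L = np.linalg.eigvalsh(X.T @ X / n).max() + lam
w = np.zeros(d)
for _ in range(500):
    w -= (1.0 / L) * grad(w)

# Closed-form ridge solution for comparison: (X^T X / n + lam I)^{-1} X^T y / n.
w_closed = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(np.max(np.abs(w - w_closed)))  # near zero: gradient descent matches closed form
```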