Explores optimization methods for training machine learning models, including gradient descent, subgradient methods for non-smooth objectives, and adaptive techniques such as the Adam optimizer.
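As a concrete illustration, here is a minimal NumPy sketch of the Adam update rule (exponential moving averages of the gradient and its square, with bias correction, as in Kingma and Ba); the function name and toy objective are illustrative, not from the source.

```python
import numpy as np

def adam(grad, x0, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Minimize a function given its gradient, using the Adam update rule."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)  # first-moment (mean) estimate of the gradient
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)  # bias correction for the running averages
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy usage: minimize f(x) = ||x - 1||^2, whose gradient is 2(x - 1).
x_min = adam(lambda x: 2 * (x - 1), x0=np.zeros(3), lr=0.1, steps=500)
```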
Covers optimization techniques in machine learning, focusing on convexity and the guarantees it provides: for convex objectives every local minimum is global, so gradient methods converge efficiently to a global minimum.
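The standard guarantee behind this: if f is convex and L-smooth, gradient descent with step size 1/L satisfies f(x_k) - f(x*) <= L ||x_0 - x*||^2 / (2k). A minimal sketch on a convex quadratic, where the global minimizer can be checked in closed form (the matrix and vector are illustrative):

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    """Plain gradient descent with a fixed step size."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Convex quadratic f(x) = 0.5 x^T A x - b^T x with A positive definite,
# so the unique stationary point x* = A^{-1} b is the global minimum.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()           # smoothness constant = largest eigenvalue
x_star = np.linalg.solve(A, b)            # global minimizer, for comparison
x_k = gradient_descent(lambda x: A @ x - b, np.zeros(2), step=1.0 / L, iters=200)
assert np.allclose(x_k, x_star, atol=1e-6)
```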
Discusses stochastic gradient descent (SGD) and its application to non-convex optimization, focusing on convergence rates and the challenges that arise in machine learning.
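A minimal sketch of SGD with a step size decaying like 1/sqrt(t), the schedule behind the classical O(1/sqrt(T)) rate for convex problems (in the non-convex case one typically shows instead that the expected gradient norm vanishes, i.e. convergence to stationary points); the least-squares data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)   # noisy linear observations

x = np.zeros(d)
for t in range(1, 5001):
    i = rng.integers(n)                      # sample one data point uniformly
    g = (A[i] @ x - b[i]) * A[i]             # stochastic gradient of 0.5 * (a_i^T x - b_i)^2
    x -= (0.05 / np.sqrt(t)) * g             # decaying step size ~ 1/sqrt(t)

print(np.linalg.norm(x - x_true))            # small residual error, limited by the noise
```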
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its extensions to constrained and non-convex problems.
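For constrained problems, a standard variant is projected SGD: take a stochastic gradient step, then project back onto the feasible set. A sketch with Euclidean projection onto an L2 ball (the constraint radius and data are illustrative):

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(1)
n, d = 500, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

x = np.zeros(d)
for t in range(1, 2001):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]                        # stochastic least-squares gradient
    x = project_l2_ball(x - (0.05 / np.sqrt(t)) * g)    # gradient step, then projection

assert np.linalg.norm(x) <= 1.0 + 1e-12                 # iterate stays feasible
```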
Explores explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems, covering optimization, sampling, and numerical experiments.
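Explicit stabilised methods use s stages built from Chebyshev polynomials to stretch the real stability interval of an explicit scheme to roughly [-2 s^2, 0], which is what makes them attractive for the stiff gradient flows and Langevin-type dynamics arising in optimization and sampling for Bayesian inverse problems. A minimal sketch of the undamped first-order Chebyshev method (practical schemes such as RKC or ROCK add damping and higher order; the stiff test problem is illustrative):

```python
import numpy as np

def chebyshev_step(f, y, h, s):
    """One step of the undamped first-order Chebyshev method with s stages.

    Built from the three-term Chebyshev recurrence; the real stability
    interval grows like [-2 s^2, 0], so s can be chosen to match the
    stiffness of the problem. s = 1 reduces to explicit Euler.
    """
    y_prev, y_curr = y, y + (h / s**2) * f(y)
    for _ in range(2, s + 1):
        y_prev, y_curr = y_curr, 2 * y_curr - y_prev + (2 * h / s**2) * f(y_curr)
    return y_curr

# Stiff linear test problem y' = -lam * y: stable whenever h * lam <= 2 s^2.
lam = 500.0
f = lambda y: -lam * y
y, h, s = np.array([1.0]), 0.05, 4    # h * lam = 25 <= 2 s^2 = 32
for _ in range(100):
    y = chebyshev_step(f, y, h, s)
print(y)                              # decays toward 0 without blow-up
```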