Explores gradient descent methods for smooth convex and non-convex problems, covering iterative update schemes, convergence rates, and common challenges in optimization.
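As a concrete illustration, below is a minimal gradient-descent sketch on a smooth, strongly convex quadratic with the standard 1/L step size; the objective, dimensions, and names are illustrative assumptions, not taken from the source.

```python
# Minimal gradient descent sketch, assuming a smooth strongly convex
# quadratic f(x) = 0.5 x^T A x - b^T x (all names here are illustrative).
import numpy as np

def gradient_descent(grad, x0, step, n_iters=500):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - step * grad(x)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + np.eye(10)          # symmetric positive definite -> strongly convex
b = rng.standard_normal(10)

L = np.linalg.eigvalsh(A).max()   # Lipschitz constant of the gradient
x = gradient_descent(lambda v: A @ v - b, np.zeros(10), step=1.0 / L)

print(np.linalg.norm(A @ x - b))  # gradient norm ~ 0 at the minimiser
```

With the 1/L step the iterates contract linearly on strongly convex problems; on merely smooth convex ones the same loop still converges, at the slower O(1/k) rate in function value.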
Explores explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems, covering optimization, sampling, and numerical experiments.
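For flavour, here is a minimal sketch of one step of a first-order explicit stabilised (Runge-Kutta-Chebyshev) integrator, assuming the classical van der Houwen-Sommeijer recurrence with a small damping parameter; this is a generic illustration of the method family, not the specific scheme, problem, or parameters used in the work summarised above.

```python
# One s-stage RKC1 step for y' = f(y); assumed damping eps widens the
# stability interval on the negative real axis to roughly [-2 s^2, 0] / h.
import numpy as np

def rkc1_step(f, y, h, s, eps=0.05):
    w0 = 1.0 + eps / s**2
    # Chebyshev values T_j(w0) and derivatives T_j'(w0) via recurrences.
    T = np.zeros(s + 1); dT = np.zeros(s + 1)
    T[0], dT[0] = 1.0, 0.0
    T[1], dT[1] = w0, 1.0
    for j in range(2, s + 1):
        T[j] = 2 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2 * T[j - 1] + 2 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]
    b = 1.0 / T                                   # b_j = 1 / T_j(w0)

    g_prev2, g_prev = y, y + (w1 / w0) * h * f(y) # stages W_0, W_1
    for j in range(2, s + 1):
        mu = 2 * b[j] * w0 / b[j - 1]
        nu = -b[j] / b[j - 2]
        mut = 2 * b[j] * w1 / b[j - 1]
        g = mu * g_prev + nu * g_prev2 + mut * h * f(g_prev)
        g_prev2, g_prev = g_prev, g
    return g_prev

lam = -100.0
f = lambda v: lam * (v - 2.0)        # stiff linear test problem
y, h = np.array([1.0]), 0.05         # h*|lam| = 5: forward Euler is unstable
for _ in range(40):
    y = rkc1_step(f, y, h, s=8)
print(y)                             # approaches the steady state 2.0
```

The appeal in optimization and sampling is that stability along the negative real axis grows quadratically in the stage number s, so stiff gradient flows and Langevin-type dynamics can be integrated with large effective steps at the cost of a few extra function evaluations.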
Discusses Stochastic Gradient Descent and its application to non-convex optimization, focusing on convergence rates and the challenges that arise in machine learning.
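As a sketch of the basic algorithm, the snippet below runs minibatch SGD with a 1/sqrt(k) decaying step size; a convex least-squares loss and synthetic data are assumed purely for illustration (the same loop applies unchanged to non-convex losses, where the standard guarantees concern stationary points rather than global minima).

```python
# Minimal minibatch SGD sketch on a least-squares loss; dataset, batch
# size, and step-size schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
batch, gamma0 = 32, 0.1
for k in range(1, 2001):
    idx = rng.integers(0, n, size=batch)          # sample a minibatch
    g = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # stochastic gradient
    w -= gamma0 / np.sqrt(k) * g                  # decaying step size
print(np.linalg.norm(w - w_true))                 # small residual error
```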