Covers the fundamentals of optimal control theory, focusing on defining optimal control problems (OCPs), existence of solutions, performance criteria, physical constraints, and the principle of optimality.
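For orientation, a standard Bolza-form statement of an OCP is sketched below; the symbols (the terminal cost \varphi, running cost L, dynamics f, and admissible sets U, X) are generic placeholders, not notation taken from the source.

```latex
% Illustrative Bolza-form optimal control problem (generic notation)
\begin{aligned}
\min_{u(\cdot)} \quad & J(u) = \varphi\bigl(x(t_f)\bigr)
  + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\, dt
  && \text{(performance criterion)} \\
\text{s.t.} \quad & \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0
  && \text{(system dynamics)} \\
& u(t) \in U, \qquad x(t) \in X
  && \text{(physical constraints)}
\end{aligned}
```

In this setting the principle of optimality states that the tail of an optimal trajectory, restarted from any intermediate state and time, is itself optimal for the corresponding sub-problem.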
Explores explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems, covering optimisation, sampling, and numerical experiments.
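As a rough illustration of the optimisation side only, the sketch below applies a basic undamped Chebyshev-type explicit stabilised Runge-Kutta step to the gradient flow of a negative log-posterior in a toy linear-Gaussian inverse problem. The scheme, the toy model, and all names (chebyshev_step, grad_phi, the step size and stage-count choices) are assumptions made for this sketch, not the document's method.

```python
import numpy as np

# Toy linear-Gaussian Bayesian inverse problem: y = A @ theta_true + noise.
# Negative log-posterior: Phi(theta) = ||A theta - y||^2 / (2 sigma2) + ||theta||^2 / (2 tau2)
rng = np.random.default_rng(0)
d, m = 20, 40
A = rng.standard_normal((m, d))
theta_true = rng.standard_normal(d)
sigma2, tau2 = 0.01, 1.0
y = A @ theta_true + np.sqrt(sigma2) * rng.standard_normal(m)

def grad_phi(theta):
    """Gradient of the negative log-posterior (drives the gradient flow)."""
    return A.T @ (A @ theta - y) / sigma2 + theta / tau2

def chebyshev_step(theta, h, s, grad):
    """One undamped Chebyshev (RKC-type) step of length h with s internal stages
    for the gradient flow d(theta)/dt = -grad(theta)."""
    g_prev2 = theta
    g_prev1 = theta - (h / s**2) * grad(theta)            # first internal stage
    for _ in range(2, s + 1):                             # three-term Chebyshev recurrence
        g_new = 2.0 * g_prev1 - g_prev2 - (2.0 * h / s**2) * grad(g_prev1)
        g_prev2, g_prev1 = g_prev1, g_new
    return g_prev1

# The real stability interval grows like 2*s**2, so a stiff gradient flow
# (large spectral radius of the Hessian) tolerates a much larger h than forward Euler.
lam_max = np.linalg.eigvalsh(A.T @ A / sigma2 + np.eye(d) / tau2).max()
h = 1.0                                              # far beyond the forward-Euler limit 2/lam_max
s = int(np.ceil(np.sqrt(h * lam_max / 2.0))) + 1     # pick stages so that h*lam_max <= 2*s**2

theta = np.zeros(d)
for _ in range(200):
    theta = chebyshev_step(theta, h, s, grad_phi)

# theta should approach the MAP estimate of this quadratic posterior.
theta_map = np.linalg.solve(A.T @ A / sigma2 + np.eye(d) / tau2, A.T @ y / sigma2)
print("distance to MAP:", np.linalg.norm(theta - theta_map))
```

The design point of such schemes is that the number of internal stages s grows only like the square root of h times the stiffness, so each outer step stays cheap while remaining stable; sampling variants replace the deterministic gradient flow with a stochastic (e.g. Langevin-type) dynamic.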