In this paper, we present a multilevel Monte Carlo (MLMC) version of the Stochastic Gradient (SG) method for optimization under uncertainty, in order to tackle Optimal Control Problems (OCP) in which the constraints are described by PDEs with random parameters. The deterministic control acts as a distributed forcing term in the random PDE, and the objective function is an expected quadratic loss. We use a Stochastic Gradient approach to compute the optimal control, in which the steepest descent direction of the expected loss is replaced, at each iteration, by independent MLMC estimators of increasing accuracy and computational cost. The refinement strategy is chosen a priori so that the bias and, possibly, the variance of the MLMC estimator decay as functions of the iteration counter. Detailed convergence and complexity analyses of the proposed strategy are presented, and asymptotically optimal decay rates are identified that minimize the total computational work. We also present and analyze an alternative version of the multilevel SG algorithm that uses a randomized MLMC estimator at each iteration. Our methodology is validated through a numerical example.
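To illustrate the idea on a scale far smaller than a PDE-constrained problem, the following is a minimal, hypothetical sketch of an MLMC-SG loop for a toy scalar objective J(u) = 0.5 E[(u - Y)^2], with Y ~ N(0, 1). Here the "level-l solve" is simulated by rounding Y to a grid of width 2^-l (a stand-in for a PDE solved on mesh level l); the telescoping MLMC sum, the level-coupled samples, the per-iteration refinement, and the decaying step size are assumptions chosen for illustration, not the paper's actual discretization or tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grad(u, level, omega):
    # Level-l gradient sample for the toy objective
    # J(u) = 0.5 * E[(u - Y)^2]: the exact gradient is u - E[Y].
    # Rounding Y to a grid of width 2**-level mimics a discretized
    # PDE solve whose bias decays with the level.
    y_l = np.round(omega * 2**level) / 2**level
    return u - y_l

def mlmc_grad(u, max_level, n0):
    # Telescoping MLMC estimator of the gradient:
    # E[g_L] = E[g_0] + sum_{l=1}^{L} E[g_l - g_{l-1}],
    # with each correction term averaged over coupled samples
    # (same random draws on the fine and coarse level).
    est = 0.0
    for level in range(max_level + 1):
        n_l = max(n0 // 2**level, 1)  # fewer samples on finer, costlier levels
        omegas = rng.standard_normal(n_l)
        fine = sample_grad(u, level, omegas)
        if level == 0:
            est += fine.mean()
        else:
            coarse = sample_grad(u, level - 1, omegas)
            est += (fine - coarse).mean()
    return est

# MLMC-SG iteration: the estimator's accuracy (finest level and
# sample sizes) grows with the iteration counter k, while the step
# size decays as t0 / (k + 1). The schedules here are illustrative.
u, t0 = 5.0, 1.0
for k in range(200):
    g = mlmc_grad(u, max_level=min(k // 20, 6), n0=8 + k)
    u -= t0 / (k + 1) * g
# u should now be close to the minimizer E[Y] = 0
```

The coupling of fine and coarse samples through the same draws `omegas` is what makes the correction terms low-variance, so few samples are needed on the expensive fine levels; this is the mechanism that lets the MLMC estimator's bias and variance be driven down across iterations at reduced total cost.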