Publication

MATHICSE Technical Report: Analysis of stochastic gradient methods for PDE-constrained optimal control problems with uncertain parameters

Fabio Nobile, Sebastian Krumscheid, Matthieu Claude Martin
2018
Report or working paper
Abstract

We consider the numerical approximation of a risk-averse optimal control problem for an elliptic partial differential equation (PDE) with random coefficients. Specifically, the control function is a deterministic, distributed forcing term that minimizes the expected mean squared distance between the state (i.e. the solution to the PDE) and a target function, subject to a regularization term for well-posedness. For the numerical treatment of this risk-averse optimal control problem, we consider a Finite Element discretization of the underlying PDEs, a Monte Carlo sampling method, and gradient-type iterations to obtain the approximate optimal control. We provide a full error and complexity analysis of the proposed numerical schemes. In particular, we compare the complexity of a fixed Monte Carlo gradient method, in which the Finite Element discretization and the Monte Carlo sample are chosen initially and kept fixed over the gradient iterations, with a Stochastic Gradient method, in which the expectation in the computation of the steepest descent direction is approximated by independent Monte Carlo estimators with small sample sizes and possibly varying Finite Element mesh sizes across iterations. We show in particular that the second strategy results in an improved computational complexity. The theoretical error estimates and complexity results are confirmed by our numerical experiments.
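The contrast between the two sampling strategies described in the abstract can be sketched on a deliberately simplified surrogate problem. The sketch below replaces the Finite Element/PDE solve by a scalar "state" that depends on the control and on a random coefficient, and minimizes an expected squared misfit plus a regularization term; the function names, distribution, sample sizes, and step sizes are assumptions made for illustration only, not the discretization or parameters used in the report.

import numpy as np

rng = np.random.default_rng(0)

def state(u, xi):
    # Hypothetical scalar surrogate for the PDE solution: the state depends
    # on the deterministic control u and a random coefficient xi > 0.
    return u / (1.0 + xi)

def grad_sample(u, xi, target=1.0, alpha=1e-2):
    # Gradient of 0.5*(state - target)^2 + 0.5*alpha*u^2 with respect to u,
    # for a single realization xi (stands in for one PDE/adjoint solve).
    y = state(u, xi)
    return (y - target) / (1.0 + xi) + alpha * u

def fixed_mc_gradient(u0, n_samples=1000, n_iter=200, step=0.5):
    # Fixed Monte Carlo gradient: one sample set is drawn up front and
    # reused in every gradient iteration.
    xis = rng.lognormal(size=n_samples)
    u = u0
    for _ in range(n_iter):
        g = np.mean([grad_sample(u, xi) for xi in xis])
        u -= step * g
    return u

def stochastic_gradient(u0, n_iter=200, batch=5, step0=0.5):
    # Stochastic gradient: a small, independent Monte Carlo batch is drawn
    # at each iteration and the step size decreases (Robbins-Monro type).
    u = u0
    for k in range(1, n_iter + 1):
        xis = rng.lognormal(size=batch)
        g = np.mean([grad_sample(u, xi) for xi in xis])
        u -= (step0 / k) * g
    return u

print(fixed_mc_gradient(0.0), stochastic_gradient(0.0))

In this toy setting the stochastic gradient variant performs far fewer "solves" per iteration, which mirrors the complexity advantage the paper establishes for the PDE-constrained case.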

Related concepts (34)
Computational complexity
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
Conjugate gradient method
In numerical analysis, the conjugate gradient method is an algorithm for solving systems of linear equations whose matrix is symmetric positive definite. The method, devised in 1950 simultaneously by Cornelius Lanczos, Eduard Stiefel and Magnus Hestenes, is an iterative method that converges in a finite number of iterations (at most equal to the dimension of the linear system).
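As an illustrative aside (not part of this publication), a minimal sketch of the conjugate gradient iteration for a symmetric positive definite system A x = b might look as follows; the test matrix, right-hand side, and tolerance are arbitrary choices for the example.

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    # Solve A x = b for symmetric positive definite A.
    n = len(b)
    max_iter = max_iter or n            # at most n steps in exact arithmetic
    x = np.zeros(n)
    r = b - A @ x                       # residual
    p = r.copy()                        # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))         # approximately [0.0909, 0.6364]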
Gradient descent algorithm
The gradient algorithm, also called the gradient descent algorithm, is a differentiable optimization algorithm. It is therefore intended to minimize a real-valued differentiable function defined on a Euclidean space (for example ℝⁿ, the space of n-tuples of real numbers, equipped with a scalar product) or, more generally, on a Hilbert space. The algorithm is iterative and thus proceeds by successive improvements. At the current point, a step is taken in the direction opposite to the gradient, so as to decrease the function.
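For illustration only, the basic iteration can be written as below; the quadratic test function and the fixed step size are assumptions made for this sketch.

import numpy as np

def gradient_descent(grad_f, x0, step=0.1, tol=1e-8, max_iter=10_000):
    # Repeatedly move against the gradient until it (nearly) vanishes.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:     # stationary point reached
            break
        x = x - step * g                # step in the steepest descent direction
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2, minimized at (1, -3)
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)])
print(gradient_descent(grad_f, [0.0, 0.0]))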
Show more
Related publications (115)

A Streamline Upwind Petrov-Galerkin Reduced Order Method for Advection-Dominated Partial Differential Equations Under Optimal Control

Fabio Zoccolan, Gianluigi Rozza

In this paper we will consider distributed Linear-Quadratic Optimal Control Problems dealing with Advection-Diffusion PDEs for high values of the Peclet number. In this situation, computational instabilities occur, both for steady and unsteady cases. A Str ...
Walter De Gruyter GmbH, 2024

Reliable data-driven decision-making through optimal transport

Bahar Taskesen

Decision-making permeates every aspect of human and societal development, from individuals' daily choices to the complex decisions made by communities and institutions. Central to effective decision-making is the discipline of optimization, which seeks the ...
EPFL, 2024

Distributionally Robust Linear Quadratic Control

Daniel Kuhn, Bahar Taskesen, Cagil Kocyigit

Linear-Quadratic-Gaussian (LQG) control is a fundamental control paradigm that is studied in various fields such as engineering, computer science, economics, and neuroscience. It involves controlling a system with linear dynamics and imperfect observations ...
2023
Show more
Related MOOCs (32)
Warm-up for EPFL
Warm-up EPFL is intended for new EPFL students.
Introduction to optimization on smooth manifolds: first order methods
Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).
Digital Signal Processing I
Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.
Show more