
# Sundar Subramaniam Ganesh


This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your query. The same applies to all other pages on this site. Please verify the information against official EPFL sources.


Associated units (1)

People conducting similar research: no results

Associated research fields: no results

Courses taught by this person: no results

Related publications (3)


Sundar Subramaniam Ganesh, Fabio Nobile

In this work, we tackle the problem of minimising the Conditional-Value-at-Risk (CVaR) of output quantities of complex differential models with random input data, using gradient-based approaches in combination with the Multi-Level Monte Carlo (MLMC) method. In particular, we consider the framework of multi-level Monte Carlo for parametric expectations and propose modifications of the MLMC estimator, error estimation procedure, and adaptive MLMC parameter selection to ensure the estimation of the CVaR and sensitivities for a given design with a prescribed accuracy. We then propose combining the MLMC framework with an alternating inexact minimisation-gradient descent algorithm, for which we prove exponential convergence in the optimisation iterations under the assumptions of strong convexity and Lipschitz continuity of the gradient of the objective function. We demonstrate the performance of our approach on two numerical examples of practical relevance, which evidence the same optimal asymptotic cost-tolerance behaviour as standard MLMC methods for fixed design computations of output expectations.
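The alternating inexact minimisation–gradient descent idea can be illustrated on a toy problem. The sketch below is a simplified, single-level Monte Carlo stand-in for the paper's MLMC approach, with a made-up quadratic quantity of interest: it alternates an exact inner minimisation over the CVaR threshold of the standard Rockafellar–Uryasev objective with an inexact gradient step in the design variable. All names and the model itself are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.9, 0.1          # CVaR level and tail mass 1 - alpha

def Q(theta, xi):
    # Toy quantity of interest: a quadratic in the design variable theta,
    # perturbed by the random input xi (a stand-in for a PDE output).
    return (theta - 1.0) ** 2 + theta * xi

def grad_Q(theta, xi):
    # Sensitivity of Q with respect to theta.
    return 2.0 * (theta - 1.0) + xi

theta, n = 0.2, 20_000
for _ in range(200):
    xi = rng.normal(0.0, 0.5, size=n)
    q = Q(theta, xi)
    # Inner step: minimise the Rockafellar-Uryasev objective
    # t + E[(Q - t)^+] / (1 - alpha) exactly over t -> empirical alpha-quantile.
    t = np.quantile(q, alpha)
    # Outer step: inexact gradient descent in theta; the CVaR gradient is the
    # expected sensitivity restricted to the tail samples {Q > t}.
    tail = q > t
    grad = (grad_Q(theta, xi) * tail).mean() / beta
    theta -= 0.05 * grad

print(round(theta, 3))   # settles near the CVaR-optimal design
```

Under the strong-convexity and Lipschitz-gradient assumptions mentioned in the abstract, iterations of this kind contract geometrically; here the Monte Carlo sample size is simply held fixed rather than adapted as in the MLMC framework.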

2022

This work aims to study the effects of wind uncertainties in civil engineering structural design. Optimising the design of a structure for safety or operability without factoring in these uncertainties can result in a design that is not robust to these perturbations. To control the effects of the uncertainties on structural loading, one has to leverage a multitude of concurrent design parameters. This work tackles design optimisation under uncertainty by leveraging shape parameters, focusing in particular on gradient-based methods.

This problem poses many mathematical and computational challenges. First is the formulation of the optimisation problem itself, which depends on which risk measures and design costs appear in the objective function and/or constraints. Gradient-based optimisation algorithms require the sensitivities of the objective function with respect to the shape parameters. Since these computations involve statistics of the underlying Quantities of Interest (QoI) and their sensitivities, their accurate and efficient estimation is of utmost importance. Multi-Level Monte Carlo (MLMC) algorithms are studied in this work for this purpose. However, novel and efficient MLMC estimators have to be developed for computing risk measures such as the Conditional Value-at-Risk (CVaR) or quantiles, which are important to the application. Preliminary results have shown promise when applied to a simple fluid-dynamics problem.

To produce samples of the QoI, an underlying fluid-dynamics PDE has to be solved, which often requires adaptive mesh refinement to capture local features and separation points. New techniques have to be developed to use mesh adaptivity within MLMC algorithms; so far, mostly predefined and static hierarchies of meshes have been considered in the literature. It is also planned to explore time-dependent PDEs and whether ergodicity properties can be exploited to provide improved statistical estimates.

Since MLMC algorithms require a large number of simulations, it is important to exploit their parallelism potential. In this work, a new programming library in the Python language is proposed that uses an external scheduling and distributed-computing tool to parallelise MLMC-type algorithms; the aforementioned estimators and error estimates are implemented in this framework. We also aim to explore asynchronous MLMC algorithms to improve scalability. Current MLMC algorithms contain synchronisation points where one has to wait until all samples are computed; an algorithm that can trigger new sample computations and update estimates on the fly would be highly desirable to avoid spooling loss when running on large machines. However, care has to be taken to avoid introducing unintended biases and correlations within this framework. Lastly, possible ways of estimating the sensitivities through a combination of the forward and adjoint problems underlying the quantity of interest, together with MLMC estimators, will also be explored, as will possibly asynchronous implementations of stochastic-gradient-based algorithms.
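The multi-level structure discussed above can be sketched in a few lines. Below is a minimal, hypothetical MLMC estimator of an output expectation for a toy "discretised" model whose level-ℓ bias decays like h_ℓ = 2^(−ℓ); the key points are that each correction term is sampled with the same random inputs on its fine and coarse level, and that every level's batch is independent of the others, which is what makes these algorithms so amenable to the distributed execution the text describes.

```python
import numpy as np

rng = np.random.default_rng(1)

def Q_level(xi, level):
    # Hypothetical level-l approximation of the exact QoI Q(xi) = xi**2:
    # the discretisation bias decays like h_l = 2**-level.
    h = 2.0 ** -level
    return (1.0 + h) * xi ** 2

def mlmc_estimate(n_per_level):
    """Telescoping MLMC estimator of E[Q_L]:
    E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    where each correction uses the SAME random inputs on both levels,
    so its variance (and hence its sample count) shrinks with the level."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        xi = rng.normal(size=n)                  # coupled inputs for the pair
        fine = Q_level(xi, level)
        coarse = Q_level(xi, level - 1) if level > 0 else 0.0
        est += np.mean(fine - coarse)            # each batch could run in parallel
    return est

# Geometrically decreasing sample sizes across 4 levels; here E[Q_3] = 1 + 2**-3.
est = mlmc_estimate([100_000, 20_000, 4_000, 800])
print(round(est, 3))
```

In an adaptive MLMC algorithm the per-level sample counts would be tuned from estimated variances rather than hard-coded as in this sketch.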

Sundar Subramaniam Ganesh, Sebastian Krumscheid, Fabio Nobile

In this work, we consider the problem of estimating the probability distribution, the quantile, or the conditional expectation above the quantile (the so-called Conditional Value-at-Risk, CVaR) of output quantities of complex random differential models by the MLMC method. We follow the approach of (reference), which recasts the estimation of the above quantities as the computation of suitable parametric expectations. We present novel computable error estimators for such quantities, which are then used to optimally tune the MLMC hierarchy in a continuation-type adaptive algorithm. We demonstrate the efficiency and robustness of our adaptive continuation-MLMC algorithm on an array of numerical test cases.
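The parametric-expectation viewpoint can be made concrete with a small sketch. Given samples of a model output X, the function φ(t) = t + E[(X − t)^+]/(1 − α) is evaluated on a grid of thresholds; its minimiser approximates the α-quantile (VaR) and its minimum the CVaR. This is a single-level Monte Carlo illustration with a made-up lognormal output, not the adaptive MLMC estimator of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.95

# Monte Carlo samples of a model output X (a lognormal stand-in for the
# output of a random differential model).
x = rng.lognormal(mean=0.0, sigma=0.5, size=20_000)

# Parametric expectation phi(t) = t + E[(X - t)^+] / (1 - alpha).
# Its minimiser over t is the alpha-quantile (VaR) and its minimum the CVaR.
t_grid = np.linspace(np.quantile(x, 0.5), np.quantile(x, 0.999), 200)
phi = t_grid + np.maximum(x[:, None] - t_grid, 0.0).mean(axis=0) / (1 - alpha)

var_est = t_grid[np.argmin(phi)]     # ~ alpha-quantile (VaR) of X
cvar_est = phi.min()                 # ~ CVaR_alpha of X
print(round(var_est, 2), round(cvar_est, 2))
```

Estimating the whole curve φ(t) at once is what makes error estimation and hierarchy tuning possible in the MLMC setting: the same level corrections serve every threshold t simultaneously.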

2022