Multilevel Monte Carlo method

Summary

Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. Just as standard Monte Carlo methods, they rely on repeated random sampling, but the samples are taken on different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo by taking most samples with low accuracy and correspondingly low cost, and only very few samples with high accuracy and correspondingly high cost.
The goal of a multilevel Monte Carlo method is to approximate the expected value $\operatorname{E}[G]$ of a random variable $G$ that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations $G_0, G_1, \ldots, G_L$ with increasing accuracy, but also increasing cost, that converges to $G$ as $L \to \infty$. The basis of the multilevel method is the telescoping sum identity

$$\operatorname{E}[G_L] = \operatorname{E}[G_0] + \sum_{\ell=1}^{L} \operatorname{E}[G_\ell - G_{\ell-1}],$$
that is trivially satisfied because of the linearity of the expectation operator. Each of the expectations $\operatorname{E}[G_\ell - G_{\ell-1}]$ is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference $G_\ell - G_{\ell-1}$ at level $\ell$ requires a simulation of both $G_\ell$ and $G_{\ell-1}$.
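The telescoping estimator can be sketched in a few lines. This is an illustrative sketch, not any particular library's API; `sample_level` is a hypothetical user-supplied function that returns one sample of $G_0$ on level 0 and one sample of the coupled difference $G_\ell - G_{\ell-1}$ on levels $\ell \ge 1$:

```python
def mlmc_estimate(sample_level, L, N):
    """Multilevel Monte Carlo estimate of E[G_L] via the telescoping sum.

    sample_level(l) returns one sample of G_0 when l == 0, and one sample
    of the difference G_l - G_{l-1} (computed from the SAME underlying
    randomness for both terms) when l >= 1.
    N[l] is the number of Monte Carlo samples on level l.
    """
    estimate = 0.0
    for l in range(L + 1):
        # Plain Monte Carlo average of E[G_0] (l == 0) or E[G_l - G_{l-1}]
        estimate += sum(sample_level(l) for _ in range(N[l])) / N[l]
    return estimate
```

In practice `sample_level` would draw the coarse and fine approximations from the same random input, which is what makes the per-level variances small.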
The MLMC method works if the variances $\operatorname{Var}[G_\ell - G_{\ell-1}] \to 0$ as $\ell \to \infty$, which will be the case if both $G_\ell$ and $G_{\ell-1}$ approximate the same random variable $G$. By the Central Limit Theorem, this implies that one needs fewer and fewer samples to accurately approximate the expectation of the difference $G_\ell - G_{\ell-1}$ as $\ell \to \infty$. Hence, most samples will be taken on level $0$, where samples are cheap, and only very few samples will be required at the finest level $L$. In this sense, MLMC can be considered as a recursive control variate strategy.
The first application of MLMC is attributed to Mike Giles, in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces are found in the work of Heinrich in the context of parametric integration. Here, the random variable $G$ is known as the payoff, and the sequence of approximations $G_\ell$, $\ell = 0, \ldots, L$, uses an approximation to the sample path with time step $h_\ell = 2^{-\ell} T$.
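In this SDE setting, a level-$\ell$ sample couples a fine Euler–Maruyama path of step $h_\ell$ with a coarse path of step $h_{\ell-1} = 2 h_\ell$ driven by the same Brownian increments. The following is a minimal sketch for a European call under geometric Brownian motion; the function name and all parameter values are illustrative, not taken from the source:

```python
import math
import random

def coupled_payoff(level, T=1.0, S0=100.0, K=100.0, r=0.05,
                   sigma=0.2, rng=random):
    """One MLMC sample of the discounted European call payoff under
    geometric Brownian motion dS = r*S dt + sigma*S dW.

    The fine path uses Euler-Maruyama with h = 2**-level * T; for
    level >= 1 a coarse path with step 2*h is driven by the SAME
    Brownian increments, so the two payoffs are tightly coupled.
    Returns P_0 on level 0 and P_level - P_{level-1} otherwise."""
    n = 2 ** level          # number of fine time steps
    h = T / n
    s_f = S0
    if level == 0:
        dw = rng.gauss(0.0, math.sqrt(h))
        s_f += r * s_f * h + sigma * s_f * dw
        return math.exp(-r * T) * max(s_f - K, 0.0)
    s_c = S0
    for _ in range(n // 2):  # one coarse step per two fine steps
        dw1 = rng.gauss(0.0, math.sqrt(h))
        dw2 = rng.gauss(0.0, math.sqrt(h))
        s_f += r * s_f * h + sigma * s_f * dw1
        s_f += r * s_f * h + sigma * s_f * dw2
        # Coarse step reuses the summed fine increments (same Brownian path).
        s_c += r * s_c * 2 * h + sigma * s_c * (dw1 + dw2)
    disc = math.exp(-r * T)
    return disc * (max(s_f - K, 0.0) - max(s_c - K, 0.0))
```

Averaging `coupled_payoff(l)` over `N[l]` samples for each level and summing the averages gives the telescoping estimate of the expected payoff.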


Related courses (1)

MATH-450: Numerical integration of stochastic differential equations
In this course we will introduce and study numerical integrators for stochastic differential equations. These numerical methods are important for many applications.

Related publications (15)

Fabio Nobile, Sebastian Krumscheid, Michele Pisaroni (2020)
In this work we introduce and analyze a novel multilevel Monte Carlo (MLMC) estimator for the accurate approximation of central moments of system outputs affected by uncertainties. Central moments pla…

Fabio Nobile, Sebastian Krumscheid, Valentine Ginette Madeleine Rey (2019)
We quantify the effect of uncertainties on quantities of interest related to contact mechanics of rough surfaces. Specifically, we consider the problem of frictionless non adhesive normal contact betw…

(2022)
In this work, we tackle the problem of minimising the Conditional-Value-at-Risk (CVaR) of output quantities of complex differential models with random input data, using gradient-based approaches in co…

Related people (7)

Related units (1)