
# Juan Pablo Madrigal Cianci

This person is no longer with EPFL


## Related research domains (3)

## Related publications (6)

### Markov chain Monte Carlo

In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.
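As a concrete illustration (not part of the original page), here is a minimal random-walk Metropolis–Hastings sampler; the target density and all tuning constants are illustrative choices.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: a minimal sketch.

    log_target is the log-density of the desired distribution,
    known only up to an additive constant.
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x  # record the current state of the chain
    return samples

# Target: standard normal (log-density up to a constant).
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0,
                              n_steps=50_000, step=2.4)
```

With enough steps, the empirical distribution of `samples` approaches the standard normal, as described above.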

### Markov chain

A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC).
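A tiny two-state DTMC makes this concrete (a sketch; the transition probabilities are made up):

```python
import numpy as np

# Two-state discrete-time Markov chain (state 0 = "sunny", 1 = "rainy").
# Row i of P gives the distribution of the next state, given current state i.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The distribution after n steps is the initial distribution times P^n.
dist = np.array([1.0, 0.0])  # start surely in state 0
for _ in range(50):
    dist = dist @ P          # one discrete time step

# The chain forgets its starting point: dist approaches the stationary
# distribution, the left eigenvector of P with eigenvalue 1 (here [5/6, 1/6]).
```

"What happens next depends only on the state of affairs now" shows up in the code: each step uses only `dist`, never the earlier history.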

### Inverse problem

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.
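A toy 1-D deblurring example illustrates why inverse problems are delicate (the blur operator and all parameters here are illustrative, and Tikhonov regularization is one standard remedy among many):

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward problem: a smoothing (blurring) operator A maps causes x to effects y.
n = 50
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[20:30] = 1.0                          # unknown cause (a box signal)
y = A @ x_true + 0.01 * rng.normal(size=n)   # noisy observed effects

# Inverse problem: recover x from y. Naive inversion of the nearly singular A
# amplifies the noise; Tikhonov regularization stabilizes the solve:
#   minimize ||A x - y||^2 + alpha ||x||^2
alpha = 1e-3
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

The regularized reconstruction recovers a smoothed version of the box, trading a small bias for stability against the measurement noise.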

This thesis is devoted to the construction, analysis, and implementation of two types of hierarchical Markov chain Monte Carlo (MCMC) methods for the solution of large-scale Bayesian inverse problems (BIPs).

The first hierarchical method is based on the idea of parallel tempering and is well suited for BIPs whose underlying posterior measure is multi-modal or concentrates around a lower-dimensional, non-linear manifold. In particular, we present two generalizations of the parallel tempering algorithm in the context of discrete-time MCMC methods for Bayesian inverse problems. These generalizations use state-dependent swapping rates and are inspired by the continuous-time infinite swapping algorithm of Plattner et al. [J Chem Phys 135(13):134111, 2011]. We present a thorough convergence analysis of the proposed methods and show that they are reversible and geometrically ergodic. Numerical experiments conducted over an array of BIPs show that the proposed algorithms significantly improve sampling efficiency over competing methodologies.

The second hierarchical method is based on multilevel MCMC (ML-MCMC) techniques. In this setting, instead of sampling directly from a sufficiently accurate (and computationally expensive) posterior measure, one introduces a sequence of accuracy levels for the solution of the underlying computational model, which induces a hierarchy of posterior measures of increasing accuracy and sampling cost. The key point of this approach is to construct highly coupled Markov chains across levels and combine them with the standard multilevel Monte Carlo argument, obtaining a better cost-tolerance complexity than a single-level MCMC algorithm. We present two multilevel MCMC algorithms, which can be thought of as extensions of the ideas of Dodwell et al. [SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 1075–1108].

Our first ML-MCMC method extends these ideas to a setting where a wider class of independent Metropolis–Hastings (IMH) proposals is considered. We provide a thorough theoretical analysis, giving sufficient conditions on the proposals and the family of posteriors under which the coupled chains generated by our method have a unique invariant probability measure and are uniformly ergodic. We also generalize the cost-tolerance theorem of Dodwell et al. to our setting, and propose a self-tuning, continuation-type ML-MCMC algorithm.

Our second ML-MCMC method admits state-dependent proposals by using a maximal coupling approach. This is desirable, from a methodological perspective, whenever it is difficult to construct suitable IMH proposals, or when the empirical measure built from posterior samples at the previous level does not satisfy the assumptions required for convergence of the ML-MCMC method. We present a theoretical analysis of this method and show that it has an invariant probability measure and converges to it with geometric ergodicity. We also extend the cost-tolerance theorem of Dodwell et al. to this algorithm, albeit under quite restrictive assumptions. We illustrate both proposed ML-MCMC methodologies on several numerical examples.
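The parallel tempering idea used in the first method can be sketched in a few lines. The sketch below is the classic scheme with fixed swap attempts, not the state-dependent swap rates developed in the thesis, and the bimodal target and all tuning constants are illustrative.

```python
import numpy as np

def parallel_tempering(log_post, betas, n_steps, step=1.0, seed=0):
    """Classic parallel tempering (a minimal sketch).

    Chain k targets the tempered density pi(x)^beta_k; hotter chains
    (smaller beta) cross energy barriers easily, and swaps pass those
    mode jumps down to the cold chain (beta = 1).
    """
    rng = np.random.default_rng(seed)
    K = len(betas)
    x = np.zeros(K)
    out = np.empty(n_steps)
    for i in range(n_steps):
        # Within-chain random-walk Metropolis updates.
        for k in range(K):
            prop = x[k] + step / np.sqrt(betas[k]) * rng.normal()
            if np.log(rng.uniform()) < betas[k] * (log_post(prop) - log_post(x[k])):
                x[k] = prop
        # Swap attempt between a random adjacent pair of temperatures.
        k = rng.integers(K - 1)
        log_a = (betas[k] - betas[k + 1]) * (log_post(x[k + 1]) - log_post(x[k]))
        if np.log(rng.uniform()) < log_a:
            x[k], x[k + 1] = x[k + 1], x[k]
        out[i] = x[0]  # keep samples only from the cold chain
    return out

# Bimodal posterior: equal-weight mixture of normals centered at -4 and +4.
log_post = lambda t: np.logaddexp(-0.5 * (t - 4) ** 2, -0.5 * (t + 4) ** 2)
samples = parallel_tempering(log_post, betas=[1.0, 0.4, 0.1], n_steps=20_000)
```

A single random-walk chain started at 0 would tend to get stuck in one mode; the tempered hierarchy lets the cold chain visit both.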

Fabio Nobile, Juan Pablo Madrigal Cianci

In this work, we present, analyze, and implement a class of multilevel Markov chain Monte Carlo (ML-MCMC) algorithms based on independent Metropolis–Hastings proposals for Bayesian inverse problems. In this context, the likelihood function involves solving a complex differential model, which is then approximated on a sequence of increasingly accurate discretizations. The key point of this algorithm is to construct highly coupled Markov chains and combine them with the standard multilevel Monte Carlo argument to obtain a better cost-tolerance complexity than a single-level MCMC algorithm. Our method extends the ideas of Dodwell et al. [SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 1075–1108] to a wider range of proposal distributions. We present a thorough convergence analysis of the proposed ML-MCMC method and show, in particular, that (i) under some mild conditions on the (independent) proposals and the family of posteriors, there exists a unique invariant probability measure for the coupled chains generated by our method, and (ii) such coupled chains are uniformly ergodic. We also generalize the cost-tolerance theorem of Dodwell et al. to our wider class of ML-MCMC algorithms. Finally, we propose a self-tuning, continuation-type ML-MCMC algorithm. The presented method is tested on an array of academic examples, where some of our theoretical results are numerically verified. These numerical experiments demonstrate that our extended ML-MCMC method remains robust when targeting pathological posteriors for which some previously proposed ML-MCMC algorithms fail.

2023

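The multilevel telescoping-sum idea underlying these ML-MCMC methods can be illustrated with plain multilevel Monte Carlo for an Euler-discretized SDE. The sketch below shows only the coupling of coarse and fine levels through shared Brownian increments, not the MCMC variant of the work above, and the model and sample counts are illustrative.

```python
import numpy as np

def mlmc_estimate(levels, n_samples, T=1.0, x0=1.0, a=0.05, b=0.2, seed=0):
    """Multilevel Monte Carlo telescoping estimator (a minimal sketch):
        E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    where level l uses 2**l Euler steps for dX = a*X dt + b*X dW, and the
    coarse and fine paths share the same Brownian increments (the coupling)
    so the correction terms have small variance and need few samples.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for l, n in zip(levels, n_samples):
        m = 2 ** l                            # fine-level timesteps
        dt = T / m
        dW = np.sqrt(dt) * rng.normal(size=(n, m))
        xf = np.full(n, x0)
        for i in range(m):                    # fine Euler path
            xf = xf + a * xf * dt + b * xf * dW[:, i]
        if l == 0:
            total += xf.mean()                # coarsest term E[Q_0]
        else:
            xc = np.full(n, x0)
            dWc = dW[:, 0::2] + dW[:, 1::2]   # coarse increments from fine ones
            for i in range(m // 2):           # coarse Euler path, step 2*dt
                xc = xc + a * xc * 2 * dt + b * xc * dWc[:, i]
            total += (xf - xc).mean()         # correction term E[Q_l - Q_{l-1}]
    return total

# Estimate E[X_T] for geometric Brownian motion (exact value exp(a*T)).
est = mlmc_estimate(levels=[0, 1, 2, 3],
                    n_samples=[40_000, 10_000, 2_500, 600])
```

Note how most samples are spent on the cheap coarse level and only a few on the expensive fine levels; this is the source of the improved cost-tolerance complexity that the ML-MCMC papers carry over to the MCMC setting.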