
# Markov chain

## Summary

A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics.
Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions.
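As a minimal sketch of a discrete-time Markov chain (the states, names, and transition probabilities below are illustrative, not taken from this page), one can simulate a two-state chain and observe convergence to its stationary distribution:

```python
import random

# Two-state weather chain: state 0 = "sunny", state 1 = "rainy".
# P[i][j] is the probability of moving from state i to state j;
# each row sums to 1.
P = [[0.9, 0.1],   # transitions out of state 0
     [0.5, 0.5]]   # transitions out of state 1

def simulate(n_steps, state=0, seed=42):
    """Simulate the chain: the next state depends only on the
    current one (the Markov property)."""
    rng = random.Random(seed)
    path = [state]
    for _ in range(n_steps):
        state = 0 if rng.random() < P[state][0] else 1
        path.append(state)
    return path

# The long-run fraction of time spent in state 1 approaches the
# stationary probability pi_1 = 1/6, obtained by solving pi = pi P
# with pi summing to 1.
path = simulate(100_000)
print(sum(path) / len(path))  # close to 1/6
```

The ergodic behavior illustrated here (empirical state frequencies converging to the stationary distribution) is what Markov chain Monte Carlo methods exploit to sample from complex distributions.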

## Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


## Related publications (100)

## Related people (14)

## Related concepts (93)

In probability theory and related fields, a stochastic (stəˈkæstɪk) or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time.

Andrey Nikolaevich Kolmogorov (Андре́й Никола́евич Колмого́ров, 25 April 1903 – 20 October 1987) was a Soviet mathematician who contributed to the mathematics of probability theory, topology, intuitionistic logic, and other fields.

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov.

## Related units (9)

## Related courses (101)

COM-516: Markov chains and algorithmic applications

The study of random walks finds many applications in computer science and communications. The goal of the course is to get familiar with the theory of random walks, and to get an overview of some applications of this theory to problems of interest in communications, computer and network science.

MGT-484: Applied probability & stochastic processes

This course focuses on dynamic models of random phenomena, and in particular, the most popular classes of such models: Markov chains and Markov decision processes. We will also study applications in queuing theory, finance, project management, etc.

COM-500: Statistical signal and data processing through applications

Building up on the basic concepts of sampling, filtering and Fourier transforms, we address stochastic modeling, spectral analysis, estimation and prediction, classification, and adaptive filtering, with an application oriented approach and hands-on numerical exercises.

Pierre-Olivier Guimond, Alexandre Roulet

We investigate the Rabi oscillation of an atom placed inside a quantum cavity where each mirror is formed by a chain of atoms trapped near a one-dimensional waveguide. This proposal was studied previously with the use of Markov approximation, where the delay due to the finite travel time of light between the two cavity mirrors is neglected. We show that Rabi oscillation analogous to that obtained with high-finesse classical cavities is achieved only when this travel time is much larger than the time scale that characterizes the superradiant response of the mirrors. Therefore, the delay must be taken into account and the dynamics of the problem is inherently non-Markovian. Parameters of interest such as the Rabi frequency and the cavity loss rate due to photon leakage through the mirrors are obtained.

Modeling the immune system (IS) means putting together a set of assumptions about its components (cells and organs) and their interactions. Simulations of a model show joint behavior of the components, which for complex realistic models is often impossible to find analytically. Simulations allow us to experiment on how initial concentrations and properties of the immune cells and viruses impact the IS behavior, and to gain better quantitative and qualitative insight into how the IS works and why different behavior patterns occur. A simulation, once it has been created, must be reviewed both statistically and analytically, as well as validated from the biological point of view. We analyzed Chao's immune system simulation [1][2] from a statistical and analytical point of view. We made explicit both the Markov chain that was simulated and the underlying process on which Chao's stage-structured approach was built. Furthermore, we established a test protocol for timestep validation, which Chao's simulator passed. We evaluated Chao's simulator's dependence on the random number generator, which was shown to be negligible. Finally, we evaluated the simulator output, and our major result is the discovery of a secondary response to a primary infection, an occurrence not shown in Chao's dissertation. A tertiary response to the infection is never possible due to the size of the secondary response caused by memory cells.

2005

This thesis is devoted to the construction, analysis, and implementation of two types of hierarchical Markov Chain Monte Carlo (MCMC) methods for the solution of large-scale Bayesian Inverse Problems (BIP). The first hierarchical method we present is based on the idea of parallel tempering and is well-suited for BIP whose underlying posterior measure is multi-modal or concentrates around a lower-dimensional, non-linear manifold. In particular, we present two generalizations of the Parallel Tempering algorithm in the context of discrete-time Markov chain Monte Carlo methods for Bayesian inverse problems. These generalizations use state-dependent swapping rates and are inspired by the so-called continuous-time Infinite Swapping algorithm presented in Plattner et al. [J Chem Phys 135(13):134111, 2011]. We present a thorough analysis of the convergence of our proposed methods and show that they are reversible and geometrically ergodic. Numerical experiments conducted over an array of BIP show that our proposed algorithms significantly improve sampling efficiency over competing methodologies.

Our second hierarchical method is based on multi-level MCMC (ML-MCMC) techniques. In this setting, instead of sampling directly from a sufficiently accurate (and computationally expensive) posterior measure, one introduces a sequence of accuracy levels for the solution of the underlying computational model, which induces a hierarchy of posterior measures with increasing accuracy and cost to sample from. The key point of this algorithm is to construct highly coupled Markov chains together with the standard Multi-level Monte Carlo argument to obtain a better cost-tolerance complexity than a single-level MCMC algorithm. We present two types of multi-level MCMC algorithms which can be thought of as an extension of the ideas presented in Dodwell et al. [SIAM-ASA J. Uncertain. Quantif (2015): 1075-1108].
Our first ML-MCMC method extends said ideas to a setting where a wider class of Independent Metropolis-Hastings (IMH) proposals is considered. We provide a thorough theoretical analysis and give sufficient conditions on the proposals and the family of posteriors so that there exists a unique invariant probability measure for the coupled chains generated by our method, and the convergence to it is uniformly ergodic. We also generalize the cost-tolerance theorem of Dodwell et al. to our setting, and propose a self-tuning continuation-type ML-MCMC algorithm.

Our second ML-MCMC method presents an algorithm that admits state-dependent proposals by using a maximal coupling approach. This is desirable, from a methodological perspective, whenever it is difficult to construct suitable IMH proposals, or when the empirical measure resulting from samples from the posterior at the previous level does not satisfy the assumptions required for convergence of the ML-MCMC method. We present a theoretical analysis of the method at hand and show that this new method has an invariant probability measure and converges to it with geometric ergodicity. We also extend the cost-tolerance theorem of Dodwell et al. to this algorithm, albeit with quite restrictive assumptions. We illustrate both of the proposed ML-MCMC methodologies on several numerical examples.
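The Metropolis-Hastings methods this thesis builds on share a common core: propose a move and accept it with a probability that preserves the target distribution. As a generic, self-contained sketch (the Gaussian target, step size, and function names below are illustrative choices, not the thesis's algorithms), a one-dimensional random-walk Metropolis sampler looks like:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Normal(0, step_size^2)
    and accept with probability min(1, target(x') / target(x)).
    Works with an unnormalized log-density, as in Bayesian inference."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step_size)
        # Acceptance test in log space for numerical stability;
        # min(0, ...) caps the log-ratio so exp() cannot overflow.
        if rng.random() < math.exp(min(0.0, log_target(x_new) - log_target(x))):
            x = x_new
        samples.append(x)
    return samples

# Illustration: sample a standard normal, log pi(x) = -x^2 / 2 + const.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
mean = sum(chain) / len(chain)
print(round(mean, 2))  # near 0
```

Multi-level and tempered variants, like those studied in the thesis, run several such chains at different accuracy levels or temperatures and couple them; the single-chain acceptance rule above remains the basic building block.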

## Related lectures (276)