
# Exchangeable random variables

Summary

In statistics, an exchangeable sequence of random variables (also sometimes interchangeable) is a sequence X1, X2, X3, ... (which may be finitely or infinitely long) whose joint probability distribution does not change when the positions in the sequence in which finitely many of them appear are altered. Thus, for example, the sequences

X1, X2, X3, X4, X5, X6 and X3, X6, X1, X5, X2, X4

both have the same joint probability distribution.
It is closely related to the use of independent and identically distributed random variables in statistical models. Exchangeable sequences of random variables arise in cases of simple random sampling.
Formally, an exchangeable sequence of random variables is a finite or infinite sequence X1, X2, X3, ... of random variables such that for any finite permutation σ of the indices 1, 2, 3, ... (the permutation acts on only finitely many indices, with the rest fixed), the joint probability distribution of the permuted sequence

Xσ(1), Xσ(2), Xσ(3), ...

is the same as the joint probability distribution of the original sequence.
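As an illustration of this definition (not part of the original text), consider drawing balls from a small urn without replacement: the draws are exchangeable but not independent. The sketch below, with an arbitrarily chosen urn of two red balls and one blue ball, computes the exact joint pmf and checks that it is invariant under permutations of the positions:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def joint_pmf(urn):
    # Draw all balls without replacement: every ordering of the physical
    # balls is equally likely, with probability 1/n!.
    n = len(urn)
    pmf = {}
    for order in permutations(range(n)):
        seq = tuple(urn[i] for i in order)
        pmf[seq] = pmf.get(seq, Fraction(0)) + Fraction(1, factorial(n))
    return pmf

# Two red balls and one blue ball (an illustrative choice).
pmf = joint_pmf(["R", "R", "B"])

# Exchangeability: permuting the positions leaves the joint pmf unchanged.
for seq, p in pmf.items():
    for perm in permutations(seq):
        assert pmf[perm] == p
```

Note that the draws are not independent: here the conditional probability of a blue second draw given a blue first draw is zero, yet the joint pmf is still symmetric.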
(A sequence E1, E2, E3, ... of events is said to be exchangeable precisely if the sequence of its indicator functions is exchangeable.) The distribution function F_{X1, ..., Xn}(x1, ..., xn) of a finite sequence of exchangeable random variables is symmetric in its arguments x1, ..., xn. Olav Kallenberg provided an appropriate definition of exchangeability for continuous-time stochastic processes.
The concept was introduced by William Ernest Johnson in his 1924 book Logic, Part III: The Logical Foundations of Science. Exchangeability is equivalent to the concept of statistical control introduced by Walter Shewhart also in 1924.
The property of exchangeability is closely related to the use of independent and identically distributed (i.i.d.) random variables in statistical models. A sequence of random variables that are i.i.d, conditional on some underlying distributional form, is exchangeable. This follows directly from the structure of the joint probability distribution generated by the i.i.d. form.
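This conditionally-i.i.d. construction can be made concrete with a small sketch. Assuming, purely for illustration, Bernoulli variables whose success probability p is itself drawn from a uniform prior, the marginal joint probability depends only on the number of ones in the sequence, and is therefore symmetric:

```python
from fractions import Fraction
from math import comb

def seq_prob(seq):
    # P(X1=x1, ..., Xn=xn) when the Xi are i.i.d. Bernoulli(p) given p,
    # and the latent p is uniform on [0, 1] (an assumed prior, chosen
    # for illustration). Integrating p out gives
    #     1 / ((n + 1) * C(n, k)),  where k = sum(seq),
    # which depends on the sequence only through k, hence is invariant
    # under permutations of the positions.
    n, k = len(seq), sum(seq)
    return Fraction(1, (n + 1) * comb(n, k))
```

For example, seq_prob((1, 0, 1)) equals seq_prob((0, 1, 1)), even though the variables are not independent: marginally, earlier ones make later ones more likely.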
Mixtures of exchangeable sequences (in particular, sequences of i.i.d. variables) are themselves exchangeable.


Related courses (58)

MATH-232: Probability and statistics

A basic course in probability and statistics.

CS-101: Advanced information, computation, communication I

Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics…

MATH-442: Statistical theory

The course aims at developing certain key aspects of the theory of statistics, providing a common general framework for statistical methodology. While the main emphasis will be on the mathematical aspects…

Related concepts (11)

Independent and identically distributed random variables

In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is usually abbreviated as i.i.d., iid, or IID. IID was first defined in statistics and finds application in different fields such as data mining and signal processing. Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly.

Bernoulli process

In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables Xi are identically distributed and independent. Prosaically, a Bernoulli process is a repeated coin flipping, possibly with an unfair coin (but with consistent unfairness). Every variable Xi in the sequence is associated with a Bernoulli trial or experiment.
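A minimal sketch of this (with illustrative names, not taken from the text): simulating a possibly unfair coin, and computing the joint pmf of an i.i.d. Bernoulli(p) sequence, which depends only on the number of ones and is therefore invariant under permutations:

```python
import random

def bernoulli_process(p, n, seed=0):
    # n flips of a coin with P(1) = p, possibly unfair but with
    # consistent unfairness; seeded for reproducibility.
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def bernoulli_seq_prob(seq, p):
    # Joint pmf of an i.i.d. Bernoulli(p) sequence: p**k * (1-p)**(n-k),
    # where k counts the ones. Since it depends only on k, every
    # permutation of the sequence has the same probability, so a
    # Bernoulli process is exchangeable.
    k = sum(seq)
    return p ** k * (1 - p) ** (len(seq) - k)
```

For instance, bernoulli_seq_prob([1, 0, 1], 0.25) and bernoulli_seq_prob([1, 1, 0], 0.25) are equal, both containing two ones out of three flips.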

Related lectures (588)

Large Deviations Principle: Cramér's Theorem (COM-417: Advanced probability and applications)

Covers Cramér's theorem and Hoeffding's inequality in the context of the large deviations principle.

Instrumental Inequality: Binary Variables (MGT-416: Causal inference)

Explores instrumental inequality with binary variables and their generation process through arbitrary functions and observed variables.

Optimal Hunting Strategies (MGT-484: Applied probability & stochastic processes)

Explores optimal hunting strategies, uncertain oil prices, and linear cost-minimization policies.