In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.

A pair of random variables X and Y are independent if and only if the random vector (X, Y) with joint cumulative distribution function (CDF) F_{X,Y}(x, y) satisfies

F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x, y,

or equivalently, their joint density satisfies

f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x, y.

That is, the joint distribution is equal to the product of the marginal distributions.

In practice, the modifier "mutual" is usually dropped, so that independence means mutual independence, unless the context would make this ambiguous. A statement such as "X, Y, Z are independent random variables" means that X, Y, Z are mutually independent.

Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein. Suppose X and Y are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable Z be equal to 1 if exactly one of those coin tosses resulted in "heads", and 0 otherwise; that is, Z = X ⊕ Y, the sum of X and Y modulo 2. Then jointly the triple (X, Y, Z) has the following probability distribution:

P(X = x, Y = y, Z = z) = 1/4 for (x, y, z) ∈ {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}, and 0 otherwise.

Here the marginal probability distributions are identical:

P(X = 0) = P(Y = 0) = P(Z = 0) = 1/2 and P(X = 1) = P(Y = 1) = P(Z = 1) = 1/2.

The bivariate distributions also agree:

P(X = x, Y = y) = P(X = x, Z = z) = P(Y = y, Z = z) = 1/4 for all x, y, z ∈ {0, 1}.

Since each of the pairwise joint distributions equals the product of their respective marginal distributions (1/4 = 1/2 · 1/2), the variables are pairwise independent: X and Y are independent, X and Z are independent, and Y and Z are independent. However, X, Y, and Z are not mutually independent, since

P(X = x, Y = y, Z = z) ≠ P(X = x) P(Y = y) P(Z = z):

the left side equals, for example, 1/4 for (x, y, z) = (0, 0, 0), while the right side equals 1/8 there. In fact, any of X, Y, Z is completely determined by the other two (any of X, Y, Z is the sum, modulo 2, of the others). That is as far from independence as random variables can get.
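The Bernstein example can be checked by direct enumeration. The following Python sketch is an illustration added here, not part of the original text; the `joint` table and the `pr` helper are our own names. It tabulates the joint distribution of (X, Y, Z) and verifies that every bivariate probability factors into its marginals while the trivariate one does not.

```python
from fractions import Fraction
from itertools import product

# Joint distribution of Bernstein's example: X, Y are fair coin tosses,
# Z = X XOR Y. Each of the four consistent triples has probability 1/4.
joint = {(x, y, x ^ y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

def pr(**fixed):
    """Probability that the named coordinates (x=..., y=..., z=...) take the given values."""
    index = {"x": 0, "y": 1, "z": 2}
    return sum(p for triple, p in joint.items()
               if all(triple[index[k]] == v for k, v in fixed.items()))

# Pairwise independence: every bivariate probability factors into marginals.
for a, b in [("x", "y"), ("x", "z"), ("y", "z")]:
    for u, v in product((0, 1), repeat=2):
        assert pr(**{a: u, b: v}) == pr(**{a: u}) * pr(**{b: v})

# Mutual independence fails: P(X=0, Y=0, Z=0) = 1/4, but the product of
# the three marginals is 1/2 * 1/2 * 1/2 = 1/8.
assert pr(x=0, y=0, z=0) == Fraction(1, 4)
assert pr(x=0) * pr(y=0) * pr(z=0) == Fraction(1, 8)
print("pairwise independent, but not mutually independent")
```

Using fractions.Fraction keeps all arithmetic exact, so the equality checks need no floating-point tolerance.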

Related courses (5)
PHYS-325: Introduction to plasma physics
Introduction to plasma physics, intended to give a global overview of the essential and unique properties of a plasma and to present the approaches commonly used to model its behavior.
COM-417: Advanced probability and applications
In this course, various aspects of probability theory are considered. The first part is devoted to the main theorems in the field (law of large numbers, central limit theorem, concentration inequalities) ...
CS-101: Advanced information, computation, communication I
Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics ...
Related lectures (29)
Plasma Physics: Collisions and Resistivity
Covers Coulomb collisions and resistivity in plasma, highlighting their random walk nature.
Lovász Local Lemma: Basics
Covers the basics of the Lovász Local Lemma, including mutually independent bad events and pseudoprobabilities.
Probability Distributions: Basics
Introduces probability distributions, uniform distribution, probabilities of events, complements, unions, and disjoint events.
Related publications (13)

Differentiation between benign and malignant vertebral compression fractures using qualitative and quantitative analysis of a single fast spin echo T2-weighted Dixon sequence
Tom Hilbert, Sébastien Bacher
Objectives: To determine and compare the qualitative and quantitative diagnostic performance of a single sagittal fast spin echo (FSE) T2-weighted Dixon sequence in differentiating benign and malignant vertebral compression fractures (VCF), using multiple r ...
Springer, 2021

Tests of mutual independence among several random vectors using univariate and multivariate ranks of nearest neighbours
Soham Sarkar
Testing mutual independence among several random vectors of arbitrary dimensions is a challenging problem in Statistics, and it has gained considerable interest in recent years. In this article, we propose some nonparametric tests based on different notion ...
Taylor & Francis Ltd, 2021

On some consistent tests of mutual independence among several random vectors of arbitrary dimensions
Soham Sarkar
Testing for mutual independence among several random vectors is a challenging problem, and in recent years, it has gained significant attention in the statistics and machine learning literature. Most of the existing tests of independence deal with only two ran ...
Springer, 2020
Related concepts (11)
Joint probability distribution
Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables. The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables. It also encodes the conditional probability distributions, which deal with how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s).
Independent and identically distributed random variables
In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is usually abbreviated as i.i.d., iid, or IID. IID was first defined in statistics and finds application in different fields such as data mining and signal processing. Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly.
Conditional probability
In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion, or evidence) has already occurred. The notion relies on the event of interest A standing in some relationship to another event B, so that A can be analyzed by a conditional probability with respect to B. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B) or occasionally P_B(A). (A short worked sketch follows these entries.)
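As a small illustration tying together the concepts above, the sketch below (our own example with our own names, not drawn from any of the listed sources) starts from the joint distribution of two independent fair coin tosses, sums it to obtain a marginal, and divides to obtain a conditional probability P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y).

```python
from fractions import Fraction
from itertools import product

# Joint distribution of (X, Y) for two independent fair coin tosses:
# every pair of outcomes has probability 1/4.
joint = {(x, y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

# Marginal distribution of X: sum the joint probabilities over y.
marginal_X = {x: sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)}

# Conditional probability P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y).
def p_x_given_y(x, y):
    p_y = sum(joint[(xx, y)] for xx in (0, 1))
    return joint[(x, y)] / p_y

assert marginal_X[1] == Fraction(1, 2)
# Here X and Y are independent, so conditioning on Y leaves X's distribution unchanged.
assert p_x_given_y(1, 0) == marginal_X[1]
```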
