
# Conditional independence

## Summary

In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without. If $A$ is the hypothesis, and $B$ and $C$ are observations, conditional independence can be stated as an equality:

$$P(A \mid B, C) = P(A \mid C)$$

where $P(A \mid B, C)$ is the probability of $A$ given both $B$ and $C$. Since the probability of $A$ given $C$ is the same as the probability of $A$ given both $B$ and $C$, this equality expresses that $B$ contributes nothing to the certainty of $A$. In this case, $A$ and $B$ are said to be conditionally independent given $C$, written symbolically as $(A \perp\!\!\!\perp B \mid C)$. In the language of causal notation, two functions $f$ and $g$ which both depend on a common variable $x$ are described as conditionally independent using the notation $f \perp\!\!\!\perp g \mid x$, which is equivalent to the notation $P(f \mid g, x) = P(f \mid x)$.
The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and a graphoid.
Let $A$, $B$, and $C$ be events. $A$ and $B$ are said to be conditionally independent given $C$ if and only if $P(C) > 0$ and:

$$P(A \mid B, C) = P(A \mid C)$$

This property is often written $(A \perp\!\!\!\perp B \mid C)$, which should be read "$A$ is independent of $B$, given $C$".
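As a concrete numeric illustration (a hypothetical two-coin setup, not taken from the source), the following sketch builds a small joint distribution in which two coin flips are conditionally independent given the choice of coin, and checks the defining equality numerically:

```python
from itertools import product

# Hypothetical setup: a box holds a fair coin (P(heads) = 0.5) and a biased
# coin (P(heads) = 0.9). Pick one uniformly at random, then flip it twice.
# C = "the fair coin was picked", A = "flip 1 is heads", B = "flip 2 is heads".
p_heads = {"fair": 0.5, "biased": 0.9}

# Joint distribution P(coin, flip1, flip2): given the coin, flips are independent.
joint = {
    (coin, f1, f2): 0.5
    * (p_heads[coin] if f1 else 1 - p_heads[coin])
    * (p_heads[coin] if f2 else 1 - p_heads[coin])
    for coin, f1, f2 in product(p_heads, [True, False], [True, False])
}

def P(pred):
    """Probability of the event selected by `pred` under the joint distribution."""
    return sum(p for outcome, p in joint.items() if pred(*outcome))

# P(A | C) equals P(A | B, C): A and B are conditionally independent given C.
p_A_given_C = P(lambda c, a, b: c == "fair" and a) / P(lambda c, a, b: c == "fair")
p_A_given_BC = P(lambda c, a, b: c == "fair" and a and b) / P(
    lambda c, a, b: c == "fair" and b
)
assert abs(p_A_given_C - p_A_given_BC) < 1e-12

# Yet A and B are *not* independent unconditionally: seeing one heads makes
# the biased coin more likely, which raises the probability of the other heads.
p_A = P(lambda c, a, b: a)                                       # 0.7
p_A_given_B = P(lambda c, a, b: a and b) / P(lambda c, a, b: b)  # ≈ 0.757
assert p_A_given_B > p_A
```

This also shows that conditional independence given $C$ does not imply unconditional independence.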
Equivalently, conditional independence may be stated as:

$$P(A, B \mid C) = P(A \mid C)\,P(B \mid C)$$

where $P(A, B \mid C)$ is the joint probability of $A$ and $B$ given $C$. This alternate formulation states that $A$ and $B$ are independent events, given $C$.

It demonstrates that $P(A \mid B, C) = P(A \mid C)$ is equivalent to $P(A, B \mid C) = P(A \mid C)\,P(B \mid C)$:

$P(A, B \mid C) = P(A \mid C)\,P(B \mid C)$

iff $\dfrac{P(A, B, C)}{P(C)} = \dfrac{P(A, C)}{P(C)} \cdot \dfrac{P(B, C)}{P(C)}$ (definition of conditional probability)

iff $P(A, B, C) = \dfrac{P(A, C)\,P(B, C)}{P(C)}$ (multiply both sides by $P(C)$)

iff $\dfrac{P(A, B, C)}{P(B, C)} = \dfrac{P(A, C)}{P(C)}$ (divide both sides by $P(B, C)$)

iff $P(A \mid B, C) = P(A \mid C)$ (definition of conditional probability)
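The equivalence of the two formulations can also be checked numerically: for any joint distribution over three events, the two conditions hold or fail together. A small sketch with made-up distributions (not from the source):

```python
from itertools import product

def conditions_agree(atoms):
    """Return (cond1, cond2): whether P(A|B,C) = P(A|C), and whether
    P(A,B|C) = P(A|C)P(B|C), for a joint distribution `atoms` over the
    eight outcomes (a, b, c) with a, b, c booleans."""
    def P(pred):
        return sum(p for o, p in atoms.items() if pred(*o))

    cond1 = abs(
        P(lambda a, b, c: a and b and c) / P(lambda a, b, c: b and c)
        - P(lambda a, b, c: a and c) / P(lambda a, b, c: c)
    ) < 1e-12
    cond2 = abs(
        P(lambda a, b, c: a and b and c) / P(lambda a, b, c: c)
        - (P(lambda a, b, c: a and c) / P(lambda a, b, c: c))
        * (P(lambda a, b, c: b and c) / P(lambda a, b, c: c))
    ) < 1e-12
    return cond1, cond2

outcomes = list(product([True, False], repeat=3))

# An arbitrary distribution: here both conditions fail (no conditional independence).
arbitrary = dict(zip(outcomes, [0.05, 0.10, 0.15, 0.20, 0.05, 0.15, 0.10, 0.20]))
assert conditions_agree(arbitrary) == (False, False)

# A factored distribution P(a, b, c) = P(c) P(a|c) P(b|c): both conditions hold.
pc, pa, pb = {True: 0.4, False: 0.6}, {True: 0.7, False: 0.2}, {True: 0.3, False: 0.8}
factored = {
    (a, b, c): pc[c]
    * (pa[c] if a else 1 - pa[c])
    * (pb[c] if b else 1 - pb[c])
    for a, b, c in outcomes
}
assert conditions_agree(factored) == (True, True)
```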
*Figure: Each cell represents a possible outcome. The events $R$, $B$ and $Y$ are represented by the areas shaded red, blue and yellow respectively. The overlap between the events $R$ and $B$ is shaded purple. The probabilities of these events are shaded areas with respect to the total area.*


## Related concepts (2)

Joint probability distribution

Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables. The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables. It also encodes the conditional probability distributions, which deal with how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s).
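For instance (a made-up two-variable table, purely illustrative), the marginal and conditional distributions can be read off a joint table like this:

```python
# Hypothetical joint distribution of Weather in {sun, rain} and
# Umbrella in {yes, no}; the four entries sum to 1.
joint = {
    ("sun", "yes"): 0.05, ("sun", "no"): 0.55,
    ("rain", "yes"): 0.30, ("rain", "no"): 0.10,
}

# Marginal distribution of Weather: sum the joint over Umbrella.
marginal_weather = {}
for (w, u), p in joint.items():
    marginal_weather[w] = marginal_weather.get(w, 0.0) + p
# ≈ {"sun": 0.6, "rain": 0.4}

# Conditional distribution of Umbrella given Weather = "rain":
# renormalise the corresponding slice of the joint table.
p_rain = marginal_weather["rain"]
umbrella_given_rain = {u: joint[("rain", u)] / p_rain for u in ("yes", "no")}
# ≈ {"yes": 0.75, "no": 0.25}
```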

Conditional probability distribution

In probability theory and statistics, given two jointly distributed random variables and , the conditional probability distribution of given is the probability distribution of when is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value of as a parameter. When both and are categorical variables, a conditional probability table is typically used to represent the conditional probability.
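A conditional probability table can be estimated from categorical data by normalising counts within each conditioning value; a minimal sketch with made-up category names:

```python
from collections import Counter, defaultdict

# Hypothetical categorical observations of (X = weather, Y = activity).
samples = [
    ("sun", "walk"), ("sun", "walk"), ("sun", "read"),
    ("rain", "read"), ("rain", "read"), ("rain", "walk"),
]

# Count co-occurrences, grouped by the conditioning variable X.
counts = defaultdict(Counter)
for x, y in samples:
    counts[x][y] += 1

# CPT: one row per value of X; each row is a distribution over Y.
cpt = {
    x: {y: n / sum(row.values()) for y, n in row.items()}
    for x, row in counts.items()
}
# e.g. cpt["sun"]["walk"] == 2/3, and each row sums to 1
```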

## Related courses (32)

The course provides an introduction to probability theory and statistical methods for physicists.

The objective of this course is to give an overview of machine learning techniques used for real-world applications, and to teach how to implement and use them in practice. Laboratories will be done i

A first graduate course in algorithms, this course assumes minimal background, but moves rapidly. The objective is to learn the main techniques of algorithm analysis and design, while building a reper

## Related lectures (96)

Covers unsupervised learning with PCA and K-means for dimensionality reduction and data clustering.

Explores statistical independence, Gaussian Mixture Models, and fitting data with Gaussian functions.

Explains Bayesian Networks factorization and sampling methods using DAGs and Variable Elimination.

## Related publications (41)

We give an extension of Le's stochastic sewing lemma. The stochastic sewing lemma proves convergence in $L_m$ of Riemann type sums $\sum _{[s,t] \in \pi } A_{s,t}$ for an adapted two-parameter stochastic process A, under certain conditions on the moments o ...

Pascal Fua, Eduard Trulls Fortuny, Michal Jan Tyszkiewicz

Diffusion models generating images conditionally on text, such as Dall-E 2 [51] and Stable Diffusion [53], have recently made a splash far beyond the computer vision community. Here, we tackle the related problem of generating point clouds, both unconditi ...

2023

Negar Kiyavash, Seyed Jalal Etesami, Kun Zhang

Measuring conditional dependencies among the variables of a network is of great interest to many disciplines. This paper studies some shortcomings of the existing dependency measures in detecting direct causal influences or their lack of ability for group ...

2022