Concept

Total variation distance of probability measures

Summary
In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference, or variational distance.

Consider a measurable space (Ω, F) and probability measures P and Q defined on (Ω, F). The total variation distance between P and Q is defined as

  δ(P, Q) = sup_{A ∈ F} |P(A) − Q(A)|.

Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.

The total variation distance is related to the Kullback–Leibler divergence by Pinsker’s inequality:

  δ(P, Q) ≤ sqrt( D_KL(P ‖ Q) / 2 ).

One also has the following inequality, due to Bretagnolle and Huber (see also Tsybakov), which has the advantage of providing a non-vacuous bound even when D_KL(P ‖ Q) > 2:

  δ(P, Q) ≤ sqrt( 1 − exp(−D_KL(P ‖ Q)) ).

When Ω is countable, the total variation distance is related to the L1 norm by the identity

  δ(P, Q) = (1/2) ‖P − Q‖_1 = (1/2) Σ_{x ∈ Ω} |P(x) − Q(x)|.

The total variation distance is related to the Hellinger distance H(P, Q) as follows:

  H²(P, Q) ≤ δ(P, Q) ≤ sqrt(2) · H(P, Q).

These inequalities follow immediately from the inequalities between the 1-norm and the 2-norm.

The total variation distance (or half the L1 norm) arises as the optimal transportation cost when the cost function is c(x, y) = 1_{x ≠ y}, that is,

  δ(P, Q) = inf_π E_π[ 1_{X ≠ Y} ] = inf_π Pr_π(X ≠ Y),

where the expectation is taken with respect to the probability measure π on the space where (x, y) lives, and the infimum is taken over all such π with marginals P and Q, respectively.
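For a countable (here, finite) sample space, the quantities above can be computed directly from the probability mass functions. The following sketch (function names and the example distributions are illustrative, not from the source) computes the total variation distance via the half-L1 identity, the Kullback–Leibler divergence, and checks it against the Pinsker and Bretagnolle–Huber bounds:

```python
import math

def tv_distance(p, q):
    """Total variation distance of two discrete distributions,
    given as dicts mapping outcomes to probabilities.
    Uses the identity delta(P, Q) = (1/2) * sum_x |P(x) - Q(x)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) in nats; assumes
    q(x) > 0 wherever p(x) > 0 (otherwise the divergence is infinite)."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# Illustrative example distributions on a three-point space.
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.2, "b": 0.3, "c": 0.5}

tv = tv_distance(p, q)
kl = kl_divergence(p, q)

pinsker_bound = math.sqrt(kl / 2)               # Pinsker's inequality
bh_bound = math.sqrt(1 - math.exp(-kl))         # Bretagnolle-Huber bound
print(f"TV = {tv:.4f}, Pinsker bound = {pinsker_bound:.4f}, "
      f"Bretagnolle-Huber bound = {bh_bound:.4f}")
```

Both bounds must dominate the total variation distance; the Bretagnolle–Huber bound stays below 1 even when the KL divergence is large, which is where Pinsker's bound becomes vacuous (it exceeds 1 once D_KL > 2).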
About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.