In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference or variational distance.

Consider a measurable space $(\Omega, \mathcal{F})$ and probability measures $P$ and $Q$ defined on $(\Omega, \mathcal{F})$. The total variation distance between $P$ and $Q$ is defined as

$$\delta(P, Q) = \sup_{A \in \mathcal{F}} \left| P(A) - Q(A) \right|.$$

Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.

The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality:

$$\delta(P, Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P \parallel Q)}.$$

One also has the following inequality, due to Bretagnolle and Huber (see also Tsybakov), which has the advantage of providing a non-vacuous bound even when $D_{\mathrm{KL}}(P \parallel Q) > 2$:

$$\delta(P, Q) \le \sqrt{1 - e^{-D_{\mathrm{KL}}(P \parallel Q)}}.$$

When $\Omega$ is countable, the total variation distance is related to the $L^1$ norm by the identity

$$\delta(P, Q) = \frac{1}{2} \, \| P - Q \|_1 = \frac{1}{2} \sum_{x \in \Omega} \left| P(x) - Q(x) \right|.$$

The total variation distance is related to the Hellinger distance $H(P, Q)$ as follows:

$$H^2(P, Q) \le \delta(P, Q) \le \sqrt{2} \, H(P, Q).$$

These inequalities follow immediately from the inequalities between the 1-norm and the 2-norm.

The total variation distance (or half the $L^1$ norm) arises as the optimal transportation cost when the cost function is $c(x, y) = \mathbf{1}_{\{x \neq y\}}$, that is,

$$\frac{1}{2} \, \| P - Q \|_1 = \inf_{\pi} \, \mathbb{E}_{(X, Y) \sim \pi} \left[ \mathbf{1}_{\{X \neq Y\}} \right],$$

where the expectation is taken with respect to the probability measure $\pi$ on the space where $(X, Y)$ lives, and the infimum is taken over all such $\pi$ with marginals $P$ and $Q$, respectively.
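As an illustrative sketch (not part of the original article), the following Python snippet computes the total variation distance of two discrete distributions via the $L^1$ identity above and checks it numerically against the Pinsker, Bretagnolle–Huber, and Hellinger bounds; the function names and the example distributions are hypothetical choices for this demonstration.

```python
import math

def tv_distance(p, q):
    """Total variation distance via the L1 identity: (1/2) * sum_x |P(x) - Q(x)|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in nats; assumes Q(x) > 0 wherever P(x) > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    """Hellinger distance H(P, Q)."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# Two example distributions on a three-point space (illustrative values).
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]

tv = tv_distance(p, q)
kl = kl_divergence(p, q)
h = hellinger(p, q)

print(f"TV(P, Q)                          = {tv:.4f}")
print(f"Pinsker bound sqrt(KL/2)          = {math.sqrt(kl / 2):.4f}")          # >= TV
print(f"Bretagnolle-Huber sqrt(1-e^{{-KL}}) = {math.sqrt(1 - math.exp(-kl)):.4f}")  # >= TV
print(f"Hellinger sandwich: H^2 = {h**2:.4f} <= TV <= sqrt(2)*H = {math.sqrt(2) * h:.4f}")
```

For these example distributions the total variation distance is 0.3, while the Pinsker bound evaluates to roughly 0.37 and the Bretagnolle–Huber bound to roughly 0.49, consistent with both being upper bounds.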