Concentration inequality

Summary
In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value (typically, its expected value). The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with high probability. Such sums are the most basic examples of random variables concentrated around their mean. More recent results show that such behavior is shared by other functions of independent random variables. Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.

Markov's inequality

Let X be a random variable that is non-negative (almost surely). Then, for every constant a > 0,

Pr(X ≥ a) ≤ E[X] / a.

Note the following extension to Markov's inequality: if Φ is a strictly increasing and non-negative function, then

Pr(X ≥ a) = Pr(Φ(X) ≥ Φ(a)) ≤ E[Φ(X)] / Φ(a).

Chebyshev's inequality

Chebyshev's inequality requires the following information on a random variable X: the expected value E[X] is finite, and the variance Var[X] = E[(X − E[X])²] is finite. Then, for every constant a > 0,

Pr(|X − E[X]| ≥ a) ≤ Var[X] / a²,

or equivalently,

Pr(|X − E[X]| ≥ aσ) ≤ 1 / a²,

where σ is the standard deviation of X. Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable |X − E[X]| with Φ(x) = x².

Vysochanskij–Petunin inequality

Let X be a random variable with unimodal distribution, mean μ and finite, non-zero variance σ². Then, for any λ > √(8/3) ≈ 1.633,

Pr(|X − μ| ≥ λσ) ≤ 4 / (9λ²).

One-sided Vysochanskij–Petunin inequality

For a unimodal random variable X with mean μ and variance σ², and any r ≥ 0, the one-sided Vysochanskij–Petunin inequality holds as follows:

Pr(X − μ ≥ r) ≤ (4/9) · σ² / (r² + σ²)            for r² ≥ (5/3)σ²,
Pr(X − μ ≥ r) ≤ (4/3) · σ² / (r² + σ²) − 1/3      otherwise.

Other concentration inequalities in this family include the Paley–Zygmund inequality, Cantelli's inequality, and Gauss's inequality.

Chernoff bound

The generic Chernoff bound requires the moment generating function of X, defined as

M_X(t) = E[e^(tX)].

It always exists, but may be infinite. From Markov's inequality, for every t > 0:

Pr(X ≥ a) ≤ E[e^(tX)] / e^(ta),

and for every t < 0:

Pr(X ≤ a) ≤ E[e^(tX)] / e^(ta).

There are various Chernoff bounds for different distributions and different values of the parameter t.
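As a sketch of how these bounds compare in practice, the following Python snippet estimates a tail probability for an Exponential(1) variable by Monte Carlo and checks it against the Markov, Chebyshev, and optimized Chernoff bounds. The sample size and threshold are arbitrary illustrative choices, not anything prescribed by the inequalities themselves.

```python
import math
import random

random.seed(0)

# X ~ Exponential(rate 1): E[X] = 1, Var[X] = 1, true tail Pr(X >= a) = e^(-a).
N = 200_000   # Monte Carlo sample size (arbitrary choice)
a = 8.0       # tail threshold (arbitrary choice)

samples = [random.expovariate(1.0) for _ in range(N)]
empirical = sum(x >= a for x in samples) / N

markov = 1.0 / a                  # Pr(X >= a) <= E[X] / a
chebyshev = 1.0 / (a - 1.0) ** 2  # Pr(X >= a) <= Pr(|X - 1| >= a - 1) <= Var[X] / (a-1)^2
# Chernoff: Pr(X >= a) <= inf over 0 < t < 1 of E[e^(tX)] e^(-ta) = e^(-ta) / (1 - t),
# since the MGF of Exponential(1) is 1/(1 - t) for t < 1; the infimum is
# attained at t = 1 - 1/a, giving the bound a * e^(1 - a).
chernoff = a * math.exp(1.0 - a)

print(f"empirical tail  {empirical:.5f}")
print(f"true tail       {math.exp(-a):.5f}")
print(f"Markov bound    {markov:.5f}")
print(f"Chebyshev bound {chebyshev:.5f}")
print(f"Chernoff bound  {chernoff:.5f}")
```

All three bounds dominate the empirical tail, but they differ sharply in quality: Markov and Chebyshev decay only polynomially in a, while the optimized Chernoff bound inherits the exponential decay of the true tail.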