In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let $X_1, \ldots, X_n$ be independent Bernoulli random variables taking values $+1$ and $-1$, each with probability $1/2$ (this distribution is also known as the Rademacher distribution). Then for every positive $\varepsilon$,

$$\Pr\left(\left|\frac{1}{n}\sum_{i=1}^n X_i\right| > \varepsilon\right) \le 2\exp\left(-\frac{n\varepsilon^2}{2\left(1+\varepsilon/3\right)}\right).$$
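As a quick numerical illustration (a sketch added here, not part of the original article), the following Python snippet estimates the left-hand side by Monte Carlo simulation and compares it with the bound; the values of $n$, $\varepsilon$, and the number of trials are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 400          # number of Rademacher variables per trial (arbitrary choice)
eps = 0.1        # deviation threshold epsilon (arbitrary choice)
trials = 50_000  # number of Monte Carlo trials

# Each X_i takes the values +1 and -1 with probability 1/2.
X = 2 * rng.integers(0, 2, size=(trials, n), dtype=np.int8) - 1
sample_means = X.mean(axis=1)

empirical = np.mean(np.abs(sample_means) > eps)
bound = 2 * np.exp(-n * eps**2 / (2 * (1 + eps / 3)))

print(f"empirical P(|mean| > eps) = {empirical:.5f}")
print(f"Bernstein bound           = {bound:.5f}")
```

As expected, the empirical frequency (about 0.05 for these parameters) falls below the bound (about 0.29); the bound is not tight for moderate $n$, but it holds.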
Bernstein inequalities were proven and published by Sergei Bernstein in the 1920s and 1930s. Later, these inequalities were rediscovered several times in various forms. Thus, special cases of the Bernstein inequalities are also known as the Chernoff bound, Hoeffding's inequality and Azuma's inequality.
Some of the inequalities are the following. Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that $|X_i| \le M$ almost surely, for all $i$. Then, for all positive $t$,

$$\Pr\left(\sum_{i=1}^n X_i \ge t\right) \le \exp\left(-\frac{t^2/2}{\sum_{j=1}^n \mathbb{E}\left[X_j^2\right] + \frac{1}{3}Mt}\right). \tag{1}$$
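A minimal Python sketch of this bound (the helper name `bernstein_bound` and the Uniform$[-1, 1]$ test distribution are illustrative choices, not from the original text):

```python
import numpy as np

def bernstein_bound(t, sum_var, M):
    """Right-hand side of (1): exp(-(t^2 / 2) / (sum_var + M * t / 3))."""
    return np.exp(-(t**2 / 2) / (sum_var + M * t / 3))

# Sanity check with X_i ~ Uniform[-1, 1]: zero mean, |X_i| <= M = 1,
# and Var(X_i) = 1/3, so the variance of the sum is n / 3.
rng = np.random.default_rng(1)
n, trials, t = 300, 20_000, 30.0
X = rng.uniform(-1.0, 1.0, size=(trials, n))
sums = X.sum(axis=1)

empirical = np.mean(sums >= t)
bound = bernstein_bound(t, sum_var=n / 3.0, M=1.0)
print(f"empirical P(S_n >= {t}) = {empirical:.5f}")
print(f"Bernstein bound         = {bound:.5f}")
```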
Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that for some positive real $L$ and every integer $k \ge 2$,

$$\mathbb{E}\left[\left|X_i\right|^k\right] \le \frac{1}{2}\,\mathbb{E}\left[X_i^2\right]\,L^{k-2}\,k!$$

Then

$$\Pr\left(\sum_{i=1}^n X_i \ge 2t\sqrt{\sum_{i=1}^n \mathbb{E}\left[X_i^2\right]}\,\right) < \exp\left(-t^2\right), \qquad \text{for } 0 < t \le \frac{1}{2L}\sqrt{\sum_{j=1}^n \mathbb{E}\left[X_j^2\right]}. \tag{2}$$
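This moment condition is weaker than boundedness (a remark added here for context): if $|X_i| \le M$ almost surely, as in inequality (1), then the condition holds with $L = M$, since for every integer $k \ge 2$,

$$\mathbb{E}\left[\left|X_i\right|^k\right] \le M^{k-2}\,\mathbb{E}\left[X_i^2\right] \le \frac{1}{2}\,\mathbb{E}\left[X_i^2\right]\,M^{k-2}\,k!,$$

using $|X_i|^k \le M^{k-2} X_i^2$ almost surely and $k!/2 \ge 1$.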
Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that

$$\mathbb{E}\left[\left|X_i\right|^k\right] \le \frac{k!}{4!}\left(\frac{L}{5}\right)^{k-4}$$

for all integer $k \ge 4$. Denote

$$A_k = \sum_{i=1}^n \mathbb{E}\left[X_i^k\right].$$

Then,

$$\Pr\left(\left|\sum_{i=1}^n X_i - \frac{A_3 t^2}{3 A_2}\right| \ge \sqrt{2 A_2}\,t\left[1 + \frac{A_4 t^2}{6 A_2^2}\right]\right) < 2\exp\left(-t^2\right), \qquad \text{for } 0 < t \le \frac{5\sqrt{2A_2}}{4L}. \tag{3}$$
Bernstein also proved generalizations of the inequalities above to weakly dependent random variables. For example, inequality (2) can be extended as follows. Let $X_1, \ldots, X_n$ be possibly non-independent random variables. Suppose that for all integers $i > 0$,

$$\mathbb{E}\left[X_i \mid X_1, \ldots, X_{i-1}\right] = 0,$$
$$\mathbb{E}\left[X_i^2 \mid X_1, \ldots, X_{i-1}\right] \le R_i\,\mathbb{E}\left[X_i^2\right],$$
$$\mathbb{E}\left[X_i^k \mid X_1, \ldots, X_{i-1}\right] \le \frac{1}{2}\,\mathbb{E}\left[X_i^2 \mid X_1, \ldots, X_{i-1}\right]\,L^{k-2}\,k!$$

Then

$$\Pr\left(\sum_{i=1}^n X_i \ge 2t\sqrt{\sum_{i=1}^n R_i\,\mathbb{E}\left[X_i^2\right]}\,\right) < \exp\left(-t^2\right), \qquad \text{for } 0 < t \le \frac{1}{2L}\sqrt{\sum_{i=1}^n R_i\,\mathbb{E}\left[X_i^2\right]}. \tag{4}$$
More general results for martingales can be found in Fan et al. (2015).
The proofs are based on an application of Markov's inequality to the random variable

$$\exp\left(\lambda \sum_{j=1}^n X_j\right),$$

for a suitable choice of the parameter $\lambda > 0$.
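Concretely (a standard step, sketched here for completeness): for any $\lambda > 0$, Markov's inequality applied to the exponentiated sum gives

$$\Pr\left(\sum_{j=1}^n X_j \ge t\right) = \Pr\left(e^{\lambda \sum_{j=1}^n X_j} \ge e^{\lambda t}\right) \le e^{-\lambda t}\,\mathbb{E}\left[e^{\lambda \sum_{j=1}^n X_j}\right] = e^{-\lambda t}\prod_{j=1}^n \mathbb{E}\left[e^{\lambda X_j}\right],$$

where the last step uses independence. The moment or boundedness assumptions are then used to bound each factor $\mathbb{E}\left[e^{\lambda X_j}\right]$, and $\lambda$ is chosen to minimize the resulting expression.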
The Bernstein inequality can be generalized to Gaussian random matrices. Let $G = x^H A x + 2\,\mathrm{Re}\left(x^H a\right)$ be a scalar, where $A$ is a complex Hermitian matrix and $a$ is a complex vector of size $N$. The vector $x \sim \mathcal{CN}(0, I_N)$ is a complex Gaussian vector of size $N$. Then for any $t \ge 0$, we have

$$\Pr\left(G \ge \operatorname{tr}(A) + \sqrt{2t}\,\sqrt{\left\|\operatorname{vec}(A)\right\|^2 + 2\left\|a\right\|^2} + t\,s^{+}(A)\right) \le \exp(-t),$$

where $\operatorname{vec}(\cdot)$ is the vectorization operation, $s^{+}(A) = \max\left(\lambda_{\max}(A), 0\right)$, and $\lambda_{\max}(A)$ is the largest eigenvalue of $A$. Another similar inequality is formulated as

$$\Pr\left(G \le \operatorname{tr}(A) - \sqrt{2t}\,\sqrt{\left\|\operatorname{vec}(A)\right\|^2 + 2\left\|a\right\|^2} - t\,s^{-}(A)\right) \le \exp(-t),$$

where $s^{-}(A) = \max\left(\lambda_{\max}(-A), 0\right)$.
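Assuming the first bound as reconstructed above, the following Python sketch estimates the tail probability by simulation; the matrix $A$, the vector $a$, and all parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials, t = 8, 100_000, 2.0  # dimension, Monte Carlo trials, tail parameter

# Arbitrary Hermitian test matrix A and complex vector a (illustrative choices).
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (B + B.conj().T) / 2
a = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# x ~ CN(0, I_N): real and imaginary parts are independent N(0, 1/2).
x = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)

# G = x^H A x + 2 Re(x^H a), computed for all trials at once.
G = np.einsum('ti,ij,tj->t', x.conj(), A, x).real + 2 * (x.conj() @ a).real

# ||vec(A)|| is the Frobenius norm of A; s+(A) = max(lambda_max(A), 0).
s_plus = max(np.linalg.eigvalsh(A).max(), 0.0)
threshold = (np.trace(A).real
             + np.sqrt(2 * t) * np.sqrt(np.linalg.norm(A, 'fro')**2
                                        + 2 * np.linalg.norm(a)**2)
             + t * s_plus)

print(f"empirical P(G >= threshold) = {np.mean(G >= threshold):.6f}")
print(f"bound exp(-t)               = {np.exp(-t):.6f}")
```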
In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value (typically, its expected value). The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with high probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables.