Concept

Bennett's inequality

Summary
In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.

Let X1, ..., Xn be independent random variables with finite variance, and further assume Xi ≤ a almost surely for all i, for some a > 0. Define

Sn = ∑i (Xi − E[Xi]) and σ² = ∑i E[Xi²].

Then for any t ≥ 0,

P(Sn > t) ≤ exp(−(σ²/a²) h(at/σ²)),

where h(u) = (1 + u)log(1 + u) − u and log denotes the natural logarithm. For generalizations, see Freedman (1975) for a martingale version of Bennett's inequality and Fan, Grama and Liu (2012) for its improvement.

Hoeffding's inequality only assumes the summands are bounded almost surely, while Bennett's inequality offers some improvement when the variances of the summands are small compared to their almost sure bounds. However, Hoeffding's inequality entails sub-Gaussian tails, whereas in general Bennett's inequality has Poissonian tails. Bennett's inequality is most similar to the Bernstein inequalities, the first of which also gives concentration in terms of the variance and an almost sure bound on the individual terms. Bennett's inequality is stronger than this bound, but more complicated to compute. In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.

As an example, suppose that each Xi is an independent binary random variable taking the value 1 with probability p. Then, with a = 1 and σ² = np, Bennett's inequality says that

P(∑i Xi − np > t) ≤ exp(−np h(t/(np))).

For t much larger than np, h(t/(np)) is of order (t/(np)) log(t/(np)), so the bound decays faster than exponentially in t. By contrast, Hoeffding's inequality gives a bound of exp(−2t²/n) and the first Bernstein inequality gives a bound of exp(−t²/(2(np(1 − p) + t/3))). Thus, for t ≫ np, Hoeffding's inequality gives exp(−Θ(t²/n)), Bernstein gives exp(−Θ(t)), and Bennett gives exp(−Θ(t log(t/(np)))).
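To make the comparison above concrete, here is a minimal Python sketch that evaluates the three bounds for a sum of Bernoulli(p) variables. The function names and the sample values n = 1000, p = 0.01, t = 50 are illustrative assumptions, not part of the original statement.

import math

def h(u):
    # Bennett's h function: h(u) = (1 + u) * log(1 + u) - u
    return (1 + u) * math.log(1 + u) - u

def bennett_bound(n, p, t):
    # Bennett bound on P(sum Xi - n*p > t) for Xi ~ Bernoulli(p),
    # using a = 1 and the variance proxy sigma^2 = n * p.
    s2 = n * p
    return math.exp(-s2 * h(t / s2))

def hoeffding_bound(n, t):
    # Hoeffding bound exp(-2 t^2 / n) for variables bounded in [0, 1].
    return math.exp(-2 * t ** 2 / n)

def bernstein_bound(n, p, t):
    # First Bernstein bound exp(-t^2 / (2 * (var + t / 3))) with var = n*p*(1 - p).
    var = n * p * (1 - p)
    return math.exp(-t ** 2 / (2 * (var + t / 3)))

n, p, t = 1000, 0.01, 50  # illustrative values only; the mean of the sum is np = 10
print("Bennett  :", bennett_bound(n, p, t))
print("Bernstein:", bernstein_bound(n, p, t))
print("Hoeffding:", hoeffding_bound(n, t))

For these illustrative values, Bennett's bound is the smallest of the three and Hoeffding's the largest, consistent with the asymptotic comparison above.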
About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.
Related lectures (10)
Hoeffding's Inequality: Binomial Distribution
Explores Hoeffding's inequality and the binomial distribution, focusing on error minimization and generalization gaps in predictor selection.
Large Deviations Principle: Cramer's Theorem
Covers Cramer's theorem and Hoeffding's inequality in the context of the large deviations principle.
Multi-arm Bandits: Regret and Exploration
Explores regret in multi-arm bandits, balancing exploration and exploitation for optimal decision-making in real-world applications.
Related concepts (2)
Hoeffding's inequality
In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963. Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and McDiarmid's inequality. It is similar to the Chernoff bound, but tends to be less sharp, in particular when the variance of the random variables is small.
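For reference, the form of Hoeffding's bound most often quoted (assuming each Xi lies in the interval [ai, bi] almost surely and writing Sn for the sum X1 + ... + Xn) is:

P(Sn − E[Sn] ≥ t) ≤ exp(−2t² / ∑i (bi − ai)²) for all t ≥ 0.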
Chernoff bound
In probability theory, a Chernoff bound is an exponentially decreasing upper bound on the tail of a random variable based on its moment generating function. The minimum of all such exponential bounds forms the Chernoff or Chernoff-Cramér bound, which may decay faster than exponential (e.g. sub-Gaussian). It is especially useful for sums of independent random variables, such as sums of Bernoulli random variables. The bound is commonly named after Herman Chernoff who described the method in a 1952 paper, though Chernoff himself attributed it to Herman Rubin.
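As a sketch of the moment-generating-function recipe described above, the Chernoff bound for a sum of independent Bernoulli(p) variables can be computed numerically as follows. The grid search, the function name chernoff_bound, and the Binomial(1000, 0.01) example are illustrative assumptions, written in the same Python style as the earlier example.

import math

def chernoff_bound(t, log_mgf, lambdas):
    # For any lambda >= 0, Markov's inequality applied to exp(lambda * X) gives
    # P(X >= t) <= exp(-lambda * t + log E[exp(lambda * X)]).
    # Minimizing the exponent over lambda yields the Chernoff-Cramer bound.
    best_exponent = min(-lam * t + log_mgf(lam) for lam in lambdas)
    return math.exp(best_exponent)

# Illustrative example: X = sum of n independent Bernoulli(p) variables,
# whose log moment generating function is n * log(1 - p + p * exp(lambda)).
n, p = 1000, 0.01
log_mgf = lambda lam: n * math.log(1 - p + p * math.exp(lam))
grid = [k / 100 for k in range(1, 1001)]  # crude grid search over lambda in (0, 10]
print(chernoff_bound(60, log_mgf, grid))  # bound on P(X >= 60); the mean is np = 10

Working with the log of the moment generating function and exponentiating only the minimized exponent keeps the computation stable even when the moment generating function itself would overflow.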