# Chebyshev's inequality

Summary

In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can lie more than a certain distance from the mean. Specifically, no more than 1/k² of a distribution's values can be k or more standard deviations away from the mean (equivalently, at least 1 − 1/k² of the values lie within k standard deviations of the mean). In statistics, the rule is often called Chebyshev's theorem, concerning the range of standard deviations around the mean. The inequality is widely useful because it applies to any probability distribution in which the mean and variance are defined; for example, it can be used to prove the weak law of large numbers.
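In symbols, for a random variable X with finite mean μ and finite nonzero variance σ², the statement above reads:

```latex
% Chebyshev's inequality: for any real k > 0,
% a random variable X with mean \mu and variance \sigma^2 satisfies
\Pr\bigl(|X - \mu| \ge k\sigma\bigr) \le \frac{1}{k^2}.
```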
Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating only that a minimum of 75% of values must lie within two standard deviations of the mean, and 88.89% within three standard deviations.
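Because the bound holds for any distribution with a finite mean and variance, it can be checked empirically on a skewed, non-normal distribution. The sketch below, using only the Python standard library, compares the observed tail fraction with the 1/k² bound on exponential samples; the distribution and sample size are illustrative choices, not part of the original text.

```python
import random
import statistics

# Empirically check Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2
# on a skewed, non-normal distribution (exponential with rate 1).
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]

mu = statistics.fmean(samples)       # sample mean (close to 1 here)
sigma = statistics.pstdev(samples)   # population standard deviation

for k in (1.5, 2.0, 3.0):
    # Fraction of samples at least k standard deviations from the mean.
    tail = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
    bound = 1 / k**2
    print(f"k={k}: empirical tail {tail:.4f} <= Chebyshev bound {bound:.4f}")
    assert tail <= bound
```

For the exponential distribution the actual tail probabilities are far below the Chebyshev bound, which illustrates that the inequality is universal but often loose.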




Related courses (3)

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

MATH-106(a): Analysis II

Study the fundamental concepts of analysis, and the differential and integral calculus of real functions of several variables.

MATH-234(d): Probability and statistics

This course teaches the elementary notions of probability theory and statistics, such as inference, testing, and regression.

Related publications (9)

Luigi Provenzano, Joachim Stubbe

We present asymptotically sharp inequalities, containing a second term, for the Dirichlet and Neumann eigenvalues of the Laplacian on a domain, which are complementary to the familiar Berezin–Li–Yau and Kröger inequalities in the limit as the eigenvalues tend to infinity. We accomplish this in the framework of the Riesz mean R₁(z) of the eigenvalues by applying the averaged variational principle with families of test functions that have been corrected for boundary behaviour.

We present asymptotically sharp inequalities for the eigenvalues μₖ of the Laplacian on a domain with Neumann boundary conditions, using the averaged variational principle introduced in [14]. For the Riesz mean R₁(z) of the eigenvalues we improve the known sharp semiclassical bound in terms of the volume of the domain with a second term with the best possible expected power of z. In addition, we obtain two-sided bounds for individual μₖ, which are semiclassically sharp, and we obtain a Neumann version of Laptev's result that the Pólya conjecture is valid for domains that are Cartesian products of a generic domain with one for which Pólya's conjecture holds. In a final section, we remark upon the Dirichlet case with the same methods.

Numerical methods for parabolic homogenization problems combining finite element methods (FEMs) in space with Runge–Kutta methods in time are proposed. The space discretization is based on the coupling of macro and micro finite element methods following the framework of the Heterogeneous Multiscale Method (HMM). We present a fully discrete analysis in both space and time. Our analysis relies on new (optimal) error bounds in the norms L²(H¹), C⁰(L²), and C⁰(H¹) for the fully discrete analysis in space. These bounds can then be used to derive fully discrete space-time error estimates for a variety of Runge–Kutta methods, including implicit methods (e.g. Radau methods) and explicit stabilized methods (e.g. Chebyshev methods). Numerical experiments confirm our theoretical convergence rates and illustrate the performance of the methods.

2012

Related lectures (5)