Summary
In statistics, and in particular statistical theory, unbiased estimation of a standard deviation is the calculation, from a statistical sample, of an estimated value of the standard deviation (a measure of statistical dispersion) of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics, since its need is avoided by standard procedures such as the use of significance tests and confidence intervals, or by using Bayesian analysis. For statistical theory, however, it provides an exemplar problem in the context of estimation theory which is both simple to state and for which results cannot be obtained in closed form. It also provides an example where imposing the requirement of unbiased estimation might be seen as just adding inconvenience, with no real benefit.

In statistics, the standard deviation of a population of numbers is often estimated from a random sample drawn from the population. This is the sample standard deviation, which is defined by

s = sqrt( (1 / (n − 1)) · Σ_{i=1}^{n} (x_i − x̄)² ),

where {x_1, x_2, …, x_n} is the sample (formally, realizations from a random variable X) and x̄ is the sample mean.

One way of seeing that this is a biased estimator of the standard deviation of the population is to start from the result that s² is an unbiased estimator for the variance σ² of the underlying population, provided that variance exists and the sample values are drawn independently with replacement. The square root is a nonlinear function, and only linear functions commute with taking the expectation. Since the square root is a strictly concave function, it follows from Jensen's inequality that the square root of the sample variance is an underestimate.
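The underestimate implied by Jensen's inequality is easy to observe numerically. The following is a minimal simulation sketch using only the Python standard library (the normal population, sample size, and trial count are arbitrary choices, not taken from the text): it draws many small samples from a population with σ = 1 and compares the average of s² and of s against the true values.

```python
import random
import statistics

random.seed(42)

SIGMA = 1.0      # true population standard deviation
N = 5            # sample size per trial
TRIALS = 20_000  # number of repeated samples

s_values = []    # sample standard deviations
var_values = []  # sample variances
for _ in range(TRIALS):
    sample = [random.gauss(0.0, SIGMA) for _ in range(N)]
    var_values.append(statistics.variance(sample))  # n - 1 denominator: unbiased for sigma^2
    s_values.append(statistics.stdev(sample))       # sqrt of the above: biased low for sigma

print(f"mean of s^2: {statistics.fmean(var_values):.4f}  (true sigma^2 = 1)")
print(f"mean of s:   {statistics.fmean(s_values):.4f}  (true sigma = 1, so this is an underestimate)")
```

The average of s² settles near 1, while the average of s settles noticeably below 1, as the concavity argument predicts.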
The use of n − 1 instead of n in the formula for the sample variance is known as Bessel's correction. It removes the bias in the estimation of the population variance, and some, but not all, of the bias in the estimation of the population standard deviation.
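For normally distributed data, the remaining bias in s has a known closed form: E[s] = c4(n)·σ, where c4(n) = sqrt(2/(n − 1)) · Γ(n/2) / Γ((n − 1)/2), so s / c4(n) is unbiased. This c4 factor is a standard result for normal samples (it is not stated in the text above, and it does not apply to other distributions); a sketch of the computation:

```python
import math

def c4(n: int) -> float:
    """Bias factor for the sample standard deviation of a normal sample:
    E[s] = c4(n) * sigma, so s / c4(n) is unbiased (normal data only)."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

# The factor approaches 1 as the sample grows, i.e. the bias vanishes.
print(c4(2), c4(5), c4(100))
```

For n = 2 the factor equals sqrt(2/π) ≈ 0.798, and for n = 5 it is about 0.940; by n = 100 it is already within about 0.3% of 1.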
About this result
This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Be sure to verify the information against official EPFL sources.
Related courses (29)
MICRO-110: Design of experiments
This course provides an introduction to experimental statistics, including use of population statistics to characterize experimental results, use of comparison statistics and hypothesis testing to eva
MGT-482: Principles of finance
The course provides a market-oriented framework for analyzing the major financial decisions made by firms. It provides an introduction to valuation techniques, investment decisions, asset valuation, f
FIN-405: Investments
The course covers a wide range of topics in investment analysis
Related lectures (53)
Random variables: deterministic vs. random
Explores random variables, their variability, realizations of random processes, and the scientific method in materials science.
Linear algebra in data science
Explores the application of linear algebra in data science, covering variance reduction, model distribution theory, and maximum likelihood estimates.
Probability theory: random variables and distributions
Introduces probability theory, random variables, and distributions, with an emphasis on their applications in atomic diffusion.
Related concepts (11)
Bessel's correction
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.
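The effect of the n versus n − 1 denominator can be checked by simulation. The sketch below (standard library only; the uniform population, sample size, and trial count are arbitrary choices) draws samples from Uniform(0, 1), whose true variance is 1/12, and averages both estimators:

```python
import random
import statistics

random.seed(1)

N = 5
TRIALS = 20_000
TRUE_VAR = 1.0 / 12.0  # variance of Uniform(0, 1)

biased = []    # divide by n (no correction)
unbiased = []  # divide by n - 1 (Bessel's correction)
for _ in range(TRIALS):
    sample = [random.random() for _ in range(N)]
    biased.append(statistics.pvariance(sample))   # population formula, denominator n
    unbiased.append(statistics.variance(sample))  # sample formula, denominator n - 1

print(f"true variance:      {TRUE_VAR:.4f}")
print(f"mean without n - 1: {statistics.fmean(biased):.4f}")
print(f"mean with Bessel:   {statistics.fmean(unbiased):.4f}")
```

The uncorrected estimator averages close to (n − 1)/n times the true variance (here 4/5 of 1/12), while the Bessel-corrected one averages close to 1/12 itself.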
Standard error
The standard error of a statistic (often an estimate of a parameter) is the standard deviation of its sampling distribution, or an estimate of that standard deviation. If the statistic is the mean, it is called the standard error of the mean. The sampling distribution is generated by repeatedly drawing samples and recording the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance.
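For the mean of n independent draws from a population with standard deviation σ, the standard error of the mean is σ/√n. The following sketch (standard library only; the normal population, σ = 2, and sample size 25 are arbitrary choices) compares this formula to the empirical spread of many sample means:

```python
import random
import statistics

random.seed(7)

SIGMA = 2.0
N = 25
TRIALS = 20_000

# Theoretical standard error of the mean: sigma / sqrt(n)
theoretical = SIGMA / N ** 0.5

# Empirical version: the standard deviation of many sample means
means = [
    statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(N))
    for _ in range(TRIALS)
]
empirical = statistics.pstdev(means)

print(f"theoretical SE: {theoretical:.4f}")
print(f"empirical SE:   {empirical:.4f}")
```

With σ = 2 and n = 25, both values land near 0.4.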
Sample mean and covariance
The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales.
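The sample mean and sample covariance can be computed directly from their definitions; by convention the covariance uses the same n − 1 denominator as the sample variance, which makes it unbiased for the population covariance under i.i.d. sampling. A minimal sketch with toy (hypothetical) data:

```python
def sample_mean(xs):
    """Arithmetic mean of a sequence."""
    return sum(xs) / len(xs)

def sample_cov(xs, ys):
    """Sample covariance with the n - 1 denominator (unbiased for the
    population covariance when pairs are drawn i.i.d.)."""
    n = len(xs)
    mx, my = sample_mean(xs), sample_mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Toy data (hypothetical): y is exactly 2x, so cov(x, y) = 2 * var(x)
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]

print(sample_mean(x))    # 2.5
print(sample_cov(x, y))  # 10/3, about 3.333
print(sample_cov(x, x))  # sample variance of x: 5/3, about 1.667
```

Note that the covariance of a variable with itself reduces to the sample variance, as the last line shows.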