In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.
Let $X$ follow an ordinary normal distribution, $N(0,\sigma^2)$. Then, $Y=|X|$ follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero.
Using the $\sigma$ parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by
$$f_Y(y;\sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}} \exp\left(-\frac{y^2}{2\sigma^2}\right),$$
where $y \ge 0$.
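As a minimal sketch, the density above can be evaluated directly with the standard library; the function name `halfnormal_pdf` is illustrative, not from any particular package:

```python
import math

def halfnormal_pdf(y, sigma):
    """Half-normal density f(y; sigma) = sqrt(2)/(sigma*sqrt(pi)) * exp(-y^2/(2 sigma^2))."""
    if y < 0:
        return 0.0
    return math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-y * y / (2.0 * sigma * sigma))

# Sanity check: a crude Riemann sum of the density over [0, 20] should be close to 1.
sigma = 1.5
dy = 1e-4
total = sum(halfnormal_pdf(i * dy, sigma) * dy for i in range(200_000))
```

Note the factor of $\sqrt{2}$ relative to the normal density: the probability mass from the negative half-line is folded onto $[0,\infty)$, doubling the density there.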
Alternatively, using a scaled precision (inverse of the variance) parametrization (to avoid issues if $\sigma$ is near zero), obtained by setting $\theta = \frac{\sqrt{\pi}}{\sigma\sqrt{2}}$, the probability density function is given by
$$f_Y(y;\theta) = \frac{2\theta}{\pi} \exp\left(-\frac{y^2\theta^2}{\pi}\right),$$
where $y \ge 0$.
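A quick numerical check (a sketch with illustrative function names) confirms that the two parametrizations describe the same density once $\theta = \sqrt{\pi}/(\sigma\sqrt{2})$:

```python
import math

def pdf_sigma(y, sigma):
    # sigma parametrization: sqrt(2)/(sigma*sqrt(pi)) * exp(-y^2/(2 sigma^2))
    return math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-y * y / (2.0 * sigma * sigma))

def pdf_theta(y, theta):
    # precision parametrization: (2 theta/pi) * exp(-y^2 theta^2 / pi)
    return 2.0 * theta / math.pi * math.exp(-y * y * theta * theta / math.pi)

sigma = 0.8
theta = math.sqrt(math.pi) / (sigma * math.sqrt(2.0))
diffs = [abs(pdf_sigma(y, sigma) - pdf_theta(y, theta)) for y in (0.0, 0.5, 1.3)]
```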
The cumulative distribution function (CDF) is given by
$$F_Y(y;\sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right)\,dx.$$
Using the change-of-variables $z = x/(\sqrt{2}\sigma)$, the CDF can be written as
$$F_Y(y;\sigma) = \frac{2}{\sqrt{\pi}} \int_0^{y/(\sqrt{2}\sigma)} \exp\left(-z^2\right)\,dz = \operatorname{erf}\left(\frac{y}{\sqrt{2}\sigma}\right),$$
where erf is the error function, a standard function in many mathematical software packages.
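In Python, for example, the error function is available as `math.erf`, so the closed-form CDF is one line; the sketch below cross-checks it against a Riemann sum of the density (names are illustrative):

```python
import math

def halfnormal_cdf(y, sigma):
    """F(y; sigma) = erf(y / (sigma * sqrt(2))) for y >= 0."""
    if y < 0:
        return 0.0
    return math.erf(y / (sigma * math.sqrt(2.0)))

# Cross-check the closed form against a numerical integral of the PDF.
sigma, y_max = 1.0, 1.2
dx = 1e-5
riemann = sum(
    math.sqrt(2.0 / math.pi) / sigma * math.exp(-((i * dx) ** 2) / (2.0 * sigma ** 2)) * dx
    for i in range(int(y_max / dx))
)
```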
The quantile function (or inverse CDF) is written:
$$Q(F;\sigma) = \sigma\sqrt{2}\,\operatorname{erf}^{-1}(F),$$
where $0 \le F < 1$ and $\operatorname{erf}^{-1}$ is the inverse error function.
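The Python standard library has `math.erf` but no inverse error function, so a minimal sketch can invert it by bisection (monotonicity of erf makes this safe; names are illustrative):

```python
import math

def erfinv(p, tol=1e-12):
    """Inverse error function on (-1, 1) via bisection, since math has no erfinv."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if math.erf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def halfnormal_quantile(F, sigma):
    """Q(F; sigma) = sigma * sqrt(2) * erfinv(F), for 0 <= F < 1."""
    return sigma * math.sqrt(2.0) * erfinv(F)

# Round trip: applying the CDF to the quantile should recover F.
sigma, F = 2.0, 0.9
y = halfnormal_quantile(F, sigma)
```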
The expectation is then given by
$$\operatorname{E}[Y] = \mu = \frac{\sigma\sqrt{2}}{\sqrt{\pi}}.$$
The variance is given by
$$\operatorname{var}(Y) = \sigma^2\left(1 - \frac{2}{\pi}\right).$$
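These moments are easy to verify by simulation: taking the absolute value of normal draws gives half-normal samples, whose empirical mean and variance should match $\sigma\sqrt{2/\pi}$ and $\sigma^2(1 - 2/\pi)$. A sketch:

```python
import math
import random

random.seed(0)
sigma = 1.3
n = 200_000

# |X| with X ~ N(0, sigma^2) is half-normal by definition.
samples = [abs(random.gauss(0.0, sigma)) for _ in range(n)]

mean_mc = sum(samples) / n
var_mc = sum((s - mean_mc) ** 2 for s in samples) / n

mean_theory = sigma * math.sqrt(2.0 / math.pi)
var_theory = sigma ** 2 * (1.0 - 2.0 / math.pi)
```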
Since this is proportional to the variance $\sigma^2$ of $X$, $\sigma$ can be seen as a scale parameter of the new distribution.
The differential entropy of the half-normal distribution is exactly one bit less than the differential entropy of a zero-mean normal distribution with the same second moment about 0. This can be understood intuitively since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,
$$h(Y) = \frac{1}{2}\log_2\left(2\pi e\sigma^2\right) - 1 \quad \text{bits}.$$
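The one-bit gap can be checked numerically, assuming the standard closed forms $h_{\text{normal}} = \frac{1}{2}\log_2(2\pi e\sigma^2)$ for the zero-mean normal and $h_{\text{half}} = \frac{1}{2}\log_2(2\pi e\sigma^2) - 1$ for the half-normal (both in bits):

```python
import math

sigma = 0.7
# Differential entropies in bits (log base 2).
h_normal = 0.5 * math.log2(2.0 * math.pi * math.e * sigma ** 2)
h_half = 0.5 * math.log2(math.pi * math.e * sigma ** 2 / 2.0)  # same as h_normal - 1
gap = h_normal - h_half
```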
The half-normal distribution is commonly utilized as a prior probability distribution for variance parameters in Bayesian inference applications.
Given $n$ numbers $x_1, \dots, x_n$ drawn from a half-normal distribution, the unknown parameter $\sigma$ of that distribution can be estimated by the method of maximum likelihood, giving
$$\hat\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2}.$$
The bias is equal to
$$\hat{b} \equiv \operatorname{E}\left[\hat\sigma - \sigma\right] = -\frac{\sigma}{4n},$$
which yields the bias-corrected maximum likelihood estimator
$$\hat{\sigma}^{*} = \hat\sigma - \hat{b}.$$
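The estimation procedure can be sketched as follows, substituting $\hat\sigma$ for $\sigma$ in the first-order bias term $-\sigma/(4n)$ so the correction is computable from data alone (names and the chosen seed are illustrative):

```python
import math
import random

def mle_sigma(xs):
    """Maximum-likelihood estimate: sigma_hat = sqrt(mean of x_i^2)."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

random.seed(1)
true_sigma = 2.0
n = 50
xs = [abs(random.gauss(0.0, true_sigma)) for _ in range(n)]

sigma_hat = mle_sigma(xs)
# Bias-corrected estimator: sigma_hat - b_hat with b_hat = -sigma_hat / (4n).
sigma_corrected = sigma_hat * (1.0 + 1.0 / (4.0 * n))
```

The correction is small (a relative change of $1/(4n)$), which is why it matters mostly for small samples.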
The distribution is a special case of the folded normal distribution with μ = 0.