In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.
Let $X$ follow an ordinary normal distribution, $N(0, \sigma^2)$. Then, $Y = |X|$ follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero.
Using the $\sigma$ parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by
$$f_Y(y; \sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}} \exp\left(-\frac{y^2}{2\sigma^2}\right), \quad y \ge 0,$$
where $E[Y] = \mu = \frac{\sigma\sqrt{2}}{\sqrt{\pi}}$.
Alternatively using a scaled precision (inverse of the variance) parametrization (to avoid issues if $\sigma$ is near zero), obtained by setting $\theta = \frac{\sqrt{\pi}}{\sigma\sqrt{2}}$, the probability density function is given by
$$f_Y(y; \theta) = \frac{2\theta}{\pi} \exp\left(-\frac{y^2\theta^2}{\pi}\right), \quad y \ge 0,$$
where $E[Y] = \mu = \frac{1}{\theta}$.
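As a sanity check on the two parametrizations, here is a minimal Python sketch (the function names are illustrative, not from any particular library) that evaluates both densities and confirms they agree once $\theta = \sqrt{\pi}/(\sigma\sqrt{2})$:

```python
import math

def halfnormal_pdf(y, sigma):
    """Half-normal PDF in the scale (sigma) parametrization, for y >= 0."""
    if y < 0:
        return 0.0
    return (math.sqrt(2.0) / (sigma * math.sqrt(math.pi))) * math.exp(-y**2 / (2.0 * sigma**2))

def halfnormal_pdf_precision(y, theta):
    """Half-normal PDF in the scaled-precision (theta) parametrization, for y >= 0."""
    if y < 0:
        return 0.0
    return (2.0 * theta / math.pi) * math.exp(-(y**2) * theta**2 / math.pi)

sigma = 1.5
theta = math.sqrt(math.pi) / (sigma * math.sqrt(2.0))  # the reparametrization above
assert math.isclose(halfnormal_pdf(0.7, sigma),
                    halfnormal_pdf_precision(0.7, theta))
```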
The cumulative distribution function (CDF) is given by
$$F_Y(y; \sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right) dx.$$
Using the change-of-variables $z = x/(\sigma\sqrt{2})$, the CDF can be written as
$$F_Y(y; \sigma) = \frac{2}{\sqrt{\pi}} \int_0^{y/(\sigma\sqrt{2})} \exp\left(-z^2\right) dz = \operatorname{erf}\left(\frac{y}{\sigma\sqrt{2}}\right),$$
where erf is the error function, a standard function in many mathematical software packages.
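In Python, for instance, erf is available in the standard math module, so the CDF is a one-liner; a minimal sketch (halfnormal_cdf is an illustrative name):

```python
import math

def halfnormal_cdf(y, sigma):
    """CDF of the half-normal distribution: erf(y / (sigma * sqrt(2))) for y >= 0."""
    if y < 0:
        return 0.0
    return math.erf(y / (sigma * math.sqrt(2.0)))

print(halfnormal_cdf(1.0, 1.0))  # P(|X| <= 1) for X ~ N(0, 1), approximately 0.6827
```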
The quantile function (or inverse CDF) is written:
$$Q(F; \sigma) = \sigma\sqrt{2}\, \operatorname{erf}^{-1}(F),$$
where $0 \le F \le 1$ and $\operatorname{erf}^{-1}$ is the inverse error function.
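An inverse error function is available, for example, as scipy.special.erfinv; a sketch under that assumption, reusing the CDF above for a round-trip check:

```python
import math
from scipy.special import erfinv  # inverse error function

def halfnormal_quantile(F, sigma):
    """Quantile function: sigma * sqrt(2) * erfinv(F), for 0 <= F < 1."""
    return sigma * math.sqrt(2.0) * float(erfinv(F))

sigma = 2.0
q = halfnormal_quantile(0.5, sigma)  # median of the half-normal
assert math.isclose(math.erf(q / (sigma * math.sqrt(2.0))), 0.5)
```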
The expectation is then given by
$$E[Y] = \sigma\sqrt{\frac{2}{\pi}}.$$
The variance is given by
$$\operatorname{var}(Y) = \sigma^2\left(1 - \frac{2}{\pi}\right).$$
Since this is proportional to the variance $\sigma^2$ of $X$, $\sigma$ can be seen as a scale parameter of the new distribution.
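Both moments are easy to check by Monte Carlo, folding normal draws at zero; a sketch using NumPy (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
y = np.abs(rng.normal(0.0, sigma, size=1_000_000))  # fold N(0, sigma^2) at zero

print(y.mean(), sigma * np.sqrt(2.0 / np.pi))       # both approximately 1.5958
print(y.var(),  sigma**2 * (1.0 - 2.0 / np.pi))     # both approximately 1.4535
```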
The differential entropy of the half-normal distribution is exactly one bit less than the differential entropy of a zero-mean normal distribution with the same second moment about 0. This can be understood intuitively since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,
$$h(Y) = \frac{1}{2}\log_2\left(\frac{\pi e \sigma^2}{2}\right) \text{ bits}.$$
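The one-bit gap can be verified numerically against the entropy $\frac{1}{2}\log_2(2\pi e \sigma^2)$ of $N(0, \sigma^2)$; a small sketch (the value of sigma is arbitrary):

```python
import math

sigma = 1.3
h_normal = 0.5 * math.log2(2.0 * math.pi * math.e * sigma**2)  # entropy of N(0, sigma^2), in bits
h_half   = 0.5 * math.log2(math.pi * math.e * sigma**2 / 2.0)  # entropy of the half-normal, in bits
assert math.isclose(h_normal - h_half, 1.0)                    # exactly one bit apart
```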
The half-normal distribution is commonly used as a prior probability distribution for variance parameters in Bayesian inference applications.
Given numbers $\{x_i\}_{i=1}^n$ drawn from a half-normal distribution, the unknown parameter $\sigma$ of that distribution can be estimated by the method of maximum likelihood, giving
$$\hat{\sigma} = \sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2}.$$
The bias is equal to
$$b \equiv E\left[\hat{\sigma} - \sigma\right] = -\frac{\sigma}{4n},$$
which yields the bias-corrected maximum likelihood estimator
$$\hat{\sigma}^{*} = \hat{\sigma} - \hat{b}, \quad \text{where } \hat{b} = -\frac{\hat{\sigma}}{4n}.$$
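Both estimators are one-liners in practice; a minimal sketch (function names are illustrative):

```python
import numpy as np

def halfnormal_mle(x):
    """Maximum-likelihood estimate of sigma: sqrt of the mean of x_i^2."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x**2))

def halfnormal_mle_corrected(x):
    """Bias-corrected estimator: sigma_hat - b_hat, with b_hat = -sigma_hat / (4n)."""
    sigma_hat = halfnormal_mle(x)
    return sigma_hat + sigma_hat / (4.0 * len(x))

rng = np.random.default_rng(1)
x = np.abs(rng.normal(0.0, 2.0, size=50))  # small sample; true sigma = 2
print(halfnormal_mle(x), halfnormal_mle_corrected(x))
```

The correction matters only for small $n$, since the bias shrinks as $1/n$.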
The distribution is a special case of the folded normal distribution with μ = 0.
In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics. Suppose $X$ has a normal distribution with mean $\mu$ and variance $\sigma^2$ and lies within the interval $(a, b)$, with $-\infty \le a < b \le \infty$. Then $X$ conditional on $a < X < b$ has a truncated normal distribution. Its probability density function, $f$, for $a \le x \le b$, is given by
$$f(x; \mu, \sigma, a, b) = \frac{1}{\sigma}\,\frac{\varphi\left(\frac{x-\mu}{\sigma}\right)}{\Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)}$$
and by $f = 0$ otherwise, where $\varphi$ and $\Phi$ denote the standard normal PDF and CDF, respectively.
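As an illustration of this density, a minimal sketch using the standard library's statistics.NormalDist (the name truncnorm_pdf is ours; SciPy users would reach for scipy.stats.truncnorm instead):

```python
from statistics import NormalDist

def truncnorm_pdf(x, mu, sigma, a, b):
    """Density of N(mu, sigma^2) conditioned on lying in [a, b]; zero outside."""
    if not a <= x <= b:
        return 0.0
    std = NormalDist()  # standard normal: phi via .pdf, Phi via .cdf
    z = std.cdf((b - mu) / sigma) - std.cdf((a - mu) / sigma)
    return std.pdf((x - mu) / sigma) / (sigma * z)
```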
The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable $X$ with mean $\mu$ and variance $\sigma^2$, the random variable $Y = |X|$ has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of $x = 0$ is folded over by taking the absolute value.
In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution. If a family of probability distributions is such that there is a parameter $s$ (and other parameters $\theta$) for which the cumulative distribution function satisfies
$$F(x; s, \theta) = F(x/s; 1, \theta),$$
then $s$ is called a scale parameter, since its value determines the "scale" or statistical dispersion of the probability distribution.