Gaussian integral

In mathematics, a Gaussian integral is the integral of a Gaussian function over the entire real line. Its value is related to the constant π by the formula
\[
\int_{-\infty}^{+\infty} e^{-\alpha x^2}\,dx = \sqrt{\frac{\pi}{\alpha}},
\]
where α is a strictly positive real parameter. It appears in the definition of the probability distribution known as the Gaussian, or normal, distribution. The formula can be obtained by means of a double integral and a change to polar coordinates. Its first known proof was given by Pierre-Simon de Laplace.
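The polar-coordinate argument mentioned above can be sketched as follows (a standard derivation; the notation is ours, not Laplace's). Writing \(I\) for the integral and squaring it:
\[
I^2 = \left(\int_{-\infty}^{+\infty} e^{-\alpha x^2}\,dx\right)\left(\int_{-\infty}^{+\infty} e^{-\alpha y^2}\,dy\right)
= \iint_{\mathbb{R}^2} e^{-\alpha (x^2+y^2)}\,dx\,dy
= \int_0^{2\pi}\!\int_0^{+\infty} e^{-\alpha r^2}\, r\,dr\,d\theta
= 2\pi \cdot \frac{1}{2\alpha} = \frac{\pi}{\alpha},
\]
hence \(I = \sqrt{\pi/\alpha}\).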
Degenerate distribution

In mathematics, a degenerate distribution is, according to some, a probability distribution in a space with support only on a manifold of lower dimension, and according to others a distribution with support only at a single point. By the latter definition, it is a deterministic distribution and takes only a single value. Examples include a two-headed coin and rolling a die whose sides all show the same number. This distribution satisfies the definition of "random variable" even though it does not appear random in the everyday sense of the word; hence it is considered degenerate.
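In the single-point reading, the distribution degenerate at a value \(k_0\) has the step-function CDF
\[
F(x) = \begin{cases} 0, & x < k_0,\\ 1, & x \ge k_0, \end{cases}
\]
so the variable equals \(k_0\) with probability 1 and has variance 0.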
Invariant estimator

In statistics, the concept of being an invariant estimator is a criterion that can be used to compare the properties of different estimators for the same quantity. It is a way of formalising the idea that an estimator should have certain intuitively appealing qualities. Strictly speaking, "invariant" would mean that the estimates themselves are unchanged when both the measurements and the parameters are transformed in a compatible way, but the meaning has been extended to allow the estimates to change in appropriate ways with such transformations.
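As a concrete illustration (the location case, chosen here for simplicity): an estimator \(\delta\) of a location parameter is equivariant if shifting all observations by a constant shifts the estimate by the same constant,
\[
\delta(x_1 + c, \ldots, x_n + c) = \delta(x_1, \ldots, x_n) + c \quad \text{for every } c \in \mathbb{R};
\]
the sample mean satisfies this, which is the sense in which estimates are allowed to "change in appropriate ways" under transformations.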
Precision (statistics)

In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, \(P = \Sigma^{-1}\). For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, \(p = 1/\sigma^2\). Other summary statistics of statistical dispersion also called precision (or imprecision) include the reciprocal of the standard deviation, \(1/\sigma\); the standard deviation itself and the relative standard deviation; as well as the standard error and the confidence interval (or its half-width, the margin of error).
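A minimal numerical sketch of the covariance–precision relationship, using numpy with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: 1000 samples of a 3-dimensional normal variable.
X = rng.multivariate_normal(mean=np.zeros(3),
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=1000)

Sigma = np.cov(X, rowvar=False)  # sample covariance matrix
P = np.linalg.inv(Sigma)         # precision (concentration) matrix

# Univariate case: precision degenerates to the reciprocal of the variance.
x = X[:, 0]
p = 1.0 / np.var(x, ddof=1)
```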
Generalized chi-squared distribution

In probability theory and statistics, the generalized chi-squared distribution (or generalized chi-square distribution) is the distribution of a quadratic form of a multinormal variable (normal vector), or a linear combination of different normal variables and squares of normal variables. Equivalently, it is also a linear sum of independent noncentral chi-square variables and a normal variable. There are several other such generalizations for which the same term is sometimes used; some of them are special cases of the family discussed here, for example the gamma distribution.
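A Monte Carlo sketch of the quadratic-form construction; the parameters below are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic form q = z' A z + b' z + c of a multinormal vector z.
mu = np.array([1.0, -0.5, 0.2])
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 1.5, 0.3],
                  [0.0, 0.3, 0.8]])
A = np.array([[2.0, 0.1, 0.0],
              [0.1, 1.0, 0.0],
              [0.0, 0.0, -0.5]])
b = np.array([0.5, 0.0, 1.0])
c = 0.3

# Draws from the resulting generalized chi-squared distribution.
z = rng.multivariate_normal(mu, Sigma, size=100_000)
q = np.einsum('ni,ij,nj->n', z, A, z) + z @ b + c
```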
Differential entropy

Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula; he simply assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP).
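The formula in question, for a variable \(X\) with density \(f\) supported on a set \(\mathcal{X}\), is
\[
h(X) = -\int_{\mathcal{X}} f(x)\,\ln f(x)\,dx;
\]
for a Gaussian with variance \(\sigma^2\), for example, \(h(X) = \tfrac{1}{2}\ln(2\pi e \sigma^2)\).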
Conditional probability distribution

In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both X and Y are categorical variables, a conditional probability table is typically used to represent the conditional probability.
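For jointly continuous variables with joint density \(f_{X,Y}\), the conditional density is
\[
f_{Y \mid X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}, \qquad f_X(x) > 0.
\]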
Exchangeable random variables

In statistics, an exchangeable sequence of random variables (also sometimes interchangeable) is a sequence X1, X2, X3, ... (which may be finitely or infinitely long) whose joint probability distribution does not change when the positions in the sequence in which finitely many of them appear are altered. Thus, for example, the sequences X1, X2, X3, X4, X5, X6 and X3, X6, X1, X5, X2, X4 both have the same joint probability distribution. It is closely related to the use of independent and identically distributed random variables in statistical models.
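Formally: for every n and every permutation \(\pi\) of \(\{1, \ldots, n\}\),
\[
(X_{\pi(1)}, \ldots, X_{\pi(n)}) \overset{d}{=} (X_1, \ldots, X_n).
\]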
Kronecker product

In mathematics, the Kronecker product is an operation on matrices. It is a special case of the tensor product, and is named after the German mathematician Leopold Kronecker. Let A be an m × n matrix and B a p × q matrix. Their tensor product is the mp × nq matrix A ⊗ B, defined in successive blocks of size p × q, with the block of index (i, j) equal to \(a_{ij}B\). In other words,
\[
A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix},
\]
or, writing out the coefficients, \((A \otimes B)_{p(i-1)+k,\; q(j-1)+l} = a_{ij} b_{kl}\). As the example below shows, the Kronecker product of two matrices amounts to copying the second matrix several times, multiplying it each time by the coefficient corresponding to an entry of the first matrix.
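A small numerical illustration, using numpy's np.kron with made-up matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# Each entry a_ij of A scales a full copy of B in the result.
K = np.kron(A, B)
print(K)
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
```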
Location parameter

[Figure: animation of the density function of a normal distribution as the mean varies between −5 and 5. The mean is a location parameter and merely shifts the bell-shaped curve.]

In probability theory and statistics, a location parameter is, as its name suggests, a parameter that governs the position of a probability density. If this parameter (scalar or vector) is denoted λ, the density formally takes the form
\[
f_\lambda(x) = f(x - \lambda),
\]
where f is, in some sense, a reference density.
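For instance, taking the standard normal as the reference density, the mean μ plays the role of the location parameter:
\[
f_\mu(x) = \frac{1}{\sqrt{2\pi}}\, e^{-(x-\mu)^2/2}.
\]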