Cross-entropy. In information theory, the cross-entropy between two probability distributions measures the average number of bits needed to identify an event drawn from the "set of events" (also called a sigma-algebra in mathematics) on the universe X, when the coding of events is based on a probability distribution q rather than a reference distribution p. The cross-entropy of two distributions p and q on the same probability space is defined as H(p, q) = H(p) + D_KL(p ‖ q), where H(p) is the entropy of p, and D_KL(p ‖ q) is the Kullback–Leibler divergence between p and q.
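A minimal NumPy sketch of this definition, using two illustrative discrete distributions to check the decomposition H(p, q) = H(p) + D_KL(p ‖ q):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) * log2 q(x), in bits (terms with p(x) = 0 drop out)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

p = np.array([0.5, 0.25, 0.25])   # reference distribution p
q = np.array([0.25, 0.25, 0.5])   # coding distribution q
H_p = -np.sum(p * np.log2(p))     # entropy of p
kl = np.sum(p * np.log2(p / q))   # Kullback-Leibler divergence D_KL(p || q)
# cross_entropy(p, q) equals H_p + kl, as in the decomposition above
```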
Precision (statistics). In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, P = Σ⁻¹. For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, p = 1/σ². Other summary statistics of statistical dispersion also called precision (or imprecision) include the reciprocal of the standard deviation, 1/σ; the standard deviation itself and the relative standard deviation; as well as the standard error and the confidence interval (or its half-width, the margin of error).
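A short NumPy illustration of the matrix and scalar cases (the covariance values are made up for the example):

```python
import numpy as np

# Covariance matrix of a bivariate distribution (illustrative values).
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Precision (concentration) matrix: the inverse of the covariance matrix.
P = np.linalg.inv(Sigma)

# Univariate case: precision is simply the reciprocal of the variance.
var = 4.0
precision = 1.0 / var
```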
Wilks' theorem. In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis testing) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine.
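A sketch of the theorem in use, assuming a Poisson sample and a simple null hypothesis on its mean (the data and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=1.5, size=200)

# Poisson log-likelihood up to an additive constant (the log x! terms
# cancel in the likelihood ratio).
def loglik(lam):
    return np.sum(x * np.log(lam) - lam)

lam0 = 1.0               # null hypothesis H0: lambda = 1
lam_hat = x.mean()       # unrestricted maximum-likelihood estimate
lr = 2 * (loglik(lam_hat) - loglik(lam0))   # -2 log(likelihood ratio)

# By Wilks' theorem, lr is asymptotically chi-square with 1 degree of
# freedom under H0 (one restricted parameter); 3.841 is the 5% critical value.
reject = lr > 3.841
```

The point of the theorem is that the chi-square reference distribution applies asymptotically regardless of the model, so the exact finite-sample distribution of `lr` never has to be derived.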
Conditional probability distribution. In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both X and Y are categorical variables, a conditional probability table is typically used to represent the conditional probability.
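A minimal sketch of building a conditional probability table from a joint distribution of two categorical variables (the joint probabilities are illustrative):

```python
import numpy as np

# Joint distribution P(X, Y) of two categorical variables:
# rows index values of X, columns index values of Y.
joint = np.array([[0.10, 0.20],
                  [0.30, 0.40]])

# Conditional probability table P(Y | X): divide each row by the
# marginal P(X = x), i.e. the row sum.
p_x = joint.sum(axis=1, keepdims=True)
cpt = joint / p_x
# Each row of the table is now a probability distribution over Y.
```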
Likelihoodist statistics. Likelihoodist statistics, or likelihoodism, is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a more minor school than the main approaches of Bayesian statistics and frequentist statistics, but has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function.
Inverse probability. In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. Today, the problem of determining an unobserved variable (by whatever method) is called inferential statistics; the method of inverse probability (assigning a probability distribution to an unobserved variable) is called Bayesian probability; the "distribution" of data given the unobserved variable is rather the likelihood function (which is not a probability distribution); and the distribution of an unobserved variable, given both data and a prior distribution, is the posterior distribution.
Nuisance parameter. In statistics, a nuisance parameter is any parameter which is unspecified but which must be accounted for in hypothesis testing of the parameters of interest. The classic example of a nuisance parameter comes from the normal distribution, a member of the location–scale family: the variance(s) σ² is often neither specified nor known, but one desires to test hypotheses about the mean(s).
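A sketch of the classic example, assuming a one-sample test on a normal mean: studentizing with the sample standard deviation removes the unknown variance from the reference distribution (the data here are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=2.0, size=50)  # sigma^2 is unknown: the nuisance

mu0 = 0.0                                    # H0: mean = mu0
n = x.size
# Dividing by the estimated standard error eliminates sigma; under H0 the
# statistic follows a t distribution with n - 1 degrees of freedom,
# which does not depend on the nuisance parameter.
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
```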
Lagrange multiplier test. The Lagrange multiplier (LM) test, also called the score test or Rao's test, is a general principle for testing hypotheses about parameters in a likelihood framework. The hypothesis under test is expressed as one or more constraints on the parameter values. The LM test statistic requires maximization only over this constrained parameter space (in particular, if the hypothesis to be tested is of the form θ = θ₀, then no maximization is needed at all: the statistic is simply evaluated at θ₀).
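A minimal sketch of a score test for a simple null hypothesis on a Poisson mean, where the statistic is evaluated entirely at θ₀ with no maximization (the sample and null value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.poisson(lam=1.0, size=100)
n, lam0 = x.size, 1.0    # H0: lambda = lam0; no maximization is required

# Score (derivative of the Poisson log-likelihood), evaluated at lam0:
# U(lam0) = sum(x_i / lam0 - 1) = (sum x_i - n * lam0) / lam0
score = (x.sum() - n * lam0) / lam0
# Fisher information of a Poisson sample of size n: I(lam) = n / lam
info = n / lam0
lm = score**2 / info     # asymptotically chi-square with 1 df under H0
```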
Poisson regression. In statistics, Poisson regression is a generalized linear model used for count data and contingency tables. This regression assumes that the response variable Y follows a Poisson distribution and that the logarithm of its expected value can be modeled by a linear combination of unknown parameters. Let x ∈ Rⁿ be a vector of independent variables, and Y the variable to be predicted. Performing a Poisson regression amounts to assuming that Y given x follows a Poisson distribution with parameter λ = exp(θ · x), where θ ∈ Rⁿ is the vector of regression parameters to estimate and θ · x is the standard inner product on Rⁿ.
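A sketch of fitting this model by Newton-Raphson on the Poisson log-likelihood, using simulated data (the design, true parameters, and iteration count are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
theta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ theta_true))   # E[Y | x] = exp(theta . x)

# Newton-Raphson for the Poisson log-likelihood with log link.
theta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ theta)                # current fitted means
    grad = X.T @ (y - mu)                 # score vector
    hess = X.T @ (X * mu[:, None])        # observed/expected information
    theta += np.linalg.solve(hess, grad)
```

The log-likelihood is concave in θ, so Newton's method converges quickly here; statistical packages implement the same fit as iteratively reweighted least squares.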
Extremum estimator. In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function, which depends on the data. The general theory of extremum estimators was developed by . An estimator θ̂ is called an extremum estimator if there is an objective function Q̂ₙ(θ) such that θ̂ = argmax over θ ∈ Θ of Q̂ₙ(θ), where Θ is the parameter space. Sometimes a slightly weaker definition is given: Q̂ₙ(θ̂) ≥ max over θ ∈ Θ of Q̂ₙ(θ) − oₚ(1), where oₚ(1) is a variable converging in probability to zero.
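A toy sketch of the definition: with a Gaussian sample and the (scaled) log-likelihood as the objective Q̂ₙ, the extremum estimator is the maximum-likelihood estimator, here found by brute-force search over a grid standing in for Θ (the data and grid are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=2.0, scale=1.0, size=1000)

# Objective function Q_n(theta): the Gaussian log-likelihood up to scale
# and constants, so the extremum estimator is the MLE of the mean.
def Q_n(theta):
    return -np.mean((x - theta) ** 2)

grid = np.linspace(0.0, 4.0, 4001)                 # parameter space Theta
theta_hat = grid[np.argmax([Q_n(t) for t in grid])]
# For this objective the maximizer has a closed form: the sample mean,
# so theta_hat should land on the grid point nearest x.mean().
```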