Cross-entropy
In information theory, the cross-entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$. The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows: $H(p, q) = -\operatorname{E}_p[\log q]$, where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$.
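A minimal Python sketch of this definition for discrete distributions (the helper name cross_entropy and the example distributions are illustrative assumptions, not from the source):

```python
import numpy as np

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum_x p(x) * log2(q(x)), in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log2(q))

# True distribution p and estimated distribution q over four events.
p = [0.5, 0.25, 0.125, 0.125]
q = [0.25, 0.25, 0.25, 0.25]

print(cross_entropy(p, p))  # 1.75 bits: the entropy H(p), the optimum
print(cross_entropy(p, q))  # 2.00 bits: the extra cost of coding with q instead of p
```

Note that $H(p, q) \geq H(p, p)$ always holds; the gap is the Kullback–Leibler divergence of $q$ from $p$.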
Precision (statistics)
In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, $P = \Sigma^{-1}$. For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, $p = \frac{1}{\sigma^2}$. Other summary statistics of statistical dispersion also called precision (or imprecision) include the reciprocal of the standard deviation, $1/\sigma$; the standard deviation itself and the relative standard deviation; as well as the standard error and the confidence interval (or its half-width, the margin of error).
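A small numpy sketch of the matrix and scalar cases (the example covariance values are made up for illustration):

```python
import numpy as np

# Covariance (dispersion) matrix of a bivariate distribution.
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Precision (concentration) matrix is its matrix inverse.
P = np.linalg.inv(Sigma)
print(P)

# Univariate case: precision degenerates to the reciprocal of the variance.
sigma2 = 2.0
print(1.0 / sigma2)  # 0.5
```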
Wilks' theorem
In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis tests) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the exact probability distribution can be very difficult to determine.
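A minimal sketch of the theorem in use, assuming i.i.d. Poisson data and a point null on the rate (the data and the hypothesized value are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Likelihood-ratio test of H0: lambda = 1 against a freely estimated Poisson rate.
rng = np.random.default_rng(0)
x = rng.poisson(lam=1.3, size=200)

lam0 = 1.0          # rate fixed under the null
lam_hat = x.mean()  # maximum-likelihood estimate under the alternative

ll = lambda lam: stats.poisson.logpmf(x, lam).sum()
lr = 2 * (ll(lam_hat) - ll(lam0))  # log-likelihood ratio statistic

# Wilks: lr is asymptotically chi-squared with df = 1 (one constrained parameter).
print(lr, stats.chi2.sf(lr, df=1))
```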
Conditional probability distribution
In probability theory and statistics, given two jointly distributed random variables $X$ and $Y$, the conditional probability distribution of $Y$ given $X$ is the probability distribution of $Y$ when $X$ is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value $x$ of $X$ as a parameter. When both $X$ and $Y$ are categorical variables, a conditional probability table is typically used to represent the conditional probability.
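A small sketch of deriving a conditional distribution from a joint table of two categorical variables (the variables, categories, and probabilities are invented for illustration):

```python
# Joint distribution P(weather, activity) as a flat table.
joint = {
    ("sunny", "walk"): 0.30, ("sunny", "stay_in"): 0.10,
    ("rainy", "walk"): 0.05, ("rainy", "stay_in"): 0.55,
}

def conditional(joint, x):
    """Conditional distribution P(Y | X = x): joint entries renormalized by the marginal of x."""
    marginal = sum(p for (xi, _), p in joint.items() if xi == x)
    return {y: p / marginal for (xi, y), p in joint.items() if xi == x}

print(conditional(joint, "rainy"))  # {'walk': 0.0833..., 'stay_in': 0.9166...}
```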
Likelihoodist statistics
Likelihoodist statistics or likelihoodism is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a smaller school than the main approaches of Bayesian statistics and frequentist statistics, but it has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function.
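One way to make "strength of evidence" concrete is a likelihood ratio between two simple hypotheses; a sketch under assumed binomial data (the counts and hypothesized probabilities are illustrative):

```python
from scipy.stats import binom

# Evidence in 7 successes out of 10 trials for H1: p = 0.7 versus H2: p = 0.5.
k, n = 7, 10
L1 = binom.pmf(k, n, 0.7)  # likelihood of H1
L2 = binom.pmf(k, n, 0.5)  # likelihood of H2

# Under the likelihood principle, the data favour H1 over H2 by this factor.
print(L1 / L2)  # ~2.28
```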
Inverse probability
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. Today, the problem of determining an unobserved variable (by whatever method) is called inferential statistics, and the method of inverse probability (assigning a probability distribution to an unobserved variable) is called Bayesian probability. The "distribution" of data given the unobserved variable is instead the likelihood function (which is not a probability distribution), and the distribution of an unobserved variable, given both data and a prior distribution, is the posterior distribution.
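A minimal sketch of these pieces fitting together, assuming a beta-binomial model (the prior and data are illustrative, not from the source):

```python
from scipy.stats import beta

# Unobserved variable: a success probability theta. Prior: Beta(2, 2).
# Data: 7 successes in 10 trials; the binomial pmf viewed as a function of
# theta is the likelihood, which is not itself a probability distribution.
# Conjugacy gives the posterior Beta(2 + 7, 2 + 3) for theta given the data.
posterior = beta(2 + 7, 2 + 3)
print(posterior.mean())  # ~0.643
```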
Nuisance parameter
In statistics, a nuisance parameter is any parameter which is unspecified but which must be accounted for in hypothesis testing of the parameters which are of interest. The classic example of a nuisance parameter comes from the normal distribution, a member of the location–scale family. In the case of one or more normal distributions, the variance(s) $\sigma^2$ are often neither specified nor known, but one wishes to test hypotheses about the mean(s).
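A sketch of the classic case, assuming simulated normal data: the one-sample t-test tests the mean while treating the unknown variance as a nuisance parameter (the data and hypothesized mean are illustrative):

```python
import numpy as np
from scipy import stats

# Test H0: mu = 0 when the variance sigma^2 is an unknown nuisance parameter.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=2.0, size=50)

# Studentizing eliminates sigma^2: the t statistic's null distribution
# does not depend on the unknown variance.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
print(t_stat, p_value)
```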
Score test
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the log-likelihood function, known as the score, evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic $\chi^2$-distribution under the null hypothesis, as first proved by C. R. Rao in 1948.
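A minimal sketch of a score test, assuming i.i.d. Poisson data and a point null on the rate (the data and null value are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Score test of H0: lambda = 1 for i.i.d. Poisson observations.
rng = np.random.default_rng(2)
x = rng.poisson(lam=1.2, size=100)
n, lam0 = x.size, 1.0

score = x.sum() / lam0 - n  # gradient of the log-likelihood at lam0
fisher_info = n / lam0      # expected Fisher information at lam0

# Under H0 the statistic is asymptotically chi-squared with df = 1.
stat = score**2 / fisher_info
print(stat, stats.chi2.sf(stat, df=1))
```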
Poisson regression
In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables.
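A sketch of fitting such a model with statsmodels, one common tool for GLMs (the simulated data and true coefficients are illustrative assumptions):

```python
import numpy as np
import statsmodels.api as sm

# Poisson regression: log E[Y] = b0 + b1 * x, fitted by maximum likelihood.
rng = np.random.default_rng(3)
x = rng.uniform(0, 2, size=200)
y = rng.poisson(lam=np.exp(0.5 + 0.8 * x))  # true b0 = 0.5, b1 = 0.8

X = sm.add_constant(x)
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)  # estimates close to [0.5, 0.8]
```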
Extremum estimator
In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function, which depends on the data. The general theory of extremum estimators was developed by Amemiya (1985). An estimator $\hat\theta$ is called an extremum estimator if there is an objective function $\hat{Q}_n(\theta)$ such that $\hat\theta = \underset{\theta \in \Theta}{\operatorname{arg\,max}}\ \hat{Q}_n(\theta)$, where $\Theta$ is the parameter space. Sometimes a slightly weaker definition is given: $\hat{Q}_n(\hat\theta) \geq \sup_{\theta \in \Theta} \hat{Q}_n(\theta) - o_p(1)$, where $o_p(1)$ denotes a term converging in probability to zero.
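A minimal sketch, assuming the maximum-likelihood estimator for exponential data as the extremum estimator, with $\hat{Q}_n$ taken as the average log-likelihood (the data and parameter space bounds are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# The MLE is an extremum estimator: it maximizes Q_n(theta), the mean log-likelihood.
rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=500)  # true rate theta = 0.5

def Q_n(theta):
    # Exponential log-density log(theta) - theta * x, averaged over the sample.
    return np.mean(np.log(theta) - theta * x)

# Maximize Q_n over the parameter space Theta = (0, inf) by minimizing -Q_n.
res = minimize_scalar(lambda t: -Q_n(t), bounds=(1e-6, 10.0), method="bounded")
print(res.x)  # close to the closed-form MLE 1 / x.mean() = 0.5
```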