Summary
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the score, evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic χ²-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.

Since function maximization subject to equality constraints is most conveniently done using a Lagrangian expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints: again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier test; that name has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.

The main advantage of the score test over the Wald test and the likelihood-ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point of the parameter space. Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood-ratio test about the alternative hypothesis.

Let L(θ | x) be the likelihood function, which depends on a univariate parameter θ, and let x be the data. The score is defined as

U(θ) = ∂ log L(θ | x) / ∂θ.

The Fisher information is

I(θ) = −E[ ∂² log f(X; θ) / ∂θ² ],

where f is the probability density. The statistic to test H₀: θ = θ₀ is

S(θ₀) = U(θ₀)² / I(θ₀),

which has an asymptotic χ² distribution with one degree of freedom when H₀ is true.
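As a concrete illustration of these formulas, here is a minimal sketch of the score test for a Bernoulli proportion, where U and I have closed forms; the function name score_test_bernoulli and the sample numbers are illustrative assumptions, not taken from the source.

```python
import math

def score_test_bernoulli(k, n, p0):
    """Score test of H0: p = p0 for k successes in n Bernoulli trials.

    Score:       U(p) = k/p - (n - k)/(1 - p)
    Fisher info: I(p) = n / (p * (1 - p))
    Statistic:   S = U(p0)**2 / I(p0), asymptotically chi-squared (1 df) under H0.
    """
    u = k / p0 - (n - k) / (1 - p0)        # score evaluated at the null value
    info = n / (p0 * (1 - p0))             # Fisher information at the null value
    s = u ** 2 / info                      # score (Lagrange multiplier) statistic
    p_value = math.erfc(math.sqrt(s / 2))  # chi^2_1 survival function at s
    return s, p_value

# Hypothetical example: 62 successes in 100 trials, testing H0: p = 0.5.
s, p = score_test_bernoulli(62, 100, 0.5)
print(f"S = {s:.2f}, p-value = {p:.4f}")   # S = 5.76, p ≈ 0.016: reject at the 5% level
```

Note that both U and I are evaluated at the null value p0, so only the restricted model is ever fitted, which is exactly the advantage over the Wald and likelihood-ratio tests described above.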
Related courses (15)
MATH-234(b): Probability and statistics
This course teaches the elementary notions of probability theory and statistics, such as inference, testing, and regression.
MATH-236: Probability and statistics II
Linear statistical methods, analysis of experiments, logistic regression.
Related publications (38)

Generalized Bradley-Terry Models for Score Estimation from Paired Comparisons

Julien René Pierre Fageot, Sadegh Farhadkhani, Oscar Jean Olivier Villemaud, Le Nguyen Hoang

Many applications, e.g. in content recommendation, sports, or recruitment, leverage the comparisons of alternatives to score those alternatives. The classical Bradley-Terry model and its variants have been widely used to do so. The historical model conside ...
AAAI Press, 2024

Transportation-based functional ANOVA and PCA for covariance operators

Victor Panaretos, Yoav Zemel, Valentina Masarotto

We consider the problem of comparing several samples of stochastic processes with respect to their second-order structure, and describing the main modes of variation in this second order structure, if present. These tasks can be seen as an Analysis of Vari ...
Institute of Mathematical Statistics (IMS), 2024
Related concepts (7)
Wald test
In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distributions of Wald tests are generally unknown, they have an asymptotic χ²-distribution under the null hypothesis, a fact that can be used to determine statistical significance.
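For contrast with the score test sketched above, here is a minimal sketch of the corresponding Wald test for a Bernoulli proportion (illustrative names and numbers, not from the source); the key difference is that the variance is estimated at the unrestricted MLE rather than at the null value.

```python
def wald_test_bernoulli(k, n, p0):
    """Wald test of H0: p = p0: the squared distance (p_hat - p0) weighted by
    the precision of the estimate, i.e. the inverse of its estimated variance."""
    p_hat = k / n                          # unrestricted maximum likelihood estimate
    var_hat = p_hat * (1 - p_hat) / n      # variance estimated at p_hat, not at p0
    return (p_hat - p0) ** 2 / var_hat     # asymptotically chi-squared (1 df) under H0

print(wald_test_bernoulli(62, 100, 0.5))   # 6.11, vs. 5.76 for the score test
```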
Informant (statistics)
In statistics, the informant (or score) is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular point of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes in the parameter values. If the log-likelihood function is differentiable over the parameter space, the score will vanish at an interior local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.
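A quick numerical check of this vanishing property, reusing the Bernoulli log-likelihood from the earlier sketch (an assumed example, not from the source): the score is positive below the MLE, zero exactly at p̂ = k/n, and negative above it.

```python
def score(p, k, n):
    # Gradient of the Bernoulli log-likelihood: d/dp [k*log(p) + (n-k)*log(1-p)]
    return k / p - (n - k) / (1 - p)

k, n = 62, 100
for p in (0.5, 0.62, 0.7):
    print(p, score(p, k, n))   # 48.0 below the MLE 0.62, 0.0 at it, ≈ -38.1 above
```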
Pearson's chi-squared test
Pearson's chi-squared test (χ²) is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is the most widely used of many chi-squared tests (e.g., Yates, likelihood ratio, portmanteau test in time series, etc.): statistical procedures whose results are evaluated by reference to the chi-squared distribution. Its properties were first investigated by Karl Pearson in 1900.
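As a concrete sketch (a hypothetical die-fairness example, not from the source), the statistic sums the squared deviations of observed from expected counts, each scaled by the expected count.

```python
def pearson_chi2(observed, expected):
    # Sum of (O - E)^2 / E over all categories.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: are 120 rolls of a die consistent with fairness?
observed = [18, 23, 16, 21, 24, 18]
stat = pearson_chi2(observed, [20] * 6)
print(stat)   # 2.5, compared against chi-squared with 6 - 1 = 5 degrees of freedom
```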