The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald and later proven to be optimal by Wald and Jacob Wolfowitz. Neyman and Pearson's 1933 result inspired Wald to reformulate it as a sequential analysis problem. The Neyman–Pearson lemma, by contrast, applies to the fixed-sample-size setting: it identifies the most powerful test once all the data have been collected (and the likelihood ratio is known).
Although originally developed for quality control studies in manufacturing, the SPRT has also been adapted as a termination criterion in the computerized testing of human examinees.
As in classical hypothesis testing, SPRT starts with a pair of hypotheses, say $H_0$ and $H_1$ for the null hypothesis and alternative hypothesis respectively. They must be specified as follows:

$H_0: \theta = \theta_0$
$H_1: \theta = \theta_1$
The next step is to calculate the cumulative sum of the log-likelihood ratio, $\log \Lambda_i$, as new data arrive: with $S_0 = 0$, then, for $i = 1, 2, \ldots$,

$S_i = S_{i-1} + \log \Lambda_i$

where $\Lambda_i = f_{\theta_1}(x_i) / f_{\theta_0}(x_i)$ is the likelihood ratio of the $i$-th observation under $H_1$ versus $H_0$.
The stopping rule is a simple thresholding scheme:
$a < S_i < b$: continue monitoring (critical inequality)
$S_i \geq b$: Accept $H_1$
$S_i \leq a$: Accept $H_0$
where $a$ and $b$ ($a < 0 < b$) depend on the desired type I and type II errors, $\alpha$ and $\beta$. They may be chosen as follows:

$a \approx \log \dfrac{\beta}{1-\alpha}$ and $b \approx \log \dfrac{1-\beta}{\alpha}$
In other words, $\alpha$ and $\beta$ must be decided beforehand in order to set the thresholds appropriately. The numerical values will depend on the application. The reason the thresholds are only approximate is that, in the discrete case, the signal may cross a threshold between samples. Thus, depending on the penalty of making an error and the sampling frequency, one might set the thresholds more aggressively. The exact bounds are correct in the continuous case.
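To make the procedure concrete, here is a minimal Python sketch of the general test described above. It assumes two fully specified models so that the per-observation log-likelihood ratio can be evaluated; the function names (`wald_thresholds`, `sprt`) and the Bernoulli usage at the end are illustrative choices, not part of the original description. With $\alpha = 0.05$ and $\beta = 0.10$, the thresholds evaluate to $a \approx -2.25$ and $b \approx 2.89$.

```python
import math

def wald_thresholds(alpha, beta):
    """Approximate SPRT thresholds from the desired error rates:
    a = log(beta / (1 - alpha))  (lower, accept H0)
    b = log((1 - beta) / alpha)  (upper, accept H1)."""
    return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

def sprt(observations, log_lr, alpha=0.05, beta=0.10):
    """Run the SPRT over an iterable of observations.

    log_lr(x) must return log Lambda(x) = log(f1(x) / f0(x)) for a single
    observation x. Returns (decision, n, S_n) where decision is "H0", "H1",
    or None if the data ran out while a < S_n < b.
    """
    a, b = wald_thresholds(alpha, beta)
    s, n = 0.0, 0                      # S_0 = 0
    for x in observations:
        n += 1
        s += log_lr(x)                 # S_i = S_{i-1} + log Lambda_i
        if s >= b:
            return "H1", n, s          # upper threshold crossed
        if s <= a:
            return "H0", n, s          # lower threshold crossed
    return None, n, s                  # still inside the critical inequality

# Illustrative usage: Bernoulli data, H0: p = 0.5 versus H1: p = 0.7.
p0, p1 = 0.5, 0.7
log_lr = lambda x: math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
print(sprt([1] * 9, log_lr))           # nine successes push S_n past b ~ 2.89, so H1
```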
A textbook example is parameter estimation of a probability distribution function. Consider the exponential distribution:

$f_\theta(x) = \theta^{-1} e^{-x/\theta}, \qquad x, \theta > 0$
The hypotheses are

$H_0: \theta = \theta_0$
$H_1: \theta = \theta_1$

with $\theta_1 > \theta_0$.
Then the log-likelihood function (LLF) for one sample is

$\log \Lambda(x) = \log\left(\dfrac{\theta_1^{-1} e^{-x/\theta_1}}{\theta_0^{-1} e^{-x/\theta_0}}\right) = -\log\left(\dfrac{\theta_1}{\theta_0}\right) + \left(\dfrac{\theta_1 - \theta_0}{\theta_0 \theta_1}\right) x$
The cumulative sum of the LLFs for all $x_i$ is

$S_n = \sum_{i=1}^{n} \log \Lambda(x_i) = -n \log\left(\dfrac{\theta_1}{\theta_0}\right) + \left(\dfrac{\theta_1 - \theta_0}{\theta_0 \theta_1}\right) \sum_{i=1}^{n} x_i$
Accordingly, the stopping rule is:

$a < -n \log\left(\dfrac{\theta_1}{\theta_0}\right) + \left(\dfrac{\theta_1 - \theta_0}{\theta_0 \theta_1}\right) \sum_{i=1}^{n} x_i < b$
After re-arranging we finally find

$a + n \log\left(\dfrac{\theta_1}{\theta_0}\right) < \left(\dfrac{\theta_1 - \theta_0}{\theta_0 \theta_1}\right) \sum_{i=1}^{n} x_i < b + n \log\left(\dfrac{\theta_1}{\theta_0}\right)$
The thresholds are simply two parallel lines with slope $\log\left(\dfrac{\theta_1}{\theta_0}\right)$. Sampling should stop when the sum of the samples makes an excursion outside the continue-sampling region.
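As an illustration of the re-arranged form, here is one possible simulation sketch for the exponential example. The function name `exponential_sprt` and the choice to draw data from the $H_0$ model are assumptions made only for the demonstration, not part of the original example.

```python
import math
import random

def exponential_sprt(theta0, theta1, alpha=0.05, beta=0.10, rng=random):
    """SPRT for H0: theta = theta0 vs H1: theta = theta1 (theta1 > theta0)
    on exponential data, using the re-arranged stopping rule: the running
    sum of the samples is compared against two parallel linear thresholds."""
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    slope = math.log(theta1 / theta0)              # common slope of both lines
    scale = (theta1 - theta0) / (theta0 * theta1)  # coefficient of sum(x_i)

    total, n = 0.0, 0
    while True:
        n += 1
        total += rng.expovariate(1.0 / theta0)     # simulate under H0 (mean theta0)
        lower = (a + n * slope) / scale            # accept H0 at or below this line
        upper = (b + n * slope) / scale            # accept H1 at or above this line
        if total <= lower:
            return "H0", n
        if total >= upper:
            return "H1", n

print(exponential_sprt(theta0=1.0, theta1=2.0))    # typically accepts H0 after a short run
```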
In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, at consequently lower financial and/or human cost.