The harmonic mean p-value (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate (this claim has been disputed). It improves on the power of Bonferroni correction by performing combined tests, i.e. by testing whether groups of p-values are statistically significant, like Fisher's method. Unlike Fisher's method, however, it avoids the restrictive assumption that the p-values are independent. Consequently, it controls the false positive rate when tests are dependent, at the expense of less power (i.e. a higher false negative rate) when tests are independent. Besides providing an alternative to approaches such as Bonferroni correction that control the stringent family-wise error rate, it also provides an alternative to the widely used Benjamini-Hochberg (BH) procedure for controlling the less stringent false discovery rate. This is because the power of the HMP to detect significant groups of hypotheses is greater than the power of BH to detect significant individual hypotheses.
There are two versions of the technique: (i) direct interpretation of the HMP as an approximate p-value and (ii) a procedure for transforming the HMP into an asymptotically exact p-value. The approach provides a multilevel test procedure in which the smallest groups of p-values that are statistically significant may be sought.
The weighted harmonic mean of p-values $p_1, \dots, p_L$ is defined as
$$\overset{\circ}{p} = \frac{\sum_{i=1}^{L} w_i}{\sum_{i=1}^{L} w_i / p_i},$$
where $w_1, \dots, w_L$ are weights that must sum to one, i.e. $\sum_{i=1}^{L} w_i = 1$. Equal weights may be chosen, in which case $w_i = 1/L$ and the definition reduces to the ordinary harmonic mean, $\overset{\circ}{p} = L \big/ \sum_{i=1}^{L} 1/p_i$.
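A minimal Python sketch of this definition (the function name and example p-values are illustrative assumptions, not from the source; a full implementation is available in the R package harmonicmeanp):

```python
import numpy as np

def harmonic_mean_pvalue(p, w=None):
    """Weighted harmonic mean of p-values.

    p : p-values from the individual tests.
    w : weights summing to one; equal weights 1/L are used if omitted.
    """
    p = np.asarray(p, dtype=float)
    L = p.size
    w = np.full(L, 1.0 / L) if w is None else np.asarray(w, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must sum to one"
    # HMP = (sum of weights) / (weighted sum of reciprocal p-values)
    return w.sum() / np.sum(w / p)

# Combine five p-values with equal weights
print(harmonic_mean_pvalue([0.03, 0.20, 0.45, 0.006, 0.70]))
```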
In general, interpreting the HMP directly as a p-value is anti-conservative, meaning that the false positive rate is higher than expected. However, as the HMP becomes smaller, under certain assumptions, the discrepancy decreases, so that direct interpretation of significance achieves a false positive rate close to the nominal one for sufficiently small values (e.g. $\overset{\circ}{p}$ below the conventional 0.05 threshold).
Direct interpretation of the HMP is never anti-conservative by more than a bounded factor; the bound depends on the number of tests, $L$, with separate constants for small and large $L$.
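As an illustration of the direct interpretation, the sketch below declares a group R of hypotheses significant when its HMP falls below the group's share of the significance level, $w_R \alpha$, where $w_R$ is the combined weight of the group. The grouping, the $\alpha = 0.05$ level, and the example values are assumptions made for illustration; the asymptotically exact version replaces $\alpha$ with an adjusted level that depends on the total number of tests.

```python
import numpy as np

def subset_hmp_test(p, subset, alpha=0.05, w=None):
    """Direct interpretation of the HMP for a subset of tests.

    Reports the subset as significant when its weighted HMP is at most
    w_R * alpha, where w_R is the total weight of the subset.
    """
    p = np.asarray(p, dtype=float)
    L = p.size
    w = np.full(L, 1.0 / L) if w is None else np.asarray(w, dtype=float)
    idx = np.asarray(subset)
    w_R = w[idx].sum()                      # combined weight of the subset
    hmp_R = w_R / np.sum(w[idx] / p[idx])   # weighted HMP of the subset
    return hmp_R, hmp_R <= w_R * alpha

# Illustrative: is the group formed by the first three p-values significant?
p_values = [0.005, 0.010, 0.020, 0.30, 0.52, 0.61, 0.18, 0.44, 0.73, 0.09]
print(subset_hmp_test(p_values, subset=[0, 1, 2]))
```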
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. The method is named for its use of the Bonferroni inequalities. An extension of the method to confidence intervals was proposed by Olive Jean Dunn. Statistical hypothesis testing is based on rejecting the null hypothesis if the likelihood of the observed data under the null hypothesis is low. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases.
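For comparison, a minimal sketch of the Bonferroni correction (the 0.05 level and the example p-values are illustrative):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject hypothesis i when p_i <= alpha / m, where m is the number of tests."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# With four tests, each p-value is compared against 0.05 / 4 = 0.0125
print(bonferroni_reject([0.001, 0.02, 0.04, 0.30]))
```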
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null).
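For comparison with the group-level HMP test, a sketch of the Benjamini-Hochberg step-up procedure for controlling the FDR at level q (the example values are illustrative):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Sort the p-values, find the largest k with p_(k) <= (k / m) * q,
    and reject the hypotheses with the k smallest p-values.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the step-up condition
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```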