When many hypotheses are tested simultaneously, controlling the number of false rejections is often the principal consideration. In practice this is unsatisfactory, since we must also keep in mind the power to detect true effects. At present, practitioners' preferred choices are often restricted to the family-wise error rate (FWER) and the false discovery rate (FDR). We recently introduced the scaled multiple testing error rates, a family that includes most existing error rates and bridges the gap between the FWER and the FDR. For example, the scaled false discovery rate (SFDR) limits the number of false positives (FP) relative to an arbitrary increasing function s of the number of rejections R, by bounding E(FP/s(R)). We compare the performance of different choices of the scaling function s and discuss the optimality of these error rates in some practical scenarios by considering the number of false positives FP separately from the number of true positives TP.
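The quantity E(FP/s(R)) can be illustrated with a small Monte Carlo sketch. The code below is a hypothetical example, not the authors' implementation: it uses a simple fixed-threshold rejection rule and a Beta(0.1, 1) stand-in for alternative p-values, and estimates the scaled error rate for two choices of s — s(R) = max(R, 1), which recovers the FDR, and s(R) = 1, which recovers the expected number of false positives, a FWER-style quantity.

```python
import numpy as np

def scaled_error(s, alpha=0.05, m=100, m0=80, reps=4000, seed=0):
    """Monte Carlo estimate of E[FP / s(R)] when each of m hypotheses
    is rejected whenever its p-value falls below alpha.
    m0 of the hypotheses are true nulls."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(reps):
        # True nulls: p ~ Uniform(0, 1).
        p_null = rng.uniform(size=m0)
        # Alternatives: p ~ Beta(0.1, 1), a common stand-in for
        # p-values concentrated near zero (an assumption for this sketch).
        p_alt = rng.beta(0.1, 1.0, size=m - m0)
        fp = np.sum(p_null < alpha)        # false positives
        r = fp + np.sum(p_alt < alpha)     # total rejections
        vals.append(fp / s(r))
    return float(np.mean(vals))

# s(R) = max(R, 1): FP/R among rejections, i.e. an FDR-type rate.
fdr_like = scaled_error(lambda r: max(r, 1))
# s(R) = 1: expected count of false positives, a FWER-style quantity.
pfer_like = scaled_error(lambda r: 1.0)
```

Intermediate choices of s (for example s(R) = sqrt(R)) interpolate between these two extremes, which is the gap-bridging role of the scaled error rates described above.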
Jean-Philippe Thiran, Erick Jorge Canales Rodriguez, Gabriel Girard, Marco Pizzolato, Alonso Ramirez Manzanares, Juan Luis Villarreal Haro, Alessandro Daducci, Ying-Chia Lin, Sara Sedlar, Caio Seguin, Kenji Marshall, Yang Ji
Jean-Marc Vesin, Adrian Luca, Yann Prudat, Sasan Yazdani, Etienne Pruvot