Universal inference enables the construction of confidence intervals and tests without regularity conditions by splitting the data into two parts and appealing to Markov's inequality. Previous investigations have shown that the cost of this generality is a loss of power in regular settings for testing simple hypotheses. The present paper makes three contributions. We first clarify the reasons for the loss of power and use a simple illustrative example to investigate how the split proportion optimizing the power depends on the nominal size of the test. We then show that the presence of nuisance parameters can severely impact the power and suggest a simple asymptotic improvement. Finally, we show that combining many data splits can also sharply diminish power.
Anthony Christopher Davison, Igor Rodionov
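For context, the construction described in the abstract can be illustrated with a short simulation. The sketch below is an illustration under assumed choices (a Gaussian model with known unit variance, the simple null H0: mu = 0, an equal split), not the setting or code of the paper: the mean is estimated on one half of the data, the likelihood ratio is evaluated on the other half, and H0 is rejected when the ratio exceeds 1/alpha, which controls the size via Markov's inequality.

```python
# Minimal sketch of the split likelihood-ratio test behind universal inference,
# for an assumed toy setting H0: mu = 0 with N(mu, 1) data; not the authors' code.
import numpy as np
from scipy.stats import norm

def split_lrt_reject(y, mu0=0.0, split=0.5, alpha=0.05, seed=None):
    """Return True if H0: mu = mu0 is rejected at level alpha."""
    rng = np.random.default_rng(seed)
    y = rng.permutation(y)                      # random split of the sample
    n1 = int(split * len(y))
    y1, y0 = y[:n1], y[n1:]                     # D1: estimation half, D0: evaluation half
    mu_hat = y1.mean()                          # unrestricted estimate from D1
    # Split likelihood ratio on D0: numerator at mu_hat, denominator at mu0.
    log_T = norm.logpdf(y0, loc=mu_hat).sum() - norm.logpdf(y0, loc=mu0).sum()
    # Under H0, E[T] <= 1, so Markov's inequality gives P(T >= 1/alpha) <= alpha.
    return log_T >= np.log(1.0 / alpha)

# Monte Carlo check of the size under H0 (expect a rejection rate well below alpha).
rng = np.random.default_rng(0)
rejections = [split_lrt_reject(rng.normal(0.0, 1.0, size=100), seed=b)
              for b in range(2000)]
print("empirical size:", np.mean(rejections))
```

The empirical size from the Monte Carlo check typically falls well below the nominal level, reflecting the conservativeness, and hence the loss of power, that the abstract discusses.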