Publication

Advantages of Variance Stabilization

Abstract

Variance stabilization is a simple device for normalizing a statistic. Even though its large-sample properties are similar to those of studentizing, many simulation studies of confidence interval procedures show that variance stabilization works better for small samples. We investigate this question in the context of testing a null hypothesis involving a single parameter. We provide support for a measure of evidence for an alternative hypothesis that is simple to compute, calibrate, and interpret. It has applications in most routine problems in statistics, and leads to more accurate confidence intervals, power estimates, and hence sample size calculations than standard asymptotic methods. Such evidence is readily combined when obtained from different studies. Connections to other approaches to statistical evidence are described, with a notable link to the symmetrized Kullback–Leibler divergence.
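To make the contrast between studentizing and variance stabilization concrete, the sketch below simulates small-sample coverage for a binomial proportion, comparing the Wald (studentized) interval with an interval built on the arcsine-square-root transform, a standard variance-stabilizing transformation with approximate variance 1/(4n). This is a generic illustration of the idea under assumed values n = 20 and p = 0.15; it is not the specific evidence measure or procedure proposed in the paper.

# Minimal sketch (assumptions noted above): coverage of a Wald interval vs. a
# variance-stabilized (arcsine) interval for a binomial proportion at small n.
import numpy as np

rng = np.random.default_rng(0)
n, p, z, reps = 20, 0.15, 1.96, 100_000  # assumed illustration settings

x = rng.binomial(n, p, size=reps)
phat = x / n

# Wald (studentized) interval: phat +/- z * sqrt(phat * (1 - phat) / n)
se = np.sqrt(phat * (1 - phat) / n)
wald_cover = (phat - z * se <= p) & (p <= phat + z * se)

# Variance-stabilized interval: transform to arcsin(sqrt(phat)), whose standard
# error is approximately 1 / (2 * sqrt(n)); form a symmetric interval on that
# scale, then back-transform with sin^2.
t = np.arcsin(np.sqrt(phat))
lo = np.sin(np.clip(t - z / (2 * np.sqrt(n)), 0, np.pi / 2)) ** 2
hi = np.sin(np.clip(t + z / (2 * np.sqrt(n)), 0, np.pi / 2)) ** 2
vst_cover = (lo <= p) & (p <= hi)

print(f"Wald coverage:                {wald_cover.mean():.3f}")
print(f"Variance-stabilized coverage: {vst_cover.mean():.3f}")

In this kind of experiment the Wald interval tends to undercover for small n, while the interval formed on the stabilized scale is typically closer to the nominal level, which is the behaviour the abstract refers to.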
