The power or sensitivity of a statistical test is the probability that it correctly rejects the null hypothesis (H0) when it is false. It can equivalently be thought of as the probability of correctly accepting the alternative hypothesis (H1) when it is true – that is, the ability of a test to detect an effect, if the effect actually exists. That is, power = Pr(reject H0 | H1 is true). The power is in general a function of the possible distributions, often determined by a parameter, under the alternative hypothesis. As the power increases, the chance of a Type II error (false negative), referred to as the false negative rate (β), decreases, since the power is equal to 1 − β. A related concept is the Type I error, or "false positive". Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis.
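Both power-analysis uses described above can be illustrated for the simplest case, a two-sided one-sample z-test with known standard deviation. The sketch below (function names are hypothetical, standard library only) computes the power attained at a given sample size, and the minimum sample size needed to reach a target power, from the standardized effect size:

```python
from math import ceil, erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p):
    """Inverse of norm_cdf by bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample z-test: the probability that
    the test statistic falls outside +/- z_{1-alpha/2} when the true
    standardized effect is `effect_size` (Cohen's d)."""
    z_crit = norm_ppf(1.0 - alpha / 2.0)
    shift = effect_size * sqrt(n)  # mean of the statistic under H1
    return norm_cdf(shift - z_crit) + norm_cdf(-shift - z_crit)

def min_sample_size(effect_size, alpha=0.05, power=0.8):
    """Smallest n giving at least the requested power, using the
    closed form n = ((z_{1-alpha/2} + z_power) / d)^2 and ignoring
    the negligible far tail."""
    z_alpha = norm_ppf(1.0 - alpha / 2.0)
    z_power = norm_ppf(power)
    return ceil(((z_alpha + z_power) / effect_size) ** 2)
```

For a medium standardized effect of 0.5 at the conventional α = 0.05 and target power 0.8, `min_sample_size(0.5)` returns 32, and `z_test_power(0.5, 32)` comes out slightly above the 0.8 target, reflecting the rounding of n up to an integer.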