**Factors Influencing Power**

Statistical power depends on a number of factors. Some of these factors may be particular to a specific testing situation, but at a minimum, power nearly always depends on the following three factors:

- the statistical significance criterion used in the test
- the magnitude of the effect of interest in the population
- the sample size used to detect the effect

A **significance criterion** is a statement of how unlikely a result must be, if the null hypothesis is true, to be considered significant. The most commonly used criteria are probabilities of 0.05 (5%, 1 in 20), 0.01 (1%, 1 in 100), and 0.001 (0.1%, 1 in 1000). If the criterion is 0.05, the probability of obtaining a result at least as extreme as the one observed when the null hypothesis is true must be less than 0.05 for the result to be declared significant, and likewise for the stricter criteria. One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05. This increases the chance of rejecting the null hypothesis (i.e. obtaining a statistically significant result) when the null hypothesis is false; that is, it reduces the risk of a Type II error. But it also increases the risk of rejecting the null hypothesis when the null hypothesis is in fact true; that is, it increases the risk of a Type I error.
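The trade-off can be made concrete with a small numerical sketch. The function below is an illustrative assumption, not part of the source: it computes the power of a one-sided one-sample z-test (known variance) for a given standardized effect size and sample size, using only Python's standard library. Raising the significance criterion raises the power.

```python
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a one-sided one-sample z-test (known variance) for a
    standardized effect size (mean shift in units of sigma) with n
    observations: Phi(d * sqrt(n) - z_{1-alpha})."""
    z_crit = NormalDist().inv_cdf(1 - alpha)  # rejection threshold
    return NormalDist().cdf(effect_size * n ** 0.5 - z_crit)

# A less conservative criterion (larger alpha) yields more power
# for the same effect and sample size.
for alpha in (0.001, 0.01, 0.05):
    print(f"alpha = {alpha}: power = {z_test_power(0.3, 50, alpha):.3f}")
```

The same shift of the rejection threshold that makes a true effect easier to detect also makes a spurious rejection under the null more likely, which is exactly the Type I/Type II trade-off described above.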

The **magnitude of the effect** of interest in the population can be quantified in terms of an effect size, where there is greater power to detect larger effects. An effect size can be a direct estimate of the quantity of interest, or it can be a standardized measure that also accounts for the variability in the population. For example, in an analysis comparing outcomes in a treated and control population, the difference of outcome means, Y − X, would be a direct measure of the effect size, whereas (Y − X)/σ, where σ is the common standard deviation of the outcomes in the treated and control groups, would be a standardized effect size. If constructed appropriately, a standardized effect size, along with the sample size, will completely determine the power. An unstandardized (direct) effect size will rarely be sufficient to determine the power, as it does not contain information about the variability in the measurements.
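The standardized measure (Y − X)/σ can be estimated from sample data; a minimal sketch, assuming the pooled sample standard deviation is used as the estimate of the common σ (the function name `cohens_d` and the helper structure are illustrative, not from the source):

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized effect size (Y - X) / sigma, estimating the common
    sigma with the pooled sample standard deviation."""
    ny, nx = len(treated), len(control)
    pooled_var = ((ny - 1) * stdev(treated) ** 2 +
                  (nx - 1) * stdev(control) ** 2) / (ny + nx - 2)
    return (mean(treated) - mean(control)) / pooled_var ** 0.5

# Example: treated group shifted up by one pooled standard deviation.
print(cohens_d([1, 2, 3], [0, 1, 2]))
```

Because this quantity divides the mean difference by the variability, it carries exactly the extra information the unstandardized difference lacks, which is why it (with the sample size) can determine the power.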

The **sample size** determines the amount of sampling error inherent in a test result. Other things being equal, effects are harder to detect in smaller samples. Increasing sample size is often the easiest way to boost the statistical power of a test.

The precision with which the data are measured also influences statistical power. Consequently, power can often be improved by reducing the measurement error in the data. A related concept is to improve the "reliability" of the measure being assessed (as in psychometric reliability).
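One standard way to see the effect of unreliability, sketched below under the classical-test-theory assumption that measurement error attenuates a standardized effect by a factor of √reliability (the function and its parameters are illustrative, not from the source):

```python
from statistics import NormalDist

def attenuated_power(d_true, reliability, n, alpha=0.05):
    """Power of a one-sided one-sample z-test when measurement error
    shrinks the true standardized effect d_true to
    d_true * sqrt(reliability) (classical attenuation)."""
    d_obs = d_true * reliability ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(d_obs * n ** 0.5 - z_crit)

# The same true effect and sample size, measured with decreasing reliability.
for rel in (1.0, 0.8, 0.6):
    print(f"reliability = {rel}: power = {attenuated_power(0.3, rel, 50):.3f}")
```

Improving the instrument (raising the reliability toward 1) thus buys power without collecting any additional observations.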

The design of an experiment or observational study often influences the power. For example, in a two-sample testing situation with a given total sample size *n*, it is optimal to have equal numbers of observations from the two populations being compared (as long as the variances in the two populations are the same). In regression analysis and Analysis of Variance, there is an extensive theory, and practical strategies, for improving the power based on optimally setting the values of the independent variables in the model.
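The claim about equal allocation can be checked numerically. A minimal sketch for a one-sided two-sample z-test with equal known variances (σ = 1 for simplicity; the function name and parameters are assumptions made for illustration):

```python
from statistics import NormalDist

def two_sample_power(d, n1, n2, alpha=0.05):
    """Power of a one-sided two-sample z-test (equal known variances,
    sigma = 1) for standardized effect d with group sizes n1 and n2."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    se = (1 / n1 + 1 / n2) ** 0.5  # std. error of the mean difference
    return NormalDist().cdf(d / se - z_crit)

# Same total sample size n = 100, balanced vs unbalanced allocation.
print(f"50/50: {two_sample_power(0.5, 50, 50):.3f}")
print(f"80/20: {two_sample_power(0.5, 80, 20):.3f}")
```

The balanced split minimizes 1/n1 + 1/n2 for a fixed total, so it minimizes the standard error of the mean difference and therefore maximizes the power, exactly as stated above for the equal-variance case.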
