  • Oral presentation
  • Open Access

Holding onto power: why confidence intervals are not (usually) the best basis for sample size calculations

Trials 2011, 12(Suppl 1):A101

https://doi.org/10.1186/1745-6215-12-S1-A101

Keywords

  • Confidence Interval
  • Primary Outcome
  • Effect Estimate
  • Sample Size Calculation
  • Fundamental Difference

Objectives

It has recently been suggested in a high-profile paper that statistical power is no longer a useful basis for sample size calculations (Bland, BMJ 2009). Instead, it is proposed that the sample size be calculated to achieve a narrow confidence interval for the treatment effect estimate. My objective is to critically appraise this proposal.

Methods

I compare the proposed approach with the traditional power-based approach to sample size calculation, and with the sample size calculations employed for equivalence studies, which are also based on confidence interval width.

Results

With a little simplification, the sample size calculations for the traditional power-based approach, for equivalence studies, and following the new proposal can be shown to be much the same. The single fundamental difference is that the new proposal does not include a multiplier to increase the statistical power beyond 50% (i.e. only a 50:50 chance of detecting a true treatment effect of clinically important magnitude). The attempt to avoid having to define a minimum clinically important difference on a predefined primary outcome is wholly unsuccessful. The calculation of confidence interval width must be based on a particular outcome measure, still requires the size of an unimportant difference to be defined if the confidence interval is to exclude it, and additionally requires a likely true effect of treatment to be defined about which the confidence interval will be centred.
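The relationship described above can be illustrated with a small calculation (a sketch under standard assumptions, not taken from the paper): for a two-arm trial with a continuous outcome, the usual power-based per-arm sample size is n = 2σ²(z₁₋α/₂ + z₁₋β)²/δ², while sizing the trial so that a 95% confidence interval has half-width δ gives n = 2σ²z₁₋α/₂²/δ² — the same formula with the power term z₁₋β set to zero, which corresponds to 50% power.

```python
# Sketch (illustrative, not from the abstract): per-arm sample sizes for a
# two-arm trial with a continuous outcome, showing that a CI-width-based
# calculation equals the power-based one with the power multiplier omitted
# (i.e. 50% power). sigma = outcome SD, delta = minimum clinically
# important difference, alpha = two-sided significance level.
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def n_power(delta, sigma, alpha=0.05, power=0.8):
    """Traditional power-based per-arm sample size."""
    return 2 * sigma**2 * (z(1 - alpha / 2) + z(power))**2 / delta**2

def n_ci_width(delta, sigma, alpha=0.05):
    """Per-arm sample size giving a 95% CI with half-width delta.
    Identical to n_power with power = 0.5, since z(0.5) = 0."""
    return 2 * sigma**2 * z(1 - alpha / 2)**2 / delta**2

print(round(n_power(0.5, 1.0, power=0.8)))   # 63 per arm at 80% power
print(round(n_ci_width(0.5, 1.0)))           # 31 per arm from CI width
print(round(n_power(0.5, 1.0, power=0.5)))   # 31: identical to CI-width figure
```

With σ = 1 and δ = 0.5, the CI-width calculation gives about half the power-based sample size, exactly because the only difference is the absent inflation for power above 50%.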

Conclusions

The proposal to base all sample size calculations on confidence interval width does not avoid the need to pre-define the minimum clinically important difference on particular important outcome measures, and in fact additionally requires that the likely effect of the intervention be specified. Most importantly, the approach does not replace statistical power. Statistical power is simply an inflation of the sample size to give a good chance that a true treatment effect of clinically important magnitude will be detected, even if by chance it is underestimated in the trial data (as it will be, if only slightly, with 50% probability). I conclude that statistical power is not the source of dissatisfaction with sample size calculations, and that there is no real need to replace it as their basis.

Authors’ Affiliations

(1)
School for Social and Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK

Copyright

© Metcalfe; licensee BioMed Central Ltd. 2011

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
