Sample size estimation for randomised controlled trials with repeated assessment of patient-reported outcomes: what correlation between baseline and follow-up outcomes should we assume?
Trials volume 20, Article number: 566 (2019)
A Correction to this article has been published in Trials 2019;20:611
Abstract
Background
Patient-reported outcome measures (PROMs) are now frequently used in randomised controlled trials (RCTs) as primary endpoints. RCTs are longitudinal, and many have a baseline (PRE) assessment of the outcome and one or more post-randomisation assessments of outcome (POST). With such pre-test post-test RCT designs there are several ways of estimating the sample size and analysing the outcome data: analysis of post-randomisation treatment means (POST); analysis of mean changes from pre- to post-randomisation (CHANGE); analysis of covariance (ANCOVA).
Sample size estimation using the CHANGE and ANCOVA methods requires specification of the correlation between the baseline and follow-up measurements. With the other parameters in the sample size calculation unchanged, an assumed correlation of 0.70 between baseline and follow-up outcomes means that, at the design stage, an ANCOVA method requires about half the sample size of a comparison of POST treatment means. So what correlation between baseline and follow-up outcomes should be assumed and used in the sample size calculation? The aim of this paper is to estimate the correlations between baseline and follow-up PROMs in RCTs.
Methods
The Pearson correlation coefficients between the baseline and repeated PROM assessments from 20 RCTs (with 7173 participants at baseline) were calculated and summarised.
Results
The 20 reviewed RCTs had sample sizes, at baseline, ranging from 49 to 2659 participants. The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months; 464 correlations between baseline and follow-up were estimated; the mean correlation was 0.50 (median 0.51; standard deviation 0.15; range −0.13 to 0.91).
Conclusions
There is a general consistency in the correlations between the repeated PROMs, with the majority being in the range of 0.4 to 0.6. The implications are that we can reduce the sample size in an RCT by 25% if we use an ANCOVA model, with a correlation of 0.50, for the design and analysis. There is a decline in correlation amongst more distant pairs of time points.
Background
Patient-reported outcome measures (PROMs) are now frequently used in randomised controlled trials (RCTs) as primary endpoints. All RCTs are longitudinal, and many have a baseline, or pre-randomisation (PRE), assessment of the outcome and one or more post-randomisation assessments of outcome (POST).
For such pre-test post-test RCT designs, using a continuous primary outcome, the sample size estimation and the analysis of the outcome can be done using one of the following methods:

1. Analysis of post-randomisation treatment means (POST)
2. Analysis of mean changes from pre- to post-randomisation (CHANGE)
3. Analysis of covariance (ANCOVA)
For brevity (and following Frison and Pocock’s nomenclature [1]), these methods will be referred to as POST, CHANGE and ANCOVA respectively.
Sample size calculations are now mandatory for many research protocols and are required to justify the size of clinical trials in papers before they will be accepted for publication by journals [2]. Thus, when an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a predetermined difference (effect size) in the outcome variable, when the intervention is actually effective, at a given level of significance. Sample size is critically dependent on the type of summary measure, the proposed effect size and the method of calculating the test statistic [3]. For example, for a given power and significance level, the sample size is inversely proportional to the square of the effect size, so halving the effect size will quadruple the sample size. For simplicity, this paper will assume that we are interested in comparing the effectiveness (or superiority) of a new treatment compared to a standard treatment, at a single point in time post-randomisation.
Sample size
In a two-group study with a Normally distributed outcome, comparing post-randomisation mean outcomes between two groups, the number of subjects per group n_{POST} (assuming equal sample sizes and equal standard deviations (SDs) per group) for a two-sided significance level α and power 1 − β is [4]:

\( {n}_{POST}=\frac{2{\sigma}^2{\left({Z}_{1-\alpha /2}+{Z}_{1-\beta}\right)}^2}{\delta^2} \)

where:
δ is the target or anticipated difference in mean outcomes between the two groups
σ is the SD of the outcome post-randomisation (which is assumed to be the same in both groups)
Z_{1 – α/2} and Z_{1 – β} are the appropriate values from the standard normal distribution for the 100 (1 – α/2) and 100 (1 – β) percentiles respectively.
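As an illustrative sketch (not code from the original paper), the POST calculation can be carried out directly; Python's `statistics.NormalDist` supplies the Normal percentiles, and rounding up to the next whole participant is my convention:

```python
from math import ceil
from statistics import NormalDist

def n_post(delta, sigma, alpha=0.05, power=0.90):
    """Per-group sample size for comparing post-randomisation means (POST)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # Z_{1 - alpha/2}
    z_beta = NormalDist().inv_cdf(power)           # Z_{1 - beta}
    return ceil(2 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2)

# SELF trial's original calculation: delta = 10 points, sigma = 24, 80% power
print(n_post(10, 24, power=0.80))  # 91 per group, as reported for SELF
```

The inverse-square dependence on the effect size is also visible here: halving delta (with sigma fixed) roughly quadruples the result.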
Consider a two-group study with a Normally distributed outcome, with a single baseline and single post-randomisation assessment of outcomes. Comparing mean outcomes between two groups, adjusted for the baseline or pre-randomisation value of the outcome, using an ANCOVA model, the number of subjects per group n_{ANCOVA} (assuming equal sample sizes and equal SDs, at baseline and post-randomisation, per group) for a two-sided significance level α and power 1 − β is:

\( {n}_{ANCOVA}=\frac{2{\sigma}^2\left(1-{\rho}^2\right){\left({Z}_{1-\alpha /2}+{Z}_{1-\beta}\right)}^2}{\delta^2} \)
Here, ρ denotes the correlation between the baseline and post-randomisation outcomes and σ is the post-randomisation SD, which is assumed to be the same as the baseline SD [1, 5]. Machin et al. [5] refer to the (1 − ρ^{2}) term as the 'design effect' (DE).
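As an illustrative sketch (not code from the paper), the ANCOVA per-group sample size, with (1 − ρ²) entering as the design effect, can be computed with Python's `statistics.NormalDist`; rounding up to the next whole participant is my convention:

```python
from math import ceil
from statistics import NormalDist

def n_ancova(delta, sigma, rho, alpha=0.05, power=0.90):
    """Per-group sample size for a baseline-adjusted (ANCOVA) comparison.

    (1 - rho**2) is the design effect: the factor by which the POST
    sample size shrinks when the baseline-outcome correlation is rho.
    """
    z_sum = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (1 - rho ** 2) * (sigma / delta) ** 2 * z_sum ** 2)

# With rho = 0.70 the design effect is 1 - 0.49 = 0.51, roughly halving
# the sample size relative to rho = 0 (which reproduces the POST formula).
print(n_ancova(0.5, 1.0, rho=0.70))  # 43 per group
print(n_ancova(0.5, 1.0, rho=0.0))   # 85 per group
```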
In a two-group study with a Normally distributed outcome, comparing the mean change in outcomes (i.e. post-randomisation outcome − baseline) between two groups, the number of subjects per group n_{CHANGE} (assuming equal sample sizes and equal SDs, at baseline and post-randomisation, per group) for a two-sided significance level α and power 1 − β is:

\( {n}_{CHANGE}=\frac{4{\sigma}^2\left(1-\rho \right){\left({Z}_{1-\alpha /2}+{Z}_{1-\beta}\right)}^2}{\delta_c^2} \)
Here, δ_{c} is the target or anticipated difference in mean change in outcomes between the two groups and σ is the post-randomisation SD, which is assumed to be the same as the baseline SD. If the expected mean values of the baseline outcomes are the same in both groups, which is likely in an RCT, then δ_{c} is the same as δ.
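A corresponding sketch for the CHANGE calculation (again illustrative, not from the paper), where the factor 2(1 − ρ) plays the role that (1 − ρ²) plays for ANCOVA:

```python
from math import ceil
from statistics import NormalDist

def n_change(delta_c, sigma, rho, alpha=0.05, power=0.90):
    """Per-group sample size for comparing mean changes from baseline.

    The variance of a change score is 2 * sigma**2 * (1 - rho), so the
    effective design effect relative to POST is 2 * (1 - rho).
    """
    z_sum = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(4 * (1 - rho) * (sigma / delta_c) ** 2 * z_sum ** 2)

# At rho = 0.5 the design effect 2(1 - rho) equals 1, so CHANGE needs the
# same sample size as POST; above 0.5 it needs fewer, below 0.5 it needs more.
print(n_change(0.5, 1.0, rho=0.5))  # 85 per group
print(n_change(0.5, 1.0, rho=0.8))  # 34 per group
```

This makes concrete the pattern in Figure 1: CHANGE only beats POST when the correlation exceeds 0.5, whereas the ANCOVA design effect (1 − ρ²) never exceeds 1.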
Figure 1 shows the relationship between the total sample size and the correlation between the baseline and post-randomisation outcomes, for the three methods of sample size estimation (POST, CHANGE and ANCOVA) with a 5% two-sided significance level, 90% power, a target difference (a difference in post-treatment means or a difference in mean changes) of 0.50 and an SD of 1.0. Figure 1 shows that the total sample size is constant for POST irrespective of the baseline to post-randomisation follow-up correlation; that the sample size declines as the correlation increases for ANCOVA and CHANGE; and that for correlations above 0.5 the sample size for ANCOVA is always the lowest and is less than or equal to the sample size for CHANGE.
Example
The SELF study [6] was a multicentre, pragmatic, unblinded, parallel-group randomised controlled superiority trial designed to evaluate the clinical effectiveness of a self-managed single exercise programme versus usual physiotherapy treatment for rotator cuff tendinopathy (pain or weakness in the shoulder muscles). The intervention was a programme of self-managed exercise prescribed by a physiotherapist in relation to the most symptomatic shoulder movement. The control group received usual physiotherapy treatment. The primary outcome measure was the total score on the Shoulder Pain and Disability Index (SPADI) at 3 months post-randomisation. The SPADI score ranges from 0, the best outcome (less disability), to 100, the worst (greater disability).
The original sample size calculation for the SELF trial assumed that a 10-point difference in the mean 3-month post-randomisation SPADI scores between the intervention and control groups would be regarded as a minimum clinical important difference (MCID). It assumed an SD of 24 points, a power of 80% and a (two-sided) significance level of 5%, meaning that, using the POST sample size formula, 91 participants per group were required (182 in total). However, in light of new information from an external pilot study, the investigators undertook a sample size re-estimation (SSR) calculation, which was approved by the ethics committee. The new information was a narrower estimate of the population SD, of 16.8 points on the SPADI, from an external pilot RCT (n = 24) and, additionally, a correlation of 0.5 between baseline and 3-month SPADI scores. Using the ANCOVA sample size formula, with an SD of 17 points, a correlation of 0.50 between baseline and 3-month SPADI scores, 80% power, 5% two-sided significance and an MCID (as before) of 10 points, it was estimated that 34 participants per group were required (68 in total). This contrasts with a sample size of 45 per group using the POST means formula with the revised SD of 17 points. Thus, with a correlation of 0.50 between baseline and follow-up, using the ANCOVA method for sample size estimation reduces the sample size by approximately 25% (i.e. by a factor of 1 − 0.5^{2}) compared to the POST treatment means method.
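The SELF re-estimation can be reproduced numerically (a sketch under the stated inputs; `n_pergroup` is a hypothetical helper, and the unrounded values are shown so that the trial's reported 45 and 34 per group follow on rounding to the nearest whole participant):

```python
from statistics import NormalDist

def n_pergroup(delta, sigma, design_effect=1.0, alpha=0.05, power=0.80):
    """Unrounded per-group sample size.

    design_effect = 1 reproduces the POST formula;
    design_effect = 1 - rho**2 gives the ANCOVA formula.
    """
    z_sum = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * design_effect * (sigma / delta) ** 2 * z_sum ** 2

# SELF re-estimation: MCID = 10 points, revised SD = 17 points, 80% power
post = n_pergroup(10, 17)                                # about 45.4
ancova = n_pergroup(10, 17, design_effect=1 - 0.5 ** 2)  # about 34.0
print(round(post), round(ancova))  # 45 34
```

The ratio ancova/post is exactly the design effect 0.75, i.e. the 25% reduction quoted in the text.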
Should the method of sample size estimation mirror the proposed method of statistical analysis of the outcome data? That is, if an ANCOVA model is likely to be used in the statistical analysis of the collected outcome data, should an ANCOVA method that allows for the correlation also be used in the sample size estimation? And if so, what correlation between baseline and follow-up outcomes should be assumed? With the other factors/parameters in the sample size estimation unchanged, an assumed correlation of 0.70 between baseline and follow-up outcomes means that we can halve the required sample size at the study design stage if we use an ANCOVA method rather than a comparison of POST treatment means. It is, however, paramount to assess how realistic a correlation of 0.50 or 0.70 between baseline and post-randomisation outcomes is, and to make evidence-based assumptions about these values, as an overestimated correlation could result in an underpowered study. The aim of this paper is to estimate the observed correlations between baseline and post-randomisation follow-up PROMs from a number of RCTs, bridging a gap in the evidence.
Methods
Data sources
This was a secondary analysis of RCTs with continuous patient-reported outcomes (both primary and secondary) undertaken in the School of Health and Related Research (ScHARR) at the University of Sheffield and published between 1998 and 2019. Secondary ethics approval was gained through the University of Sheffield ScHARR Ethics Committee (Reference 024041).
Statistical analysis
For each included trial, the correlation between baseline and post-randomisation outcomes was calculated using the Pearson correlation coefficient [7]. Given a set of n pairs of observations (x_{1}, y_{1}), (x_{2}, y_{2}), …, (x_{n}, y_{n}), with means \( \overline{x} \) and \( \overline{y} \) respectively, the Pearson correlation coefficient r is given by:

\( r=\frac{\sum_{i=1}^n\left({x}_i-\overline{x}\right)\left({y}_i-\overline{y}\right)}{\sqrt{\sum_{i=1}^n{\left({x}_i-\overline{x}\right)}^2\sum_{i=1}^n{\left({y}_i-\overline{y}\right)}^2}} \)

with a standard error \( SE(r)=\sqrt{\frac{1-{r}^2}{n-2}} \).
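A minimal implementation of r and this standard error (the paired data in the usage line are hypothetical, purely for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r and its standard error
    sqrt((1 - r**2) / (n - 2)) for n paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    r = sxy / math.sqrt(sxx * syy)
    return r, math.sqrt((1 - r ** 2) / (n - 2))

# Hypothetical baseline and follow-up scores for four participants
r, se = pearson_r([10, 20, 35, 50], [22, 38, 65, 96])
```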
A variety of summary statistics for the baseline to post-randomisation correlations were calculated, including (1) the unweighted sample mean and median; (2) a weighted sample mean, using the fixed-effect inverse variance method [4]; and (3) a sample mean with allowance for clustering by trial, derived from a multilevel mixed-effects linear model with a random effect for the trial, using restricted maximum likelihood (REML) estimation [8]. The correlations were calculated overall and then split by trial, outcome and time point.
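The fixed-effect inverse-variance weighting can be sketched as follows. This is an assumption-laden illustration, not the paper's code: it takes the sampling variance of each correlation to be (1 − r²)/(n − 2), i.e. the square of the standard error given above; the exact scheme in [4] may differ (e.g. it might weight on the Fisher z scale).

```python
def inv_var_weighted_mean(rs, ns):
    """Fixed-effect inverse-variance mean of correlations rs with
    sample sizes ns, weighting each r by 1 / var(r) where
    var(r) = (1 - r**2) / (n - 2)."""
    weights = [(n - 2) / (1 - r ** 2) for r, n in zip(rs, ns)]
    return sum(w * r for w, r in zip(weights, rs)) / sum(weights)

# Hypothetical correlations from three trials of very different sizes:
# the estimate from the largest trial dominates the weighted mean.
print(inv_var_weighted_mean([0.45, 0.52, 0.58], [49, 300, 2659]))
```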
Results
Trials
Table 1 shows a summary of the 20 RCTs included in the analysis. Various outcome measures were used in the trials for both the primary and secondary outcomes. Table 2 provides a brief description of the outcome measures and how they were scaled. Three of the outcome measures, the Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM), Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-31) and SPADI, have a total score and various subscales: all were included in the analysis. The 20 included RCTs had sample sizes (at baseline) ranging from 49 to 2659 participants. The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months. The maximum sample size for the baseline to follow-up correlations ranged from 39 to 2659 participants. Four hundred and sixty-four correlations between baseline and follow-up were estimated in the 20 trials. Table 1 shows, for example, that the Leg Ulcer trial (Trial 1) had 9 outcomes, all assessed at 2 post-randomisation time points (3 and 12 months), giving a total of 2 × 9 = 18 correlations. The median number of outcomes per trial was 9 and ranged from 1 (in the 3Mg trial) to 15 (AIMHigh, PLINY and IPSU). The median number of correlations calculated per trial was 16.5 and ranged from 1 (in the 3Mg trial) to 65 (in the DiPALS trial). The median number of post-randomisation follow-up time points across the 20 trials was 2.5 and ranged from 1 to 6.
Correlation
Figure 2 shows a histogram of the 464 estimated baseline to follow-up correlations. The histogram is reasonably symmetrical, and the overall mean correlation was 0.50 (median 0.51). The baseline to follow-up correlations ranged from −0.13 to 0.91, with an interquartile range of 0.41 to 0.60. Because the sample sizes of the trials varied from 49 to 2659 participants, a weighted estimate of the mean correlation was also calculated using the inverse variance method; it was 0.51. Because the 464 correlation estimates came from 20 trials, and the correlations were therefore nested or clustered within trials, the mean correlation was also estimated after allowing for clustering by trial, using a multilevel mixed-effects linear regression model (with a random effect or intercept for the trial); it was 0.49 (95% confidence interval [CI] 0.45 to 0.53). These alternative summary estimates were very similar to the simple unweighted mean value of 0.50.
Table 3 shows the baseline to post-randomisation follow-up correlations aggregated by trial. The largest average correlation per trial (mean 0.67) was observed in the PLINY trial; the lowest average correlations were observed in the POLAR trial. The trial with the widest range of correlations was the PRACTICE trial. Figure 3 shows a box and whisker plot of how the observed baseline to follow-up correlations varied across the 20 RCTs, along with the overall median correlation. There was considerable inter-trial variation in the correlations, and it should be noted that some of the trials had six or fewer estimated baseline to follow-up correlations (3Mg [N = 1 outcome and correlation], BEADS [N = 3], Homeopathy [N = 5] and PRACTICE [N = 6]).
The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months. Table 4 shows the baseline to post-randomisation follow-up correlations by follow-up time point. Figure 4 shows a scatter plot of the baseline to follow-up correlations by post-randomisation follow-up time point for the 464 correlations from the 20 trials. Although it is not obvious from the scatter plot, a multilevel mixed-effects linear regression model (with a random intercept for the trial) suggests a small decline in the baseline to post-randomisation follow-up correlations the further apart the time points are. The estimated regression coefficient from the model was −0.003 (95% CI −0.006 to −0.001; P = 0.005). This implies that for every 1-month increase in the time from baseline to the post-randomisation follow-up, the correlation declines by 0.003 points. Figures 5 and 6 show how the correlations change over time for the Short Form Health Survey (SF-36) outcomes (282 correlations and 12 trials) and the EuroQol five-dimension scale (EQ-5D) Utility score outcome (29 correlations and 12 trials). A similar pattern to the overall pattern is observed for these specific outcomes, with a small decline (0.003 per month for the SF-36 and 0.002 for the EQ-5D) in baseline to follow-up correlations over time.
Table 5 shows the baseline to post-randomisation correlations by outcome. The SF-36 was the most popular outcome, used in 12 of the 20 trials. The correlations for SF-36 outcomes and its various dimensions (12 trials and n = 282 correlations) had a mean of 0.51 (median 0.53) and ranged from 0.06 to 0.91. The second most popular outcome was the EQ-5D, which was also used in 12 of the trials. Correlations for EQ-5D outcomes only (12 trials and n = 50 correlations) had a mean of 0.49 (median 0.51) and ranged from −0.13 to 0.87. Three of the outcome measures in Table 5, the CORE-OM, PISQ-31 and SPADI, have a total score and various subscales. There was no clear pattern in the correlations and no reliable evidence that the total scale score correlated more highly than an individual subscale score.
Discussion
The 20 reviewed RCTs had sample sizes, at baseline, ranging from 49 to 2659 participants. The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months; 464 correlations between baseline and follow-up were estimated; the mean correlation was 0.50 (median 0.51; SD 0.15; range −0.13 to 0.91).
The 20 RCTs included in this study were a convenience sample of trials and data and may not be representative of the population of all trials with PROMs. However, they include a wide range of populations and disease areas, a variety of different interventions, and outcomes that are not untypical of other published trials. We also reviewed detailed reports of 181 RCTs published in the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) journal from 2004 to the end of July 2017 and found 11 NIHR HTA reports (and 12 outcomes) that had a sample size calculation based on the ANCOVA model [50]. For these 12 outcomes the mean baseline to follow-up correlation that was assumed and used in the subsequent sample size calculation was 0.49 (SD 0.09) and ranged from 0.31 to 0.60. Thus, our results, with a mean correlation of 0.50, are consistent with correlations used and published in the NIHR HTA journal.
We observed a small decline in baseline to follow-up correlations over time of −0.003 per month. That is, for every 1-month increase in the time from baseline to the post-randomisation follow-up, the correlation declines by 0.003 points. Frison and Pocock [1] also report a slight decline in correlation amongst more distant pairs of time points post-randomisation, with an estimated slope of −0.009 per month apart. So our results are also consistent with a slight decline.
It is important to make maximum use of the information available from other related studies or extrapolation from other unrelated studies. The more precise the information, the better we can design the trial. We would recommend that researchers planning a study with PROMs as the primary outcome pay careful attention to any evidence on the validity and frequency distribution of the PROM and its dimensions.
Strictly speaking, our results and conclusions only apply to the study population and the outcome measures used in the 20 RCTs. Further empirical work is required to see whether these results hold true for other outcomes, populations and interventions. However, the PROMs in this paper share many features in common with other PROM outcomes, i.e. multidimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions; therefore, we see no theoretical reasons why these results and conclusions may not be appropriate for other PROMs.
Throughout this paper, we only considered the situation where a single dimension of the PROM is used at a single endpoint. Sometimes there is more than one endpoint of interest; PROMs are typically multidimensional (e.g. the SF-36 has eight dimensions). If one of these dimensions is regarded as more important than the others, it can be named as the primary endpoint and the sample size estimates calculated accordingly. The remainder should be consigned to exploratory analyses or descriptions only.
We have also assumed a rather simple form of the alternative hypothesis that the new treatment/intervention would improve patientreported outcomes compared to the control/standard therapy. This form of hypothesis (superiority versus equivalence) may be more complicated than actually presented. However, the assumption of a simple form of the alternative hypothesis—that the new treatment/intervention would improve outcomes compared to the control/standard therapy—is not unrealistic for most superiority trials and is frequently used for other clinical outcomes. Walters gives a more comprehensive discussion of multiple endpoints and suggests several methods for analysing PROMs [4].
Overall, 5 of the 464 observed correlations were small (less than 0.10). Two of these small correlations came from the PRACTICE trial [26]. In this trial we observed a correlation of −0.13 (n = 36 participants) between baseline and the 3-month post-randomisation follow-up for the EQ-5D visual analogue scale (VAS) and of 0.09 (n = 42 participants) between baseline and the 1-month follow-up. The correlations were based on small sample sizes (n = 36 and 42), and examination of the scatter plots suggested no outlying values and a random scatter. The EQ-5D VAS asks respondents to rate their health today on a visual analogue scale from 0 (the worst health you can imagine) to 100 (the best health you can imagine). It may be that there genuinely is no correlation in the population (of chronic obstructive pulmonary disease [COPD] patients) with this outcome.
We calculated several summary correlations to allow for clustering of the outcomes by trial and the variance or standard error of the correlation estimate. The overall summary correlation for the 464 correlations was robust to the summary measure (mean, median, weighted mean, clustered mean) and was around 0.50.
Clifton and Clifton [51] comment that baseline imbalance may occur in RCTs and that ANCOVA should be used to adjust for baseline in the analysis. Clifton et al. [52] also point out the following theoretical assumptions for using the ANCOVA method for sample size estimation: (1) the pairs of baseline and post-randomisation outcomes follow a bivariate Normal distribution; (2) the baseline to post-randomisation follow-up correlation, r, is the same in both groups; (3) the variances or SDs of the outcomes are the same in both groups. However, ANCOVA is known to be robust to departures from the assumption of Normality. The work of Heeren and D'Agostino [53] and Sullivan and D'Agostino [54] supports the robustness of the two independent samples t test and ANCOVA when applied to three-, four- and five-point ordinal scaled data using assigned scores (like PROMs), in sample sizes as small as 20 subjects per group.
Conclusions
There is a general consistency in the correlations between the baseline and follow-up PROMs, with the majority being in the range 0.4 to 0.6. The implication is that we can reduce the sample size in an RCT by 25% if we use an ANCOVA model, with a correlation of 0.50, for the design and analysis. When allowing for the correlation between baseline and follow-up outcomes in the sample size calculation, it is preferable to be conservative and use existing data that are relevant to your outcome and your population, if available. Second, be wary of having an 'automatic' rule of adjusting your required sample size downwards by 25% just because you have a baseline assessment.
There is a slight decline in correlation between baseline and more distant post-randomisation follow-up time points. Finally, we would stress the importance of a sample size calculation (with all its attendant assumptions) and also stress that any such estimate is better than no sample size calculation at all, particularly in a trial protocol [55, 56]. The mere fact of calculation of a sample size means that a number of fundamental issues have been considered: what is the main outcome variable, what is a clinically important effect, and how is it measured? The investigator is also likely to have specified the method and frequency of data analysis. Thus, protocols that are explicit about sample size are easier to evaluate in terms of scientific quality and the likelihood of achieving objectives.
Availability of data and materials
The data set is available on request from the corresponding author at s.j.walters@sheffield.ac.uk.
Change history
28 October 2019
Following publication of the original article [1], we have been notified of an error in the Conclusions section of the Abstract.
Abbreviations
ANCOVA: Analysis of covariance
COPD: Chronic obstructive pulmonary disease
DE: Design effect
HTA: Health Technology Assessment
MCID: Minimum clinical important difference
NIHR: National Institute for Health Research
PROM: Patient-reported outcome measure
RCT: Randomised controlled trial
ScHARR: School of Health and Related Research
SPADI: Shoulder Pain and Disability Index
SSR: Sample size re-estimation
References
 1.
Frison L, Pocock SJ. Repeated measures in clinical trials: analysis using mean summary statistics and its implications for design. Stat Med. 1992;11(13):1685–704.
 2.
Altman DG, Gardner MJ, Martin J. Statistics with confidence: confidence intervals and statistical guidelines. London: BMJ Books; 2000.
 3.
Walters SJ. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF36. Health Qual Life Outcomes. 2004;2:26.
 4.
Walters SJ. Quality of life outcomes in clinical trials and healthcare evaluation:a practical guide to analysis and interpretation. Chichester: Wiley; 2009. p. 1–365.
 5.
Machin D, Campbell MJ, Tan SB, Tan SH. Sample sizes for clinical, laboratory and epidemiology studies. 4th ed. Chichester: WileyBlackwell; 2018.
 6.
Littlewood C, Bateman M, Brown K, Bury J, Mawson S, May S, et al. A selfmanaged single exercise programme versus usual physiotherapy treatment for rotator cuff tendinopathy: a randomised controlled trial (the SELF study). Clin Rehabil. 2016;30(7):686–96.
 7.
Campbell MJ, Machin D, Walters SJ. Medical statistics: a textbook for the health sciences. 4th edition. Chichester: Wiley; 2007.
 8.
Campbell MJ, Walters SJ. How to design, analyse and report cluster randomised trials in medicine and health related research. Chichester: WileyBlackwell; 2014.
 9.
Morrell CJ, Walters SJ, Dixon S, Collins KA, Brereton LML, Peters J, et al. Cost effectiveness of community leg ulcer clinics: Randomised controlled trial. Br Med J. 1998;316(7143):1487–91.
 10.
Jack DS, Prestele H, Bakshi R. Clinical Study Report. A doubleblind, randomised, controlled study to compare methotrexate plus cyclosporine A/neoral vs. methotrexate plus placebo in subjects with early severe rheumatoid arthritis. Basel, Switzerland; 2000.
 11.
WeatherleyJones E, Nicholl JP, Thomas KJ, Parry GJ, McKendrick MW, Green ST, et al. A randomised, controlled, tripleblind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome. J Psychosom Res. 2004;56(2):189–97.
 12.
Thomas KJ, MacPherson H, Ratcliffe J, Thorpe L, Brazier J, Campbell M, et al. Longer term clinical and economic benefits of offering acupuncture care to patients with chronic low back pain. Health Technol Assess. 2005;9(32):1–109.
 13.
Mitchell C, Walker J, Walters S, Morgan AB, Binns T, Mathers N. Costs and effectiveness of pre and postoperative home physiotherapy for total knee replacement: randomized controlled trial. J Eval Clin Pract. 2005;11(3):283–92.
 14.
Gariballa S, Forster S, Walters S, Powers H. A randomized, doubleblind, placebocontrolled trial of nutritional supplementation during acute illness. Am J Med. 2006;119(8):693–9.
 15.
Dixon S, Walters SJ, Turner L, Hancock BW. Quality of life and costeffectiveness of interferonalpha in malignant melanoma: results from randomised trial. Br J Cancer. 2006;94(4):492–8.
 16.
Morrell CJ, Slade P, Warner R, Paley G, Dixon S, Walters SJ, et al. Clinical effectiveness of health visitor training in psychologically informed approaches for depression in postnatal women: pragmatic cluster randomised trial in primary care. BMJ. 2009;338(7689):a3045.
 17.
Waterhouse JC, Walters SJ, Oluboyede Y, Lawson RA. A randomised 2 x 2 trial of community versus hospital pulmonary rehabilitation, followed by telephone or conventional followup. Health Technol Assess. 2010;14(6):i–v.
 18.
Farndon LJ, Vernon W, Walters SJ, Dixon S, Bradburn M, Concannon M, et al. The effectiveness of salicylic acid plasters compared with ‘usual’ scalpel debridement of corns: a randomised controlled trial. J Foot Ankle Res. 2013;6(1):40.
 19.
Mountain GA, Hind D, GossageWorrall R, Walters SJ, Duncan R, Newbould L, et al. ‘Putting Life in Years’ (PLINY) telephone friendship groups research study: pilot randomised controlled trial. Trials. 2014;15(1):141.
 20.
Goodacre S, Cohen J, Bradburn M, Stevens J, Gray A, Benger J, et al. The 3Mg trial: a randomised controlled trial of intravenous or nebulised magnesium sulphate versus placebo in adults with acute severe asthma. Health Technol Assess. 2014;18(22):1–168.
 21.
Thomas SA, Coates E, das Nair R, Lincoln NB, Cooper C, Palmer R, et al. Behavioural Activation Therapy for Depression after Stroke (BEADS): a study protocol for a feasibility randomised controlled pilot trial of a psychological intervention for poststroke depression. Pilot Feasibility Stud. 2016;2(1):45.
 22.
McDermott CJ, Bradburn MJ, Maguire C, Cooper CL, Baird WO, Baxter SK, et al. DiPALS: Diaphragm Pacing in patients with Amyotrophic Lateral Sclerosis – a randomised controlled trial. Health Technol Assess (Rockv). 2016;20(45):1–186.
23. Mountain G, Windle G, Hind D, Walters S, Keertharuth A, Chatters R, et al. A preventative lifestyle intervention for older adults (lifestyle matters): a randomised controlled trial. Age Ageing. 2017;46(4):627–34.
24. Jha S, Walters SJ, Bortolami O, Dixon S, Alshreef A. Impact of pelvic floor muscle training on sexual function of women with urinary incontinence and a comparison of electrical stimulation versus standard treatment (IPSU trial): a randomised controlled trial. Physiotherapy. 2018;104(1):91–7.
25. Reddington M, Walters SJ, Cohen J, Baxter SK, Cole A. Does early intervention improve outcomes in the physiotherapy management of lumbar radicular syndrome? Results of the POLAR pilot randomised controlled trial. BMJ Open. 2018;8(7):e021631.
26. Cox M, O'Connor C, Biggs K, Hind D, Bortolami O, Franklin M, et al. The feasibility of early pulmonary rehabilitation and activity after COPD exacerbations: external pilot randomised controlled trial, qualitative case study and exploratory economic evaluation. Health Technol Assess. 2018;22(11):1–204.
27. Holt RI, Hind D, Gossage-Worrall R, Bradburn MJ, Saxon D, McCrone P, et al. Structured lifestyle education to support weight loss for people with schizophrenia, schizoaffective disorder and first episode psychosis: the STEPWISE RCT. Health Technol Assess. 2018;22(65):1–160.
28. Broadbent E, Petrie KJ, Main J, Weinman J. The Brief Illness Perception Questionnaire. J Psychosom Res. 2006;60(6):631–7.
29. British Spine Registry. British Spine Registry VAS (Back and Leg) Score Forms [Internet]. https://www.britishspineregistry.com/downloads/. Accessed 2 Jul 2019.
30. Collin C, Wade DT, Davies S, Horne V. The Barthel ADL Index: a reliability study. Int Disabil Stud. 1988;10(2):61–3.
31. Overall J, Gorham D. The Brief Psychiatric Rating Scale (BPRS). Psychol Rep. 1962;10:799–812.
32. Evans C, Connell J, Barkham M, Margison F, McGrath G, Mellor-Clark J, et al. Towards a standardised brief outcome measure: psychometric properties and utility of the CORE-OM. Br J Psychiatry. 2002;180:51–60.
33. Aaronson NK, Ahmedzai S, Bergman B, Bullinger M, Cull A, Duez NJ, et al. The European Organization for Research and Treatment of Cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in oncology. J Natl Cancer Inst. 1993;85(5):365–76.
34. Cox JL, Holden JM, Sagovsky R. Detection of postnatal depression. Development of the 10-item Edinburgh Postnatal Depression Scale. Br J Psychiatry. 1987;150:782–6.
35. Dolan P. Modeling valuations for EuroQol health states. Med Care. 1997;35(11):1095–108.
36. EuroQol Group. EuroQol—a new facility for the measurement of health-related quality of life. Health Policy. 1990;16(3):199–208.
37. Schwarzer R, Jerusalem M. Generalized Self-Efficacy Scale. In: Weinman J, Wright S, Johnston M, editors. Measures in health psychology: a user's portfolio. Windsor: NFER-NELSON; 1995. p. 35–7.
38. Smets EM, Garssen B, Bonke B, De Haes JC. The Multidimensional Fatigue Inventory (MFI) psychometric qualities of an instrument to assess fatigue. J Psychosom Res. 1995;39(3):315–25.
39. Fairbank JC, Pynsent PB. The Oswestry Disability Index. Spine (Phila Pa 1976). 2000;25(22):2940–52; discussion 2952.
40. Tinkler L, Hicks S. Measuring subjective well-being. London: Office for National Statistics; 2011. p. 29.
41. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9. J Gen Intern Med. 2001;16(9):606–13.
42. Rogers RG, Kammerer-Doak D, Villarreal A, Coates K, Qualls C. A new instrument to measure sexual function in women with urinary incontinence or pelvic organ prolapse. Am J Obstet Gynecol. 2001;184(4):552–8.
43. Ware JE, Snow KK, Kosinski M, Gandek B. SF-36 Health Survey: Manual and Interpretation Guide. Boston: The Health Institute, New England Medical Center; 1993.
44. Ware JE, Kosinski M, Keller SD. SF-36 Physical and Mental Health Summary Scales: a user's manual. Boston: The Health Institute, New England Medical Center; 1994.
45. Brazier J, Roberts J, Deverill M. The estimation of a preference-based measure of health from the SF-36. J Health Econ. 2002;21(2):271–92.
46. Flemons WW, Reimer MA. Development of a disease-specific health-related quality of life questionnaire for sleep apnea. Am J Respir Crit Care Med. 1998;158(2):494–503.
47. Roach KE, Budiman-Mak E, Songsiridej N, Lertratanakul Y. Development of a shoulder pain and disability index. Arthritis Care Res. 1991;4(4):143–9.
48. Hawker GA, Mian S, Kendzerska T, French M. Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short-Form McGill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF). Arthritis Care Res (Hoboken). 2011;63(S11):S240–52.
49. Bellamy N, Buchanan WW, Goldsmith CH, Campbell J, Stitt LW. Validation study of WOMAC: a health status instrument for measuring clinically important patient relevant outcomes to antirheumatic drug therapy in patients with osteoarthritis of the hip or knee. J Rheumatol. 1988;15(12):1833–40.
50. Walters SJ, Dos Anjos Henriques-Cadby IB, Bortolami O, Flight L, Hind D, Jacques RM, et al. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme. BMJ Open. 2017;7(3):e015276.
51. Clifton L, Clifton DA. The correlation between baseline score and post-intervention score, and its implications for statistical analysis. Trials. 2019;20(1):43.
52. Clifton L, Birks J, Clifton DA. Comparing different ways of calculating sample size for two independent means: a worked example. Contemp Clin Trials Commun. 2019;13:100309.
53. Heeren T, D'Agostino R. Robustness of the two independent samples t-test when applied to ordinal scaled data. Stat Med. 1987;6(1):79–90.
54. Sullivan LM, D'Agostino RB. Robustness and power of analysis of covariance applied to ordinal scaled data as arising in randomized controlled trials. Stat Med. 2003;22(8):1317–34.
55. Walters SJ. Consultants' forum: should post hoc sample size calculations be done? Pharm Stat. 2009;8(2):163–9.
56. Walters SJ, Campbell MJ. The use of bootstrap methods for estimating sample size and analysing health-related quality of life outcomes. Stat Med. 2005;24(7):1075–102.
Acknowledgements
Professor Walters is an NIHR Senior Investigator. The views expressed in this article are those of the author(s) and not necessarily those of the National Health Service (NHS), the NIHR or the Department of Health.
Funding
This research received no specific grant from any funding agency in any public, commercial or not-for-profit sector.
Author information
Contributions
SJW is the guarantor of the study, had full access to all the data in the study and is responsible for the integrity of the data and the accuracy of the data analysis. SJW contributed to the study conception and design, acquisition of data, analysis and interpretation of data, and writing of the report. IBHC contributed to the selection, extraction and analysis of the data, and to the drafting of the paper and the graphics within it. RMJ, NT, JC and MTSX contributed to the selection and extraction of the data, and to the drafting of the paper. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Secondary ethics approval was gained through the University of Sheffield School of Health and Related Research Ethics Committee (Reference 024041).
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Walters, S.J., Jacques, R.M., dos Anjos Henriques-Cadby, I.B. et al. Sample size estimation for randomised controlled trials with repeated assessment of patient-reported outcomes: what correlation between baseline and follow-up outcomes should we assume? Trials 20, 566 (2019). https://doi.org/10.1186/s13063-019-3671-2
Keywords
 Sample size estimation
 Review
 Randomised controlled trials
 Health Technology Assessment
 Publicly funded
 Correlations
 ANCOVA
 Patient-reported outcome measures