Simple randomisation
The results of this survey have shown that trialists are currently using simple randomisation less often than permuted blocks, stratification and minimisation. The size of trial considered too small for simple randomisation ranged from <50 to <1,000, although some trialists surveyed believed that simple randomisation was suitable for any size of trial.
Within the wider literature, there is evidence that simple randomisation is suitable for larger trials[3, 15–17], but there is a high probability of imbalance between treatment groups in trials of up to 500 participants. Simulations showed that imbalance often occurs in small trials with <100 participants (on >80% of occasions, for example), but rarely occurs in trials with ≥1,000 participants[15]. Pocock and Simon likewise recommended simple randomisation only for larger trials (say, >200 participants), and cautioned that even in large trials problems may arise if one intends to analyse early results (say, for data monitoring)[16].
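The relationship between trial size and imbalance can be illustrated with a short simulation (a minimal sketch, not the cited authors' code; the choice of a 10% relative imbalance threshold is an arbitrary illustration):

```python
import random

def imbalance_probability(n, rel_threshold=0.10, n_sims=10_000, seed=1):
    """Estimate how often simple (coin-flip) randomisation leaves the two
    arms differing by more than rel_threshold * n participants."""
    rng = random.Random(seed)
    unbalanced = 0
    for _ in range(n_sims):
        arm_a = sum(rng.randint(0, 1) for _ in range(n))   # coin flip per patient
        if abs(n - 2 * arm_a) > rel_threshold * n:         # |arm A - arm B| too large
            unbalanced += 1
    return unbalanced / n_sims

# The probability of a >10% imbalance falls sharply as the trial grows.
for n in (50, 100, 500, 1000):
    print(n, imbalance_probability(n))
```

Under this (invented) threshold, relative imbalance is common at n = 50 but becomes rare by n = 1,000, consistent with the pattern reported in the literature.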
Stratification
Forty-three per cent of trialists wrote that the number of strata to be used should depend on the size of trial. There was no general consensus on the number of strata that were considered too many, with suggested values between 1 and 100.
Pocock and Simon stated that it is seldom advisable to stratify on more than three or four variables, and that the size of the trial is the most important factor in deciding how many stratification variables are feasible[16]. Peto and colleagues dismiss stratification as a complication rendered unnecessary by the development of methods of analysis that adjust for covariates[18].
Pocock and Simon used simulations to show that imbalance between treatment groups can grow as the number of strata increases, because more permuted blocks are left incomplete[16]. Signorini and colleagues asserted that a randomised block design can leave the trial as a whole with large differences in the number of patients on each treatment, even though each stratum has roughly equal numbers on each treatment[13]; this leads to a loss of efficiency in the analysis. The same problem can arise when a randomisation scheme uses minimisation with stratification by centre or clinician, although International Conference on Harmonisation (ICH) guidelines still recommend this[19].
Balancing prognostic factors at baseline
More than one-half of the trialists surveyed thought that groups should be balanced at baseline. The primary reasons given were that it helps face-validity and the analysis is simpler. The suggested limit on the number of prognostic factors ranged from 2 to 10, and some trialists would not set any limit.
There is controversy, especially among statisticians, as to whether prognostic factors should be used in assignment to treatment[20]. There are two main viewpoints: prognostic factors should be used to assure that patients assigned to the two arms show close balance in prognoses at baseline; or random assignment should be used without regard to prognostic factors, as fair comparison of treatment effect can be achieved through statistical adjustment of results.
Armitage and Gehan concluded that in a small to moderate-sized trial (≤100 patients) results might be invalid if prognostic factors are not used[21]. Comparisons between treatments should be made between groups that are comparable with respect to prognostic factors. Similarly, Rovers and colleagues suggested that investigators should always consider balanced allocation for a low number of patients[22]. They stated that, in a trial with 100 to 200 patients, substantial differences can occur in baseline characteristics if simple randomisation is used; if these differences can be measured, then they can be corrected to some degree in the analysis. If the number of patients decreases or the number of prognostic variables or categories increases, then the resulting imbalance could invalidate results.
The Committee for Proprietary Medicinal Products state that stratification for more than a few prognostic factors is not always possible[23]. They describe dynamic allocation techniques such as minimisation, which are often used to balance across several factors simultaneously, as highly controversial even when deterministic schemes are avoided, and therefore strongly advise against their use. Where such schemes are nonetheless used, the Committee suggest that the factors employed in the allocation should also be included as covariates in the analysis (although whether the analysis adequately reflects the randomisation scheme remains controversial). They also state that if a multicentre trial is not stratified by centre, the reasons for not doing so should be explained and justified in the protocol.
Predictability
Nineteen per cent of trialists who had used minimisation were very concerned that the method was largely deterministic, and 57% were only mildly concerned. Twenty-eight per cent were not concerned at all in 2003, but by 2011 this number had dropped to zero. Among the trialists surveyed, strategies for reducing the predictability of the next assignment included not declaring the factors in the protocol, adding other factors to produce noise, not stratifying by centre, and alternating between minimisation and simple randomisation. When a random element was used, the level of randomness chosen ranged from 0.66 to 0.95.
When choosing a balanced allocation method, the probability of making the prescribed allocation should be large enough to control imbalances within strata but small enough to prevent selection bias through prediction of the next treatment[24]. The use of a random element (probability of assignment P<1) is supported by ICH E9[19], which also states that factors on which randomisation has been stratified should be accounted for later in the analysis. Gore recommends a probability of assignment of 0.75[25]. In a multicentre trial, predictability is not a problem unless randomisation is stratified by centre, as ICH E9 guidelines recommend: otherwise all centres are essentially independent, or they are not aware of the bands used[20].
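The effect of a random element can be sketched as follows (an illustrative implementation only, not the cited authors' algorithm; the factor names and the simple marginal-count imbalance score are invented, and p = 0.75 follows Gore's suggested value):

```python
import random

def minimise(patient, allocated, factors, p=0.75, seed=None):
    """Assign a patient to arm 'A' or 'B' by minimisation with a random
    element: with probability p choose the arm that minimises marginal
    imbalance over the prognostic factors, otherwise the other arm."""
    rng = random.Random(seed)
    scores = {}
    for arm in ("A", "B"):
        # Imbalance score if the patient joined this arm: for each factor,
        # count existing patients in the arm sharing the patient's level.
        scores[arm] = sum(
            sum(1 for prev_arm, prev in allocated
                if prev_arm == arm and prev[f] == patient[f])
            for f in factors
        )
    if scores["A"] == scores["B"]:
        return rng.choice(("A", "B"))            # tie: pure randomisation
    preferred = min(scores, key=scores.get)
    other = "B" if preferred == "A" else "A"
    return preferred if rng.random() < p else other

# Hypothetical prognostic factors and a short allocation run.
factors = ("sex", "age_group")
allocated = []                                   # list of (arm, patient) pairs
for patient in [{"sex": "F", "age_group": "<65"},
                {"sex": "M", "age_group": "<65"},
                {"sex": "F", "age_group": ">=65"}]:
    arm = minimise(patient, allocated, factors, seed=len(allocated))
    allocated.append((arm, patient))
```

Setting p = 1 recovers deterministic minimisation; values such as 0.75 keep the next assignment uncertain even to an observer who knows the current imbalance.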
The Food and Drug Administration also consider minimisation not to be random and recommend the incorporation of a random element[26]. However, a minimisation procedure that does not allow investigators to predict the next treatment allocation would yield a properly randomised trial[16].
A randomised block design can also be highly predictable[13], since each block must contain equal numbers of patients on each treatment. The block size should be hidden from investigators, but it can sometimes be deduced from previous assignments if the block size is fixed; it is therefore better to use random block sizes.
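A permuted-block generator with randomly varying block sizes can be sketched as follows (an illustrative sketch only; the block sizes 4 and 6 and the two-arm labels are assumptions):

```python
import random

def blocked_randomisation(n, block_sizes=(4, 6), treatments=("A", "B"), seed=2):
    """Generate a treatment sequence from permuted blocks whose sizes are
    drawn at random, making the end of each block harder to predict."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        size = rng.choice(block_sizes)                     # random block size
        block = list(treatments) * (size // len(treatments))
        rng.shuffle(block)                                 # permute within the block
        sequence.extend(block)
    return sequence[:n]

seq = blocked_randomisation(24)
```

Within every completed block the arms are exactly balanced, so the overall imbalance can never exceed half the largest block size, while the varying block length prevents an observer from deducing when a block is about to close.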
Analysis
In 2003 almost one-quarter of those using minimisation never adjusted for prognostic factors in the analysis, believing that if the minimisation has worked then there should be no need to do so. In 2011 this number had dropped to zero. Forty per cent indicated that they always adjusted for the minimisation factors in the analysis.
Taves maintains that clinical trials managed by minimisation should use ANCOVA for statistical comparisons[10]. Lachin and colleagues indicate for minimisation that the statistical analysis must incorporate adjustments for the covariates employed in the design in order to yield tests of proper size[17].
Forsythe and Stitt studied analysis of variance and ANCOVA in a simulation study of a small clinical trial and found some evidence that ANCOVA with minimisation was more powerful than ANCOVA with simple randomisation[27]. They showed that using minimisation without adjusting for covariates in the subsequent analysis distorted the significance level and the power of the test. However, they reported that other studies implied that the effects of stratifying by a covariate and then ignoring it in the analysis may not be severe. It may be that introducing randomness into a systematic design reduces such accidental bias, but further research is needed.
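The gain from covariate adjustment can be demonstrated with a small simulation (a minimal illustration, not a reproduction of the cited simulation studies; the covariate effect of 2.0 and the sample sizes are invented, and residualising the outcome on the covariate is a simplified stand-in for ANCOVA):

```python
import random
import statistics

def treatment_effect_sd(adjust, n=200, n_sims=400, seed=3):
    """Empirical spread of the estimated treatment effect under simple
    randomisation, with and without adjusting for one prognostic
    covariate (the true treatment effect is zero)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_sims):
        arm = [rng.randint(0, 1) for _ in range(n)]        # simple randomisation
        x = [rng.gauss(0, 1) for _ in range(n)]            # prognostic covariate
        y = [2.0 * xi + rng.gauss(0, 1) for xi in x]       # outcome, no true effect
        if adjust:
            # Residualise y on x (one-covariate least squares) before
            # comparing the arms -- a simplified stand-in for ANCOVA.
            mx, my = statistics.mean(x), statistics.mean(y)
            beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
                    / sum((xi - mx) ** 2 for xi in x))
            y = [yi - beta * xi for xi, yi in zip(x, y)]
        treated = [yi for a, yi in zip(arm, y) if a == 1]
        control = [yi for a, yi in zip(arm, y) if a == 0]
        estimates.append(statistics.mean(treated) - statistics.mean(control))
    return statistics.stdev(estimates)
```

In this sketch the adjusted estimator shows a markedly smaller spread than the unadjusted one, illustrating why analyses that ignore a strong prognostic covariate lose precision regardless of the allocation method.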
Vaughan Reed and Wickham discussed the validity of using an unadjusted analysis[28]. They maintained that the errors introduced by using an unadjusted analysis for a deterministic allocation method were likely to be less than the errors resulting from imbalance between treatment groups that might occur using simple randomisation. They maintained that serious imbalance between treatment groups undoubtedly causes problems with analysis and interpretation.
Strengths and limitations of the study
Sampling frame
For the original sampling frame, registration with the Directory of Academic Statisticians is not compulsory, so the list cannot claim to be exhaustive. The declared areas of interest are also voluntary, and relevant statisticians may therefore have been missed using this source. Some research centres have dedicated programmers who set up randomisation schemes, and these people would not be included in the register. Although the second round of the survey did target these people, the response rate was very low, so it is difficult to draw any definite conclusions from these responses alone. Because the first survey was anonymous, some individuals may also have responded at both time points. Views from the pharmaceutical industry are not represented here at all.
Question formats
A mix of open-text and multiple-choice questions was included in the survey. Every effort was made to formulate the questions so as to elicit unbiased responses. However, some respondents queried the wording of some questions, perhaps reflecting the heated debate ongoing among clinical trialists, academics and governing bodies at the time the survey was sent out.