
Design, planning and implementation lessons learnt from a surgical multi-centre randomised controlled trial

Abstract

Background

Increasingly, pragmatic randomised controlled trials are being used to evaluate surgical interventions, although they present particular difficulties with recruitment and retention.

Methods

Procedures and processes related to the implementation of a multi-centre pragmatic surgical randomised controlled trial are discussed. In this surgical trial, forecasting of consent rates based on similar trials and micro-costing of study activities with research partners were undertaken, and a video targeting recruiting staff was produced with the aim of aiding recruitment. The baseline assessments were reviewed to ensure their timing did not affect the outcome. Attrition due to procedure waiting time was monitored and data were triangulated for the primary outcome to ensure adequate follow-up data.

Results

Forecasting and costing ensured that the recruitment window was of adequate length and adequate resource was available for study procedures at multiple clinics in each hospital. Recruiting staff found the recruitment video useful. The comparison of patient-reported data collected prior to randomisation and prior to treatment provided confidence in the baseline data. Knowledge of participant dropout due to delays in treatment meant we were able to increase the recruitment target in a timely fashion, and along with the triangulation of data sources, this ensured adequate follow-up of randomised participants.

Conclusions

This paper provides a range of evidence-based and experience-based approaches which, collectively, resulted in meeting our study objectives and from which lessons may be transferable.

Trial registration

ISRCTN, ISRCTN41394716. Registered on 10 May 2012.

UKCRN Study ID: 12486.


Background

Randomised controlled trials (RCTs) are becoming more widely used to assess surgical interventions despite historical resistance [1,2,3]. However, a review across surgical specialities showed that over 20% (81/395) of trials were prematurely discontinued [4], with poor recruitment being the principal reason (36/81); these discontinued trials had recruited 15,626 participants. The discontinuation of trials results in considerable wasted investment and, at best, a less precise answer to the research question [5,6,7]. Poor recruitment and retention can lead to withdrawal of funding to complete the trial, which has financial and ethical implications [5, 6, 8].

McCulloch and colleagues [1] identified different classes of surgical trials with different levels of risk in terms of successful project completion: type 1 trials compare medical management in surgery; type 2 compare surgical techniques; and type 3 compare surgical and non-surgical treatments. Type 3 trials are particularly prone to a lack of clinician and patient equipoise [1]. Recruitment to trials with treatments of differing intensity is often poor [1, 9], with randomised-to-screened ratios of 1:16 documented [10].

The most common patient-reported reasons for non-entry into surgical RCTs are treatment preference or dislike of randomisation [11, 12] and, where treatments are markedly different, there is an increased likelihood of patients or clinicians declaring a preference. In addition, recruiting clinicians often struggle to explain concepts such as randomisation and equipoise [13,14,15] and struggle with the amount and clarity of information provided during the consent process [8]. Several articles have examined strategies to improve recruitment and retention in trials [5, 8, 13, 16,17,18,19,20], but evidence for successful interventions is limited. Qualitative work alongside surgical trials can identify particular issues around recruitment and can train staff to address the absence of equipoise and other issues [13].

The time waited between consent and surgery is a common reason for attrition [21, 22], and type 3 trials may produce a greater difference in waiting times between treatment groups than types 1 or 2. With increasing waiting times a problem for some health systems [23], this should be a consideration in surgical trial design. Baseline measures such as health-related quality of life (HRQoL) often change over time, meaning long waiting times between consent and surgery are a potential source of bias. If baseline measures are taken on the day of surgery there is a possibility that the measures could be affected by knowledge of the treatment allocation [24]. The clinically intuitive timing for follow-up measures is a timepoint relative to the day of surgery, whereas the scientifically desirable timing is a timepoint relative to the day of randomisation, although there is some evidence that this makes little difference to the reported outcomes [25].

Costing the resource required to deliver RCTs is an important factor in their success. Published workload models for organisations tend to use accrual data, acuity, or a points scale to estimate the research nurse and/or clinical trial administrator/co-ordinator resource needed to implement an RCT [6,7,8,9,10,11]. There is no consensus on which model best evaluates workload in clinical research infrastructure [12]. Systems that reimburse research infrastructure based on accrual data focus on the target accrual compared to the number of whole-time equivalents (WTEs), often without accounting for screen failures, query resolution, long-term follow-up, participant attrition, or the complexity of the research protocol. They are criticised for over-simplicity and implicated in staff burnout and poor quality standards [12, 13]. This paper presents lessons from the Haemorrhoidal artery ligation (HAL) versus rubber band ligation (RBL) for haemorrhoids (HubBLe) trial [26,27,28], a type 3, multi-centre, surgical RCT, to support the implementation of future studies.

Methods

Summary of trial design and procedures

The aim of the HubBLe trial [26,27,28] was to establish the clinical effectiveness and cost-effectiveness of HAL compared with RBL in the treatment of people with symptomatic second- or third-degree haemorrhoids. Both treatments are recommended for the treatment of haemorrhoids [29,30,31,32]. The trial was a pragmatic, multi-centre, parallel group RCT involving 18 National Health Service (NHS) hospitals in England and Scotland. After consent, participants were individually randomised to HAL or RBL in equal proportions at all centres using a web-based randomisation system. Participants were followed up at 1, 7 and 21 days, 6 weeks, and 12 months post-procedure. Full details of the trial methods can be found elsewhere [26,27,28]; here, we focus on methods aimed at improving participant recruitment and retention to achieve a valid data set.

Methods adopted to meet the recruitment target

Recruitment procedures

Eligibility criteria were broad in order to provide a large pool of patients from which to recruit, whilst ensuring patients were suitable for both procedures. HubBLe can be considered a type 3 trial in which medical management is compared with a surgical intervention [1, 9]: HAL is a procedure undertaken in theatre under general anaesthetic, whereas RBL is a less intensive intervention, typically undertaken as an outpatient procedure and often delivered by non-surgeons. A key reason for under-recruitment in RCTs is over-optimism at the trial planning stage regarding how many of the people offered participation will consent and be randomised [33,34,35,36].

In particular, investigators often do not forecast based on a “reference class” of consent rates observed in previous similar trials [34, 37]. Prospect theory predicts that we are over-optimistic in our judgements because we are overconfident and either unaware or ignorant of existing data on similar projects (the “reference class”) [38,39,40]. For HubBLe, we made the following evidence-based assumptions about recruitment activity, based on a reference class of previous similar studies:

  1. Many patients would need to be screened for each patient who consented

For type 3 surgical trials, conversion rates rarely exceed 1 patient consented for every 5 screened, and rates as low as 1 randomised for every 16 screened have been documented [9, 10]. We therefore anticipated that 12 patients would decline randomisation for each one who consented, a screening-to-randomisation ratio of 13:1.

  2. Time spent per patient screened

Every patient screened would cost a research nurse 3 h in terms of liaison with the clinical team to ensure potentially eligible candidates were flagged; posting information about the study in advance of screening visits; time taken to get to screening visits in clinics; screening, information giving and discussion of equipoise issues; and consent and randomisation where required. For every patient recruited, we requested costs for 38.5 h of recruitment work. Assuming a conversion rate of 1 patient randomised for every 13 screened, recruiting 39 patients at a centre would require approximately 1500 h (roughly 0.7 WTE over 1 year at each centre); a worked sketch of this arithmetic is given after this list.

  3. Coverage and rationalisation

There were an estimated two eligible patients available per clinic. With an estimated four surgeons involved at each centre and an estimated four clinics per centre per week, we recognised the challenge for recruiting research nurses to be available at all clinics with potentially eligible patients (coverage). Given the anticipated screening-to-randomisation ratio, it was imperative that as many potentially eligible patients as possible were screened. Research nurses worked with clinical teams to corral potentially eligible patients into particular clinics, especially where multiple surgeons shared a waiting room, so that the research nurse could use their time more efficiently and minimise the number of unscreened patients (rationalisation).
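To make the arithmetic behind these planning assumptions explicit, a minimal sketch is given below. The hours per screened patient, the screening-to-randomisation ratio and the per-centre recruitment target come from the figures above; the annual hours assumed for one whole-time equivalent are an illustrative assumption, which is why the rounded figures quoted in the text (38.5 h per recruit, approximately 1500 h and roughly 0.7 WTE per centre) differ slightly from the unrounded values calculated here.

```python
# Hedged sketch of the recruitment forecasting and costing arithmetic described above.
# Hours per screen, the screening ratio and the per-centre target are from the text;
# WTE_HOURS_PER_YEAR is an illustrative assumption (37.5 h/week x 52 weeks).

HOURS_PER_SCREEN = 3.0          # research nurse time per patient screened
SCREENED_PER_RANDOMISED = 13    # anticipated screening-to-randomisation ratio (13:1)
RECRUITS_PER_CENTRE = 39        # per-centre recruitment target used in the costing
WTE_HOURS_PER_YEAR = 37.5 * 52  # assumed annual hours for one whole-time equivalent

hours_per_recruit = HOURS_PER_SCREEN * SCREENED_PER_RANDOMISED  # ~39 h (costed as 38.5 h in the trial)
centre_hours = hours_per_recruit * RECRUITS_PER_CENTRE          # ~1500 h per centre
centre_wte = centre_hours / WTE_HOURS_PER_YEAR                  # ~0.7-0.8 WTE over one year

print(f"{hours_per_recruit:.1f} h per recruit, {centre_hours:.0f} h per centre, "
      f"{centre_wte:.2f} WTE over one year")
```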

Attribution and reimbursement of costs

The UK Government Department of Health’s system for attributing costs in NHS research and development (R&D) [41] means that resource for recruitment activity cannot be costed into grant applications. A case for supplementary funding for work relating to recruitment (“service support costs”) has to be made to a National Institute for Health Research (NIHR) Local Research Network (LRN) lead in the chief investigator’s locality. Once agreed, LRNs in other regions are expected to match the funding. The system has been criticised by researchers, and delays associated with agreeing the allocation of costs have been documented [42,43,44,45,46]. To avoid such delays, we entered into discussions with the LRN prior to the start of the study. The breakdown of research nurse costs for HubBLe is presented in Table 1; we ensured that the research nurse resource accounted for screen failures, participant attrition, data collection, data entry, query resolution, and the complexity of the research protocol.

Table 1 Costing of research nurse time per centre

Recruitment video

In addition to ensuring sufficient recruitment capacity for the trial, the team developed a recruitment video [47] based on the ProtecT trial team’s work on the explanation of randomisation and equipoise [13]. We interviewed the local ProtecT trial team (two research nurses and a consultant involved in recruitment) about their recruitment experiences and narrated the film to highlight the general principles of equipoise and randomisation and how these related specifically to HubBLe. A recent systematic review of training interventions for trial recruiters [8] identified six trials [48,49,50,51,52,53] that had employed a video as part of a face-to-face workshop for that purpose, but there do not appear to be any training programmes using videos alone to aid recruiters. The video was seen as a low-cost resource that recruiters could refer to as many times as they wished.

Monitoring of waiting times

The duration between randomisation and treatment was monitored in the trial because we knew that there can be significant problems with waiting times for non-urgent surgery, and that this could affect dropout rates and the intention-to-treat analysis. Whilst RBL is a simple procedure, often performed on the day of randomisation, HAL is more intensive, is performed under general anaesthetic and requires a theatre slot to be booked. These conditions created the potential for differential participant attrition, a potential source of bias in our analysis [54, 55]. During the trial, one of the centres stopped performing non-urgent surgery, which included our procedures. This remains an issue for the NHS, with one Clinical Commissioning Group (CCG) suspending non-urgent surgery to make financial savings prior to the end of the financial year in 2017 [56, 57]. Due to the long waiting times, particularly for the HAL procedure, and the cancellation of non-urgent procedures at one site, the dropout rate prior to the procedure was higher than anticipated. To account for this observed attrition, the recruitment target was increased during the study to 370 in order to achieve the sample of 350 treated, followed-up and analysed participants.
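As an illustration only, and not necessarily the exact calculation used in HubBLe, a recruitment target can be set by inflating the number of participants required for analysis by the anticipated attrition; with the 5% allowance used in the original sample size calculation [26, 27], this reproduces the revised target of 370:

```latex
% Illustrative inflation of the analysed sample to a recruitment target,
% assuming the 5% attrition allowance from the original sample size calculation.
\[
  n_{\text{recruit}} = \frac{n_{\text{analysed}}}{1 - p_{\text{attrition}}}
                     = \frac{350}{1 - 0.05} \approx 368.4 \;\longrightarrow\; 370 .
\]
```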

Changes in baseline health state post-randomisation, pre-surgery

Three months into recruitment, baseline data collection was moved from the point of randomisation to the day of the procedure. This was because there was substantial between-site variation in surgical waiting times, and a difference in waiting times between the two treatments, meaning that scores for change in patient-reported outcomes at follow up may have reflected time periods substantially longer than intended, especially in the HAL arm. The risk of bias introduced by anchoring follow up to the time of surgery, rather than the point of randomisation, is theoretical and not supported by empirical evidence [25].

Six months after this change, a Data Monitoring and Ethics Committee (DMEC) member suggested that patient-reported outcomes can be affected by patients’ knowledge of their allocation. Since baseline data collection took place on the day of surgery, most patients already knew their allocation by this point; the concern was that perceived pain and HRQoL might differ between the groups at this time due to expectation bias [24], even though no procedure had yet taken place. The trial statistician reported that the early data did indeed support this hypothesis, in particular with higher self-reported symptoms in the HAL arm. As a result, we added a questionnaire to be completed before randomisation, in addition to the baseline questionnaire on the day of surgery where the two timepoints were more than 1 week apart.

Methods to ensure a valid primary outcome data set

Three sources of primary data collection

Sometimes, in assessing an outcome, a single data source may be unreliable and data source triangulation is necessary [58]. Our primary objective was to compare patient-reported symptom recurrence at 12 months following the procedure. Recurrence was defined using a simple dichotomous outcome derived from a previously published systematic review [59]. Patients were asked: “At the moment, do you feel your symptoms are: cured or improved compared with before starting treatment; or unchanged or worse compared with before starting treatment?” We also asked patients whether (and which) procedures they had undergone for their haemorrhoids further to the trial treatment, since symptoms may only have resolved as a result of further intervention, and supplemented this with treatments determined from their hospital notes and general practitioner (GP) in order to minimise attrition and recall bias. Finally, we reviewed adverse events and hospitalisations to identify participants who had ongoing symptoms consistent with persistent or recurrent haemorrhoids (e.g. persistent bleeding) that had not been treated.
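To illustrate how the three sources might be combined into the dichotomous recurrence outcome, a sketch is given below. The field names and the precedence rules are assumptions made for illustration; the trial’s exact derivation is specified in the protocol and statistical analysis plan [26, 27].

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParticipantRecords:
    """Follow-up data drawn from the three sources described above.
    Field names are hypothetical; None means the source was unavailable."""
    patient_reports_cure_or_improvement: Optional[bool]  # patient questionnaire at 12 months
    further_treatment_recorded: Optional[bool]           # consultant (hospital notes) or GP questionnaire
    untreated_persistent_symptoms: Optional[bool]        # from review of adverse events/hospitalisations

def recurrence(p: ParticipantRecords) -> Optional[bool]:
    """Triangulated dichotomous recurrence outcome (sketch, not the trial's exact algorithm)."""
    # Positive evidence of recurrence from any source counts as a recurrence.
    if p.patient_reports_cure_or_improvement is False:
        return True
    if p.further_treatment_recorded or p.untreated_persistent_symptoms:
        return True
    # Otherwise a negative patient report counts as no recurrence;
    # if no source is available the outcome is missing.
    if p.patient_reports_cure_or_improvement is True:
        return False
    return None

# Example: the patient reports improvement, but the GP records a further procedure.
print(recurrence(ParticipantRecords(True, True, False)))  # True -> counted as recurrence
```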

Results

Recruitment

Recruitment took place from 9 September 2012 to 6 May 2014, with follow up completed on 28 August 2015. The target and actual recruitment, including the increase in the recruitment target, are shown in Fig. 1. There were 372 participants randomly assigned to receive RBL or HAL; 187 were allocated to receive RBL and 185 to receive HAL. Two of these participants (both allocated to RBL) were removed from the trial completely as they were ineligible at the time of consent, meaning a total of 370 participants were entered into the trial. An important observation is that fewer than one quarter of the sites (study sites 1, 2, 6, and 9) accounted for two thirds of the participants (251/372), while half of the sites contributed only one sixth of randomised participants.

Fig. 1 Participant recruitment graph. Reproduced with permission from Brown et al. 2016 [27]

Our early funding discussions with sites reduced delays in site set-up prior to the start of recruitment, and the lead site started 1 month earlier than anticipated. Where sites agreed to our proposal, a full-time research nurse was dedicated to HubBLe during the recruitment period. Sites that exceeded their recruitment target (1, 2, 6 and 9) had a named research nurse responsible for the trial, as did sites 4, 7, and 8, although these did not recruit to target. An informal observation was that at sites where research nurses had less time to work on the trial, recruitment and the collection of data on non-recruited patients were generally poorer.

Of the 969 patients screened, 198 were not eligible (including the 2 patients who were withdrawn); the majority of these patients were not approached because review of their clinical notes identified exclusion criteria. The approximate randomised-to-screened ratio in the trial was 5:13; that is, approximately 13 people were formally screened for every 5 randomised. This may underestimate the number of individuals screened, as the recording of data on non-recruited patients can be poor in clinical trials because the focus is on recruited participants.
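The observed ratio follows directly from these screening figures and was considerably more favourable than the 13:1 assumed at the planning stage:

```latex
% Observed screening-to-randomisation ratio, using the figures above.
\[
  \frac{\text{randomised}}{\text{screened}} = \frac{372}{969} \approx 0.38 \approx \frac{5}{13},
  \qquad \text{i.e. roughly 2.6 patients screened per participant randomised.}
\]
```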

Of the 401 eligible screened patients who were not recruited, 109 were not approached and 292 were invited to the trial but declined to consent; reasons for non-consent are shown in Table 2. Most patients who declined did not want to be randomised because of a preference for a particular treatment (251/401, 62%).

Table 2 Reasons for non-enrolment to the trial

Video feedback

Although we did not assess the impact of the video on recruitment in any formal or structured sense, recruiting staff fed back that thinking about equipoise was very helpful and that they found it easier to describe the two treatments after watching the video. The key points highlighted as helpful were that the video expressed the uncertainty about the effectiveness of each treatment; spent a similar amount of time discussing each treatment arm and avoided loaded statements that might communicate an unconscious bias towards one treatment over the other; and demonstrated checking the patient’s understanding throughout.

Withdrawals and waiting times

Overall, 35 participants withdrew from the trial, 24 from the HAL group and 11 from the RBL group; reasons for withdrawal are provided in Table 3. Only 3 participants withdrew after receiving treatment, all in the RBL group; of the 32 participants who did not receive the procedure, 24 were in the HAL group compared with 8 in the RBL group.

Table 3 Reason for withdrawal (reasons for withdrawal from treatment are indicated under “Prior to treatment”)

Figure 2 shows the time between randomisation and treatment for participants at each site, excluding site 17, which randomised no participants. The median waiting time was longer for HAL (62 days) than for RBL (0 days), as RBL was often performed on the day of randomisation.

Fig. 2 Time to procedure by site and treatment arm (days). HAL, haemorrhoidal artery ligation; RBL, rubber band ligation

Figure 3 shows that participants in the HAL group who withdrew prior to treatment had waited longer than those who withdrew in the RBL group. Withdrawal of consent often occurred when patients were contacted to book their treatment or to discuss their waiting time. The majority of participants who withdrew prior to treatment did so after waiting more than a month for the procedure (29/32). Site 5 had particular problems with its waiting times, eventually stopping non-urgent surgical procedures: eight participants did not receive the HAL procedure and four did not receive the RBL procedure due to withdrawal of consent, loss to follow up, or receiving treatment elsewhere.

Fig. 3 Time to withdrawal (prior to treatment) by site and treatment arm (days). Figure includes only those sites experiencing participant attrition prior to treatment. HAL, haemorrhoidal artery ligation; RBL, rubber band ligation

Changes in symptoms between randomisation and procedure

Due to the differences in waiting time between randomisation and procedure (Fig. 2), data from the baseline assessment were reviewed to see whether expectation bias or clinical deterioration was evident. The early accumulating data indicated that this was possible. Figure 4 depicts the pre-treatment means for self-reported symptoms and incontinence over the recruitment period; the mean incontinence scores were initially higher in the HAL arm than in the RBL arm, and a similar but less pronounced pattern was noted for symptoms. To address this, a pre-randomisation questionnaire was introduced, with a second questionnaire given on the day of the procedure only where more than a week had elapsed between randomisation and the procedure. The group means converged by the end of recruitment, suggesting the initial differences were artefacts of relatively small sample sizes.

Fig. 4 Baseline patient-reported haemorrhoid symptom score and incontinence as taken on the day of procedure. HAL, haemorrhoidal artery ligation; RBL, rubber band ligation

The differences in means between the pre-randomisation and pre-treatment (baseline) measures (Table 4) were not significant for any of the patient-reported measures, which reassured us that there was no systematic change due to expectation bias or clinical deterioration. Nevertheless, there were some considerable differences between the two measures at the individual level. To put this into context, the 95% reference intervals for change between randomisation and procedure included 0.5 standard deviations, a magnitude comparable to or exceeding the minimally clinically important difference in many RCTs. Moreover, the variability of the change (the ratio of variances, Table 4) was greater in the HAL arm for two of the four questionnaires (the Vaizey faecal incontinence and the EuroQol 5 dimensions 5 levels (EQ-5D-5L) questionnaires), suggesting these measures were sensitive to temporal trends and/or lacked test-retest reliability, although on average these changes cancelled out in terms of the mean change.

Table 4 Agreement between self-completed measures of symptoms, incontinence, EQ-5D-5L, and pain pre-randomisation and pre-treatment (baseline)
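For readers unfamiliar with the quantities reported in Table 4, the following is a minimal sketch of how such agreement statistics are conventionally defined from the paired scores; it is the standard formulation rather than a restatement of the trial’s statistical analysis plan.

```latex
% d_i = pre-treatment (baseline) score minus pre-randomisation score for participant i.
\[
  \bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i , \qquad
  s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \bigl(d_i - \bar{d}\bigr)^2 } ,
\]
\[
  \text{95\% reference interval for change} \approx \bar{d} \pm 1.96\, s_d , \qquad
  \text{variance ratio} = \frac{s_{d,\mathrm{HAL}}^{2}}{s_{d,\mathrm{RBL}}^{2}} .
\]
```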

Primary outcome

Our primary outcome was recurrence at 1 year post-treatment. This included a patient-reported outcome measure supplemented by a case note review of further treatment and haemorrhoid-related events.

Figure 5 and Table 5 show that data were collected from all three sources (patient, consultant, and GP) for 183 participants; the most complete source was the hospital notes (consultant questionnaire), with 337 completed (96% of the target sample of 350). Had we relied only on the patient-reported outcome, we would have had outcome data on 255 participants (73% of the target sample of 350). Figure 5 shows that recurrence was reported by 71 participants at 1 year, but that 83 participants had received further treatment as reported in the GP or consultant questionnaires. In total, 135 participants were found to have had a recurrence, which would have been underestimated had only one of these sources been used for the primary outcome.

Fig. 5 Source of primary outcome data collection. GP, general practitioner

Table 5 Data sources for recurrence at one year

Consolidated Standards of Reporting Trials (CONSORT) diagram

The complete trial information in relation to recruitment and data collection is provided in the CONSORT diagram in Fig. 6. Overall there was a good rate of recruitment, with 372 of the 969 screened patients recruited, and a low rate of attrition, with 337 (90.6%) of randomised participants contributing to the primary outcome. Relative to our original target of 350, the attrition rate was 3.7%, less than the 5% assumed in our sample size calculation [26, 27].
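The attrition figures quoted here follow from simple arithmetic on the CONSORT numbers:

```latex
% Arithmetic behind the figures quoted above.
\[
  \frac{337}{372} \approx 90.6\% \quad\text{(randomised participants with primary outcome data)}, \qquad
  \frac{350 - 337}{350} = \frac{13}{350} \approx 3.7\% \quad\text{(attrition relative to the target of 350)}.
\]
```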

Fig. 6 Participant flow diagram. Reproduced with permission from Brown et al. 2016 [27]. HAL, haemorrhoidal artery ligation; RBL, rubber band ligation; GP, general practitioner

Discussion

Statement of findings

The HubBLe trial is a relatively rare example of a surgical trial that recruited to target and maintained adequate participant follow up. The HubBLe team reduced the risk of project failure by addressing four key areas. We increased the chances of recruiting to target by using broad eligibility criteria, as suggested elsewhere [5, 33,34,35,36]; by forecasting recruitment rates based on previous studies [34, 37, 60]; by accounting for screen failures when resourcing recruitment activity; and by highlighting the issue of equipoise in the training of recruitment staff, as proposed by Donovan and colleagues [13,14,15]. However, as expected for treatments of differing intensity [11], patient preference for treatment remained a barrier to recruitment. In addition, we reduced the risk of delays to recruitment, as recommended [42,43,44,45,46], by having early discussions with sites to secure funding for recruitment activity. As waiting times have been shown to be a barrier to treatment [21, 61], we anticipated differences in the time from randomisation to treatment in each arm [54], reducing the risk of bias due to differential attrition. We avoided variation in the length of follow up between arms by collecting baseline data on the day of surgery in addition to at randomisation. We then compared the data to check for the influence of expectation bias on self-report measures, as suggested by Schulz [24], but found no systematic differences between the two timepoints. Finally, for the primary outcome we reduced the risk of unreliable data by triangulating across three sources [58].

Strengths and limitations

This paper shows how a trial can use a battery of evidence-based methods and the collected experience of a clinical trials unit [62] to achieve its study objectives. We were not resourced to conduct qualitative research alongside the trial to understand and address recruitment issues, as is now best practice [19, 63], and our approach was somewhat ad hoc, without systematic evaluation of the strategies. For instance, formal feedback on the recruitment training video, which was produced without any funding, was not elicited to improve future efforts; screening data may have been incomplete, as is common in RCTs [64]; and time spent screening patients was not monitored, so we cannot determine whether our costing of recruitment activity was appropriate.

Meaning and application of findings

The data in this paper, such as those on consent rates and attrition prior to treatment, can be added to the reference class for surgical trials and used in future forecasting. The data showing the imbalance in recruitment between trial sites are also important: differences in site capability are frequently observed and have implications for trial planning [65,66,67,68,69], especially in allowing over-recruitment in site contracts to compensate for less able sites.

The paper highlights issues around waiting times for surgery in the UK [25, 70,71,72,73] and how these can differ between arms in type 3 trials [9], which should be accounted for in the sample size estimation and when deciding on the timing of data collection. Consideration needs to be given to whether baseline data should be collected at randomisation or on the day of treatment [25], although our data show little difference between these timepoints. Decisions also need to be made on whether follow-up data should be anchored to randomisation or to the trial treatment: if HubBLe had anchored follow up to randomisation rather than to the trial treatment, the time between treatment and follow up would have been greater in the RBL group due to the longer waiting times for HAL, which in turn could have affected the primary outcome of recurrence.

Unanswered questions and future research

It may not be possible to repeat the comparatively generous allocation of service support costs to this trial in the UK, due to the subsequent introduction of the Department of Health’s new costing template (the Activity Capture and Attribution Tool, or ACAT) and the UK Clinical Research Facility Network Intensity Tool for costing research nurse activity. Our experience in more recent trials is that these two tools may considerably underestimate the research nurse time necessary to undertake essential research procedures, threatening the success of recruitment and the integrity of research data. Published workload models estimating staff resource for RCTs [74,75,76,77,78,79] are often criticised for over-simplicity, and their use can lead to staff burnout and poor implementation [80,81,82]. The rise of surgical trainee networks in the UK as a force in recruitment, where trainees can be incentivised and co-ordinated to recruit and follow up study participants, may go some way towards compensating for the pressures on costs in public sector research [83].

Stronger evidence for recruitment and retention strategies in RCTs is required to improve trial efficiency and meet trial objectives. Trial Forge [84] is an initiative set up to address the lack of evidence underpinning trial decision-making, and will go some way towards evaluating recruitment and retention strategies that can be used across RCTs. The use of studies within a trial (SWATs) to generate evidence on the implementation of RCTs is becoming more commonplace and could be used to assess some of the strategies presented in this paper [85,86,87].

Conclusions

Recruitment to and retention in trials comparing surgical interventions of different intensity is challenging but achievable. This paper provides a range of evidence-based and experience-based approaches, which collectively resulted in meeting our study objectives and from which lessons may be transferable.

Availability of data and materials

Requests for further data not available in this publication can be directed to the Sheffield Clinical Trials Research Unit (email: ctru@sheffield.ac.uk; tel: 0114 222 0866).

Abbreviations

ACAT: Activity Capture and Attribution Tool

CCG: Clinical Commissioning Group

DMEC: Data Monitoring and Ethics Committee

EQ-5D-5L: EuroQol 5 dimensions 5 levels questionnaire

GP: General practitioner

HAL: Haemorrhoidal artery ligation

HTA: Health Technology Assessment

HRQoL: Health-related quality of life

HubBLe: Haemorrhoidal artery ligation versus rubber band ligation for haemorrhoids trial

LRN: Local Research Network

NHS: National Health Service

NIHR: National Institute for Health Research

R&D: Research and development

RBL: Rubber band ligation

RCT: Randomised controlled trial

SWAT: Study within a trial

WTE: Whole-time equivalent

References

  1. McCulloch P, Taylor I, Sasako M, Lovett B, Griffin D. Randomised trials in surgery: problems and possible solutions. BMJ. 2002;324:1448–51.

  2. Cook JA. The challenges faced in the design, conduct and analysis of surgical randomised controlled trials. Trials. 2009;10:9. https://doi.org/10.1186/1745-6215-10-9.

  3. Reeves B. Health-technology assessment in surgery. Lancet. 1999;353(Suppl 1):3–5.

  4. Chapman SJ, Shelton B, Mahmood H, Fitzgerald JE, Harrison EM, Bhangu A. Discontinuation and non-publication of surgical randomised controlled trials: observational study. BMJ. 2014;349:g6870.

  5. Treweek S, Lockhart P, Pitkethly M, Cook JA, Kjeldstrøm M, Johansen M, et al. Methods to improve recruitment to randomised controlled trials: Cochrane systematic review and meta-analysis. BMJ Open. 2013;3:e002360.

  6. Gul RB, Ali PA. Clinical trials: the challenge of recruitment and retention of participants. J Clin Nurs. 2010;19:227–33.

  7. Marcellus L. Are we missing anything? Pursuing research on attrition. Can J Nurs Res. 2004;36(3):82–98.

  8. Townsend D, Mills N, Savović J, Donovan JL. A systematic review of training programmes for recruiters to randomised controlled trials. Trials. 2015;16:432.

  9. Cook JA, Ramsay CR, Norrie J. Recruitment to publicly funded trials — are surgical trials really different? Contemp Clin Trials. 2008;29:631–4.

  10. Anvari M, Allen C, Marshall J, Armstrong D, Goeree R, Ungar W, et al. A randomized controlled trial of laparoscopic Nissen fundoplication versus proton pump inhibitors for treatment of patients with chronic gastroesophageal reflux disease: one-year follow-up. Surg Innov. 2006;13:238–49.

  11. Abraham NS, Young JM, Solomon MJ. A systematic review of reasons for nonentry of eligible patients into surgical randomized controlled trials. Surgery. 2006;139:469–83.

  12. Howes N, Chagla L, Thorpe M, McCulloch P. Surgical practice is evidence based. Br J Surg. 1997;84:1220–3.

  13. Donovan J, Mills N, Smith M, Brindle L, Jacoby A, Peters T, et al. Quality improvement report: improving design and conduct of randomised trials by embedding them in qualitative research: ProtecT (Prostate testing for cancer and treatment) study. BMJ. 2002;325:766–9.

  14. Donovan JL, De Salis I, Toerien M, Paramasivan S, Hamdy FC, Blazeby JM. The intellectual challenges and emotional consequences of equipoise contributed to the fragility of recruitment in six randomized controlled trials. J Clin Epidemiol. 2014;67:912–20. https://doi.org/10.1016/j.jclinepi.2014.03.010.

  15. Wade J, Donovan JL, Athene Lane J, Neal DE, Hamdy FC. It’s not just what you say, it’s also how you say it: opening the “black box” of informed consent appointments in randomised controlled trials. Soc Sci Med. 2009;68:2018–28. https://doi.org/10.1016/j.socscimed.2009.02.023.

  16. Fisher L, Hessler D, Naranjo D, Polonsky W. AASAP: a program to increase recruitment and retention in clinical trials. Patient Educ Couns. 2012;86:372–7. https://doi.org/10.1016/j.pec.2011.07.002.

  17. Brueton VC, Tierney J, Stenning S, Harding S, Meredith S, Nazareth I, et al. Strategies to improve retention in randomised trials. Cochrane Database Syst Rev. 2013;12:MR000032. https://doi.org/10.1002/14651858.MR000032.pub2.

  18. Paramasivan S, Huddart R, Hall E, Lewis R, Birtle A, Donovan JL. Key issues in recruitment to randomised controlled trials with very different interventions: a qualitative investigation of recruitment to the SPARE trial (CRUK/07/011). Trials. 2011;12:78.

  19. Fletcher B, Gheorghe A, Moore D, Wilson S, Damery S. Improving the recruitment activity of clinicians in randomised controlled trials: a systematic review. BMJ Open. 2012;2:e000496. https://doi.org/10.1136/bmjopen-2011-000496.

  20. Page SJ, Persch AC. Recruitment, retention, and blinding in clinical trials. Am J Occup Ther. 2013;67:154–61.

  21. Diamant A, Milner J, Cleghorn M, Sockalingam S, Okrainec A, Jackson TD, et al. Analysis of patient attrition in a publicly funded bariatric surgery program. J Am Coll Surg. 2014;219:1047–55. https://doi.org/10.1016/j.jamcollsurg.2014.08.003.

  22. Diamant A, Cleghorn MC, Milner J, Sockalingam S, Okrainec A, Jackson TD, et al. Patient and operational factors affecting wait times in a bariatric surgery program in Toronto: a retrospective cohort study. CMAJ Open. 2015;3:E331–7.

  23. Campbell D. 193,000 NHS patients a month waiting beyond target time for surgery. The Guardian. 2017. https://www.theguardian.com/society/2017/jan/13/193000-nhs-patients-a-month-waiting-beyond-target-for-surgery. Accessed 3 Sept 2019.

  24. Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002;359:696–700. https://doi.org/10.1016/S0140-6736(02)07816-9.

  25. Cook J, MacLennan G, Murray D, Price A, Fitzpatrick R, Carr A, et al. Impact of timing of follow-up upon outcome in the TOPKAT trial. Trials. 2015;16:O33. https://doi.org/10.1186/1745-6215-16-S2-O33.

  26. Tiernan J, Hind D, Watson A, Wailoo AJ, Bradburn M, Shephard N, et al. The HubBLe trial: haemorrhoidal artery ligation (HAL) versus rubber band ligation (RBL) for haemorrhoids. BMC Gastroenterol. 2012;12:153.

  27. Brown S, Tiernan J, Biggs K, Hind D, Shephard N, Bradburn M, et al. The HubBLe trial: haemorrhoidal artery ligation (HAL) versus rubber band ligation (RBL) for symptomatic second- and third-degree haemorrhoids: a multicentre randomised controlled trial and health-economic evaluation. Health Technol Assess. 2016;20:1–150. https://doi.org/10.3310/hta20880.

  28. Brown SR, Tiernan JP, Watson AJM, Biggs K, Shephard N, Wailoo AJ, et al. Haemorrhoidal artery ligation versus rubber band ligation for the management of symptomatic second-degree and third-degree haemorrhoids (HubBLe): a multicentre, open-label, randomised controlled trial. Lancet. 2016;6736:1–9.

  29. National Institute for Health and Clinical Excellence (NICE). Haemorrhoids - CKS Clinical Knowledge Summaries. 2016. http://cks.nice.org.uk/haemorrhoids#!topicsummary. Accessed 12 Feb 2016.

  30. National Institute for Health and Clinical Excellence (NICE). Haemorrhoidal artery ligation | Guidance and guidelines. 2010. https://www.nice.org.uk/guidance/ipg342. Accessed 12 Feb 2016.

  31. Davis BR, Lee-Kong SA, Migaly J, Feingold DL, Steele SR. The American Society of Colon and Rectal Surgeons clinical practice guidelines for the management of hemorrhoids. 2018. https://doi.org/10.1097/DCR.0000000000001030.

  32. Wald A, Bharucha AE, Cosman BC, Whitehead WE. ACG clinical guideline: management of benign anorectal disorders. Am J Gastroenterol. 2014. https://doi.org/10.1038/ajg.2014.190.

  33. Gorringe JAL. Initial preparations for clinical trials. In: Harris EL, Fitzgerald JD, editors. The Principles and Practice of Clinical Trials. Edinburgh: Churchill Livingstone; 1970. p. 41–6.

  34. Collins JF, Williford WO, Weiss DG, Bingham SF, Klett CJ. Planning patient recruitment: fantasy and reality. Stat Med. 1984;3:435–43.

  35. Lasagna L. Problems in publication of clinical trial methodology. Clin Pharmacol Ther. 1979;25(5 Pt 2):751–3.

  36. Lasagna L. The pharmaceutical revolution forty years later. Rev Farmacol Clin y Exp. 1984;1:157–61.

  37. White D, Hind D. Projection of participant recruitment to primary care research: a qualitative study. Trials. 2015;16:473.

  38. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47(2):263–91. https://doi.org/10.2307/1914185.

  39. Kahneman D, Tversky A. Intuitive prediction: biases and corrective procedures. TIMS Stud Manag Sci. 1977;12:313–27.

  40. Flyvbjerg B. Curbing optimism bias and strategic misrepresentation in planning: reference class forecasting in practice. Eur Plan Stud. 2008;16:3–21.

  41. Simmons T. Attributing the costs of health and social care Research & Development (AcoRD). London: Department of Health; 2012.

  42. Snooks H, Hutchings H, Seagrove A, Stewart-Brown S, Williams J, Russell I. Bureaucracy stifles medical research in Britain: a tale of three trials. BMC Med Res Methodol. 2012;12:122.

  43. Al-Shahi Salman R, Brock TM, Dennis MS, Sandercock PAG, White PM, Warlow C. Research governance impediments to clinical trials: a retrospective survey. J R Soc Med. 2007;100:101–4.

  44. Al-Shahi Salman R, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J, et al. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383:176–85.

  45. Whitehead P, Drummond A, Fellows K. Research governance and bureaucracy for multisite studies: implications for occupational therapy research. Br J Occup Ther. 2011;74:355–8.

  46. Kearney A, McKay A, Hickey H, Balabanova S, Marson AG, Gamble C, et al. Opening research sites in multicentre clinical trials within the UK: a detailed analysis of delays. BMJ Open. 2014;4:e005874.

  47. scharrvids. Training video for recruitment to a surgical trial with two interventions of different intensity. 2015. https://www.youtube.com/watch?v=OlJJqCvjBM8&feature=youtu.be. Accessed 3 Sept 2019.

  48. Bernhard J, Butow P, Aldridge J, Juraskova I, Ribi K, Brown R. Communication about standard treatment options and clinical trials: can we teach doctors new skills to improve patient outcomes? Psychooncology. 2012;21:1265–74. https://doi.org/10.1002/pon.2044.

  49. Jenkins V, Fallowfield L, Solis-Trapala I, Langridge C, Farewell V. Discussing randomised clinical trials of cancer therapy: evaluation of a Cancer Research UK training programme. BMJ. 2005;330:400. https://doi.org/10.1136/bmj.38366.562685.8F.

  50. Kimmick GG, Peterson BL, Kornblith AB, Mandelblatt J, Johnson JL, Wheeler J, et al. Improving accrual of older persons to cancer treatment trials: a randomized trial comparing an educational intervention with standard information: CALGB 360001. J Clin Oncol. 2005;23:2201–7.

  51. Wuensch A, Goelz T, Bertz H, Wirsching M, Fritzsche K. Disclosing information about randomised controlled trials in oncology: training concept and evaluation of an individualised communication skills training for physicians COM-ON-rct. Eur J Cancer Care (Engl). 2011;20:570–6.

  52. Fallowfield LJ, Solis-Trapala I, Jenkins VA. Evaluation of an educational program to improve communication with patients about early-phase trial participation. Oncologist. 2012;17:377–83. https://doi.org/10.1634/theoncologist.2011-0271.

  53. Brown RF, Butow PN, Boyle F, Tattersall MHN. Seeking informed consent to cancer clinical trials; evaluating the efficacy of doctor communication skills training. Psychooncology. 2007;16:507–16. https://doi.org/10.1002/pon.1095.

  54. Crutzen R, Viechtbauer W, Kotz D, Spigt M. No differential attrition was found in randomized controlled trials published in general medical journals: a meta-analysis. J Clin Epidemiol. 2013;66:948–54.

  55. Bell ML, Kenward MG, Fairclough DL, Horton NJ. Differential dropout and bias in randomised controlled trials: when it matters and when it may not. BMJ. 2013;346:e8668.

  56. Campbell D. NHS cash crisis in Kent halts non-urgent surgery until April. The Guardian. 2017. https://www.theguardian.com/society/2017/feb/02/nhs-cash-crisis-in-kent-halts-non-urgent-surgery-until-april. Accessed 3 Sept 2019.

  57. Royal College of Surgeons. Kent CCG announces unprecedented suspension of non-urgent surgery. Royal College of Surgeons press release. 2017. https://www.rcseng.ac.uk/news-and-events/media-centre/press-releases/west-kent-ccg-delays/. Accessed 3 Sept 2019.

  58. Herrett E, Shah AD, Boggon R, Denaxas S, Smeeth L, van Staa T, et al. Completeness and diagnostic validity of recording acute myocardial infarction events in primary care, hospital care, disease registry, and national mortality records: cohort study. BMJ. 2013;346:f2350. https://doi.org/10.1136/bmj.f2350.

  59. Shanmugam V, Thaha MA, Rabindranath KS, Campbell KL, Steele RJC, Loudon MA. Systematic review of randomized trials comparing rubber band ligation with excisional haemorrhoidectomy. Br J Surg. 2005;92:1481–7.

  60. Flyvbjerg B, Skamris Holm M, Buhl S. Underestimating costs in public works projects. APA J. 2002;68:279–95.

  61. Redko C, Rapp RC, Carlson RG. Waiting time as a barrier to treatment entry: perceptions of substance users. J Drug Issues. 2006;36(4):831–52.

  62. Gohel MS, Chetter I. Are clinical trials units essential for a successful trial? BMJ. 2015;350:h2823.

  63. de Salis I, Tomlin Z, Toerien M, Donovan J. Using qualitative research methods to improve recruitment to randomized controlled trials: the Quartet study. J Health Serv Res Policy. 2008;13 Suppl 3:92–6.

  64. Toerien M, Brookes ST, Metcalfe C, de Salis I, Tomlin Z, Peters TJ, et al. A review of reporting of participant recruitment and retention in RCTs in six major journals. Trials. 2009;10:52. https://doi.org/10.1186/1745-6215-10-52.

  65. Huang GD, Bull J, McKee KJ, Mahon E, Harper B, Roberts JN. Clinical trials recruitment planning: a proposed framework from the Clinical Trials Transformation Initiative. Contemp Clin Trials. 2018;66:74–9.

  66. Gehring M, Taylor RS, Mellody M, Casteels B, Piazzi A, Gensini G, et al. Factors influencing clinical trial site selection in Europe: the Survey of Attitudes towards trial sites in Europe (the SAT-EU Study). BMJ Open. 2013;3:e002957.

  67. Levett KM, Roberts CL, Simpson JM, Morris JM. Site-specific predictors of successful recruitment to a perinatal clinical trial. Clin Trials. 2014;11:584–9.

  68. Zon R, Meropol NJ, Catalano RB, Schilsky RL. American Society of Clinical Oncology statement on minimum standards and exemplary attributes of clinical trial sites. J Clin Oncol. 2008;26:2562–7.

  69. Gheorghe A, Roberts T, Pinkney TD, Morton DG, Calvert M. Rational centre selection for RCTs with a parallel economic evaluation – the next step towards increased generalisability? Health Econ. 2015;24:498–504.

  70. Beard D, Price A, Cook J, Fitzpatrick R, Carr A, Campbell M, et al. Total or partial knee arthroplasty trial - TOPKAT: study protocol for a randomised controlled trial. Trials. 2013;14:292.

  71. McDonald AM, Knight RC, Campbell MK, Entwistle VA, Grant AM, Cook JA, Elbourne DR, Francis D, Garcia J, Roberts I, Snowdon C. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7(1):9.

  72. Amirfeyz R, Cook J, Gargan M, Bannister G. The role of physiotherapy in the treatment of whiplash associated disorders: a prospective study. Arch Orthop Trauma Surg. 2009;129:973–7.

  73. Cook JA, Campbell MK, Gillies K, Skea Z. Surgeons’ and methodologists’ perceptions of utilising an expertise-based randomised controlled trial design: a qualitative study. Trials. 2018;19:478.

  74. Roche K. Factors affecting workload of cancer clinical trials: results of a multicenter study of the National Cancer Institute of Canada Clinical Trials Group. J Clin Oncol. 2002;20:545–56.

  75. Berridge J, Coffey M. Workload measurement. Appl Clin Trials. 2008;6:98–101.

  76. Smuck B, Bettello P, Berghout K, Hanna T, Kowaleski B, Phippard L, et al. Ontario protocol assessment level: clinical trial complexity rating tool for workload planning in oncology clinical trials. J Oncol Pract. 2011;7:80–4.

  77. Fowler D, Thomas C. Protocol acuity scoring as a rational approach to clinical research management. Res Pract. 2003;4:64–71.

  78. James P, Bebee P, Beekman L, Browning D, Innes M, Kain J, et al. Creating an effort tracking tool to improve therapeutic cancer clinical trials workload management and budgeting. J Natl Compr Cancer Netw. 2011;9:1228–33.

  79. National Cancer Institute. NCI Trial Complexity Elements and Scoring Model (Version 1.2; 16 Apr 2009). http://ctep.cancer.gov/protocolDevelopment/docs/trial_complexity_elements_scoring.doc. Accessed 5 Sept 2019.

  80. Gwede CK, Johnson DJ, Roberts C, Cantor AB. Burnout in clinical research coordinators in the United States. Oncol Nurs Forum. 2005;32:1123–30.

  81. Friesema D. Workload assessment in clinical trials. CALGB Q Newsl Cancer Leuk Gr B. 2010;19:3–8.

  82. Hind D, Reeves BC, Bathers S, Bray C, Corkhill A, Hayward C, et al. Comparative costs and activity from a sample of UK clinical trials units. Trials. 2017;18:203.

  83. Bhangu A, Kolias AG, Pinkney T, Hall NJ, Fitzgerald JE. Surgical research collaboratives in the UK. Lancet. 2013;382:1091–2.

  84. Treweek S, Altman DG, Bower P, Campbell M, Chalmers I, Cotton S, et al. Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform. Trials. 2015;16:261. https://doi.org/10.1186/s13063-015-0776-0.

  85. Rick J, Graffy J, Knapp P, Small N, Collier DJ, Eldridge S, et al. Systematic techniques for assisting recruitment to trials (START): study protocol for embedded, randomized controlled trials. Trials. 2014;15:407. https://doi.org/10.1186/1745-6215-15-407.

  86. Bower P, Brueton V, Gamble C, Treweek S, Smith CT, Young B, et al. Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities. Trials. 2014;15:399. https://doi.org/10.1186/1745-6215-15-399.

  87. Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a study within a trial (SWAT)? Trials. 2018;19:139. https://doi.org/10.1186/s13063-018-2535-5.


Acknowledgements

We gratefully acknowledge the participants who took part in the HubBLe trial and all of the NHS staff involved in recruitment, treatment, and data collection for the trial.

Funding

The HubBLe trial was funded by the Health Technology Assessment programme (HTA 10/57/46) and is published in full in the HTA journal series, Volume 20, No. 88. Further information available at: http://www.nets.nihr.ac.uk/projects/hta/105746. This report presents independent research commissioned by the NIHR. The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, Medical Research Council (MRC), Central Commissioning Facility (CCF), NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), the HTA programme, or the Department of Health.

Author information


Contributions

KB and DH wrote the first draft of the paper and all authors reviewed and provided input on further drafts. SB, DH, and MB conceived of or designed the trial. KB, LS, and SB were involved in the acquisition of data for the work. MB conducted the statistical analysis for the trial and assisted KB with further analysis specific to this paper. KB and LS produced the figures and tables. All authors helped in the interpretation of the data. All authors were involved in the final approval of the version to be published. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Katie Biggs.

Ethics declarations

Ethics approval and consent to participate

The study received ethical approval from NRES Committee Yorkshire & The Humber - South Yorkshire (12/YH/0236). Written informed consent was obtained from all trial participants prior to participation.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Biggs, K., Hind, D., Bradburn, M. et al. Design, planning and implementation lessons learnt from a surgical multi-centre randomised controlled trial. Trials 20, 620 (2019). https://doi.org/10.1186/s13063-019-3649-0
