Open Access

Benefits and challenges of using the cohort multiple randomised controlled trial design for testing an intervention for depression

Trials. 2017;18:308

https://doi.org/10.1186/s13063-017-2059-4

Received: 29 September 2016

Accepted: 20 June 2017

Published: 6 July 2017

Abstract

Background

Trials which test the effectiveness of interventions compared with the status quo frequently encounter challenges. The cohort multiple randomised controlled trial (cmRCT) design is an innovative approach to the design and conduct of pragmatic trials which seeks to address some of these challenges.

Main text

In this article, we report our experiences with the first completed randomised controlled trial (RCT) using the cmRCT design. This trial—the Depression in South Yorkshire (DEPSY) trial—involved comparison of treatment as usual (TAU) with TAU plus the offer of an intervention for people with self-reported long-term moderate to severe depression. In the trial, we used an existing large population-based cohort: the Yorkshire Health Study. We discuss our experiences with recruitment, attrition, crossover, data analysis, generalisability of results, and cost. The main challenges in using the cmRCT design were the high crossover to the control group and the lower questionnaire response rate among patients who refused the offer of treatment. However, the design helped to facilitate efficient and complete recruitment of the trial population, as well as analysable data that were generalisable to the population of interest. Attrition rates were also lower than those reported in other depression trials.

Conclusion

This first completed full trial using the cmRCT design testing an intervention for self-reported depression was associated with a number of important benefits. Further research is required to compare the acceptability and cost-effectiveness of the standard pragmatic RCT design with the cmRCT design.

Trial registration

ISRCTN registry: ISRCTN02484593. Registered on 7 Jan 2013.

Keywords

Pragmatic trials; Cohort multiple RCT; Recruitment; Depression; Trials within cohorts

Background

Since the cohort multiple randomised controlled trial (cmRCT) design was first published in 2010 [1], a number of studies have started using this innovative design, also known as ‘Trials within Cohorts’ (www.twics.global). The cmRCT design uses a large observational cohort of people with the condition of interest, with regular measurement of outcomes for the whole cohort. This cohort provides the capacity for multiple randomised controlled trials (RCTs) over time. For each RCT, routine cohort data help to identify those eligible for the trial. Eligible participants are then randomly selected to be offered the trial intervention, and their outcomes are compared with those of participants not randomly selected (i.e., those receiving usual care). Like some cluster trial designs, the cmRCT design uses a ‘randomisation without prior consent’ approach to informed consent [2]; thus, information about the intervention is provided only to the intervention group, and this information is given after (not before) randomisation.
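The random selection step described above can be sketched in a few lines. This is an illustrative sketch only, assuming eligible cohort members are identified by simple IDs; the function and variable names are hypothetical and not taken from the DEPSY trial's actual procedures:

```python
import random

def select_offer_group(eligible_ids, offer_fraction=1 / 3, seed=42):
    """Randomly select cohort members to be OFFERED the intervention.

    Only the selected ('Offer') group is informed about and offered the
    intervention; everyone else continues on usual care and is followed
    through the routine cohort questionnaires.
    """
    rng = random.Random(seed)
    eligible_ids = list(eligible_ids)
    n_offer = round(len(eligible_ids) * offer_fraction)
    offer = set(rng.sample(eligible_ids, n_offer))
    no_offer = [pid for pid in eligible_ids if pid not in offer]
    return sorted(offer), no_offer

# 566 eligible participants split roughly 1:2 offer-to-control,
# mirroring the unequal allocation used in DEPSY
offer, no_offer = select_offer_group(range(566))
print(len(offer), len(no_offer))
```

Because the whole cohort already completes outcome questionnaires, the ‘No offer’ list requires no further consent step at this point; only the ‘Offer’ group is approached about the intervention.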

Researchers in a variety of settings (including hospital, primary care and community) in the United Kingdom, Canada and The Netherlands use the design [3], including a number of studies testing interventions for mental health in the United Kingdom [4, 5] and Canada [6]. In this article, we report on our experiences of using the cmRCT design in relation to recruitment, attrition, crossover, data analysis, cost and generalisability of results for the first completed full-scale RCT of the cmRCT design: the Depression in South Yorkshire (DEPSY) trial. In this trial, we used an existing cohort to compare treatment as usual (TAU) with TAU plus the offer of a course of treatment provided by a homeopath for patients with self-reported long-term moderate to severe depression.

Main text

Recruitment

Trials often struggle to reach recruitment goals on time, and many trials fail entirely to recruit a sufficient number of participants [7], especially trials in depression [8]. This is also true for pragmatic trials of interventions such as counselling and cognitive behavioural therapy for depression [9–11]. Consequences of insufficient recruitment include at best reduced power and/or extended recruitment periods, contributing to increased costs, and at worst inability to draw conclusions because of underpowered trials and wasted resources.

Recruitment to the DEPSY trial was from an established population-based cohort (the Yorkshire Health Study; www.yorkshirehealthstudy.org [12]) set up to facilitate multiple pragmatic trials using the cmRCT design. All people in the cohort had previously provided self-reported information on their health (including anxiety or depression), health-related behaviours and health care resource use, and had given consent to be contacted again. The researchers invited those who had reported long-term depression or feeling moderately or severely anxious or depressed when completing the Yorkshire Health Study questionnaire (n = 5740) to complete a detailed mood and health questionnaire. Completed questionnaires were returned by 2214 patients (38.6%). This provided the trial researchers with the information needed to apply the inclusion/exclusion criteria, as well as baseline information for the trial. All cohort patients had given permission for their data to be used to look at the benefit of health treatments and to be contacted again by the researchers. Additionally, written informed consent was obtained from patients taking up the offer of treatment.

It took a total of 5 months for 566 eligible participants (17% more than the original sample size calculation) to be recruited to the DEPSY trial [13]. The method used avoided many recruitment barriers encountered in other trials, where clinicians may struggle to find time to recruit patients, feel reluctant to let the research disrupt consultations, or consider the randomisation process to be inappropriate and the research process to be too much of a burden for patients [8–10, 14].

The majority of people take part in trials in the hope that they will obtain direct and/or indirect benefits for themselves or others as a result of their participation (e.g., improved health) [15]. It is common for patients to refuse randomisation because of the risk of not being offered the treatment of their preference [8, 16]. The use of the cmRCT design avoided this barrier because only those randomly selected to receive an offer of treatment were informed about the intervention being tested [1]. All 185 patients in the ‘Offer’ group were sent a letter offering them the treatment, and 150 of them were reached by telephone. About half (n = 95, 51.4%) accepted the offer of treatment. All patients in both the ‘Offer’ and ‘No offer’ groups were sent baseline and follow-up questionnaires, unless they had asked not to be sent any further questionnaires (n = 62, 11.0%).

Attrition

Patients who agree to participate in trials may be disappointed if they are not allocated to receive their preferred intervention [17]. As a consequence, they may become uncooperative, report poorer outcomes than experienced and even leave the trial, which may in turn significantly affect analyses and the interpretation of results [18]. Such attrition may lead to biased results and, at worst, inability to draw conclusions [19].

The fact that patients in the TAU control arm were unaware of the intervention being trialled may have contributed to lowering attrition rates (attrition here refers to non-completion of follow-up questionnaires at 6 and 12 months). Following discussion with other mental health researchers, it was estimated that a realistic response rate would be 60% [13]. Results showed that more than 80% returned completed questionnaires at 6 months, and 67% returned them at 12 months. Among those in the Offer group who accepted the offer of treatment, over 90% continued their treatment over several consultations. Other researchers have found attrition rates of, for example, 16% for psychotherapy and 32% for parenting education [20]. Researchers in trials of antidepressant treatment found attrition rates of 23% at 6 months and 47% at 12 months [21].

The rate at which people completed follow-up questionnaires was lower (68%) in the group randomly selected to receive the offer of the intervention (Offer group) than in the control group (87%), where patients were not offered the intervention (No offer group). In trials using the cmRCT design, some patients in the Offer group may be uninterested in responding to questionnaires if they either have no interest in or dislike the intervention [18]. This is not an issue for patients in the No offer group, because they are unaware of the intervention; it could therefore explain the between-group difference in questionnaire response rates.

Within the Offer group, 88% of those who accepted the offer and received treatment returned the completed questionnaire (equal to the response rate in the control group), compared with only 54% of those who did not take up the offer (non-accepters). To understand what might have contributed to these differences, baseline characteristics of the four groups (Offer and No offer group responders and non-responders at 6 and 12 months) were compared using a multiple linear regression model for each baseline covariate, as recommended by Walters [22]. There was no evidence of significant differences between the four groups in regard to their depression or anxiety scores or in any other baseline covariates considered to be likely to influence outcomes. Between-group comparisons could therefore be carried out with limited risk of significant influence of known potential confounding factors.

Although there were no known characteristics that could explain a lower return rate of questionnaires in Offer group non-accepters, the question remains why this particular group was less likely to return questionnaires. One possible explanation is that non-accepters, upon receiving follow-up questionnaires, thought that their response was no longer needed because they had not taken up the offer of treatment. Moreover, patients who do not believe in the intervention or who have unsuccessfully tried it in the past may not be interested in participating in the trial, as two patients in this trial reported. Further research is required to understand non-response in trials using the cmRCT design.

Crossover and data analysis

Patients in pragmatic trials not randomly selected to receive their preferred intervention may seek the intervention outside the trial, thereby contributing to bias resulting from crossover [23]. Van der Velden et al. [18] suggested there is little risk of crossover from the control group when using the cmRCT design, because patients in the control group are not given information about the intervention being trialled. We collected data on patients’ use of interventions (including the treatment used in this trial) and did not find any cases of crossover from the control to the intervention arm. A bigger challenge in the cmRCT design is ‘crossover’ from the intervention to the control arm by patients who decline the offered intervention and remain on TAU. In the DEPSY trial, 40% took up the offer of treatment and received treatment; thus, 60% crossed over to the control group. This has implications for the analysis of results.

Using an intention-to-treat (ITT) analysis when a significant proportion of participants cross over results in ‘watered-down’ estimates of potentially effective interventions [24, 25]. Such crossover is to be expected in trials using the cmRCT design [18]. Per-protocol analyses are often used by researchers as an additional (or in some cases as the only) analysis, but this approach carries a risk of bias.

For the DEPSY trial, we used an ITT analysis to estimate the effect of the offer of treatment, as well as an instrumental variables analysis [26] to assess the effectiveness of treatment received. The latter is a type of complier average causal effect analysis in which randomisation is the instrument and baseline values are taken into account [25]. This analysis compares Offer group patients who received the intervention with patients in the No offer group who would have received the intervention had they been offered it. This analytic method has been recommended as the secondary analysis in RCTs because it carries a lower risk of bias than per-protocol or on-treatment analyses [25].
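In its simplest form (a single binary instrument and no covariate adjustment, so the baseline adjustment used in the trial is omitted here), the instrumental variables estimate reduces to the Wald estimator: the ITT effect of the offer divided by the difference in treatment uptake between arms. A minimal illustration with synthetic data, not DEPSY results:

```python
import statistics

def wald_cace(randomised, treated, outcome):
    """Wald/IV estimate of the complier average causal effect (CACE).

    randomised: 1 if allocated to the Offer group, 0 otherwise
    treated:    1 if the intervention was actually received
    outcome:    observed outcome score
    """
    y1 = [y for z, y in zip(randomised, outcome) if z == 1]
    y0 = [y for z, y in zip(randomised, outcome) if z == 0]
    d1 = [d for z, d in zip(randomised, treated) if z == 1]
    d0 = [d for z, d in zip(randomised, treated) if z == 0]
    itt = statistics.mean(y1) - statistics.mean(y0)     # effect of the offer
    uptake = statistics.mean(d1) - statistics.mean(d0)  # compliance difference
    return itt / uptake                                 # effect of treatment received

# Synthetic data: 40% uptake in the Offer group, no control-arm crossover.
z = [1] * 10 + [0] * 10
d = [1] * 4 + [0] * 16
y = [5] * 4 + [2] * 16  # those treated score 3 points higher
print(wald_cace(z, d, y))  # 'watered-down' ITT of 1.2 rescaled by 0.4 uptake, i.e. ~3.0
```

The sketch makes the dilution mechanism concrete: with 40% uptake, the ITT estimate is roughly 40% of the effect among those who actually received treatment, and dividing by the uptake difference recovers the latter.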

Generalisability of results

The aim of pragmatic trials is to produce results which are generalisable to the relevant clinical population. Therefore, whenever possible, the intervention should be provided in a way that is comparable to regular practice to a trial population that is comparable to the population of interest, and the information provided (and consents sought) should also be as similar as possible. It is not common in real-world practice to give patients information about interventions that they cannot receive. Commonly, available treatment options are discussed, and treatment plans are agreed with patients. The standard procedure for RCTs is that treatment is decided by chance, and information is not tailored to the individual patient but is generic, regardless of whether the patient is offered the treatment. In trials with the cmRCT design, only those randomly selected to be offered the intervention are provided with information about the intervention. Hence, patients in the No offer group are not informed about interventions they cannot receive. This is more comparable to real-world practice and therefore contributes to increasing the generalisability of results.

The seven practitioners delivering the intervention had been instructed to practise in their normal manner. As in everyday practice, no treatment protocols were provided for them, other than guidelines on how to deal with risk issues and adverse events. The frequency and length of consultations, as well as the medications and advice given, were therefore comparable to routine practice.

Analysis of the self-reported data showed that the DEPSY trial participants had several similarities to the general population of patients with depression. Depression was significantly correlated with commonly seen comorbidities (anxiety and obesity); compared with the general population, participants were more likely to be unemployed [27] and to live in deprived areas [28], and a larger proportion were women [29] (further details are given in the trial article). Patients more commonly had chronic depression, and depression was self-reported using the Patient Health Questionnaire (PHQ-9) rather than clinically diagnosed. The PHQ-9 has been found to have a high degree of validity, reliability, sensitivity and specificity, and it is sensitive to change and useful for patients with a variety of comorbidities [30]. Though not intended to replace diagnostic interviews, it has been found to be more conservative than clinician-rated outcomes [31]. The results of the trial are therefore most likely to be generalisable to the population of patients with chronic, self-reported moderate to severe depression.

Cost

Recruitment to the trial via the Yorkshire Health Study cohort cost £15,000 (access to the cohort, researcher time, and printing and mailing out questionnaires and letters), which equates to £26.50 per participant recruited. Identifying and recruiting participants from a large cohort then enabled the DEPSY trial to use unequal randomisation (1:2 intervention to control). This meant that (compared with 1:1 randomisation) 25% fewer patients were offered the intervention, thus reducing the trial intervention costs by £10,000.
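As a quick check of the arithmetic above (a trivial sketch; the variable names are ours, not from the trial):

```python
# Recruitment-cost arithmetic from the figures reported in the text.
total_cost = 15_000   # £: cohort access, researcher time, printing, mailing
n_recruited = 566     # eligible participants recruited to the DEPSY trial
cost_per_participant = round(total_cost / n_recruited, 2)
print(cost_per_participant)  # → 26.5, i.e. the £26.50 per participant quoted
```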

Conclusion

The main benefits of using the cmRCT design to test an intervention for self-reported depression were full, fast and efficient recruitment; lower attrition rates than other depression trials; and a trial population broadly similar to the general population of patients self-reporting chronic moderate to severe depression. The main challenges in using the cmRCT design for this pragmatic RCT were the lower follow-up (questionnaire response) rate in those who refused the offer of the intervention and the large ‘crossover’ (60%) from the intervention to the control group. The data nevertheless allowed us to carry out an ITT analysis for the offer of treatment and also to assess the effectiveness of treatment received using an instrumental variables analysis. Further research is required to compare the acceptability and cost-effectiveness of a standard pragmatic RCT design with the cmRCT design for research in all fields of healthcare, including mental health.

Abbreviations

cmRCT: 

Cohort multiple randomised controlled trial

DEPSY: 

Depression in South Yorkshire

ITT: 

Intention to treat

PHQ-9: 

Patient Health Questionnaire (9-item)

RCT: 

Randomised controlled trial

TAU: 

Treatment as usual

Declarations

Acknowledgements

Many thanks to the patients, practitioners and researchers who contributed to this research project, as well as to the trial steering committee.

Funding

The DEPSY trial was funded as part of PV’s doctoral research project. This project received funding from multiple sources, including private donors in Germany, Norway, Sweden and the United Kingdom as well as from a National Institute for Health Research senior investigator award.

Availability of data and materials

Patient-submitted questionnaire data are confidential and are stored at the University of Sheffield School of Health and Related Research.

Authors’ contributions

PV and CR made substantial contributions to the conception and design of the trial and wrote the first draft of the manuscript. PV, CR and NJ made substantial contributions to the analysis and interpretation of data, critically revised and edited the manuscript, and read and approved the final manuscript.

Ethics approval and consent to participate

Ethics approval was obtained from the National Research Ethics Service (NRES) (REC reference 12/YH/0379). All cohort patients had given consent for their data to be used to look at the benefit of health treatments and to be contacted again. All patients taking up the offer of treatment gave additional consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
The Department of Health Studies, University of Stavanger
(2)
School of Health and Related Research, The University of Sheffield

References

  1. Relton C, Torgerson D, O’Cathain A, Nicholl J. Rethinking pragmatic RCTs: introducing the ‘cohort multiple RCT’ design. BMJ. 2010;340:c1066.
  2. Flory JH, Mushlin AI, Goodman ZI. Proposals to conduct randomized controlled trials without informed consent: a narrative review. J Gen Intern Med. 2016;31:1511–8.
  3. Relton C, Thomas K, Nicholl J, Uher R. Review of an innovative approach to practical trials: the ‘cohort multiple RCT’ design [poster presentation]. Trials. 2015;16 Suppl 2:P114. doi:10.1186/1745-6215-16-S2-P114.
  4. Mitchell N, Hewitt C, Adamson J, Parrott S, Torgerson D, Ekers D, et al. A randomised evaluation of CollAborative care and active surveillance for Screen-Positive EldeRs with sub-threshold depression (CASPER): study protocol for a randomized controlled trial. Trials. 2011;12:225. doi:10.1186/1745-6215-12-225.
  5. Overend K, Lewis H, Bailey D, Bosanquet K, Chew-Graham C, Ekers D, et al. CASPER plus (CollAborative care in Screen-Positive EldeRs with major depressive disorder): study protocol for a randomised controlled trial. Trials. 2014;15:451. doi:10.1186/1745-6215-15-451.
  6. Uher R, Cumby J, MacKenzie LE, Morash-Conway J, Glover JM, Aylott A, et al. A familial risk enriched cohort as a platform for testing early interventions to prevent severe mental illness. BMC Psychiatry. 2014;14:344. doi:10.1186/s12888-014-0344-2.
  7. Sully BG, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials. 2013;14:166. doi:10.1186/1745-6215-14-166.
  8. Hughes-Morley A, Young B, Waheed W, Small N, Bower P. Factors affecting recruitment into depression trials: systematic review, meta-synthesis and conceptual framework. J Affect Disord. 2015;172:274–90.
  9. Fairhurst K, Dowrick C. Problems with recruitment in a randomized controlled trial of counselling in general practice: causes and implications. J Health Serv Res Policy. 1996;1:77–80.
  10. Hetherton J, Matheson A, Robson M. Recruitment by GPs during consultations in a primary care randomized controlled trial comparing computerized psychological therapy with clinical psychology and routine GP care: problems and possible solutions. Prim Health Care Res Dev. 2004;5:5–10.
  11. Woodford J, Farrand P, Bessant M, Williams C. Recruitment into a guided internet based CBT (iCBT) intervention for depression: lesson learnt from the failure of a prevalence recruitment strategy. Contemp Clin Trials. 2011;32:641–8.
  12. Green MA, Li J, Relton C, Strong M, Kearns B, Wu M, et al. Cohort profile: the Yorkshire Health Study. Int J Epidemiol. 2016;45:707–12.
  13. Viksveen P, Relton C. Depression treated by homeopaths: a study protocol for a pragmatic cohort multiple randomised controlled trial. Homeopathy. 2014;103:147–52.
  14. Minas H, Klimidis S, Kokanovic R. Mental health research in general practice. Australas Psychiatry. 2005;13:181–4.
  15. Heaven B, Murtagh M, Rapley T, May C, Graham R, Kaner E, et al. Patients or research subjects? A qualitative study of participation in a randomised controlled trial of a complex intervention. Patient Educ Couns. 2006;62:260–70.
  16. Torgerson DJ, Sibbald B. Understanding controlled trials: what is a patient preference trial? BMJ. 1998;316:360.
  17. Torgerson DJ, Torgerson CJ. Designing randomised trials in health, education and the social sciences: an introduction. Basingstoke, UK: Palgrave Macmillan; 2008.
  18. van der Velden JM, Verkooijen HM, Young-Afat DA, Burbach JPM, van Vulpen M, Relton C, et al. The cohort multiple randomized controlled trial design: a valid and efficient alternative to pragmatic trials? Int J Epidemiol. 2017;46:96–102. doi:10.1093/ije/dyw050.
  19. Dumville JC, Torgerson DJ, Hewitt CE. Reporting attrition in randomised controlled trials. BMJ. 2006;332:969–71.
  20. Spinelli MG, Endicott J, Goetz RR, Segre LS. Reanalysis of efficacy of interpersonal psychotherapy for antepartum depression versus parenting education program: initial severity of depression as a predictor of treatment outcome. J Clin Psychiatry. 2016;77:535–40. doi:10.4088/JCP.15m09787.
  21. Warden D, Rush AJ, Carmody TJ, Kashner TM, Biggs MM, Crismon ML, et al. Predictors of attrition during one year of depression treatment: a roadmap to personalized intervention. J Psychiatr Pract. 2009;15:113–24. doi:10.1097/01.pra.0000348364.88676.83.
  22. Walters SJ. Quality of life outcomes in clinical trials and health-care evaluation: a practical guide to analysis and interpretation. Chichester, UK: John Wiley & Sons; 2009.
  23. Cook TD, Campbell DT. Quasi-experimentation: design & analysis issues for field settings. Chicago: Rand McNally; 1979.
  24. Becque T, White IR. Regaining power lost by non-compliance via full probability modelling. Stat Med. 2008;27:5640–63.
  25. Hewitt CE, Torgerson DJ, Miles JNV. Is there another way to take account of noncompliance in randomized controlled trials? CMAJ. 2006;175:347–8.
  26. Greenland S. An introduction to instrumental variables for epidemiologists. Int J Epidemiol. 2000;29:722–9.
  27. Jefferis BJ, Nazareth I, Marston L, Moreno-Kustner B, Bellón JÁ, Svab I, et al. Associations between unemployment and major depressive disorder: evidence from an international, prospective study (the predict cohort). Soc Sci Med. 2011;73:1627–34.
  28. Stafford M, Marmot M. Neighbourhood deprivation and health: does it affect us all equally? Int J Epidemiol. 2003;32:357–66.
  29. Smith DJ, Nicholl BI, Cullen B, Martin D, Ul-Haq Z, Evans J, et al. Prevalence and characteristics of probable major depression and bipolar disorder within UK Biobank: cross-sectional study of 172,751 participants. PLoS One. 2014;8:e75362. doi:10.1371/journal.pone.0075362.
  30. Kroenke K, Spitzer RL, Williams JBW, Löwe B. The Patient Health Questionnaire Somatic, Anxiety and Depressive Symptom Scales: a systematic review. Gen Hosp Psychiatry. 2010;32:345–59.
  31. Cuijpers P, Li J, Hofmann SG, Andersson G. Self-reported versus clinician-rated symptoms of depression as outcome measures in psychotherapy research on depression: a meta-analysis. Clin Psychol Rev. 2010;30:768–78.

Copyright

© The Author(s). 2017
