- Open Access
Commentary on ‘Exclusion rates in randomized trials of treatments for physical conditions: a systematic review’
Trials volume 22, Article number: 76 (2021)
We read with interest the paper by He, Morales and Guthrie recently published in Trials. This systematic review of 50 randomised controlled trials for people with physical conditions reports that trials excluded the majority of potential patients, with a median exclusion rate of 77.1% of potential participants; only 5.2% of trials excluded less than 25%. The review adopted high-quality systematic review procedures, including two reviewers performing data screening and extraction. As the authors reflect, the findings raise significant issues for the generalisability of the studies’ results.
Systematic reviews aim to synthesise an existing body of literature using a predetermined, structured method, often focusing on identifying the efficacy of similar interventions and comparing outcomes. Systematic reviews are typically relied upon to identify the interventions most likely to be effective in clinical practice. He et al. focused on the exclusion rates of randomised controlled trials for physical conditions, rather than trial efficacy, which enables important reflections on the design and conduct of these studies. The samples that participate in randomised controlled trials are important to consider, as trials can lack external validity when a small, select group of participants lacks relevance to broader patient populations. He, Morales and Guthrie reflect that their review demonstrated the narrow populations selected to participate in trials: samples with a higher chance of improving with treatment and a lower chance of adverse events.
In considering the real-world effectiveness of interventions, we reflect on the implementation science literature, in particular the work of Proctor and colleagues, who propose a set of outcomes that can be used to evaluate the successful implementation of interventions. Proctor et al. present eight implementation outcomes: acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration and sustainability. These outcomes are distinguished from service outcomes (as defined by the Institute of Medicine, such as timeliness, efficiency and equity) and client outcomes (such as satisfaction or function). The feasibility implementation outcome reflects whether a new program can be successfully used within a setting, noting that it is ‘reflected in poor recruitment, retention or participation rates’. The findings presented by He, Morales and Guthrie are also a reflection of the limited feasibility of trials of treatments for physical conditions, given the significant exclusion of participants.
We recently operationalised Proctor et al.’s implementation outcomes to report on the potential of implementing cancer caregiving interventions in practice, concluding that such studies lack sufficient detail in both design and reporting to bridge the evidence-to-practice gap. For the feasibility outcome, we applied an operational definition of ‘participation of caregivers, and time commitment to intervention delivery’. We found that fewer than one-third of eligible caregivers participated in the studies that met inclusion criteria.
There is significant opportunity to improve the reporting of efficacy trials so that future interventions can be considered in terms of effectiveness and potential for implementation in real-world settings. Our example of operationalising implementation outcomes used clearly defined criteria to enable measurement and systematic application across one aspect of cancer care. The findings of He et al.’s study support calls for greater use of pragmatic trial designs [8, 9]. There is a wealth of evidence about the significant challenges in improving healthcare outcomes, and implementation science has a central role in addressing such challenges. Indeed, implementation needs to be considered early in order to conduct studies that have the greatest potential to be implemented.
While He, Morales and Guthrie highlight the limited generalisability of trials for physical conditions, we consider that the results also indicate a lack of feasibility for real-world impact. We note the many opportunities to document strategies that improve feasibility, including codesign of interventions and the conduct of quality pilot studies. We also suggest broadening the inclusion criteria of studies to enable greater participation from the patient population.
References
He J, Morales DR, Guthrie B. Exclusion rates in randomized controlled trials of treatments for physical conditions: a systematic review. Trials. 2020;21(1):228.
Higgins J, Wells G. Cochrane handbook for systematic reviews of interventions; 2011.
Rothwell PM. External validity of randomised controlled trials: “to whom do the results of this trial apply?”. Lancet. 2005;365(9453):82–93.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.
Institute of Medicine Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: Institute of Medicine, National Academy Press; 2001.
Ugalde A, Gaskin CJ, Rankin NM, Schofield P, Boltong A, Aranda S, Chambers S, Krishnasamy M, Livingston PM. A systematic review of cancer caregiver interventions: appraising the potential for implementation of evidence into practice. Psycho-Oncol. 2019;28(4):687–701.
Shepherd HL, Geerligs L, Butow P, Masya L, Shaw J, Price M, Dhillon HM, Hack TF, Girgis A, Luckett T, et al. The elusive search for success: defining and measuring implementation outcomes in a real-world hospital trial. Front Public Health. 2019;7:293.
Glasgow RE, Magid DJ, Beck A, Ritzwoller D, Estabrooks PA. Practical clinical trials for translating research to practice: design and measurement recommendations. Med Care. 2005;43(6):551–7.
Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45(2):237–43.
Rankin NM, Butow PN, Hack TF, Shaw JM, Shepherd HL, Ugalde A, Sales AE. An implementation science primer for psycho-oncology: translating robust evidence into practice. J Psychosocial Oncol Res Pract. 2019;1(3):e14.
Robert G, Cornwell J, Locock L, Purushotham A, Sturmey G, Gager M. Patients and staff as codesigners of healthcare services. BMJ. 2015;350:g7714.
Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, Robson R, Thabane M, Giangregorio L, Goldsmith CH. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10(1):1.
Funding
Anna Ugalde is supported with a Victorian Cancer Agency Early Career Health Services Research Fellowship. Nicole Kiss is supported with a Victorian Cancer Agency Nursing and Allied Health Clinical Research Fellowship.
The authors have no competing interests to declare.
Ugalde, A., Kiss, N., Livingston, P.M. et al. Commentary on ‘Exclusion rates in randomized trials of treatments for physical conditions: a systematic review’. Trials 22, 76 (2021). https://doi.org/10.1186/s13063-021-05019-9