Zelen design clinical trials: why, when, and how

Abstract

Background

In 1979, Marvin Zelen proposed a new design for randomized clinical trials intended to facilitate clinicians’ and patients’ participation. The defining innovation of Zelen’s proposal was random assignment of treatment prior to patient or participant consent. Following randomization, a participant would receive information and be asked to consent to the assigned treatment.

Methods

This narrative review examined recent examples of Zelen design trials evaluating clinical and public health interventions.

Results

Zelen designs have often been applied to questions regarding real-world treatment or intervention effects under conditions of incomplete adherence. Examples include evaluating outreach or engagement interventions (especially for stigmatized conditions), evaluating treatments for which benefit may vary according to participant motivation, and situations when assignment to a control or usual care condition might prompt a disappointment effect. Specific practical considerations determine whether a Zelen design is scientifically appropriate or practicable. Zelen design trials usually depend on identifying participants automatically from existing records rather than by advertising, referral, or active recruitment. Assessments of baseline or prognostic characteristics usually depend on available records data rather than research-specific assessments. Because investigators must consider how exposure to treatments or interventions might bias ascertainment of outcomes, assessment of outcomes from routinely created records is often necessary. A Zelen design requires a waiver of the usual requirement for informed consent prior to random assignment of treatment. The Revised Common Rule includes specific criteria for such a waiver, and those criteria are most often met for evaluation of a low-risk and potentially beneficial intervention added to usual care. Investigators and Institutional Review Boards must also consider whether the scientific or public health benefit of a Zelen design trial outweighs the autonomy interests of potential participants. Analysis of Zelen trials compares outcomes according to original assignment, regardless of any refusal to accept or participate in the assigned treatment.

Conclusions

A Zelen design trial assesses the real-world consequences of a specific strategy to prompt or promote uptake of a specific treatment. While such trials are poorly suited to address explanatory or efficacy questions, they are often preferred for addressing pragmatic or policy questions.

Introduction

Traditional randomized clinical trials often fail to provide the relevant and timely evidence necessary to guide clinical and policy decisions regarding medical treatments or services [1,2,3,4]. Regarding relevance, participants in traditional randomized trials often differ markedly from those treated in community practice in terms of sociodemographic characteristics, prognostic characteristics, co-occurring conditions, and motivation or likelihood of treatment adherence. Consequently, outcomes observed in clinical trial participants may differ from those in real-world practice where trial results would be applied [1, 5, 6]. Furthermore, narrow entry criteria, complex consent procedures, and burdensome research assessments can slow or reduce recruitment. Inadequate recruitment contributes to the slow pace and frequent failure of many traditional clinical trials [3, 4, 7, 8].

Concerns regarding the efficiency of evidence generation and the generalizability of resulting evidence have prompted demands for evidence derived from real-world settings using more pragmatic research designs [1, 3, 5, 6]. Pragmatic or real-world clinical trials may differ from traditional trials in several dimensions [5, 6], including the specific mechanism of allocating treatment conditions or study groups. Alternatives to traditional methods of patient-level recruitment and random assignment include cluster-level random assignment, stepped-wedge designs, and Zelen designs [9, 10]. Selection of the optimal method for treatment assignment depends on the specific question to be addressed and the practical or ethical constraints imposed by the trial setting and the treatment(s) under study. Motivated by increasing interest in real-world evidence and pragmatic clinical trials, this narrative review examines the Zelen design as one type of pragmatic trial, focusing on the specific questions and settings to which the Zelen design is best suited.

Here we review key aspects of the Zelen design, characterized by randomization prior to consent followed by encouragement to accept the assigned treatment. Variations of the Zelen design may be applied to a comparison of a new treatment to a no-treatment control condition or to a comparison of alternative “active” treatments. We describe the original motivation for this design and then consider four questions: For which questions might a Zelen design be preferred? Under what conditions is this design scientifically appropriate? When is this design ethically appropriate? How should results of a Zelen design trial be interpreted?

Defining characteristics of the Zelen design

In 1979, Marvin Zelen proposed a new design for randomized clinical trials intended to facilitate clinicians’ and patients’ participation [9]. He argued that the traditional randomized trial design, requiring informed consent prior to treatment assignment, reduced clinicians’ likelihood of recommending trial participation and patients’ likelihood of participating. Clinicians might be reluctant to acknowledge uncertainty about alternative treatments, and patients might prefer a new treatment presumed to be superior. In addition, comparisons of new treatments to standard care could be biased by disappointment effects when all participants are informed regarding a new treatment and only some receive it [11,12,13]. The defining innovation of Zelen’s proposal was random assignment of treatment prior to patient or participant consent. A comparison of alternative “active” treatments would follow a “double consent” design [13]. Following random assignment to alternative treatments, participants in both arms would receive information about and would be asked to consent to their assigned treatment. A comparison of a new treatment to a usual care or no-treatment control condition would follow a “single consent” design [13]. Participants assigned to the new treatment would receive information and be asked to consent, while those assigned to no treatment or usual care would generally not be contacted or notified. In either design, analyses would compare outcomes according to the original treatment assignment, regardless of any refusal to accept or participate in the assigned treatment. Those who decline or discontinue the assigned treatment would be analyzed according to the initial assignment, preserving the unbiased comparison created by the original randomization. Delaying informed consent until after randomization was expected to increase acceptability to clinicians and patients as well as reduce biases due to notifying a participant about a treatment they would not receive.
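
To make the single consent variant concrete, the following sketch (in Python, with entirely hypothetical numbers for sample size, uptake, and event risk) simulates the essential mechanics: randomization before any contact or consent, an offer of the new treatment only in the intervention arm with the option to decline, and an analysis that compares outcomes by original assignment.

# Minimal sketch (hypothetical numbers) of a single consent Zelen trial:
# randomize before any contact, offer treatment only in the intervention arm,
# allow refusal, and compare outcomes by ORIGINAL assignment (intent to treat).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                    # eligible patients identified from records

# Randomize before consent: 1 = offered the new treatment, 0 = usual care (not contacted)
assigned = rng.integers(0, 2, size=n)

# Only those offered the treatment can accept it; assume 40% uptake (hypothetical)
accepted = (assigned == 1) & (rng.random(n) < 0.40)

# Hypothetical outcome: event risk 20% under usual care, 12% among those who
# actually receive the treatment; decliners keep the usual-care risk
event = rng.random(n) < np.where(accepted, 0.12, 0.20)

# Intent-to-treat analysis: compare by original assignment, ignoring refusals
itt = event[assigned == 1].mean() - event[assigned == 0].mean()
print(f"ITT risk difference (offer vs usual care): {itt:.3f}")
# Roughly 0.40 * (0.12 - 0.20) = -0.032: the effect of the OFFER, not of the treatment itself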

Zelen’s original proposal [9] acknowledged some important limitations of this design. Informed consent describing only the assigned treatment would not allow blinding of clinicians or patients. Patients’ expectations or preferences could bias subsequent assessment of outcomes. In addition, any true difference in efficacy or safety between treatments would be diluted by patients declining to receive the assigned treatment, a potential loss of precision. Zelen argued that any loss of efficiency could be overcome by increased rates of trial participation and subsequent increases in sample sizes.
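
Zelen’s concern about lost precision can be made concrete with a common back-of-the-envelope approximation: if only a proportion p of those offered a treatment accept it, and the treatment has no effect in those who decline, the observable intent-to-treat effect shrinks to p times the effect among acceptors, and the required sample size grows by roughly 1/p². The short sketch below illustrates that approximation; it assumes the outcome variance is roughly unchanged and is not a substitute for a formal power calculation.

# Rough approximation of the dilution Zelen acknowledged: with uptake proportion p
# (and no effect among decliners), the ITT effect shrinks to p * delta, so the
# sample size needed to detect it grows by roughly 1 / p**2.
def inflation_factor(p_uptake: float) -> float:
    return 1.0 / p_uptake**2

for p in (1.0, 0.8, 0.5, 0.25):
    print(f"uptake {p:.0%}: roughly {inflation_factor(p):.1f}x the participants needed")
# uptake 100%: 1.0x; 80%: 1.6x; 50%: 4.0x; 25%: 16.0x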

Over the last 10 years, Zelen designs have been employed to evaluate a range of outreach and care navigation interventions to promote treatment engagement, including postcard or electronic messaging outreach to prevent suicide attempts or self-harm [14, 15], an eHealth intervention to increase engagement in eating disorder treatment [16], care navigation to promote advance care planning in older adults [17], and assessment plus feedback to improve early adherence to substance use treatment [18]. Similarly, this design has been used to evaluate health promotion or behavior change interventions, including a text messaging intervention for HIV prevention [19], rehabilitation interventions for musculoskeletal conditions [20,21,22], health coaching to improve chronic illness management [23], and an oral health intervention to prevent early childhood caries [24]. Zelen designs have also been employed to evaluate programs to prevent re-hospitalization [25] or reduce frequent emergency department use [26].

When is the Zelen design preferred?

Zelen’s original proposal discussed this design as an alternative method for either assessing efficacy of new treatments or comparing the efficacy of existing treatments. Incomplete uptake of an assigned treatment was considered a limitation or a necessary accommodation to serve the goals of increased participation in clinical research. Zelen wrote, “If only a small proportion of patients are willing to take treatment B, this experimental plan may be useless in evaluation of this treatment.” [9]

For some research questions, however, allowing incomplete uptake of an assigned treatment may be a strength rather than a limitation. In pragmatic trials, the acceptability of a treatment or intervention is often a central determinant of overall effectiveness. The Zelen design can accommodate delayed or partial uptake of treatment, more accurately reflecting real-world variability. Assessing effectiveness under conditions of incomplete uptake or adherence is especially relevant to the evaluation of outreach or prevention programs or when evaluating interventions for stigmatized conditions. When stigma reduces uptake of or adherence to a treatment under study, the resulting decrease in population-level benefit is an important component of real-world effectiveness. In each of the Zelen design examples cited above, initial refusal of an assigned intervention or early discontinuation of that intervention would be considered signal rather than noise. In evaluating outreach, engagement, and health promotion interventions, research questions typically concern uptake in settings where the treatment would be implemented rather than potential efficacy if all participants behaved as investigators might hope.

Similarly, the Zelen design may be especially appropriate for evaluating intervention effects when both uptake and potential benefit are expected to vary according to participants’ motivation. For example, an outreach and care management intervention to reduce self-harm among high-risk outpatients [15] could have broad effects via nonspecific support to all who receive outreach messages as well as more specific effects among those who engage in recommended treatment. At the same time, outreach to promote engagement could actually reduce the use of recommended treatment in those who are upset or offended by outreach messages. In these situations, the effects of participants’ values and preferences on treatment uptake or adherence constitute signal rather than noise. A traditional randomized trial, limited to those who agree to participate in a trial of outreach and engagement, could not accurately assess the net effect of these different processes.

In addition, a Zelen design may be preferred when assignment to a no-treatment or usual care control condition could prompt a disappointment effect. In that case, outcomes of those assigned a control condition may not accurately reflect what would have occurred in the absence of trial participation. Even when explicitly informed that a new treatment may have no benefit, many clinical trial participants presume that a new treatment will be superior to current practice [27]. Therapeutic optimism regarding a new or “experimental” treatment is mirrored by disappointment effects in those assigned to usual or standard care. In a single consent design evaluating a new treatment, disappointment at being assigned to a “control” condition could reduce participation in outcome data collection or influence participants’ reporting of study outcomes. In a double consent design comparing alternative treatments, disappointment over assignment to a less preferred treatment could artificially reduce adherence. Minimizing this disappointment bias was an important motivation for Zelen’s original proposal [9] that patients assigned to a usual care control condition would not be notified regarding the potential benefits of a treatment they could not actually receive. A fundamental principle of intent-to-treat analyses in a Zelen design is that incomplete uptake or adherence are key components of effectiveness, so those who decline an assigned treatment should be analyzed according to the original assignment. But a disappointment effect due to notification regarding alternative treatment assignments offered to others is an artifact of the research process. Consequently, the consent process in a Zelen design only considers the assigned treatment and does not include information regarding treatments not being offered.

When is the Zelen design scientifically appropriate?

Even when a Zelen design might be preferred to address a specific clinical or policy question, practical considerations will influence whether such a design is scientifically appropriate or practicable. Relevant aspects of the trial design include identification of potential participants, assessment of relevant baseline or prognostic characteristics, evaluation of treatment quality or fidelity, and potential bias in the ascertainment of outcomes.

In general, the use of a Zelen design requires that participants can be identified automatically rather than recruited via advertising or referral. This might be accomplished using health system records, educational records, social service records, or some other administrative data source. In any case, eligibility or inclusion must be definitively determined prior to randomization. For example, recent Zelen design trials have automatically enrolled patients from a defined population using records of hospital discharge [25] or routinely administered mental health questionnaires [15]. Recruitment procedures typical of traditional randomized trials are not consistent with the goal of including all those eligible to receive an intervention, regardless of motivation or expected participation. Identification of participants via community advertising or clinician referral would usually select participants more likely to accept and adhere to the treatments under study [28].
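
As an illustration of this enrollment approach, the sketch below (using a hypothetical data extract with made-up file, column names, and thresholds) determines eligibility entirely from fields already present in existing records and randomizes before any contact, so that inclusion cannot depend on willingness to participate.

# Minimal sketch (hypothetical file and column names) of records-based enrollment:
# eligibility is determined entirely from existing data, and randomization occurs
# before any contact or consent.
import numpy as np
import pandas as pd

records = pd.read_csv("routine_questionnaires.csv")   # hypothetical records extract

# Eligibility defined only from fields already present in the records
eligible = records[
    (records["age"] >= 18)
    & (records["item9_score"] >= 2)                   # e.g., a routinely administered item
    & (records["currently_enrolled"])
].copy()

# Random assignment prior to consent; only the offer arm will be contacted
rng = np.random.default_rng(2021)
eligible["assignment"] = rng.choice(["offer_outreach", "usual_care"], size=len(eligible))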

Similarly, assessment of baseline or prognostic characteristics must be limited to information available from existing records for all randomized participants. Limiting randomization to those willing to complete a research-specific assessment would be inconsistent with the goal of assessing intervention or treatment effects in all who are eligible. Assessing baseline or prognostic characteristics after randomization raises the risk of significant bias if either participation in a research assessment or responses to a research assessment might be influenced by a patient’s knowledge of treatment assignment.

For similar reasons, comparison of treatment adherence or service utilization after random assignment must sometimes be limited to information available from existing records. This concern is especially relevant in trials of outreach programs to promote treatment engagement or improve treatment adherence. For participants assigned to the offer of a new treatment or program, it is essential to examine uptake or adherence among all those assigned, rather than limiting assessment to those who accept the offered treatment or those willing to participate in research assessments. If treatment uptake or adherence are assessed using research assessments, then participation in those assessments could certainly be affected by exposure to the intervention(s) under study, introducing a significant potential bias. In addition, contacting participants or clinicians assigned to a usual care (i.e., no additional outreach) control group could influence care delivery so that it no longer resembled care as usual.

Finally, investigators must consider how exposure to treatments or interventions under study might bias ascertainment of outcomes. In a comparison of some new service or program to a no-treatment or usual care control group, the offer of that new intervention could influence likelihood of participation in research outcome assessment. The resulting differential availability of outcome information could introduce significant bias. Consequently, a Zelen design is most appropriate when the primary outcome is a well-defined event that can be ascertained for all randomized individuals from automatically collected data, such as health system records, vital statistics data, educational records, or social service records. Examples of outcomes ascertained from health records include emergency department visits [26], hospitalizations [25], routinely administered patient-reported outcome questionnaires, and a hospital clinician’s diagnosis of probable suicide attempt [14, 15].

When is the Zelen design ethically appropriate?

In a traditional clinical trial, a single informed consent process encompasses random assignment, receipt of study treatment(s), and participation in study assessments. Potential participants receive information regarding all possible treatments, including both “active” and control treatments, and are asked to consent to any treatment that is subsequently assigned. In other words, informed consent for participation in traditional clinical trials is typically indivisible; potential participants agree to all aspects of a trial prior to randomization, or they do not participate at all.

Randomization prior to informed consent is a central feature of the Zelen or randomized encouragement design. Each participant receives information regarding only the treatment to which they have been assigned. Zelen’s original description allowed variation in this consent process, depending on treatments under study. In a comparison of some treatment or service with usual care, those assigned to be offered the new treatment or service would receive information and be allowed the opportunity to accept or decline. Those declining would receive care as usual. In this scenario, those assigned to receive care as usual would receive information as in usual practice, with no formal research consent procedure. In a comparison of alternative active treatments, each participant would receive information regarding the treatment to which they were assigned and be allowed the opportunity to accept or decline. Those declining would then receive care as usual. Either of these variations would require a waiver of the usual requirement for informed consent prior to random assignment.

Under the Revised Common Rule [29] governing research supported by US government agencies, the requirement for informed consent prior to random assignment can be waived if specific criteria are met. First, random assignment to usual care or to an “active” treatment under study must not create more than minimal additional risk. Second, waiver of informed consent prior to randomization must be necessary for the research to be practicable. Third, waiver of consent for random assignment must not adversely affect the rights or welfare of participants. Fourth, when appropriate, participants should be provided with additional pertinent information after randomization.

These requirements for waiver of consent for random assignment are most often met when a Zelen design is used to evaluate the benefit of some potentially beneficial treatment or service added to usual care. Regarding the “practicability” criterion, Zelen argued that randomization prior to consent is sometimes essential for valid evaluation of treatment effectiveness, especially when incomplete uptake is likely or “disappointment effects” might distort outcomes in participants assigned to usual care. Regarding other criteria for waiver of consent, investigators and Institutional Review Boards (IRBs) should separately evaluate impacts on participants assigned to usual care or to some “active” intervention. Each participant assigned to usual care receives precisely the same treatment they would have received if not included in the trial. By definition, assignment to the care one would ordinarily receive does not involve any increase in risk nor adversely affect a potential participant’s rights or welfare – as long as that assignment does not involve any restriction on the use of services normally available. Following Zelen’s original argument, notifying a participant of their assignment to usual care rather than some new treatment or service would not usually be appropriate or useful. That notification would offer no additional benefit or protection and could cause harm by provoking a disappointment effect. For a participant assigned to be offered some new treatment or service, we should distinguish between the effects of offering that treatment and the effects of using or receiving it. As discussed above, any participant offered some study treatment or service would subsequently be provided information and given the opportunity to accept or decline. When applying revised Common Rule criteria, we should therefore consider how assignment to an offer of a study treatment might create more than minimal risk or adversely affect a participant’s rights or welfare. Satisfying the “minimal risk” criterion usually requires that the potential risks of a study treatment be reasonably well established. Satisfying the “rights and welfare” criterion requires that the offer of a study treatment be transparent (i.e., acknowledges that the treatment or service being offered is part of research), non-coercive (i.e., clearly communicates right to refuse or withdraw), and not involve any restriction on access to treatment normally available. Allowing free choice to accept or decline an offered treatment is consistent with both the scientific aims of a Zelen design and the ethical obligations of researchers.

Applying regulatory requirements for waiver of consent is more complex for trials involving random assignment to alternative active treatments. Zelen’s argument still applies regarding the “practicability” criterion; random assignment prior to the offer of treatment is sometimes necessary to accurately assess real-world effectiveness and avoid disappointment effects. In such a design, all participants would be offered additional information after randomization, when the assigned treatment is offered. Applicability of the “minimal risk” and “rights and welfare” criteria depends on the specifics of the treatments under study and the options available to those who decline the offer of an assigned treatment. To satisfy criteria for waiver of consent, the offer of assigned treatment must be transparent and non-coercive. In addition, assigned treatments or programs should not involve any restrictions on the use of treatments normally available. In a trial comparing treatments in common use, satisfying these criteria may require that a participant be permitted to “cross over” and receive the alternative treatment under study. A pre-consent random assignment that restricted or denied access to a treatment normally available would likely not satisfy the “rights and welfare” criterion of the Revised Common Rule. Restricting access to a treatment usually available would be most concerning when random assignment determines treatment choices about which patients may have strong preferences. In that case, informed consent prior to random assignment would usually be expected.

Simply satisfying regulatory requirements for waiver of informed consent does not necessarily imply that such a waiver is ethically appropriate. Investigators and IRBs considering such a design should consider the basic ethical principles described in the Belmont Report [30], specifically the principles of beneficence and respect for persons. Regarding beneficence, investigators are expected to maximize potential benefits to participants and minimize potential harms. Those considerations should influence the selection of study eligibility criteria, design of study interventions or treatments, design of intervention or consent procedures, and design of procedures for any study-specific assessments or data collection. Regarding respect for persons, random assignment of potential participants prior to any notification or consent does infringe on the principle that “individuals should be treated as autonomous agents.” Even if random assignment to be offered or not offered a new treatment creates no risk or harm, many patients might expect to be involved in that decision [31]. Consequently, investigators and IRBs must consider whether the scientific and public health benefits of randomization prior to consent outweigh infringing on the autonomy of potential participants.

Zelen’s proposal presumed a traditional informed consent process at the time any study treatment is offered. That process would typically include all essential elements of informed consent: explicit notification that the offer is a research activity, a description of expected procedures, a description of potential benefits and risks, notification that participation or acceptance is voluntary, and a description of alternatives to the treatment offered. For some treatments or services, however, an abridged consent procedure [15] or a waiver of informed consent to receive a study treatment might be appropriate. Such a waiver would be permissible if all of the regulatory criteria above are satisfied regarding receipt of the study treatment. Satisfying the “practicability” criterion would require a convincing argument that the research would not be practicable if a traditional informed consent procedure were required for those assigned to “experimental” treatment.

Use of a Zelen design usually requires research use of healthcare or other administrative records without explicit consent or authorization. Such data are often necessary to select eligible patients or potential participants prior to any direct contact. In most cases, identified or identifiable data are required, and that use of identifiable data would be considered research involving human subjects. Investigators and IRBs would then need to apply the criteria listed above regarding waiver of consent for use of records data. In most cases, that use of records data would not create additional risk or adversely affect rights and welfare.

How should findings of a Zelen design trial be interpreted?

Zelen’s proposal specified that analyses of Zelen trials should follow original treatment assignment or intent to treat [9, 10]. Any “as treated” or “per protocol” analysis would be irreparably biased. In a comparison of a new treatment or service with usual care, it is not possible to identify those in the usual care control group who would have accepted treatment had it been offered. In a comparison of alternative active treatments, it would still be problematic to compare those in each group accepting the offered treatment. We cannot assume that participants accepting one treatment would be otherwise similar to those who accept an alternative treatment.
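
A toy simulation (with hypothetical numbers) illustrates the point: when acceptors differ prognostically from decliners, an “as treated” comparison of acceptors with the usual care arm suggests an effect even for a treatment that does nothing, while the comparison by original assignment does not.

# Toy illustration (hypothetical numbers): acceptors are healthier than decliners,
# so comparing acceptors with the usual care arm mixes selection with any treatment
# effect. The intent-to-treat comparison by original assignment does not.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
assigned = rng.integers(0, 2, size=n)                 # 1 = offered, 0 = usual care

# Suppose healthier patients (lower baseline risk) are more likely to accept the offer
baseline_risk = rng.uniform(0.05, 0.35, size=n)
accepts = rng.random(n) < (0.7 - baseline_risk)       # lower risk -> higher uptake
treated = (assigned == 1) & accepts

# Assume the treatment itself has no effect at all
event = rng.random(n) < baseline_risk

itt = event[assigned == 1].mean() - event[assigned == 0].mean()
as_treated = event[treated].mean() - event[assigned == 0].mean()
print(f"ITT difference (unbiased, near zero):      {itt:+.3f}")
print(f"'As treated' difference (selection bias):  {as_treated:+.3f}")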

Consequently, findings from Zelen design studies focus on population effects and address questions regarding an offer of care. This design is optimal for assessing the overall benefits and harms of offering a treatment or service to the entire population of people eligible to receive it. Such questions are especially relevant for health system leaders considering the potential overall benefit of providing a particular healthcare program, particularly programs for stigmatized conditions such as substance use disorders, chronic pain, or mental health conditions, for which eligible patients may be initially ambivalent at best about engaging in treatment.

Zelen did, however, propose that results observed under conditions of incomplete intervention uptake could be extrapolated to the full population meeting study eligibility criteria [9, 10]. This interpretation presumes that benefits or harms of the treatment under study would be similar in those who decline the treatment and those who accept it. In other words, extrapolating benefits or harms observed in a Zelen design trial to all those in the eligible population assumes that willingness to accept treatment does not modify differences between treatments in either benefits or harms.

We caution that overall benefits or harms observed in a Zelen design trial should not be adjusted for noncompliance or extrapolated to hypothetical populations with different rates of treatment uptake. We cannot presume that treatment effects observed in treatment “compliers” would be similar in those who decline an offered treatment or those who prematurely discontinue treatment. Consequently, average results observed in a Zelen design trial do not reflect a diluted or attenuated version of what would be observed if all participants accepted the offered treatment. In other words, “complier causal effects” [32, 33] would not necessarily generalize to those who receive the treatment of interest in the absence of intervention or those who decline the treatment despite intervention. Instead, findings only apply to patients at the margin [34], the middle group of patients who accept the study treatment if invited or encouraged but who would not receive it in usual care. For example, the Zelen design has been used to evaluate automatically offering smoking cessation services to all hospitalized smokers [35]. Any reduction in smoking rates observed in those who accept that offer cannot be extrapolated to those who decline. In other words, this design does not address the question “Does participation in this smoking cessation program increase quit rates?”. Instead, it addresses the question “Does a policy of routinely offering this smoking cessation program increase overall quit rates?”
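
A short arithmetic sketch, using made-up numbers and the simplifying assumption that decliners’ quit rates match usual care, shows the distinction between these two questions.

# Hypothetical arithmetic for the smoking cessation example: suppose 30% of offered
# smokers accept the program, and accepting raises their quit probability from 10%
# to 20% (made-up numbers). The trial estimates the population effect of the
# offering policy, not the effect of participating in the program.
uptake = 0.30
quit_if_accept, quit_if_decline = 0.20, 0.10

offered_arm_quit = uptake * quit_if_accept + (1 - uptake) * quit_if_decline   # 0.13
usual_care_quit = quit_if_decline                                             # 0.10
policy_effect = offered_arm_quit - usual_care_quit                            # 0.03

print(f"Routinely offering the program raises the overall quit rate by {policy_effect:.1%}")
# The 10-point gain among acceptors applies only to them; rescaling the 3-point
# population effect back up would assume decliners would respond identically.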

Furthermore, findings of any specific Zelen design trial may not generalize to some other form of invitation or encouragement. Even if alternative forms of invitation or encouragement lead to similar rates of treatment uptake, those accepting different forms of invitation might differ with respect to benefits or harms of any specific treatment. In addition, an invitation or offer of treatment may have nonspecific supportive effects. The Zelen design does not support distinguishing specific effects of a study treatment from non-specific effects of support or encouragement.

Just as designs focused on efficacy may not accurately assess real-world effectiveness, a Zelen design trial well-suited to evaluate effectiveness may not accurately assess efficacy. Instead, a Zelen design assesses the real-world consequences of a specific strategy to prompt or promote uptake of a specific treatment. While such trials are poorly suited to address explanatory or efficacy questions, they are often preferred for addressing pragmatic or policy questions.

Availability of data and materials

N/A

References

  1. Sherman RE, Anderson SA, Dal Pan GJ, Gray GW, Gross T, Hunter NL, et al. Real-World Evidence - What Is It and What Can It Tell Us? N Engl J Med. 2016;375(23):2293–7. https://doi.org/10.1056/NEJMsb1609216.

  2. Sherman RE, Davies KM, Robb MA, Hunter NL, Califf RM. Accelerating development of scientific evidence for medical products within the existing US regulatory framework. Nat Rev Drug Discov. 2017;16(5):297–8. https://doi.org/10.1038/nrd.2017.25.

  3. Eapen ZJ, Lauer MS, Temple RJ. The imperative of overcoming barriers to the conduct of large, simple trials. JAMA. 2014;311(14):1397–8. https://doi.org/10.1001/jama.2014.1030.

  4. Lauer MS, Gordon D, Wei G, Pearson G. Efficient design of clinical trials and epidemiological research: is it possible? Nat Rev Cardiol. 2017;14(8):493–501. https://doi.org/10.1038/nrcardio.2017.60.

  5. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350(may08 1):h2147. https://doi.org/10.1136/bmj.h2147.

  6. Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. CMAJ. 2009;180(10):E47–57. https://doi.org/10.1503/cmaj.090523.

  7. Carlisle B, Kimmelman J, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83. https://doi.org/10.1177/1740774514558307.

  8. Huang GD, Bull J, Johnston McKee K, Mahon E, Harper B, Roberts JN, et al. Clinical trials recruitment planning: A proposed framework from the Clinical Trials Transformation Initiative. Contemp Clin Trials. 2018;66:74–9. https://doi.org/10.1016/j.cct.2018.01.003.

  9. Zelen M. A new design for randomized clinical trials. N Engl J Med. 1979;300(22):1242–5. https://doi.org/10.1056/NEJM197905313002203.

  10. Zelen M. Randomized consent designs for clinical trials: an update. Stat Med. 1990;9(6):645–56. https://doi.org/10.1002/sim.4780090611.

  11. Lindstrom D, Sundberg-Petersson I, Adami J, Tonnesen H. Disappointment and drop-out rate after being allocated to control group in a smoking cessation trial. Contemp Clin Trials. 2010;31(1):22–6. https://doi.org/10.1016/j.cct.2009.09.003.

  12. Adamson J, Cockayne S, Puffer S, Torgerson DJ. Review of randomised trials using the post-randomised consent (Zelen's) design. Contemp Clin Trials. 2006;27(4):305–19. https://doi.org/10.1016/j.cct.2005.11.003.

  13. Torgerson DJ, Roland M. What is Zelen's design? BMJ. 1998;316(7131):606. https://doi.org/10.1136/bmj.316.7131.606.

  14. Hatcher S, Coupe N, Durie M, Elder H, Tapsell R, Wikiriwhi K, et al. Te Ira Tangata: a Zelen randomised controlled trial of a treatment package including problem solving therapy compared to treatment as usual in Maori who present to hospital after self harm. Trials. 2011;12(1):117. https://doi.org/10.1186/1745-6215-12-117.

  15. Simon GE, Beck A, Rossom R, Richards J, Kirlin B, King D, et al. Population-based outreach versus care as usual to prevent suicide attempt: study protocol for a randomized controlled trial. Trials. 2016;17(1):452. https://doi.org/10.1186/s13063-016-1566-z.

  16. Denison-Day J, Muir S, Newell C, Appleton KM. A Web-Based Intervention (MotivATE) to Increase Attendance at an Eating Disorder Service Assessment Appointment: Zelen Randomized Controlled Trial. J Med Internet Res. 2019;21(2):e11874. https://doi.org/10.2196/11874.

  17. Gabbard J, Pajewski NM, Callahan KE, Dharod A, Foley KL, Ferris K, et al. Effectiveness of a Nurse-Led Multidisciplinary Intervention vs Usual Care on Advance Care Planning for Vulnerable Older Adults in an Accountable Care Organization: A Randomized Clinical Trial. JAMA Intern Med. 2021;181(3):361–9. https://doi.org/10.1001/jamainternmed.2020.5950.

  18. Raes V, De Jong CA, De Bacquer D, Broekaert E, De Maeseneer J. The effect of using assessment instruments on substance-abuse outpatients' adherence to treatment: a multi-centre randomised controlled trial. BMC Health Serv Res. 2011;11(1):123. https://doi.org/10.1186/1472-6963-11-123.

  19. Jongbloed K, Friedman AJ, Pearce ME, Van Der Kop ML, Thomas V, Demerais L, et al. The Cedar Project WelTel mHealth intervention for HIV prevention in young Indigenous people who use illicit drugs: study protocol for a randomized controlled trial. Trials. 2016;17(1):128. https://doi.org/10.1186/s13063-016-1250-3.

  20. Malliaras P, Cridland K, Hopmans R, Ashton S, Littlewood C, Page R, et al. Internet and Telerehabilitation-Delivered Management of Rotator Cuff-Related Shoulder Pain (INTEL Trial): Randomized Controlled Pilot and Feasibility Trial. JMIR Mhealth Uhealth. 2020;8(11):e24311. https://doi.org/10.2196/24311.

  21. Koutoukidis DA, Land J, Hackshaw A, Heinrich M, McCourt O, Beeken RJ, et al. Fatigue, quality of life and physical fitness following an exercise intervention in multiple myeloma survivors (MASCOT): an exploratory randomised Phase 2 trial utilising a modified Zelen design. Br J Cancer. 2020;123(2):187–95. https://doi.org/10.1038/s41416-020-0866-y.

  22. Huppe A, Zeuner C, Karstens S, Hochheim M, Wunderlich M, Raspe H. Feasibility and long-term efficacy of a proactive health program in the treatment of chronic back pain: a randomized controlled trial. BMC Health Serv Res. 2019;19(1):714. https://doi.org/10.1186/s12913-019-4561-8.

  23. Dwinger S, Rezvani F, Kriston L, Herbarth L, Harter M, Dirmaier J. Effects of telephone-based health coaching on patient-reported outcomes and health behavior change: A randomized controlled trial. PLoS One. 2020;15(9):e0236861. https://doi.org/10.1371/journal.pone.0236861.

  24. Plutzer K, Spencer AJ. Efficacy of an oral health promotion intervention in the prevention of early childhood caries. Community Dent Oral Epidemiol. 2008;36(4):335–46. https://doi.org/10.1111/j.1600-0528.2007.00414.x.

  25. Legrain S, Tubach F, Bonnet-Zamponi D, Lemaire A, Aquino JP, Paillaud E, et al. A new multimodal geriatric discharge-planning intervention to prevent emergency visits and rehospitalizations of older adults: the optimization of medication in AGEd multicenter randomized controlled trial. J Am Geriatr Soc. 2011;59(11):2017–28. https://doi.org/10.1111/j.1532-5415.2011.03628.x.

  26. Reinius P, Johansson M, Fjellner A, Werr J, Ohlen G, Edgren G. A telephone-based case-management intervention reduces healthcare utilization for frequent emergency department visitors. Eur J Emerg Med. 2013;20(5):327–34. https://doi.org/10.1097/MEJ.0b013e328358bf5a.

  27. Jansen LA. Two concepts of therapeutic optimism. J Med Ethics. 2011;37(9):563–6. https://doi.org/10.1136/jme.2010.038943.

  28. Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?". Lancet. 2005;365(9453):82–93. https://doi.org/10.1016/S0140-6736(04)17670-8.

  29. Department of Health and Human Services. Federal Policy for the Protection of Human Subjects, 45 CFR Part 46. 2017.

  30. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: US Department of Health and Human Services; 1979.

  31. Meyer MN, Heck PR, Holtzman GS, Anderson SM, Cai W, Watts DJ, et al. Objecting to experiments that compare two unobjectionable policies or treatments. Proc Natl Acad Sci U S A. 2019;116(22):10723–8. https://doi.org/10.1073/pnas.1820701116.

  32. Chen H, Geng Z, Zhou XH. Identifiability and estimation of causal effects in randomized trials with noncompliance and completely nonignorable missing data. Biometrics. 2009;65(3):675–82. https://doi.org/10.1111/j.1541-0420.2008.01120.x.

  33. Dunn G, Maracy M, Tomenson B. Estimating treatment effects from randomized clinical trials with noncompliance and loss to follow-up: the role of instrumental variable methods. Stat Methods Med Res. 2005;14(4):369–95. https://doi.org/10.1191/0962280205sm403oa.

  34. Harris KM, Remler DK. Who is the marginal patient? Understanding instrumental variables estimates of treatment effects. Health Serv Res. 1998;33(5 Pt 1):1337–60.

  35. Faseru B, Ellerbeck EF, Catley D, Gajewski BJ, Scheuermann TS, Shireman TI, et al. Changing the default for tobacco-cessation treatment in an inpatient setting: study protocol of a randomized controlled trial. Trials. 2017;18(1):379. https://doi.org/10.1186/s13063-017-2119-9.

Acknowledgements

N/A

Funding

Supported by NIH grants UH3 MH007755, U19 MH121738, and UF1 MH121949.

Author information

Contributions

GS: Drafting manuscript, critical revision

SS: Critical revision

LD: Critical revision

Corresponding author

Correspondence to Gregory E. Simon.

Ethics declarations

Ethics approval and consent to participate

N/A

Consent for publication

N/A

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Simon, G.E., Shortreed, S.M. & DeBar, L.L. Zelen design clinical trials: why, when, and how. Trials 22, 541 (2021). https://doi.org/10.1186/s13063-021-05517-w
