
Exploring implementation outcomes in the clinical trial context: a qualitative study of physician trial stakeholders



Cancer clinical trials can be considered evidence-based interventions with substantial benefits, but suffer from poor implementation leading to low enrollment and frequent failure. Applying implementation science approaches such as outcomes frameworks to the trial context could aid in contextualizing and evaluating trial improvement strategies. However, the acceptability and appropriateness of these adapted outcomes to trial stakeholders are unclear. For these reasons, we interviewed cancer clinical trial physician stakeholders to explore how they perceive and address clinical trial implementation outcomes.


We purposively selected 15 cancer clinical trial physician stakeholders from our institution representing different specialties, trial roles, and trial sponsor types. We performed semi-structured interviews to explore a previous adaptation of Proctor’s Implementation Outcomes Framework to the clinical trial context. Emergent themes from each outcome were developed.


The implementation outcomes were well understood and applicable (i.e., appropriate and acceptable) to clinical trial stakeholders. We describe cancer clinical trial physician stakeholder understanding of these outcomes and current application of these concepts. Trial feasibility and implementation cost were felt to be most critical to trial design and implementation. Trial penetration was most difficult to measure, primarily due to eligible patient identification. In general, we found that formal methods for trial improvement and trial implementation evaluation were poorly developed. Cancer clinical trial physician stakeholders referred to some design and implementation techniques used to improve trials, but these were infrequently formally evaluated or theory-based.


Implementation outcomes adapted to the trial context were acceptable and appropriate to cancer clinical trial physician stakeholders. Use of these outcomes could facilitate the evaluation and design of clinical trial improvement interventions. Additionally, these outcomes highlight potential areas for the development of new tools, such as informatics solutions, to improve the evaluation and implementation of clinical trials.



Cancer clinical trials aim to advance science, ensure standard of care through protocolization, and are considered by many to be the best management for patients with cancer [1]. In these ways, a clinical trial itself can be considered an evidence-based practice. However, cancer clinical trials often fail to meet enrollment goals, prespecified endpoints, and timelines [2, 3]. Taken together, clinical trials can be considered complex, evidence-based interventions with substantial benefits for patients and society that nonetheless suffer from poor implementation [4].

Prior attempts to improve trial implementation have had limited success, due at least in part to a lack of defined frameworks for trial design, evaluation, and improvement [5]. In addition to a growing body of clinical trials literature, there have been calls for prioritizing a focus on clinical trial improvement and analysis [6]. Applying implementation science approaches to the clinical trial context could help structure contextual assessments, define implementation outcomes, and inform intervention design to improve trial implementation and success. For example, we previously adapted Proctor’s Implementation Outcomes Framework to the trial context to address these knowledge gaps and target trial improvement and evaluation strategies [7, 8].

In brief, implementation outcomes are a measure of implementation success and exist as both an intermediate precursor to the success of a given practice and as a target for improvement. In other settings, implementation outcomes (e.g., adoption, penetration, feasibility) can measure why evidence-based interventions are not reaching anticipated levels of effectiveness. For example, a smoking cessation program may not be effective in lowering smoking rates in the real world because not many centers are actually using it, i.e., adoption of the program is low. Investigating reasons for the low adoption through a determinants framework such as the Consolidated Framework for Implementation Research (CFIR) can then identify context-specific targets for implementation strategies and interventions to overcome the identified barriers [9].

We have suggested applying a similar approach to the clinical trial context [8]. For example, answering a trial question requires enrolling sufficient participants. Problems with low enrollment may be due to low adoption of the trial by providers (i.e., physicians are not offering enrollment in the trial) or low penetration (i.e., a low proportion of eligible patients are enrolling in the trial). Each of these implementation outcomes represents a different issue likely to respond to a different improvement intervention. For example, a service to identify trial-eligible patients (e.g., an informatics solution to “flag” eligible patients in the electronic medical record during clinic visits) could help improve penetration of a trial to eligible patients. The same intervention may not be effective if physicians are not offering a trial at all (i.e., trial adoption is low). In these ways, specifying the exact outcomes of interest serves as both a measure of trial implementation and a target for trial improvement.
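To make the distinction concrete, adoption and penetration can be expressed as simple proportions computed from routinely collected counts. The sketch below is purely illustrative and not drawn from the study; all function names and numbers are hypothetical.

```python
# Illustrative sketch (not from the study): adoption and penetration as
# distinct implementation metrics. All names and counts are hypothetical.

def adoption(providers_offering_trial: int, total_providers: int) -> float:
    """Proportion of providers who offer the trial at all."""
    return providers_offering_trial / total_providers

def penetration(patients_enrolled: int, patients_eligible: int) -> float:
    """Proportion of eligible patients who actually enroll."""
    return patients_enrolled / patients_eligible

# A trial may show high adoption but low penetration (or vice versa),
# pointing toward different improvement interventions.
print(adoption(18, 20))      # 0.9 -> most providers offer the trial
print(penetration(12, 150))  # 0.08 -> few eligible patients enroll
```

In this hypothetical, an eligibility-flagging service would target the low penetration, while provider outreach would target low adoption.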

However, the extent to which this approach aligns with current trial practices, and the acceptability and appropriateness of these concepts to real-world trial stakeholders, need to be better understood prior to further development and application. As clinical trials are complex multilevel interventions with numerous invested parties, the implementation approach could focus on many targets and include many stakeholder groups, all with potentially differing goals, barriers, and facilitators of trial design and enrollment behavior. Specifying exact targets and contexts for different groups, and identifying where these determinants overlap and diverge, will be critical to design and tailor trial improvements. To begin this process, we focused primarily on clinical trial enrollment as the targeted evidence-based practice to be implemented, and limited our initial interviewee group to cancer clinical trial physician stakeholders for multiple reasons.

Our evidence-based practice specification was based on our prior work identifying poor enrollment as the most common reason for trial failure [3]. While there are other factors limiting the optimal impact of trials, low enrollment seems to be the reverse salient preventing trial progress. Additionally, enrollment on a trial, per se, can be considered the standard of care for management of cancer, fitting as an evidence-based practice needing improvements in implementation. For most patient-facing trials (e.g., interventional cancer trials), the decision to enroll in a trial is reliant on the patient-physician decision-making dyad. We focused on the physician side of this dyad for our present work for multiple reasons. A patient must of course be willing to enroll in and consent to a trial, but the treating physician must either offer the trial or assent to enrollment. Indeed, prior work has suggested that most cancer patients will enroll on a trial if offered enrollment, and has to an extent explored patient-perspective determinants of trial enrollment [10]. We therefore posit the physician side of trial enrollment (i.e., considering trials and offering enrollment) as the rate-limiting factor: prior work and patient perspectives suggest the primary problem with trial enrollment may lie with the physician, not the patient. Moreover, in addition to offering enrollment on trials, physicians also design trials and serve on institutional review boards, data safety monitoring committees, protocol review committees, and administrative boards overseeing trial design and conduct. Within the larger group of “physicians,” then, there are numerous stakeholder roles with potentially different perspectives and incentives related to trial design and conduct. These roles are not generally held by trial participants and thus offer a distinct perspective.
This also addresses one of the “Top 10” prioritized research questions from the clinical trials PRioRiTy study: “what are the barriers and enablers for clinicians/healthcare professionals in helping conduct randomized trials?” [6] For these reasons, while future work will incorporate perspectives from other trial stakeholders, we began our investigation by interviewing cancer clinical trial physician stakeholders.

Taken together, clinical trials are critically important and can be considered evidence-based practices with poor implementation. We proposed the use of implementation science frameworks in the clinical trial context, but how components of these frameworks could be understood or applied in trials in the real world is unknown. For these reasons, we studied the consideration and use of cancer clinical trial implementation outcomes, adapted from Proctor’s outcomes, through semi-structured interviews with cancer clinical trial physician stakeholders.


As shown in Table 1, we previously adapted Proctor’s Outcomes Framework to the trial context [7, 8]. To explore how each of these outcomes was considered and assessed by cancer clinical trial physician stakeholders in trial design, conduct, and/or regulatory management, we designed a semi-structured interview guide (Supplementary Materials). We piloted this interview guide via mock interviews with two members of our investigative team prior to launching our cancer clinical trial physician stakeholder interviews. We used each of the adapted outcomes in a prompt to assess how each was considered and measured by cancer clinical trial physician stakeholders. In general, for each trial implementation outcome, we asked how the outcome was considered and approached by cancer clinical trial physician stakeholders, and how important it was considered for the overall success of a trial. For example, we previously gave an example of trial feasibility as the degree to which it is possible to meet trial enrollment goals. During our interviews, we asked interviewees: “I’m interested in your thoughts on trial feasibility. How have you assessed the feasibility of your trial reaching its goals?” We then expanded on these thoughts, for example, “how do you assess how feasible it is to meet enrollment goals in the anticipated timeline of the trial?” and “how do you consider eligibility criteria with respect to the feasibility of a trial enrolling?” Similarly, we evaluated outcomes such as sustainability by asking “how much do you consider sustained ability to enroll once a trial is opened?” We additionally tailored our interview guide to explore specific physician trial stakeholder roles, when appropriate.
For example, a physician member of the Clinical Trials Support Unit was asked about their own experience and approach to the trial outcomes, and then asked how the Clinical Trials Support Unit as a body would approach these outcomes, and to what degree these outcomes influence action by the group.

Table 1 Implementation outcomes framework applied to the clinical trial context

Next, we purposively selected 15 cancer clinical trial physician stakeholders from our institution for interviews, representing multiple cancer subspecialties (urology, genitourinary medical oncology, radiation oncology, gynecologic oncology, hematologic oncology, breast medical oncology), trial-related roles (principal investigator, institutional review board, data safety monitoring board, protocol review committee, departmental clinical research team, cancer center leadership, clinical trial support unit leadership), and trial sponsor types (institutional/intramural, NIH/cooperative group, philanthropic organization, industry). All interviews were conducted by a single interviewer (KDS) via the Zoom videoconferencing platform between July and September 2021 and were roughly 45 min in duration each. Verbal consent was obtained prior to interviews. Interviews were recorded and transcribed, then manually corrected by two coders (KDS, VV). Transcripts were imported into NVivo version 12 (QSR International, released March 2020). Each transcript was individually coded by two authors (KDS, VV) and reviewed together, with discrepancies resolved by consensus. Emergent themes from each outcome were developed collectively and representative quotes for each theme selected (Table 2). During interview coding and theme development, the investigators felt that no new themes or significant ideas emerged with additional interviews, and we agreed that thematic and data saturation had been reached for this population. This study was considered exempt from full IRB review as human subject research with minimal risk by the University of Michigan Institutional Review Board (HUM#00198397).

Table 2 Themes from cancer clinical trial physician stakeholder interviews


In general, interviewees were excited to discuss potential avenues to improve clinical trials, in particular clinical trial enrollment. The adapted trial outcomes overall were well understood and accepted by cancer clinical trial physician stakeholders. We framed our questions and analysis initially around the conceptualization of each of Proctor’s implementation outcomes and then probed early understanding of barriers and facilitators to each outcome. Feasibility and implementation cost were the most frequently considered outcomes as reported by our interviewees. While adoption and penetration were important, these were less often considered formally by our interviewees. In general, even when there was awareness and consideration of these outcomes, there were few specific ways to operationalize or measure them within the existing trial infrastructure. We present more specific results grouped by each of the implementation outcomes as follows.


Feasibility was frequently raised by interviewees, often unprompted, as a key issue facing trial planning. In general, aspects of feasibility most important to interviewees were identifying a sufficiently large potential study participant population and considering eligibility criteria. Interviewees reported enrollment as the key factor in trial feasibility and success, as poor enrollment was reported by an interviewee as “the easiest way for a trial to fail” [Interviewee 10].

Despite this stated importance, few interviewees reported a formal method for assessing eligible populations. This concept was intended as distinct from power analysis (i.e., determining the number of participants and events needed for the desired level of confidence in an anticipated effect size), and instead reflects how many eligible patients are available for a given trial in a given location. When asked how a trialist might estimate the number of patients potentially eligible for a trial, most interviewees reported relying on estimates in a way that “historically hasn’t been the most scientific approach” [Interviewee 1]. Interviewees had awareness of how one could estimate the number of eligible patients (i.e., to measure feasibility), but there was no reported concrete pattern or method of how or when these approaches were used, and it was reported that some of these technologies have not been “leveraged to where we need to do trials” [Interviewee 1].

Other important themes related to determinants of feasibility included trial logistics, disease prevalence, and the existence of competing trials. Most interviewees reported trial logistics (e.g., staffing, frequency of lab draws) as important considerations for trial design but these were felt by interviewees to be generally well-handled by current review mechanisms.

Disease prevalence was considered more critical for trial feasibility. For some conditions, an interviewee reported the “volume of patients is so large” [Interviewee 3] that a feasibility assessment was essentially unnecessary, whereas it was essential for rare conditions, where recruitment is often difficult, leading to long trial durations. Similarly, competing trials were reported as having significant influence on trial feasibility and enrollment, endorsed by one interviewee as “one of the first barriers to success, and probably the most pertinent one to all physicians” [Interviewee 14].

Implementation cost

The cost of running a trial significantly influences trial feasibility and design. Despite cost being described by one interviewee as “one of the first things you [should] consider” [Interviewee 14], trial costs were felt by others to be underemphasized, and sometimes “not even on the radar and never discussed” [Interviewee 15]. However, trial design itself was reportedly reliant on costs, with “secondary endpoints… highly contingent on available funding” [Interviewee 5]. At a system level, these costs have implications for the approval and funding of other trials. The concept of opportunity cost was used by some interviewees to describe these issues for both investigators designing new trials and institutions/sponsors selecting trials for funding, where funding one trial means “you can’t do a bunch of other potentially more useful things” [Interviewee 2].

Aside from selecting correlative studies, other trial design changes were suggested by interviewees as implementation cost-containment measures, including decreasing sample size or follow-up time, omitting randomization, or reducing the number of trial sites.


Acceptability to providers is a key step in trial enrollment, as these providers are most frequently the gateway for patients to access trials. While interviewees did not report a formal way of measuring acceptability of a trial, the concept was well understood and clearly applicable to trial conduct. We found that a major determinant of provider-level acceptability was the logistical burden of a trial. One interviewee reported that trials can create “more work for both the patient and the physician” [Interviewee 4], sometimes adding “a bunch of tedium” [Interviewee 2] to clinical care. This may be due to more frequent visits or dictated timing of certain studies. Interviewees also reported economic considerations, where lost productivity due to seeing fewer patients, or potentially randomizing patients away from highly reimbursed procedures, has a direct financial impact on providers. Combined, these factors have implications for trial enrollment, with some providers not offering trials; one interviewee felt “the path of least resistance for me is to just not put people on clinical trials” [Interviewee 3]. These factors may also lead to less uptake of trials at additional sites and an increased workload on trial principal investigators to make up for shortfalls. This may increase disengagement and provider burnout, with one investigator stating: “it basically all falls on the [trial principal investigator], so I don’t want to be a [trial principal investigator] anymore” [Interviewee 3].

The increased logistical burden on providers suggests a trial must be meaningful to a provider for them to overcome the barriers to participation. As many interviewees reported, this is in part accomplished through direct engagement between investigators and providers to discuss interest in and acceptability of proposed trials, as well as receive input on the design of trials. These methods were also felt by one interviewee to get “skin in the game” [Interviewee 4] from providers to improve adoption and enrollment.

An important component of provider acceptability was the perceived equipoise of a trial. There were strong provider beliefs and “biases of current practice patterns” [Interviewee 1], as put by one interviewee, that can make a trial’s premise less acceptable. For example, interviewees reported that some physicians may not be willing to potentially randomize a patient to not receive a given treatment. One interviewee described a case where “I decided I wasn’t willing to consider no radiation, so I didn’t offer him the trial” [Interviewee 8]. This can also affect trials with multiple treatment modalities that may have similar historical efficacy but vastly different methods of administration, such as radiation therapy versus surgery for prostate cancer.

The acceptability of a trial could also change over time. This presents difficulties in enrolling to trials as evidence evolves, for example, if a new standard of care emerges that was not included in the original trial protocol. Even without external evidence, early indications of toxicity or efficacy while outcomes remain blinded could influence “your threshold for putting additional people on the trial” [Interviewee 2], according to one interviewee. A trial described by an interviewee had decreased enrollment “once it started to become apparent [patients] were not healing very well” [Interviewee 2], leading to closure of the trial. It is important to consider these issues and work in concert with data safety monitoring boards to optimize safe, sustained enrollment to trials.


There was substantial overlap in aspects of acceptability for providers and patients. Notably, we did not directly interview patients, but patient considerations were a large factor in investigators’ trial decisions. As such, our interviewees’ responses reflect physicians’ perceptions of the acceptability of trials to patients, and should not be taken to replace acceptability evaluated directly with patients. Many physician interviewees highlighted a patient advocacy component to designing and implementing trials, reportedly looking to “parrot a lot of what patients would tell them” [Interviewee 10]. The logistics of trial participation, such as travel time to trial sites, were seen as major barriers to trial acceptability and enrollment. Investigators described approaches to trial design that could decrease participation burden and increase acceptability, such as minimizing the number of return visits, identifying sites closer to a patient’s home for lab draws, or converting to virtual visits when possible.

Even when facing these burdens, many patients will seek trials to access experimental treatment. This was reported by interviewees to be a major driver for some trials, particularly in early phases, where a “trial that offers something to the patient that they can’t get off trial” [Interviewee 2] can more easily enroll. For other trials, using strategies like a 2:1 randomization scheme (i.e., a higher chance of receiving the experimental therapy) can “make it a little more palatable” [Interviewee 2] for patients to enroll on an experimental trial.

The expected benefit from trial enrollment was also highlighted by interviewees, primarily as a barrier to trial participation where perceived benefit was low. For example, in conditions with “already a 99% cure rate” [Interviewee 14], enrollment on a trial “adding a toxic therapy” [Interviewee 14] would be difficult. Late phase trials were also perceived by interviewees to be more acceptable than earlier trials, perhaps due to perception of receiving a more “proven” active treatment. Other fringe benefits, such as financial incentives, were not felt by interviewees to significantly impact enrollment to trials.


Patient trial participation hinges on provider trial adoption. Interviewees reported mechanisms to identify how many physicians were enrolling patients onto trials, but how to increase this adoption was less clear. A difficulty commonly reported by interviewees was individual provider engagement, i.e., speaking directly with other physicians about the trial. Low levels of enthusiasm for trials in general, or for a specific trial concept, were felt by interviewees to lead to poor trial adoption and enrollment rates. Part of this engagement was an individual’s belief in the importance of trials, with one interviewee reporting “some faculty [are] invested a lot more than others” [Interviewee 12]. However, some engagement may be modifiable, such as through individual, direct communication with providers through existing relationships. Interviewees reported that continued communication with providers allows for investigators to check in on trial progress and address changes, and physical co-location at clinical sites permits in-person reminders of ongoing trials at the time of clinic visits. Furthermore, interviewees reported using advertising at a group level, such as through multidisciplinary tumor boards, departmental meetings, or research meetings as a potential adoption improvement strategy.

It can be difficult to apply these techniques at scale, however. Some approaches work at an individual level, for example, one interviewee reported they individually “just see and consent and treat all patients… I find that’s the path of least resistance to get people to enroll” [Interviewee 13]. However, this approach is likely unsustainable at multiple sites or with higher enrollment goals. Similarly, individual meetings to increase adoption within a group of 2 or 3 providers were feasible, but expanding to larger groups was reported by an interviewee to be “exponentially larger and harder” [Interviewee 1], particularly if multiple sites were included.

Interviewees reported that institutional investment and support staff may help address some of the issues with adoption. Support staff resources were reported by interviewees to aid immensely with recruitment and improve the likelihood of providers adopting trials. Direct investments in resources for trials could support more of these measures. Additionally, indirect investments from an institution, such as trial involvement being considered as part of promotion or reimbursements, may contribute to a culture of inquiry and encourage trial adoption by providers.


While there were methods to assess both provider adoption of trials and how many patients total enroll in a trial, assessing the proportion of eligible patients enrolled in a trial (i.e., penetration) was reported by interviewees to be much more difficult. In part, this assessment has the same root challenge as enrolling patients: identifying who exactly is eligible. Interviewees reported that while the number of patients approached for a trial was typically recorded and easily accessed, the total number of eligible patients presenting to clinic (i.e., the denominator of total eligible patients) was difficult to measure through the electronic medical record. Despite difficulties measuring penetration, there were some attempts to improve trial conduct that target improved penetration.

One method was to manually identify eligible patients. Some interviewees reported using study coordinators or administrative support to screen all new patients for potential trials. Interviewees reported reviewing patients in a multidisciplinary setting within certain groups, such as “in the context of multidisciplinary boards” [Interviewee 5] where new patients are reviewed and eligibility for trials from within the groups’ portfolios could be assessed. Some interviewees also reported multidisciplinary tumor boards as a good opportunity to recommend trial involvement.

Other attempts to improve penetration relied on aspects of culture and peer pressure. Some interviewees emphasized trial involvement as a standard offer to every cancer patient, considering a trial “always an option” [Interviewee 7], to aid in increasing penetration. Highlighting peer trial enrollment performance was also used by some interviewees to increase confidence in enrolling to trials. As with encouraging adoption, including enrollment numbers during performance review and consideration for promotion, at least in academic settings, was also reportedly used to attempt an increase in trial penetration. Interviewees also emphasized the importance of broad eligibility criteria both for enrollment purposes and to ensure representation and access to therapies for as many patients as possible.


While trials may be successful when first launched, interviewees reported it may be more difficult to sustain this conduct over the trial period. Interviewees reported a drop-off in trial enthusiasm “after the first initial burst of patients” [Interviewee 8]. This could be from newly opened trials competing with the existing trial, providers forgetting about an existing trial, or loss of enthusiasm for a trial as early results are reported. Some strategies reported by interviewees to combat this loss of enthusiasm were reminding providers and trainees about specific trials, sending email reminders of existing trials, and strategies similar to encouraging adoption and penetration (e.g., reminders at tumor boards or research meetings). Another issue raised by interviewees was the emergence of new data or treatments affecting trial equipoise or rationale. Interviewees suggested trials could be designed with potential adaptability in mind, or amended to adjust for these new treatments.

In general, interviewees did not report issues with sustained protocol adherence or follow-up. Interviewees felt well-supported by institutional trial infrastructure and support staff resulting in good participant retention and follow-up on trials.


Fidelity to trial protocols was not reported as a major issue for cancer trials at our institution. Interviewees did suggest a hypothetical issue with protocol deviations affecting interpretation of trial results, but this was reportedly not often seen in practice. Overall, fidelity, including protocol adherence and follow-up/retention, was reported as “less of a challenge” [Interviewee 8] than other aspects of trial conduct, mostly due to strong support from trial coordinators and support units.


We did not explicitly frame an interview question to ask about appropriateness, as asking about trial appropriateness in pilot interviews was off-putting to pilot interviewees, and it was felt that the data gathered from directly asking about appropriateness would most likely only yield comments on improving trial design. Over the course of the interviews, interviewees did comment on the importance of a well-designed trial as paramount to evidence generation. From the perspective of many interviewees, a trial that was not appropriately designed to answer a reasonable question cannot be a successful trial, even if the trial meets its goal enrollment. The design features referenced by interviewees to be important aspects of appropriate trial design included sample size and effect size for power estimates, and the selection of an adequate control arm for randomized trials.


We explored implementation outcomes and early determinants of success in the clinical trial context through semi-structured interviews with cancer clinical trial physician stakeholders at our institution. Our findings highlight important underemphasized components of clinical trial conduct, as well as areas that are largely functioning well from the investigator perspective. We found implementation outcomes to be well understood by cancer clinical trial physician stakeholders, and reflective of issues faced in trial design and implementation. Taken together, our findings highlight important targets for trial implementation improvement and evaluation research.

The most important outcome considerations for trial conduct were felt to be feasibility and implementation cost. These implementation outcomes were the most easily understood and most frequently considered by cancer clinical trial physician stakeholders. While issues of implementation cost could largely be addressed by increasing funding for trials, a more realistic aim may be improving trial efficiency. Understanding feasibility and its assessment may make trials more efficient, but operationalizing assessments of eligible patients at scale is a complex undertaking. Perhaps for this reason, despite endorsing the importance of feasibility, investigators described few formal methods of trial feasibility assessment. Development of these methods, and testing their use and effects on trial enrollment and success, is an important area for future trial implementation work. This may be of particular use in determining additional site placement in multisite trials, or in identifying locations for trials with industry, government, or cooperative group sponsors who are institution-agnostic with respect to trial sites.

Other aspects of feasibility assessment are labor-intensive, and thus costly, and may be amenable to informatics solutions [11]. Identification of patients eligible for clinical trials is a major challenge, likely increases cost of trials, and also impacts the evaluation of trial penetration to eligible patients [12]. While approaches to patient identification such as through natural language processing could help identify patients, these innovations must also be tested within trials to evaluate their impact on enrollment [11].
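As a toy illustration of the informatics approach described above, the sketch below uses simple keyword matching over a clinical note to flag patients for manual eligibility review; the inclusion and exclusion terms are hypothetical, and a real system would use clinical NLP, as in the cited work [11], rather than string matching.

```python
# Toy sketch of automated eligibility pre-screening over clinical notes,
# standing in for the NLP-based approaches cited above. All criteria
# here are hypothetical, and keyword matching is a deliberate
# oversimplification of clinical NLP.

INCLUSION_TERMS = {"metastatic prostate cancer"}  # hypothetical criterion
EXCLUSION_TERMS = {"prior chemotherapy"}          # hypothetical criterion

def prescreen(note: str) -> bool:
    """Flag a note for manual eligibility review if any inclusion term
    appears and no exclusion term does."""
    text = note.lower()
    has_inclusion = any(term in text for term in INCLUSION_TERMS)
    has_exclusion = any(term in text for term in EXCLUSION_TERMS)
    return has_inclusion and not has_exclusion

flagged = prescreen("Assessment: metastatic prostate cancer.")  # True
```

Note that even this toy example hints at why real NLP is needed: a note reading "no prior chemotherapy" would be wrongly excluded by keyword matching, since negation is not handled.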

These improvements to assessing feasibility could result in more efficient and more cost-effective trials. This may help address a critical problem, as the cost of running trials was highlighted by multiple interviewees as a major barrier to implementing trials. Cost can potentially limit trial design elements such as collecting correlative endpoints or the duration of follow-up, discourage the launching of new trials, or create incentives to study only certain types of interventions in trials. Cost may also be particularly important in certain contexts where funds are limited and design features may be directed more strongly by sponsors. While targeting improvements in specific trial outcomes such as adoption and penetration has value per se, improving the efficiency of trial conduct and specifically trial enrollment has the potential to decrease trial cost, removing a barrier to success and facilitating more and better clinical trials. It will be important when designing trial improvement interventions to consider the cost of these interventions relative to the benefit to trials to maximize their use and encourage adoption by trialists and trial sponsors.

Conceptualizing trial enrollment as affected by adoption (i.e., uptake by providers) and penetration (i.e., proportion of eligible patients enrolling on a trial) could be helpful for targeting trial improvement interventions. Investigators in our study described strategies implicitly aimed at these components, but were generally not explicit about these targets. Certain strategies, such as advertising at tumor boards, could impact both adoption and penetration, but such strategies may not work in all contexts. Prior work examining multidisciplinary meetings has highlighted the promise of improved recruitment, but also challenges with team dynamics affecting trial enrollment [13, 14]. Despite understanding these concepts and applying informal strategies (e.g., speaking directly with colleagues to increase trial adoption, advertising a trial to attempt to increase penetration), there was little formal assessment of exactly how many physicians were enrolling patients (i.e., adoption) or an exact evaluation of penetration (i.e., how many patients were enrolling relative to the eligible local population). As with assessments of feasibility, there was an interest in understanding penetration, but very little formal assessment or logistical capacity for its evaluation. There is a clear need for future work in this space, both to improve trial conduct and to measure the success of enrollment improvement interventions.
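The distinction between adoption and penetration can be made concrete with simple proportions. The sketch below is purely illustrative, assuming counts of enrolling providers and locally eligible patients are available; as noted above, interviewees reported that such counts are rarely captured formally in practice.

```python
# Illustrative sketch: quantifying trial adoption and penetration.
# All counts below are hypothetical.

def adoption(enrolling_providers: int, total_providers: int) -> float:
    """Proportion of local providers who enrolled at least one patient."""
    return enrolling_providers / total_providers

def penetration(enrolled_patients: int, eligible_patients: int) -> float:
    """Proportion of locally eligible patients enrolled on the trial."""
    return enrolled_patients / eligible_patients

# Hypothetical site: 6 of 20 oncologists enrolled a patient,
# and 15 of 120 locally eligible patients enrolled.
print(f"Adoption: {adoption(6, 20):.0%}")          # 30%
print(f"Penetration: {penetration(15, 120):.1%}")  # 12.5%
```

The hard part, as the interviews make clear, is not the arithmetic but the denominators: identifying the eligible local population and the set of providers who could plausibly offer the trial.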

Overall, these concepts were easily understood and seemed acceptable to investigators, suggesting future trial improvement strategies using these terms could be an effective way to efficiently measure and improve trials. Use of this standardized language can also facilitate adaptation of implementation strategies developed in other complex intervention contexts to clinical trials. Our approach is complementary to efforts to assess trial conduct using behavioral theories, such as qualitative work aiming to improve recruitment to trials, by framing trial improvement within an implementation science model to facilitate the development and targeting of specific interventions [4, 15, 16]. Similarly, our work could add to efforts, such as those from the QuinteT group, to improve enrollment through qualitative work [17]. Indeed, prior work applying qualitative methods to efforts at recruitment has highlighted similar themes to those found in our work, especially difficulties in identifying eligible patients [18]. Our work can add to these findings by applying an implementation lens to the identified barriers, adding an interventional implementation component to the qualitative work.

An initial application of these measures is in the evaluation and improvement of ongoing trials. Use of our outcomes framework approach allows for endpoint measurement in trial improvement evaluations, termed studies within a trial (SWATs) [19]. For example, investigators mentioned email reminders to providers about ongoing trials to improve enrollment. A hypothetical trial improvement study, or SWAT, could randomize a set of trials to email reminders or no email reminders, and measure how many providers offer the trial (adoption) and the proportion of eligible patients enrolled (penetration). This would improve upon prior endpoints of simply “enrollment” or “success.” Such studies present opportunities to evaluate the effectiveness of informatics solutions to support trial implementation, such as algorithms to identify trial-eligible patients, best practice advisory “pop-up” alerts in the electronic medical record, or automated email audit and feedback on trial enrollment performance.
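The hypothetical SWAT described above can be sketched as a simple randomization of ongoing trials to reminder and control arms; the trial identifiers and the 1:1 allocation are hypothetical, and a real SWAT would use a prespecified allocation procedure.

```python
# Illustrative sketch of the hypothetical SWAT described above:
# randomize a set of ongoing trials to email-reminder vs. no-reminder
# arms, with adoption and penetration (rather than raw enrollment) as
# endpoints. Trial IDs and the allocation scheme are hypothetical.
import random

def randomize_swat(trial_ids: list[str], seed: int = 42) -> dict[str, list[str]]:
    """Assign each ongoing trial 1:1 to 'reminder' or 'control'."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    shuffled = trial_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"reminder": shuffled[:half], "control": shuffled[half:]}

arms = randomize_swat(["NCT001", "NCT002", "NCT003", "NCT004"])
# Per-trial endpoints would then be adoption (providers offering the
# trial / all providers) and penetration (patients enrolled / patients
# eligible), compared between arms.
```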

In addition to ongoing trials, our results also emphasize the importance of initial trial design. While many of the strategies used by investigators and suggested by our frameworks look to improve existing trials, it is critical to evaluate the appropriateness and feasibility of clinical trials prior to implementing them. Our interviewees emphasized that a trial must be worth doing (i.e., a trial must be appropriate for the question asked). Part of developing this question may be incorporating physician and patient input to optimize acceptability to both patients and physicians prior to beginning the trial. Despite the stated importance of trials being acceptable, our interviewees did not express a formal method of determining acceptability of trials to physicians or patients. This is another area in need of exploration to improve trial design. Ideally, we can decrease waste by improving trial design initially, and identifying and de-implementing trials doomed to fail before they begin or when they have become unsustainable.

While our interviewees reported few issues with fidelity to trial protocols or follow-up, whether initially or sustained over the trial period, this may reflect our strong institutional trial infrastructure. Other institutions without substantial clinical trial support units may struggle more with protocol adherence or sustained follow-up. These differences may also underlie the infrastructural or “trial effect” that accounts for part of the patient benefit of trial enrollment [20, 21]. Additionally, we focused our investigations on cancer trials, predominantly reflecting interventional trials. Trials of other types (e.g., trials of complex interventions such as smoking cessation programs) might face more barriers to fidelity and sustainability. Future work is planned to investigate these outcomes and determinants in different local contexts and for other intervention types.

Our initial experience exploring implementation outcomes in the trial context with cancer clinical trial physician stakeholders at our institution was generally positive, though our study does have limitations. The first limitation of this study was the narrow scope of participants; we only interviewed one type of clinical trial stakeholder, physicians. Many other disciplines and types of stakeholders are involved in clinical trials and will be incorporated in future studies. However, we did include interviewees from multiple cancer specialties and trial roles. Interviewing only physicians also limits understanding of the patient perspective, particularly for considerations of trial acceptability to patients. Understanding the physician perspective alone can inform trial considerations, and future work will compare physician and patient perspectives on trial design and conduct. Additionally, all interview subjects were members of our own institution, limiting the potential transferability of these perspectives to other contexts, particularly trialists at community sites. Future studies are needed to assess responses in other contexts.

Our initial qualitative exploration of clinical trial implementation outcomes identified targeted areas for trial improvement and supports the acceptability and appropriateness of implementation outcomes in the trial context. Use of the adapted implementation outcomes framework was well understood by cancer clinical trial physician stakeholders, aligned with their understanding of trial processes and barriers, and highlighted nuanced outcomes that could enhance trial improvement and measurement strategies. Applying these outcomes highlighted determinants worthy of further exploration, and future directions for trial improvement research through implementation science methods.


Through semi-structured interviews with cancer clinical trial physician stakeholders, we explored implementation outcomes in the clinical trial context and found targeted areas for future clinical trial improvement and evaluation strategies.

Availability of data and materials

The datasets generated during and/or analyzed during the current study are not publicly available as they are direct transcripts of human subject interviews, but are available from the corresponding author on reasonable request.


  1. NCCN Guidelines. NCCN Clinical Practice Guidelines in Oncology: Prostate Cancer [Internet]. 2021 [cited 2021 Feb 5]. Available from:

  2. Stensland K, Kaffenberger S, Canes D, Galsky M, Skolarus T, Moinzadeh A. Assessing genitourinary cancer clinical trial accrual sufficiency using archived trial data. JCO Clin Cancer Inform. 2020;4:614–22.


  3. Stensland KD, McBride RB, Latif A, Wisnivesky J, Hendricks R, Roper N, et al. Adult cancer clinical trials that fail to complete: an epidemic? J Natl Cancer Inst. 2014;106(9):dju229.


  4. Stensland KD, Damschroder LJ, Sales AE, Schott AF, Skolarus TA. Envisioning clinical trials as complex interventions. Cancer. 2022;128(17):3145–51.


  5. Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials. Cochrane Database Syst Rev. 2018;2:MR000013.


  6. Healy P, Galvin S, Williamson PR, Treweek S, Whiting C, Maeso B, et al. Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership - the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Trials. 2018;19:147.


  7. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.


  8. Stensland KD, Sales AE, Damschroder LJ, Skolarus TA. Applying implementation frameworks to the clinical trial context. Implement Sci Commun. 2022;3:109.


  9. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.


  10. Unger JM, Hershman DL, Till C, Minasian LM, Osarogiagbon RU, Fleury ME, et al. “When offered to participate”: a systematic review and meta-analysis of patient agreement to participate in cancer clinical trials. J Natl Cancer Inst. 2021;113:244–57.


  11. Ni Y, Bermudez M, Kennebeck S, Liddy-Hicks S, Dexheimer J. A real-time automated patient screening system for clinical trials eligibility in an emergency department: design and evaluation. JMIR Med Inform. 2019;7:e14185.


  12. Sertkaya A, Wong H-H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clin Trials. 2016;13:117–26.


  13. Strong S, Paramasivan S, Mills N, Wilson C, Donovan JL, Blazeby JM. “The trial is owned by the team, not by an individual”: a qualitative study exploring the role of teamwork in recruitment to randomised controlled trials in surgical oncology. Trials. 2016;17:212.


  14. McNair AGK, Choh CTP, Metcalfe C, Littlejohns D, Barham CP, Hollowood A, et al. Maximising recruitment into randomised controlled trials: the role of multidisciplinary cancer teams. Eur J Cancer. 2008;44:2623–6.


  15. Gillies K, Brehaut J, Coffey T, Duncan EM, Francis JJ, Hey SP, et al. How can behavioural science help us design better trials? Trials. 2021;22:882.


  16. Hanrahan V, Biesty L, Lawrie L, Duncan E, Gillies K. Theory-guided interviews identified behavioural barriers and enablers to healthcare professionals recruiting participants to maternity trials. J Clin Epidemiol. 2022;145:81–91.


  17. Donovan JL, Jepson M, Rooshenas L, Paramasivan S, Mills N, Elliott D, et al. Development of a new adapted QuinteT Recruitment Intervention (QRI-Two) for rapid application to RCTs underway with enrolment shortfalls—to identify previously hidden barriers and improve recruitment. Trials. 2022;23:258.


  18. Farrar N, Elliott D, Houghton C, Jepson M, Mills N, Paramasivan S, et al. Understanding the perspectives of recruiters is key to improving randomised controlled trial enrolment: a qualitative evidence synthesis. Trials. 2022;23:883.


  19. Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Trials. 2018;19:139.


  20. Braunholtz DA, Edwards SJL, Lilford RJ. Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect.” J Clin Epidemiol. 2001;54:217–24.


  21. Denburg A, Rodriguez-Galindo C, Joffe S. Clinical trials infrastructure as a quality improvement intervention in low- and middle-income countries. Am J Bioeth. 2016;16:3–11.




Not applicable


Dr. Stensland is supported by the National Cancer Institute F32 CA264874 and T32 CA180984. The NCI funded this study, but did not have a role in the design, collection, analysis, interpretation of data, or writing of the manuscript.

Author information

Authors and Affiliations



KDS designed the study, performed and transcribed interviews, coded and interpreted the data, and wrote the manuscript. AS aided in study and interview design and critical review of the manuscript. VV transcribed and coded interviews and aided with data interpretation. LD aided in study and interview design and critical review of the manuscript. TAS aided in study design, data interpretation, supervision, and critical review of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kristian D. Stensland.

Ethics declarations

Ethics approval and consent to participate

This study was deemed exempt by the University of Michigan Institutional Review Board. Interview subjects provided verbal consent prior to participating in the semi-structured interviews.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Stensland, K.D., Sales, A.E., Vedapudi, V.K. et al. Exploring implementation outcomes in the clinical trial context: a qualitative study of physician trial stakeholders. Trials 24, 297 (2023).
