
A maturity model for the scientific review of clinical trial designs and their informativeness

Abstract

Background

Informativeness, in the context of clinical trials, describes whether a study’s results definitively answer its research questions and support meaningful next steps. Many clinical trials end uninformatively. Clinical trial protocols are required to go through regulatory and ethical reviews: domains that focus on concerns outside of trial design, biostatistics, and research methods. Private foundations and government funders rarely require focused scientific design reviews covering these latter areas. There are no documented standards, processes, or even best practices to support a capability for funders to perform scientific design reviews after the peer review process that precedes a funding commitment.

Main body

Considering the investment in and standardization of ethical and regulatory reviews, and the prevalence of studies never finishing or failing to provide definitive results, it may be that scientific reviews of trial designs with a focus on informativeness offer the best chance for improved outcomes and return on investment in clinical trials. A maturity model is a helpful tool for knowledge transfer to help grow capabilities in a new area or for those looking to perform a self-assessment in an existing area. Such a model is offered for scientific design reviews of clinical trial protocols. This maturity model includes 11 process areas and 5 maturity levels. Each of the 55 process area levels is populated with descriptions along a continuum toward an optimal state, aimed at improving trial protocols with respect to risk of failure or uninformativeness.

Conclusion

This tool allows for prescriptive guidance on next investments to improve attributes of post-funding reviews of trials, with a focus on informativeness. Traditional pre-funding peer review has limited capacity for trial design review, especially for detailed biostatistical and methodological review. Select non-industry funders have begun to explore or invest in post-funding review programs of grantee protocols, based on exemplars of such programs. Funders with a desire to meet fiduciary responsibilities and mission goals can use the described model to enhance efforts supporting trial participant commitment and faster cures.


Assessing quality in global health clinical trials

In addition to pharmaceutical industry (industry) funders, hundreds of global health clinical trials (CTs) are funded annually by private foundations, governments, and consortia. A meaningful number of these CTs end without being published or without trustworthy results [1,2,3]. A January 2024 query of ClinicalTrials.gov found 92 phase I–IV CTs, currently active or enrolling participants, with a majority of CT sites in sub-Saharan Africa. Industry, either alone or as leader of a funding group, funded 29.3% of the CTs; the US government funded 12.0%. The remaining 58.7% were funded by private foundations, with some contribution from other governments or organizations. These global health CTs had plans to enroll 91,200 participants (human research subjects). Before a CT begins, industry routinely performs scientific or methodological reviews of CT protocols to identify and address flaws in design. There is no direct evidence that other funders conduct such reviews, so it is plausible that roughly 70% of global health CT protocols receive no dedicated scientific review before enrolling their first study participants. This may account for the large difference in informativeness between industry and non-industry CTs found recently [4].
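The funder-share arithmetic above can be checked against the stated total. The trial counts below (27, 11, and 54) are not given in the text; they are hypothetical values inferred from the reported percentages of the 92-trial sample, shown as a minimal sketch:

```python
# Hypothetical funder counts inferred from the reported shares
# (29.3%, 12.0%, 58.7%) of the 92 active/enrolling trials.
total_trials = 92
counts = {
    "industry": 27,       # industry alone or leading a funding group
    "us_government": 11,
    "other_funders": 54,  # private foundations, other governments, consortia
}
assert sum(counts.values()) == total_trials

shares = {funder: round(100 * n / total_trials, 1) for funder, n in counts.items()}
print(shares)  # {'industry': 29.3, 'us_government': 12.0, 'other_funders': 58.7}

# Share of trials plausibly lacking a dedicated scientific design review,
# i.e., everything not funded by industry:
non_industry_share = round(100 * (total_trials - counts["industry"]) / total_trials)
print(non_industry_share)  # 71, consistent with the roughly 70% cited above
```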

In a CT’s lifecycle, there are two phases prior to the CT’s start and participant recruitment: first, a phase when the CT has not yet secured a funding commitment (pre-funding), and second, a post-funding phase. The dominant approach used by government funders to decide whether a research study will be funded is peer review. While peer review for pre-funding decisions is well established, it continues to evolve, and not necessarily in a scientific direction. For example, a large fraction of stakeholders believe peer review ought to change to assess only the investigator, not the proposed project, or to include a lottery [5]. One systematic review found that, in pre-funding peer review, comments on research design represented 2% of total comments, methodology 4%, and methodological details 5% [6]. These reviewers also needed to comment on dozens of other factors [6]. This dynamic, along with the sometimes-large time gap between pre-funding and CT inception and the design changes made in that interval, makes peer review inadequate for scientific design review.

In the post-funding phase, there are two other types of review that focus on elements outside of CT design. These reviews and related concepts are described in Table 1. The two reviews that happen completely or primarily in post-funding and before participant recruitment begins are regulatory and ethical. The regulatory and ethics review domains are relatively mature and well-developed.

Table 1 Types of reviews for clinical trials

Ethical and regulatory reviews both overlap in limited ways with consideration of CT design methods. “It is clear that scientific assessments are a source of confusion for some ethics committees…ethics committee members revealed that they often had doubts about whether scientific validity is within their purview” [12]. Because the focus of an ethics review is not assessing optimal CT methods, ethicists entering a review may be concerned about whether they have “the scientific literacy necessary to read and understand a protocol” [12]. Regulators and ethicists in low-resource settings are often not trained in the scientific disciplines necessary to evaluate CT design risk, such as biostatistics and pharmacokinetics. Members of institutional review boards seeking to deliver on their primary purpose (a participant protection review supported by the International Council for Harmonisation E6 Good Clinical Practice, E8, and E9 guidelines) and members of regulatory boards seeking to deliver on safety and participant protection may, justifiably, take only a secondary look at a CT’s statistical details. A cursory assessment of methods by an ethics committee may be necessary for its purposes, but it may not be sufficient for funders. Likewise in the regulatory realm: the post-funding review of a protocol will include only targeted scientific assessment, since, for regulators, the focus on safety and similar matters crowds out efforts to identify more optimal approaches to CT design.

This state of affairs leaves an opportunity gap for scientific review of global health CT designs post-funding and prior to CT start. Industry performs scientific design reviews; it may or may not be coincidental that industry-funded CTs were more likely to be informative during COVID than CTs funded by others [17]. The US cancer academic CT community, funded by the US government, has created programs to comply with mandated post-funding scientific review of grantee CT designs. Multiple government and private CT funders, who to date have only performed pre-funding peer reviews, are investigating the cost and effort involved in adding reviews of protocols. It is often only at the protocol stage of trial planning that a funder can see specifics such as whether the trial design is informed by systematic evidence; whether the design is advanced, pragmatic, or participant-centric; or whether concrete recruitment plans, statistical analysis plans, or sample size simulations are present. As yet, standards do not exist.

Informativeness

Informativeness is a characterization of a CT indicating that the study will achieve its recruitment, statistical power, and other design goals, resulting in credibly answering its research questions. An informative CT “provides robust clinical insight and a solid on-ramp to either the next phase of development, a policy change, a new standard of care, or the decision not to progress further” [18]. Uninformative results are widespread: one study found only 6% of CTs funded outside of industry met all four conditions for informativeness [4]. Across the stakeholders working to identify design practices associated with uninformativeness, there is consensus on a core set of failures. These include principal investigators (PIs) being unrealistic or overly optimistic about their ability to set and achieve feasible and appropriate sample sizes, and non-use of evidence-based disease burden and effect rates [17, 19,20,21]. “Studies that failed to influence policy change or a confident next step in a go/no-go decision were associated with factors such as lack of use of common endpoints, lack of conservatism in effect estimates, not using biostatistical simulation to derive sample sizes, using unduly restrictive inclusion criteria, and avoiding use of innovative CT designs” [18]. Qualities that drive informativeness are almost all defined during the design phase of the CT. Eleven of Zarin et al.’s twelve “red flags” for uninformativeness can be identified before a CT begins recruiting [22]. A multi-stakeholder working group of experts led by the Experimental Cancer Medicine Centres made recommendations on how to improve CTs; seven of the group’s ten consensus recommendations could or must be planned and addressed during the design phase of a CT [23]. Because likelihood of informativeness is cemented by a PI’s design work and design choices, post-funding scientific design reviews have high potential to identify risks of uninformative outcomes and suggest fixes before the protocol is finalized and can no longer be changed.

A maturity model for scientific design reviews of clinical trials

A maturity model is a helpful tool for knowledge transfer to help grow capabilities in a new area, or for those looking to perform a self-assessment in an existing area. Such a model is offered here for scientific design reviews of CT protocols, which, given time and funding, offer a chance to identify opportunity gaps in CT design, analysis, and communication. This maturity model includes 11 process areas and 5 maturity levels. Each of the 55 process area levels is populated with descriptions along a continuum toward an optimal state, aimed at improving CT protocols with respect to risk of failure or uninformativeness.

A maturity model is “a tool that helps assess the current effectiveness of a person or group and supports figuring out what capabilities they need to acquire next in order to improve their performance” [24]. As an organization desires to implement CT scientific design/methodology reviews, or improve existing reviews, a maturity model can help to improve quality and capacity.

There are a number of variants of maturity models. A suitable basis for presenting this maturity model is the Object Management Group Business Process Maturity Model (BPMM-OMG) [25]. Maturity levels (MLs) are displayed on the Y-axis and are “well-defined evolutionary plateaus toward achieving a mature…process” [26]. The ML titles specific to BPMM-OMG and their fixed definitions are shown in Table 2. These levels act as ratings or grades for parts of a review process.

Table 2 Maturity levels (BPMM-OMG)a

Capabilities, as represented in maturity models, are often called process areas (PAs). PAs are one or more grouped workstreams performed to meet a need [26]. To create a usable maturity model, users must carefully select the range of capacities and efforts, the clusters of related activities: to evaluate a scientific design review practice, the process areas must be identified and organized. At the Bill & Melinda Gates Foundation, after a post-funding scientific design review program had been developed across multiple disease areas and with multiple study types, eleven PAs were identified as independent capabilities key to the program. These PAs were curated by the authors after program progress through maturity levels, participation in all areas of the program, and non-systematic interviews with other program staff. The PA descriptions for scientific design reviews are shown in Table 3. Each “cell,” or capability cluster at a particular level of maturity, includes examples of mastery at that level. This comprehensive set offers a new or existing practitioner the benefit of including what matters and excluding what does not, resulting in time and cost savings, better CTs, and risk reduction.

Table 3 Process areas for performing scientific design reviews of clinical trials

Once a maturity model variant is selected and the topic-specific PAs are populated, users can plot the maturity levels for each PA. In the case of a maturity model for scientific design reviews, there are 11 PAs with 5 maturity levels each. All 11 PA tables in this maturity model are included in the supplementary material. The first PA table, support for CT informativeness, is reproduced here as an exemplar of the remaining PA tables (Table 4).
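As an illustration of how a funder might use such a model for self-assessment, the sketch below scores each of the 11 process areas (names per Tables S1–S11) against BPMM-OMG's five maturity levels and flags the lowest-scoring areas as candidates for the next investment. The scores themselves are invented, purely for demonstration:

```python
# BPMM-OMG maturity levels, rated 1 (Initial) through 5 (Innovating).
LEVELS = {1: "Initial", 2: "Managed", 3: "Standardized",
          4: "Predictable", 5: "Innovating"}

# The 11 process areas (PAs) of the scientific design review maturity model.
# The level assigned to each PA here is hypothetical, for illustration only.
pa_scores = {
    "Informativeness-centric": 3,
    "Breadth of review expertise": 2,
    "Depth of reviewer expertise": 4,
    "Iterative": 2,
    "Information-enhanced": 3,
    "Solution-oriented": 3,
    "Software-enabled": 1,
    "Collaborative": 4,
    "Rich in data & analytics": 2,
    "Reliability and quality": 3,
    "Time appropriate": 3,
}
assert len(pa_scores) == 11 and all(s in LEVELS for s in pa_scores.values())

# The lowest-scoring PAs suggest where the next improvement effort belongs.
floor = min(pa_scores.values())
next_investments = sorted(pa for pa, s in pa_scores.items() if s == floor)
print(f"Current floor: level {floor} ({LEVELS[floor]})")
print("Candidate next investments:", next_investments)
```

With these example scores, the program's weakest capability is "Software-enabled" at level 1 (Initial), so tooling would be the prescribed next investment; re-scoring after each investment traces a path up the maturity continuum.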

Table 4 Process area 1, informativeness-centric. An informative CT includes a hypothesis that addresses an important and unresolved scientific, medical, or policy question; is designed to provide meaningful evidence related to this question; must have a realistic plan for recruiting sufficient participants; must be conducted and analyzed in a scientifically valid manner; and reports methods and results accurately, completely, and promptly [27]. An alternate definition is that an informative CT is designed to have the best chance to complete on time, answer its research questions definitively, and effect policy change or a regulatory process, through special commitment to (a) siting the CT based on epidemiology and impact rather than convenience, (b) completing a statistical analysis plan concurrently with the CT protocol, (c) using accepted endpoints and conservative effect and prevalence/incidence estimates, and (d) utilizing contemporary techniques, such as statistical simulation, innovative CT designs, and software to monitor recruitment

Discussion

In 2020, the Bill & Melinda Gates Foundation developed and implemented an approach to performing post-funding scientific design reviews of CTs developed by its grantees. The review program, as it evolved, became more complex in order to support high-quality reviews at large volume [28]. It is likely this program generates positive impact by reducing the risk of uninformativeness through its non-mandatory, expert recommendations for protocol changes prior to trial start. The relevance for other CT funders is high, as uninformativeness appears to be an endemic problem. That said, the applicability of progressing to high maturity in the model presented may be limited by funders’ perception that they have little time and few resources. Time and funding constraints also limit the ability of PIs to implement some expert recommendations [29]. Recommendations to a PI to make significant changes to a protocol, such as adding a systematic evidence assessment to inform design (a clear element of informativeness), would need to be funded by a trial planning grant.

Many post-funding scientific design reviews happen globally outside of industry, although less frequently than pre-funding, pre-protocol peer reviews. The non-industry funders performing protocol reviews, such as government-funded entities, private foundations, and the United States National Institutes of Health Cancer Center academic trial funders, operate at a variety of maturity levels. Funders interested in improving or assessing their existing protocol review programs might consider using either the maturity model herein or a simplified version. For example, a funder wanting to add post-funding protocol review to their pre-existing pre-funding peer review might use the model herein but leave out process areas such as (a) having a wide breadth of expertise in a large reviewer team (PA2), (b) having within-review iterations (PA4), and (c) being software-enabled (PA7).

Adopting this maturity model for post-funding scientific design reviews has strengths and limitations. Strengths include (a) the model offers measurement, and an implied pathway toward maturity, in a variety of key areas—some necessary—for delivering scientific design reviews; (b) the model is focused on addressing risk in areas most likely to fail in CTs—trial informativeness; and (c) the model was developed, adjusted, and updated based on learnings from completion of over 100 protocol reviews. Limitations include (a) adopting a commitment to multi-element excellence within eleven process areas makes for a complicated model, (b) the expense involved in pursuing this approach may be challenging for some funders to take on, and (c) due to confidentiality requirements, the foundation is not able to provide detailed examples of its program in action.

Conclusions

Industry-sponsored CTs were found to have, in select situations, significantly higher informativeness than private funder-sponsored CTs [4]. A large portion of global health CTs are supported by private funders. There is interest among private funders in adopting the multi-expert scientific design reviews in use by industry and by select government and foundation funders. Peer review of CTs today offers too little time for a rigorous evaluation of CT design and associated methods. Creating persistent improvement in a CT protocol is most likely achieved by implementing a scientific design review, and the best time for this is late in the design phase, close to when the protocol is finalized. The maturity model described can help funders who do not have an approach for creating a post-funding scientific design review program. If private funders do have such a program, this maturity model can help extend its depth and breadth. The model offers both a formative structure and a continuum promising improved precision, efficacy, collaboration, and communication. The benefit accrues to private and government funders, industry, CT participants, and global citizens alike through increased likelihood of CT informativeness and faster cures.

Availability of data and materials

The dataset analyzed during the current study is available in the ClinicalTrials.gov repository, found at https://clinicaltrials.gov/.

Abbreviations

BPMM-OMG: Business Process Maturity Model from the Object Management Group

CT: Clinical trial

PA: Process area

PI: Principal investigator

References

  1. Zheutlin AR, Niforatos J, Stulberg E, Sussman J. Research waste in randomized clinical trials: a cross-sectional analysis. J Gen Intern Med. 2020;35(10):3105–7. https://doi.org/10.1007/s11606-019-05523-4.


  2. Carlisle B, Kimmelman J, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83. https://doi.org/10.1177/1740774514558307.


  3. Williams RJ, Tse T, DiPiazza K, Zarin DA. Terminated trials in the ClinicalTrials.gov results database: evaluation of availability of primary outcome data and reasons for termination. PLoS ONE. 2015;10(5). https://doi.org/10.1371/journal.pone.0127242

  4. Hutchinson N, Moyer H, Zarin DA, Kimmelman J. The proportion of randomized controlled trials that inform clinical practice. Elife. 2022;17(11):e79491. https://doi.org/10.7554/eLife.79491.


  5. Guthrie S, Ghiga I, Wooding S. What do we know about grant peer review in the health sciences? F1000Research. 2018;6:1335. https://doi.org/10.12688/f1000research.11917.2

  6. Hug SE, Aeschbach M. Criteria for assessing grant applications: a systematic review. Palgrave Commun. 2020;6(1):1–5. https://doi.org/10.1057/s41599-020-0412-9.


  7. Bendiscioli S. The troubles with peer review for allocating research funding: funders need to experiment with versions of peer review and decision-making. EMBO Rep. 2019;20(12):e49472. https://doi.org/10.15252/embr.201949472.


  8. Recio-Saucedo A, Crane K, Meadmore K, Fackrell K, Church H, Fraser S, Blatch-Jones A. What works for peer review and decision-making in research funding: a realist synthesis. Res Integrity Peer Rev. 2022;7(1):1–28. https://doi.org/10.1186/s41073-022-00120-2.


  9. Turner S, Bull A, Chinnery F, Hinks J, Mcardle N, Moran R, Payne H, Guegan EW, Worswick L, Wyatt JC. Evaluation of stakeholder views on peer review of NIHR applications for funding: a qualitative study. BMJ Open. 2018;8(12):e022548. https://doi.org/10.1136/bmjopen-2018-022548.


  10. Investigational New Drug (IND) Application. United States Food and Drug Administration website. Last reviewed February 24, 2021. Accessed April 15, 2022. https://www.fda.gov/drugs/types-applications/investigational-new-drug-ind-application

  11. “Ethics in Clinical Research”. National Institutes of Health Clinical Center website. Updated October 21, 2021. Accessed January 12, 2023. https://clinicalcenter.nih.gov/recruit/ethics.html

  12. Binik A, Hey SP. A framework for assessing scientific merit in ethical review of clinical research. Ethics Human Res. 2019;41(2):2–13. https://doi.org/10.1002/eahr.500007.


  13. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701–11. https://doi.org/10.1001/jama.283.20.2701.


  14. Mooney-Somers J, Olsen A. Ethical review and qualitative research competence: Guidance for reviewers and applicants. Res Ethics. 2017;13(3–4):128–38. https://doi.org/10.1177/1747016116677636.


  15. Williams E, Brown TJ, Griffith P, Rahimi A, Oilepo R, Hammers H, et al. Improving the time to activation of new clinical trials at a National Cancer Institute–designated comprehensive cancer center. JCO Oncol Pract. 2020;16(4):e324–32. https://doi.org/10.1200/OP.19.00325.


  16. Knopman D, Alford E, Tate K, Long M, Khachaturian AS. Patients come from populations and populations contain patients. A two-stage scientific and ethics review: the next adaptation for single institutional review boards. Alzheimer’s & Dementia. 2017;13(8):940–6. https://doi.org/10.1016/j.jalz.2017.06.001.


  17. Hutchinson N, Klas K, Carlisle BG, Kimmelman J, Waligora M. How informative were early SARS-CoV-2 treatment and prevention trials? A longitudinal cohort analysis of trials registered on ClinicalTrials.gov. PLoS ONE. 2022;17(1):e0262114. https://doi.org/10.1371/journal.pone.0262114.


  18. Hartman D, Heaton P, Cammack N, Hudson I, Dolley S, Netsi E, Norman T, Mundel T. Clinical trials in the pandemic age: what is fit for purpose? Gates Open Res. 2020;4. https://doi.org/10.12688/gatesopenres.13146.1

  19. Abrams D, Montesi SB, Moore SK, Manson DK, Klipper KM, Case MA, Brodie D, Beitler JR. Powering bias and clinically important treatment effects in randomized trials of critical illness. Crit Care Med. 2020;48(12):1710–9. https://doi.org/10.1097/CCM.0000000000004568.


  20. Benjamin DM, Hey SP, MacPherson A, Hachem Y, Smith KS, Zhang SX, Wong S, Dolter S, Mandel DR, Kimmelman J. Principal investigators over-optimistically forecast scientific and operational outcomes for clinical trials. PLoS ONE. 2022;17(2):e0262862. https://doi.org/10.1371/journal.pone.0262862.


  21. Rosala-Hallas A, Bhangu A, Blazeby J, Bowman L, Clarke M, Lang T, Nasser M, Siegfried N, Soares-Weiser K, Sydes MR, Wang D. Global health trials methodological research agenda: results from a priority setting exercise. Trials. 2018;19(1):1–8. https://doi.org/10.1186/s13063-018-2440-y.


  22. Zarin DA, Goodman SN, Kimmelman J. eTable: Conditions for trial uninformativeness. In: Harms from uninformative clinical trials. JAMA. 2019;322(9):813–4. https://doi.org/10.1001/jama.2019.9892.


  23. Blagden SP, Billingham L, Brown LC, Buckland SW, Cooper AM, Ellis S, Fisher W, Hughes H, Keatley DA, Maignen FM, Morozov A. Effective delivery of Complex Innovative Design (CID) cancer trials—a consensus statement. Br J Cancer. 2020;122(4):473–82. https://doi.org/10.1038/s41416-019-0653-9.


  24. Fowler M. Maturity Model. Martinfowler.com website. August 24, 2014. Accessed July 25, 2022. https://martinfowler.com/bliki/MaturityModel.html

  25. OMG Standards Development Organization. Object Management Group website. Accessed April 4, 2022. https://www.omg.org/

  26. Paulk MC, Curtis B, Chrissis MB, Weber CV. Capability maturity model, version 1.1. IEEE software. 1993;10(4):18–27. https://doi.org/10.1109/52.219

  27. Zarin DA, Goodman SN, Kimmelman J. Harms from uninformative clinical trials. JAMA. 2019;322(9):813–4. https://doi.org/10.1001/jama.2019.9892.


  28. Burford B, Norman T, Dolley S. Scientific Review of Protocols to Enhance Informativeness of Global Health Clinical Trials. ResearchSquare. 2024. https://doi.org/10.21203/rs.3.rs-3717747/v1.

  29. McLennan S, Nussbaumer-Streit B, Hemkens LG, Briel M. Barriers and facilitating factors for conducting systematic evidence assessments in academic clinical trials. JAMA Network Open. 2021;4(11):e2136577. https://doi.org/10.1001/jamanetworkopen.2021.36577.



Acknowledgements

Not applicable

Funding

Funding was provided by the Bill & Melinda Gates Foundation.

Author information

Authors and Affiliations

Authors

Contributions

SD formulated the concept, designed the model, and wrote the original draft. TN provided supervision and edited the manuscript. DM added to the model and edited the manuscript. DH edited the manuscript and acquired financial support. All authors read and approved the final manuscript.

Corresponding author

Correspondence to S Dolley.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

 From A maturity model for the scientific review of clinical trial designs and their informativeness. Table S1. Process Area 1, Informativeness-centric. Table S2. Process Area 2, Breadth of review expertise. Table S3. Process Area 3, Depth of reviewer expertise. Table S4. Process Area 4, Iterative. Table S5. Process Area 5, Information-enhanced. Table S6. Process Area 6, Solution-oriented. Table S7. Process Area 7, Software-enabled. Table S8. Process Area 8, Collaborative. Table S9. Process Area 9, Rich in data & analytics. Table S10. Process Area 10, Reliability and quality. Table S11. Process Area 11, Time appropriate.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Dolley, S., Norman, T., McNair, D. et al. A maturity model for the scientific review of clinical trial designs and their informativeness. Trials 25, 271 (2024). https://doi.org/10.1186/s13063-024-08099-5
