The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design

Abstract

Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of its conclusions; it is essential to distinguish these pre-planned changes from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce, interpret, and synthesise. This consequently hampers their ability to inform practice as well as future research, and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process.

The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve the reporting of AD randomised trials, thereby improving the interpretability of their results and the reproducibility of their methods, results, and inference. We also hope, indirectly, to facilitate the much-needed knowledge transfer of innovative trial designs so as to maximise their potential benefits. To encourage wide dissemination, this article is freely accessible on the BMJ and Trials journal websites.

“To maximise the benefit to society, you need to not just do research but do it well” Douglas G Altman

Purpose of the paper

Incomplete and poor reporting of randomised clinical trials makes trial findings difficult to interpret due to study methods, results, and inference that are not reproducible. This severely undermines the value of scientific research, obstructs robust evidence synthesis to inform practice and future research, and contributes to research waste [1, 2]. The Consolidated Standards Of Reporting Trials (CONSORT) statement is a consensus-based reporting guidance framework that aims to promote and enhance transparent and adequate reporting of randomised trials [3, 4]. Specific CONSORT extensions addressing the reporting needs for particular trial designs, hypotheses, and interventions have been developed [5]. The use of reporting guidelines is associated with improved completeness in study reporting [6,7,8]; however, mechanisms to improve adherence to reporting guidelines are needed [9,10,11,12].

We developed an Adaptive designs CONSORT Extension (ACE) [13] to the CONSORT 2010 statement [3, 4] to support reporting of randomised trials that use an adaptive design (AD)—referred to as AD randomised trials. In this paper, we define an AD and summarise some types of ADs as well as their use and reporting. We then describe briefly how the ACE guideline was developed, and present its scope and underlying principles. Finally, we present the ACE checklist with explanation and elaboration (E&E) to guide its use.

Adaptive designs: definition, current use, and reporting

The ACE Steering Committee [13] agreed a definition of an AD (Box 1) consistent with the literature [14,15,16,17,18].

Box 1 Definition of an adaptive design (AD)

Substantial uncertainties often exist when designing trials around aspects such as the target population, outcome variability, optimal treatments for testing, treatment duration, treatment intensity, outcomes to measure, and measures of treatment effect [19]. Well designed and conducted AD trials allow researchers to address research questions more efficiently by allowing key aspects or assumptions of ongoing trials to be evaluated or validly stopping treatment arms or entire trials on the basis of available evidence [15, 18, 20, 21]. As a result, patients may receive safe, effective treatments sooner than with fixed (non-adaptive) designs [19, 22,23,24,25]. Despite their potential benefits, there are practical challenges and obstacles to the use of ADs [18, 26,27,28,29,30,31,32,33].

The literature on ADs is considerable, and there is specific terminology associated with the field. Box 2 gives a glossary of key terminology used throughout this E&E document.

Box 2 Definitions of key technical terms

Table 1 summarises some types of ADs and cites examples of their use in randomised trials. The motivations for these trial adaptations are well discussed [15, 18, 21, 22, 25, 103,104,105]. Notably, classification of ADs in the literature is inconsistent [13, 22], while the scope and complexity of trial adaptations and the underpinning statistical methods continue to broaden [18, 20, 106].

Table 1 Some types of adaptations used in randomised trials with examples

Furthermore, there is growing literature citing AD methods [29, 78, 107] and interest in their application by researchers and research funders [26, 28, 108]. Regulators have published reflection and guidance papers on ADs [14, 108,109,110,111]. Several studies, including regulatory reviews, have investigated the use of ADs in randomised trials [27, 29, 31, 33, 37, 45, 97, 107, 108, 112,113,114,115,116,117,118,119]. In summary, ADs are used in a relatively low proportion of trials, although their use is steadily increasing in both the public and private sectors [114,115,116], and they are frequently considered at the design stage [27].

The use of ADs is likely to be underestimated because poor reporting makes them difficult to retrieve in the literature [114]. While the reporting of standard CONSORT requirements in AD randomised trials is generally comparable to that of traditional fixed design trials [45], inadequate and inconsistent reporting of essential aspects relating to ADs is widely documented [26, 27, 45, 107, 112, 113, 120,121,122]. This may limit their credibility, the interpretability of results, and their ability to inform or change practice [14, 26,27,28, 30, 31, 108, 109, 112, 119, 120], whereas transparency and adequate reporting can help address these concerns [22, 27]. In summary, statistical and non-statistical issues arise in ADs [22, 97, 105, 108, 123,124,125,126,127], which require special reporting considerations [13].

Summary of how the ACE guideline was developed

We adhered to a registered protocol [128] and the consensus-driven methodological framework for developing healthcare reporting guidelines recommended by the CONSORT Group and the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network [129]. An open access paper detailing the rationale and the complete development process of the ACE checklist for main reports and abstracts has been published [13]. That paper details how reporting items were identified, the stakeholders who were involved, the decision-making process, how consensus was judged and reporting items were retained or dropped, and how the ACE checklist was finalised. In summary, this comprised a two-stage Delphi process involving cross-sector (public and private) and multidisciplinary key stakeholders in clinical trials research from 21 countries. Delphi survey response rates were 94/143 (66%) in round one, 114/156 (73%) in round two, and 79/143 (55%) across both rounds. This was followed by a consensus meeting attended by 27 cross-sector delegates from Europe, Asia, and the US. Members of the CONSORT Group provided oversight throughout. The ACE Consensus Group and Steering Committee approved the final checklist, including the abstract checklist, and contributed to this E&E document. Box 3 outlines the scope and general principles guiding the application of this extension.

Box 3 ACE guideline scope and general principles

Structure of the ACE guideline

Authors should apply this guideline together with the CONSORT 2010 statement [3, 4] and any other relevant extensions depending on other design features of their AD randomised trial (such as extensions for multi-arm [132], cluster randomised [133], crossover [134], and non-inferiority and equivalence trials [135]). Box 4 summarises the changes made to develop this extension. Table 2 shows which CONSORT 2010 items were adapted and how. We provide both CONSORT 2010 and ACE items with comments, explanation, and examples to illustrate how specific aspects of different types of AD randomised trials should be reported. For the examples, we obtained some additional information from researchers or other trial documents (such as statistical analysis plans (SAPs) and protocols). Headings of examples indicate the type of AD and the specific elements of an item that were better reported, so examples may include some incomplete reporting in relation to other elements.

Box 4 Summary of significant changes to the CONSORT 2010 statement
Table 2 ACE checklist for the main report
Table 3 ACE checklist for abstracts

The ACE checklist

Tables 2 and 3 are checklists for the main report and abstract, respectively. Only new and modified items are discussed in this E&E document, as well as six items that retain the CONSORT 2010 [3, 4] wording but require clarification for certain ADs (Box 4). Authors should download and complete Additional file 1 to accompany a manuscript during journal submission.

Section 1. Title and abstract

CONSORT 2010 item 1b: Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts [136, 137]).

ACE item 1b: Structured summary of trial design, methods, results, and conclusions (for specific guidance see ACE for abstracts, Table 3).

Explanation—A well structured abstract summary encompassing trial design, methods, results, and conclusions is essential regardless of the type of design implemented [137]. This allows readers to search for relevant studies of interest and to quickly judge if the reported trial is relevant to them for further reading. Furthermore, it helps readers to make instant judgements on key benefits and risks of study interventions. Table 3 presents minimum essential items authors should report in an AD randomised trial abstract. Authors should use this extension together with the CONSORT for journal and conference abstracts for additional details [136, 137] and other relevant extensions where appropriate.

CONSORT abstract item (Trial design): Description of the trial design (for example, parallel, cluster, non-inferiority).

ACE abstract item (Trial design): Description of the trial design (for example, parallel, cluster, non-inferiority); include the word “adaptive” in the content or at least as a keyword.

Explanation—AD randomised trials should be indexed properly to allow other researchers to easily retrieve them in literature searches. This is particularly important as trial design may influence interpretation of trial findings and the evidence synthesis approach used during meta-analyses. The MEDLINE database provides “Adaptive clinical trial” as a Medical Subject Heading (MeSH) topic to improve indexing [139]. Authors may also wish to state the type of AD, including details of adaptations, as covered under the new item 3b (Table 3). See Box 5 for exemplars.

Box 5 Exemplars on the use of “adaptive” in the abstract content and/or as a keyword

CONSORT/ACE abstract item (Outcome): Clearly defined primary outcome for this report.

Explanation—In some AD randomised trials, the outcome used to inform adaptations (adaptation outcome) and the primary outcome of the study can differ (see item 6 of the main checklist for details). The necessity of reporting both of these outcomes and results in the abstract depends on the stage of reporting and whether the adaptation decisions made were critical to influencing the interpretation of the final results. For example, when a trial or at least a treatment group is stopped early based on an adaptation outcome which is not the primary outcome, it becomes essential to adequately describe both outcomes in accordance with the CONSORT 2010 statement [3, 4]. Conversely, describing only the primary outcome in the abstract will be sufficient when non-terminal adaptation decisions are made (such as changing the sample size, updating the randomisation, or not dropping any treatment groups at interim analyses) and when final (not interim) results are being reported. Furthermore, the results item (Table 3) should be reported consistent with the stated primary and adaptation outcome(s), where necessary. See Box 6 for exemplars.

Box 6 Exemplars on reporting outcomes in the abstract

ACE abstract item (Adaptation decisions made): Specify what trial adaptation decisions were made in light of the pre-planned decision-making criteria and observed accrued data.

Explanation—A brief account of changes that were made to the trial, on what basis they were made, and when is important. The fact that the design allows for adaptations will influence interpretation of results, potentially due to operational and statistical biases. If changes should have been made, but were not, then this may further influence credibility of results. See the main checklist item 14c for details. See Box 7 for exemplars.

Box 7 Exemplars on reporting adaptation decisions made to the trial in the abstract

Section 3: Methods (Trial design)

ACE item 3b (new): Type of adaptive design used, with details of the pre-planned adaptations and the statistical information informing the adaptations.

Explanation—A description of the type of AD indicates the underlying design concepts and the applicable adaptive statistical methods. Although the nomenclature used to classify ADs is inconsistent, and the related methodology continues to grow [13], some currently used types of ADs are presented in Table 1. A clear description will also improve the indexing of AD methods and ease their identification during literature reviews.

Specification of pre-planned opportunities for adaptations and their scope is essential to preserve the integrity of AD randomised trials [22] and for regulatory assessments, regardless of whether they were triggered during the trial [14, 108, 109]. Details of pre-planned adaptations enable readers to assess the appropriateness of statistical methods used to evaluate operating characteristics of the AD (item 7a) and for performing statistical inference (item 12b). Unfortunately, pre-planned adaptations are commonly insufficiently described [119]. Authors are encouraged to explain the scientific rationale for choosing the considered pre-planned adaptations encapsulated under the CONSORT 2010 item “scientific background and explanation of rationale” (item 2a). This rationale should focus on the goals of the considered adaptations in line with the study objectives and hypotheses (item 2b) [107, 108, 119, 123].

Details of pre-planned adaptations with rationale should be documented in accessible study documents for readers to be able to evaluate what was planned and unplanned (such as protocol, interim and final SAP or dedicated trial document). Of note, any pre-planned adaptation that modifies eligibility criteria (such as in population enrichment ADs [88, 146]) should be clearly described.

Adaptive trials use accrued statistical information to make pre-planned adaptation(s) (item 14c) at interim analyses guided by pre-planned decision-making criteria and rules (item 7b). Reporting this statistical information for guiding adaptations and how it is gathered is paramount. Analytical derivations of statistical information guiding pre-planned adaptations using statistical models or formulae should be described to facilitate reproducibility and interpretation of results. The use of supplementary material or references to published literature is sufficient. For example, sample size re-assessment (SSR) can be performed using different methods with or without knowledge or use of treatment arm allocation [37, 38, 40, 44]. Around 43% (15/35) of regulatory submissions needed further clarifications because of failure to describe how a SSR would be performed [119]. Early stopping of a trial or treatment group for futility can be evaluated based on statistical information supporting a lack of evidence of benefit, derived and expressed in several ways: for example, conditional power [52, 147,148,149,150], predictive power [51, 148, 151,152,153], a threshold for the treatment effect, the posterior probability of the treatment effect [96], or some form of clinical utility that quantifies the balance between benefits and harms [154, 155] or between patient and society perspectives on health outcomes [96]. See Box 8 for exemplars.
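
As a complement to the exemplars in Box 8, the following minimal sketch illustrates one such derivation: conditional power under the "current trend" assumption for a normally distributed test statistic. It is our own illustration, not taken from any cited trial; the information fraction, interim z-statistic, and one-sided alpha are hypothetical inputs.

```python
from scipy.stats import norm

def conditional_power(z_interim: float, info_frac: float, alpha: float = 0.025) -> float:
    """Conditional power under the current-trend assumption.

    Uses the Brownian-motion formulation B(t) = Z(t) * sqrt(t), with the
    drift extrapolated from the interim data. The trial succeeds if the
    final z-statistic exceeds the one-sided critical value z_{1-alpha}.
    """
    z_crit = norm.ppf(1 - alpha)
    b = z_interim * info_frac ** 0.5        # interim Brownian-motion value
    drift = z_interim / info_frac ** 0.5    # drift implied by the current trend
    remaining = 1 - info_frac
    # Conditional on B(t) = b, B(1) ~ N(b + drift * (1 - t), 1 - t)
    return 1 - norm.cdf((z_crit - b - drift * remaining) / remaining ** 0.5)

# Hypothetical interim look halfway through the trial:
print(f"CP = {conditional_power(z_interim=1.0, info_frac=0.5):.3f}")
```

A trial team might, for instance, pre-specify that a group is stopped for futility if conditional power falls below some threshold such as 0.20; the predictive power and posterior probability approaches cited above replace the single drift assumption with averaging over a prior distribution.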

Box 8 Exemplars on reporting item 3b elements

CONSORT 2010 item 3b: Important changes to the design or methods after trial commencement (such as eligibility criteria), with reasons.

ACE item 3c (modification, renumbered): Important changes to the design or methods after trial commencement (such as eligibility criteria) outside the scope of the pre-planned adaptive design features, with reasons.

Explanation—Unplanned changes to certain aspects of the design or methods in response to unexpected circumstances that occur during the trial are common and will need to be reported in AD randomised trials, as in fixed design trials. This may include deviations from pre-planned adaptations and decision rules [15, 66], as well as changes to timing and frequency of interim analyses. Traditionally, unplanned changes with explanation have been documented as protocol amendments and reported as discussed in the CONSORT 2010 statement [3, 4]. Unplanned changes, depending on what they are and why they were made, may introduce bias and compromise trial credibility. Some unplanned changes may render the planned adaptive statistical methods invalid or may complicate interpretation of results [22]. It is therefore essential for authors to detail important changes that occurred outside the scope of the pre-planned adaptations and to explain why deviations from the planned adaptations were necessary. Furthermore, it should be clarified whether unplanned changes were made following access to key trial information such as interim data seen by treatment group or interim results. Such information will help readers assess potential sources of bias and implications for the interpretation of results. For ADs, it is essential to distinguish unplanned changes from pre-planned adaptations (item 3b) [161]. See Box 9 for an exemplar.

Box 9 Exemplar on reporting item 3c elements

Section 6. Outcomes

CONSORT 2010 item 6a: Completely define pre-specified primary and secondary outcome measures, including how and when they were assessed.

ACE item 6a (modification): Completely define pre-specified primary and secondary outcome measures, including how and when they were assessed. Any other outcome measures used to inform pre-planned adaptations should be described with the rationale.

Comment—Authors should also refer to the CONSORT 2010 statement [3, 4] for the original text when applying this item.

Explanation—It is paramount to provide a detailed description of pre-specified outcomes used to assess clinical objectives, including how and when they were assessed. For operational feasibility, ADs often use outcomes that can be observed quickly and easily to inform pre-planned adaptations (adaptation outcomes). Thus, in some situations, adaptations may be based on early observed outcome(s) [162] that are believed to be informative for the primary outcome, even though they differ from it. The adaptation outcome (such as a surrogate, biomarker, or an intermediate outcome) together with the primary outcome influences the adaptation process, the operating characteristics of the AD, and the interpretation and trustworthiness of trial results. Despite many potential advantages of using early observed outcomes to adapt a trial, they pose additional risks of making misleading inferences if they are unreliable [163]. For example, a potentially beneficial treatment could be wrongly discarded, an ineffective treatment incorrectly declared effective or wrongly carried forward for further testing, or the randomisation updated based on unreliable information.

Authors should therefore clearly describe adaptation outcomes similar to the description of pre-specified primary and secondary outcomes in the CONSORT 2010 statement [3, 4]. Authors are encouraged to provide a clinical rationale supporting the use of an adaptation outcome that is different to the primary outcome in order to aid the clinical interpretation of results. For example, evidence supporting that the adaptation outcome can provide reliable information on the primary outcome will suffice. See Box 10 for exemplars.

Box 10 Exemplars on reporting item 6a elements

CONSORT 2010 item 6b: Any changes to trial outcomes after the trial commenced, with reasons.

ACE item 6b (modification): Any unplanned changes to trial outcomes after the trial commenced, with reasons.

Comment—Authors may wish to cross-reference the CONSORT 2010 statement [3, 4] for background details.

Explanation—Outcome reporting bias occurs when the selection of outcomes to report is influenced by the nature and direction of results. The prevalence of outcome reporting bias in medical research is well documented: discrepancies between pre-specified outcomes in protocols or registries and those published in reports [12, 168,169,170,171]; outcomes that portray favourable beneficial effects of treatments and safety profiles being more likely to be reported [169]; some pre-specified primary or secondary outcomes modified or switched after trial commencement [170]. Changes to trial outcomes may also include changes to how outcomes were assessed or measured, when they were assessed, or the order of importance to address objectives [171].

Sometimes when planning trials, there is considerable uncertainty around the magnitude of treatment effects on potential outcomes viewed as acceptable primary endpoints [105, 171]. As a result, although uncommon, a pre-planned adaptation could include the choice of the primary endpoints or hypotheses for assessing the benefit-risk ratio. In such circumstances, the adaptive strategy should be clearly described as a pre-planned adaptation (item 3b). Authors should clearly report any additional changes to outcomes outside the scope of the pre-specified adaptations, including an explanation of why such changes occurred, in line with the CONSORT 2010 statement. This will enable readers to distinguish pre-planned trial adaptations of outcomes from unplanned changes, thereby allowing them to judge outcome reporting bias. See Box 11 for an exemplar.

Box 11 Exemplar on reporting item 6b

Section 7. Sample size and operating characteristics

CONSORT 2010 item 7a: How sample size was determined.

ACE item 7a (modification): How sample size and operating characteristics were determined.

Comments—This section heading was modified to reflect additional operating characteristics that may be required for some ADs in addition to the sample size. Items 3b, 7a, 7b, and 12b are connected so they should be cross-referenced when reporting.

Explanation—Operating characteristics, which relate to the statistical behaviour of a design, should be tailored to address trial objectives and hypotheses, factoring in logistical, ethical, and clinical considerations. These may encompass the maximum sample size, expected sample sizes under certain scenarios, probabilities of identifying beneficial treatments if they exist, and probabilities of making false positive claims of evidence [172, 173]. Specifically, the predetermined sample size for ADs is influenced, among other things, by:

  1. Type and scope of adaptations considered (item 3b);
  2. Decision-making criteria used to inform adaptations (item 7b);
  3. Criteria for claiming overall evidence (such as based on the probability of the treatment effect being above a certain value, targeted treatment effect of interest, and threshold for statistical significance [174, 175]);
  4. Timing and frequency of the adaptations (item 7b);
  5. Type of primary outcome(s) (item 6a) and nuisance parameters (such as outcome variance);
  6. Method for claiming evidence on multiple key hypotheses (part of item 12b);
  7. Desired operating characteristics (see Box 2), such as statistical power and an acceptable level of making a false positive claim of benefit;
  8. Adaptive statistical methods used for analysis (item 12b);
  9. Statistical framework (frequentist or Bayesian) used to design and analyse the trial.

Information that guided estimation of sample size(s), including operating characteristics of the considered AD, should be described sufficiently to enable readers to reproduce the sample size calculation. The assumptions made concerning design parameters should be clearly stated and supported with evidence if possible. Any constraints imposed (for example, due to limited trial population) should be stated. It is good scientific practice to reference the statistical tools used (such as statistical software, program, or code) and to describe the use of statistical simulations when relevant (see item 24b discussion).
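
To illustrate the kind of simulation this refers to, here is a minimal Monte Carlo sketch estimating operating characteristics (type I error rate, power, expected sample size) of a simple two-stage design with a futility look; all design parameters are hypothetical, and a real evaluation would follow the trial's pre-specified rules.

```python
import numpy as np

rng = np.random.default_rng(2020)

def simulate(delta, n_stage=100, futility_z=0.0, final_z=1.96, n_sims=100_000):
    """Estimate rejection probability and expected sample size per arm for a
    two-stage, two-arm design with a futility look after n_stage participants
    per arm (standardised effect delta, known variance 1)."""
    # Stage-wise z-statistics: mean delta * sqrt(n_stage / 2), unit variance
    z1 = rng.normal(delta * np.sqrt(n_stage / 2), 1, n_sims)
    z2 = rng.normal(delta * np.sqrt(n_stage / 2), 1, n_sims)
    z_final = (z1 + z2) / np.sqrt(2)          # pooled z over both stages
    stop_futility = z1 < futility_z           # stop early for futility
    reject = ~stop_futility & (z_final > final_z)
    exp_n = n_stage * (1 + (~stop_futility).mean())
    return reject.mean(), exp_n

for delta, label in [(0.0, "type I error"), (0.3, "power")]:
    prob, exp_n = simulate(delta)
    print(f"delta={delta}: {label} = {prob:.3f}, expected n/arm = {exp_n:.0f}")
```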

In a situation where changing the sample size is a pre-planned adaptation (item 3b), authors should report the initial sample sizes (at interim analyses before the expected change in sample size) and the maximum allowable sample size per group and in total if applicable. The planned sample sizes (or expected numbers of events for time-to-event data) at each interim analysis and final analysis should be reported by treatment group and overall. The timing of interim analyses can be specified as a fraction of information gathered rather than sample size. See Box 12 for exemplars.
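
As a simple illustration of one blinded SSR approach (a sketch under assumed planning values, not a prescription), the per-group sample size for a two-arm comparison of means can be recomputed at an interim look by replacing the planning standard deviation with the pooled interim estimate:

```python
import math
from scipy.stats import norm

def n_per_group(sd: float, delta: float, alpha: float = 0.025, power: float = 0.9) -> int:
    """Standard per-group sample size for a two-arm comparison of means
    (one-sided alpha): n = 2 * sd^2 * (z_{1-alpha} + z_{power})^2 / delta^2."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

planned = n_per_group(sd=1.0, delta=0.4)       # planning assumption
re_estimated = n_per_group(sd=1.3, delta=0.4)  # pooled SD observed at interim
print(planned, re_estimated)                   # 132 -> 222 per group
```

A report would then state the initial, re-estimated, and maximum allowable sample sizes, as described above.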

Box 12 Exemplars on reporting item 7a elements
Fig. 1 Adapted from Steg et al. [182]

CONSORT 2010 item 7b: When applicable, explanation of any interim analyses and stopping guidelines.

ACE item 7b (replacement): Pre-planned interim decision-making criteria to guide the trial adaptation process; whether decision-making criteria were binding or non-binding; pre-planned and actual timing and frequency of interim data looks to inform trial adaptations.

Comments—This item is a replacement so when reporting, the CONSORT 2010 [3] item 7b content should be ignored. Items 7b and 8b overlap, but we intentionally reserved item 8b specifically to enhance complete reporting of ADs with randomisation updates as a pre-planned adaptation. Reporting of these items is also connected to items 3b and 12b.

Explanation—Transparency and complete reporting of pre-planned decision-making criteria (Box 2) and how overall evidence is claimed are essential as they influence operating characteristics of the AD, credibility of the trial, and clinical interpretation of findings [22, 32, 183].

A key feature of an AD is that interim decisions about the course of the trial are informed by observed interim data (element of item 3b) at one or more interim analyses guided by decision rules describing how and when the proposed adaptations will be activated (pre-planned adaptive decision-making criteria). Decision rules, as defined in Box 2, may include, but are not limited to, rules for making adaptations described in Table 1. Decision rules are often constructed with input of key stakeholders (such as clinical investigators, statisticians, patient groups, health economists, and regulators) [184]. For example, statistical methods for formulating early stopping decision rules of a trial or treatment group(s) exist [47, 48, 185,186,187,188].

Decision boundaries (for example, stopping boundaries), pre-specified limits or parameters used to determine adaptations to be made, and criteria for claiming overall evidence of benefit and/or harm (at an interim or final analysis) should be clearly stated. These are influenced by statistical information used to inform adaptations (item 3b). Decision trees or algorithms can aid the representation of complex adaptive decision-making criteria.

Allowing for trial adaptations too early in a trial with inadequate information severely undermines the robustness of adaptive decision-making criteria and the trustworthiness of trial results [189, 190]. Furthermore, methods and results can only be reproduced when the timing and frequency of interim analyses are adequately described. Therefore, authors should detail when and how often the interim analyses were planned to be implemented. The planned timing can be described in terms of information, such as interim sample size or number of events relative to the maximum sample size or maximum number of events, respectively. In circumstances where the pre-planned and actual timing and/or frequency of the interim analyses differ, reports should clearly state what actually happened (item 3c).

Clarification should be made on whether decision rules were binding or non-binding to help assess the implications when they were overruled or ignored. Non-binding decision rules are those that can be overruled without compromising control of the type I error rate; by contrast, when a binding futility boundary is overruled and the trial is continued, the type I error rate is inflated. The use of non-binding futility boundaries is often advised [51]. See Box 13 for exemplars.
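
The consequence of overruling a binding futility boundary can be made concrete with a small simulation (a sketch with hypothetical boundaries, not any specific trial's design): the final critical value is calibrated assuming the futility rule is enforced, so ignoring the rule inflates the type I error rate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, info_frac, futility_z, alpha = 1_000_000, 0.5, 0.5, 0.025

# Under H0: interim and final z-statistics with correlation sqrt(info_frac)
z1 = rng.normal(size=n_sims)
z_final = np.sqrt(info_frac) * z1 + np.sqrt(1 - info_frac) * rng.normal(size=n_sims)

# Calibrate the final critical value assuming the binding futility rule
# (stop if z1 < futility_z) is always enforced.
continuing = z1 >= futility_z
c = np.quantile(z_final[continuing], 1 - alpha * n_sims / continuing.sum())

enforced = (continuing & (z_final > c)).mean()   # ~0.025 by construction
overruled = (z_final > c).mean()                 # futility ignored: inflated
print(f"c = {c:.3f}, enforced = {enforced:.4f}, overruled = {overruled:.4f}")
```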

Box 13 Exemplars on reporting item 7b elements
Table 4 Stopping boundaries
Fig. 2 Redrawn from Gilson et al. [260]. Reused in accordance with the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); no changes to the original figure were made

Additional examples on the use of non-binding futility boundaries and a cap on sample size following SSR and treatment selection are given in Additional file 2.

Section 8. Randomisation (Sequence generation)

CONSORT 2010 item 8b: Type of randomisation; details of any restriction (such as blocking and block size).

ACE item 8b (modification): Type of randomisation; details of any restriction (such as blocking and block size); any changes to the allocation rule after trial adaptation decisions; any pre-planned allocation rule or algorithm to update randomisation with timing and frequency of updates.

Comments—In applying this item, the reporting of randomisation aspects before activation of trial adaptations must adhere to CONSORT 2010 items 8a and 8b. This E&E document only addresses additional randomisation aspects that are essential when reporting any AD where the randomisation allocation changes. Note that the contents of extension items 7b and 8b overlap.

Explanation—In AD randomised trials, the allocation ratio(s) may remain fixed throughout or change during the trial as a consequence of pre-planned adaptations (for example, when modifying randomisation to favour treatments more likely to show benefits, after treatment selection, or upon introduction of a new arm to an ongoing trial) [69]. Unplanned changes may also change allocation ratios (for example, after early stopping of a treatment arm due to unforeseeable harms).

This reporting item is particularly important for response-adaptive randomisation (RAR) ADs as several factors influence their efficiency and operating characteristics, which in turn influence the trustworthiness of results and necessitate adequate reporting [13, 182, 197,198,199]. For RAR ADs, authors should therefore detail the pre-planned:

  a) Burn-in period before activating randomisation updates, including the period when the control group allocation ratio was fixed;
  b) Type of randomisation method with allocation ratios per group during the burn-in period as detailed in the standard CONSORT 2010 item 8b;
  c) Method or algorithm used to adapt or modify the randomisation allocations after the burn-in period;
  d) Information used to inform the adaptive randomisation algorithm and how it was derived (item 3b). Specifically, when a Bayesian RAR is used, we encourage authors to provide details of statistical models and rationale for the prior distribution chosen;
  e) Frequency of updating the allocation ratio (for example, after accrual of a certain number of participants with outcome data or at defined regular time periods); and
  f) Adaptive decision-making criteria to declare early evidence in favour or against certain treatment groups (part of item 7b).

In addition, any envisaged changes to the allocation ratio as a consequence of other trial adaptations (for example, early stopping of an arm or addition of a new arm) should be stated. See Box 14 for exemplars.
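
To make elements (a) to (f) concrete before the exemplars in Box 14, here is a deliberately simplified Bayesian RAR sketch for two experimental arms against a fixed-ratio control with binary outcomes (Beta-Binomial model); the burn-in length, update frequency, clipping bounds, and true response rates are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
true_p = {"control": 0.30, "A": 0.30, "B": 0.45}    # hypothetical response rates
successes = {k: 0 for k in true_p}
n = {k: 0 for k in true_p}
alloc = {"control": 1 / 3, "A": 1 / 3, "B": 1 / 3}  # burn-in: equal randomisation
BURN_IN, BLOCK, TOTAL = 60, 30, 300

for i in range(TOTAL):
    arm = rng.choice(list(alloc), p=list(alloc.values()))
    n[arm] += 1
    successes[arm] += rng.random() < true_p[arm]

    # (e) update the allocation after burn-in, every BLOCK participants
    if i + 1 >= BURN_IN and (i + 1) % BLOCK == 0:
        # (c)/(d) posterior draws from Beta(1 + s, 1 + n - s) per arm
        draws = {k: rng.beta(1 + successes[k], 1 + n[k] - successes[k], 10_000)
                 for k in ("A", "B")}
        p_best = (draws["A"] > draws["B"]).mean()   # P(A better than B | data)
        p_best = min(max(p_best, 0.1), 0.9)         # clip to avoid extreme ratios
        # (a) control allocation stays fixed at 1/3; experimental share is split
        alloc = {"control": 1 / 3, "A": (2 / 3) * p_best, "B": (2 / 3) * (1 - p_best)}

print({k: f"{n[k]} pts, {successes[k]} resp" for k in n})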

Box 14 Exemplars on reporting item 8b elements

Section 11. Randomisation (Blinding)

ACE item 11c (new): Measures to safeguard the confidentiality of interim information and minimise potential operational bias during the trial.

Explanation—Preventing or minimising bias is central to robust evaluation of the beneficial and harmful effects of interventions. Analysis of accumulating trial data brings challenges regarding how knowledge or leakage of information, or mere speculation about interim treatment effects, may influence the behaviour of key stakeholders involved in the conduct of the trial [22, 122, 200]. Such behavioural changes may include differential clinical management; reporting of harmful effects; clinical assessment of outcomes; and decision-making to favour one treatment group over the other. Inconsistencies in trial conduct before and after adaptations have wide implications that may affect trial validity and integrity [22]. For example, the use of statistical methods that combine data across stages may become questionable or may make overall results uninterpretable. AD randomised trials whose integrity was severely compromised by disclosure of interim results have led regulators to question the credibility of conclusions [201, 202]. Most AD randomised trials, 76% (52/68) [45] and 60% (151/251) [112], did not disclose methods to minimise potential operational bias during interim analyses. The seriousness of this potential risk depends on various trial characteristics; the purpose of disclosure is to enable readers to judge potential sources of bias, and thus how trustworthy the results can be assumed to be.

The literature covers processes and procedures that researchers could consider to preserve the confidentiality of interim results and minimise potential operational bias [41, 123, 203]. There is no universal approach that suits every situation, owing to factors such as feasibility, the nature of the trial, and available resources and infrastructure. Some authors discuss roles and activities of independent committees in adaptive decision-making processes and control mechanisms for limiting access to interim information [203,204,205].

Description of the process and procedures put in place to minimise the potential introduction of operational bias related to interim analyses and decision-making to inform adaptations is essential [22, 125, 203]. Specifically, authors should give consideration to:

  a) Who recommended or made adaptation decisions. The roles of the sponsor or funder, clinical investigators, and trial monitoring committees (for example, independent data monitoring committee or dedicated committee for adaptation) in the decision-making process should be clearly stated;
  b) Who had access to interim data and performed interim analyses;
  c) Safeguards which were in place to maintain confidentiality (for example, how the interim results were communicated, to whom, and when).

See Box 15 for exemplars.

Box 15 Exemplars on reporting item 11c elements

Section 12. Statistical methods

CONSORT 2010 item 12a: Statistical methods used to compare groups for primary and secondary outcomes.

ACE item 12a (modification): Statistical methods used to compare groups for primary and secondary outcomes, and any other outcomes used to make pre-planned adaptations.

Comment—This item should be applied with reference to the detailed discussion in the CONSORT 2010 statement [3, 4].

Explanation—The CONSORT 2010 statement [3, 4] addresses the importance of detailing statistical methods to analyse primary and secondary outcomes at the end of the trial. This ACE modified item extends this to require similar description to be made of statistical methods used for interim analyses. Furthermore, statistical methods used to analyse any other adaptation outcomes (item 6) should be detailed to enhance reproducibility of the adaptation process and results. Authors should focus on complete description of statistical models and aspects of the estimand of interest [206, 207] consistent with stated objectives and hypotheses (item 2b) and pre-planned adaptations (item 3b).

For Bayesian ADs, item 12b (paragraph 6) describes similar information that should be reported for Bayesian methods.

See Box 16 for exemplars.

Box 16 Exemplars on reporting item 12a elements

ACE item 12b (new): For the implemented adaptive design features, statistical methods used to estimate treatment effects for key endpoints and to make inferences.

Comments—Note that items 7a and 12b are connected. Key endpoints are all primary endpoints as well as other endpoints considered highly important, for example, an endpoint used for adaptation.

Explanation—A goal of every trial is to provide reliable estimates of the treatment effect for assessing benefits and risks to reach correct conclusions. Several statistical issues may arise when using an AD depending on its type and the scope of adaptations, the adaptive decision-making criteria, and whether frequentist or Bayesian methods are used to design and analyse the trial [22]. Conventional estimates of treatment effect based on fixed design methods may be unreliable when applied to ADs (for example, they may exaggerate the patient benefit) [92, 209,210,211,212,213]. Measures of precision around the estimated treatment effects may also be incorrect (for example, the width of confidence intervals). Other methods available to summarise the level of evidence in hypothesis testing (for example, p-values) may give different answers. Some factors and conditions that influence the magnitude of estimation bias have been investigated, and there are circumstances when it may not be of concern [209, 214,215,216,217,218]. Secondary analyses (for example, health economic evaluation) may also be affected if appropriate adjustments are not made [219, 220]. Cameron et al. [221] discuss methodological challenges in performing network meta-analysis when combining evidence from randomised trials with ADs and fixed designs. Statistical methods for estimating the treatment effect and its precision exist for some ADs [64, 222,223,224,225,226,227,228,229,230,231] and implementation tools are being developed [78, 232,233,234]. However, these methods are rarely used or reported, and the implications are unclear [45, 209, 235]. Debate and research on inference for some ADs with complex adaptations are ongoing.

In addition to statistical methods for comparing outcomes between groups (item 12a), we specifically encourage authors to clearly describe statistical methods used to estimate measures of treatment effects with associated uncertainty (for example, confidence or credible intervals) and p-value (when appropriate); referencing relevant literature is sufficient. When conventional or naïve estimators derived from fixed design methods are used, it should be clearly stated. In situations where statistical simulations were used to either explore the extent of bias in estimation of the treatment effects (such as [181, 236]) or operating characteristics, it is good practice to mention this and provide supporting evidence (item 24c).
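
The tendency of naïve estimates to exaggerate benefit after data-dependent stopping can be illustrated with a small simulation (a hypothetical two-stage design with early stopping for efficacy; not drawn from the studies cited above):

```python
import numpy as np

rng = np.random.default_rng(42)
true_delta, n_stage, sims, z_eff = 0.2, 100, 200_000, 2.5

# Stage-wise estimated mean differences (two-arm, unit outcome variance)
se_stage = np.sqrt(2 / n_stage)
d1 = rng.normal(true_delta, se_stage, sims)
d2 = rng.normal(true_delta, se_stage, sims)

stop_early = d1 / se_stage > z_eff            # efficacy stop at the interim
naive = np.where(stop_early, d1, (d1 + d2) / 2)  # fixed-design style estimate

print(f"true effect: {true_delta}")
print(f"mean naive estimate, stopped early: {d1[stop_early].mean():.3f}")
print(f"mean naive estimate, all trials:    {naive.mean():.3f}")
```

Trials that cross the efficacy boundary are, on average, those with unusually favourable interim data, so the naïve estimate conditional on early stopping overshoots the true effect; this is the selection mechanism the adjusted estimators cited above are designed to correct.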

ADs tend to increase the risk of making misleading or unjustified claims of treatment effects if traditional methods that ignore trial adaptations are used. In general, this arises when selecting one or more hypothesis test results from a possible list in order to claim evidence of the desired conclusion. For instance, the risks may increase by testing the same hypothesis several times (for example, at interim and final analyses), hypothesis testing of multiple treatment comparisons, selecting an appropriate population from multiple target populations, adapting key outcomes, or a combination of these [22]. A variety of adaptive statistical methods exist for controlling specific operating characteristics of the design (for example, type I error rate, power) depending on the nature of the repeated testing of hypotheses [47, 57, 58, 78, 193, 237,238,239,240,241,242].

Authors should therefore state operating characteristics of the design that have been controlled and details of statistical methods used. The need for controlling a specific type of operating characteristic (for example, pairwise or familywise type I error rate) is context dependent (for example, based on regulatory considerations, objectives and setting) so clarification is encouraged to help interpretation. How evidence of benefit and/or risk is claimed (part of item 7a) and hypotheses being tested (item 2b) should be clear. In situations where statistical simulations were used, we encourage authors to provide a report, where possible (item 24b).

When data or statistical tests across independent stages are combined to make statistical inference, authors should clearly describe the combination test method (for example, Fisher’s combination method, inverse normal method or conditional error function) [193, 240, 241, 243, 244] and weights used for each stage (when not obvious). This information is important because different methods and weights may produce results that lead to different conclusions. Bauer and Einfalt [107] found low reporting quality of these methods.
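
A minimal sketch of the inverse normal combination method for two stages may help make this concrete (weights here are taken proportional to the square root of the planned stage sample sizes; all values are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p_values, stage_ns, alpha=0.025):
    """Combine independent stage-wise one-sided p-values with pre-specified
    weights w_k proportional to sqrt(planned stage sample size); the weights
    must be fixed in advance and must not depend on interim data."""
    w = np.sqrt(np.asarray(stage_ns, dtype=float))
    w /= np.linalg.norm(w)                    # normalise so sum of w_k^2 = 1
    z = np.sum(w * norm.ppf(1 - np.asarray(p_values)))
    return z, z > norm.ppf(1 - alpha)

# Hypothetical stage-wise p-values after a sample size re-assessment:
z, reject = inverse_normal_combination(p_values=[0.08, 0.02], stage_ns=[100, 150])
print(f"combined z = {z:.3f}, reject H0 = {reject}")
```

Because the pre-specified weights (not the realised stage sizes) enter the combination, the test remains valid even when the second-stage sample size was changed at the interim; reporting the weights is therefore essential.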

Brard et al. [245] found evidence of poor reporting of Bayesian methods. To address this, when a Bayesian AD is used, authors should detail the model used for analysis to estimate the posterior probability distribution; the prior distribution used and rationale for its choice; whether the prior was updated in light of interim data and how; and clarify the stages when the prior information was used (interim or/and final analysis). If an informative prior was used, the source of data to inform this prior should be disclosed where applicable. Of note, part of the Bayesian community argue that it is not principled to control frequentist operating characteristics in Bayesian ADs [246], although these can be computed and presented [22, 154, 247].
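
To illustrate the kind of Bayesian detail this calls for, here is a minimal Beta-Binomial sketch (the priors, interim counts, and decision threshold are all hypothetical) computing the posterior probability that the treatment response rate exceeds control:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical priors: weakly informative Beta(1, 1) on each response rate.
prior_a, prior_b = 1, 1
# Hypothetical interim data: responders / randomised per group.
resp_t, n_t = 24, 60
resp_c, n_c = 15, 60

# Conjugate update: Beta(prior_a + responders, prior_b + non-responders)
p_t = rng.beta(prior_a + resp_t, prior_b + n_t - resp_t, 100_000)
p_c = rng.beta(prior_a + resp_c, prior_b + n_c - resp_c, 100_000)

post_prob = (p_t > p_c).mean()
print(f"P(treatment > control | data) = {post_prob:.3f}")
# A report would state this model, the prior choice and its rationale, and the
# pre-specified decision threshold (for example, continue if > 0.90).
```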

Typically, ADs require quickly observed adaptation outcomes relative to the expected length of the trial. In some ADs, randomised participants who have received the treatment may not have their outcome data available at the interim analysis (referred to as overrunning participants) for various reasons [248]. These delayed responses may pose ethical dilemmas depending on the adaptive decisions taken, present logistical challenges, or diminish the efficiency of the AD depending on their prevalence and the objective of the adaptations [201]. It is therefore useful for readers to understand how overrunning participants were dealt with at interim analyses especially after a terminal adaptation decision (for example, when a trial or treatment groups were stopped early for efficacy or futility). If outcome data of overrunning participants were collected, a description should be given of how these data were analysed and combined with interim results after the last interim decision was made. Some formal statistical methods to deal with accrued data from overrunning participants have been proposed [249].

See Box 17 for exemplars.

Box 17 Exemplars on reporting item 12 elements

Section 13. Results (Participant flow)

CONSORT 2010 item 13a: For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome.

ACE item 13a (modification): For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome and any other outcomes used to inform pre-planned adaptations, if applicable.

Comments—Authors are referred to the CONSORT 2010 statement [3, 4] for detailed discussion. Here, we only address additional requirements for ADs.

Explanation—The CONSORT 2010 statement [3, 4] discusses why it is essential to describe participant flow adequately from screening to analysis. This applies to both interim and final analyses, depending on the stage of reporting. The number of participants for each group with adaptation outcome data (that contributed to the interim analyses) should also be reported if different from the number of participants with primary outcome data. Furthermore, authors should report the number of randomised participants, for each group, who did not contribute to each interim analysis because of lack of mature outcome data at that interim look; for example, overrunning participants who were still being followed up when a terminal adaptation decision was made (such as dropping of treatment groups or early trial termination). The presentation of participant flow should align with the key hypotheses (for example, subpopulation(s) and full study population) and treatment comparisons, depending on the stage of results being reported.

See Box 18 for exemplars.

Box 18 Exemplars on reporting item 13 (participant flowcharts)

Section 14. Results (Recruitment)

CONSORT 2010 item 14a: Dates defining the periods of recruitment and follow-up.

ACE item 14a (modification): Dates defining the periods of recruitment and follow-up, for each group.

Comment—Authors should refer to the CONSORT 2010 statement [3, 4] for the discussion.

Explanation—Consumers of research findings should be able to put trial results, study interventions, and comparators into context. Some ADs, such as those that evaluate multiple treatments allowing dropping of futile ones, selection of promising treatments, or addition of new treatments to an ongoing trial [19, 102, 258, 259], incorporate pre-planned adaptations to drop or add treatment groups during the course of the trial. As a result, dates of recruitment and follow-up may differ across treatment groups. In addition, the comparator arm may change with time, and concurrent or non-concurrent controls may be used. There are statistical implications, including how analysis populations for particular treatment comparisons are defined at different stages. For each treatment group, authors should clearly state the exact dates defining the recruitment and follow-up periods. It should be stated whether all treatment groups were recruited and followed up during the same period.

See Box 19 for exemplars.

Box 19 Exemplars on reporting item 14a

CONSORT 2010/ACE item 14b (clarification): Why the trial ended or was stopped.

Comment—This item should be applied without reference to the CONSORT 2010 statement [3, 4].

Explanation—Some clinical trials are stopped earlier than planned for reasons that will have implications for interpretation and generalisability of results. For example, poor recruitment is a common challenge [261]. This may limit the inference drawn or complicate interpretation of results based on insufficient or truncated trial data. Thus, the reporting of reasons for stopping a trial early including circumstances leading to that decision could help readers to interpret results with relevant caveats.

The CONSORT 2010 statement [3, 4], however, did not distinguish early stopping of a trial due to a pre-planned adaptation from an unplanned change. To address this and for consistency, we have now reserved this item for reporting of reasons why the trial or certain treatment arm(s) were stopped outside the scope of pre-planned adaptations, including those involved in deliberations leading to this decision (for example, sponsor, funder, or trial monitoring committee). We also introduced item 14c to capture aspects of adaptation decisions made in light of the accumulating data, such as stopping the trial or treatment arm because the decision-making criterion to do so has been met.

See Box 20 for exemplars.

Box 20 Exemplars on reporting item 14b

ACE item 14c (new): Specify what trial adaptation decisions were made in light of the pre-planned decision-making criteria and observed accrued data.

Explanation—ADs depend on adherence to pre-planned decision rules to inform adaptations. Thus, it is vital for research consumers to be able to assess whether the adaptation rules were adhered to as pre-specified in the decision-making criteria given the observed accrued data at the interim analyses. Failure to adhere to pre-planned decision rules may undermine the integrity of the results and validity of the design by affecting the operating characteristics (see item 7b for details on binding and non-binding decision rules).

Unforeseeable events can occur that may lead to deviations from some pre-planned adaptation decision rules (for example, the overruling or ignoring of certain rules). It is therefore essential to adequately describe which pre-planned adaptations were enforced; which were pre-planned but were not enforced, or were overruled even though the interim analysis decision rules indicated an adaptation should be made; and which unplanned changes were made other than unplanned early stopping of the trial or treatment arm(s), which is covered by item 14b. Pre-planned adaptations that were not implemented are difficult to assess because the interim decisions made versus the pre-planned intended decisions are often poorly reported, and reasons are rarely given [115]. The rationale for ignoring or overruling pre-planned adaptation decisions, or for making unplanned decisions that affect the adaptations, should be clearly stated, along with who recommended or made such decisions (for example, the data monitoring committee or adaptation committee). This enables assessment of potential bias in the adaptation decision-making process, which is crucial for the credibility of the trial.

Authors should indicate the point at which the adaptation decisions were made (that is, stage of results) and any additional design changes that were made as a consequence of adaptation decisions (for example, change in allocation ratio).

See Box 21 for exemplars.

Box 21 Exemplars on reporting item 14c elements

Section 15. Results (Baseline data)

CONSORT 2010 item 15: A table showing baseline demographic and clinical characteristics for each group.

ACE item 15a (clarification, renumbered from item 15): A table showing baseline demographic and clinical characteristics for each group.

Comments—We renumbered the item to accommodate the new item 15b. This item should be applied with reference to the CONSORT 2010 statement [3, 4], with additional requirements for specific ADs.

Explanation—The presentation of treatment group summaries of key characteristics and demographics of randomised participants who contributed to results influences interpretation and helps readers and medical practitioners to make judgements about which patients the results are applicable to. For some ADs, such as population (or biomarker or patient) enrichment [83, 146], when the study population is considered heterogeneous, a trial could be designed to evaluate if study treatments are effective in specific pre-specified subpopulations or a wider study population (full population). A pre-planned adaptation strategy may involve testing the effect of treatments in both pre-specified subpopulations of interest and the wider population in order to target patients likely to benefit the most. For such ADs, it is essential to provide summaries of characteristics of those who were randomised and who contributed to the results being reported (both interim or final), by treatment group for each subpopulation of interest and the full population consistent with hypotheses tested. These summaries should be reported without hypothesis testing of baseline differences in participants’ characteristics because it is illogical in randomised trials [263,264,265,266]. The CONSORT 2010 statement [3, 4] presents an example of how to summarise baseline characteristics.

In the presence of marked differences in the numbers of randomised participants and those included in the interim or final analyses, authors are encouraged to report baseline summaries by treatment group for these two populations. Readers will then be able to assess representativeness of the interim or final analysis population relative to those randomised and also the target population.

See Box 22 for an exemplar.

Box 22 Exemplar on reporting item 15a

ACE item 15b (new): Summary of data to enable the assessment of similarity in the trial population between interim stages.

Comment—This item is applicable for ADs conducted in distinct stages for which the trial has progressed beyond the first stage.

Explanation—Changes in trial conduct and other factors may introduce heterogeneity in the characteristics or standard management of patients before and after trial adaptations. Consequently, results may be inconsistent or heterogeneous between stages (interim parts) of the trial [201]. For ADs, access to interim results or mere guesses based on interim decisions taken may influence behaviour of those directly involved in the conduct of the trial and thus introduce operational bias [22]. Some trial adaptations may introduce intended changes to inclusion or exclusion criteria (for example, population enrichment [88, 146]). Unintended changes to characteristics of patients over time may occur (population drift) [267]. A concern is whether this could lead to a trial with a different study population that does not address the primary research objectives [268]. This jeopardises validity, interpretability, and credibility of trial results. It may be difficult to determine whether differences in characteristics between stages occurred naturally due to chance, were an unintended consequence of pre-planned trial adaptations, represent operational bias introduced by knowledge or communication of interim results, or are for other reasons [269]. However, details related to item 11c may help readers make informed judgements on whether any observed marked differences in characteristics between stages are potentially due to systematic bias or just chance. Therefore, it is essential to provide key summary data of participants included in the analysis (as discussed in item 15a) for each interim stage of the trial and overall. Authors are also encouraged to give summaries by stage and treatment group. This will help readers assess similarity in the trial population between stages and whether it is consistent across treatment groups.

See Box 23 for an exemplar.

Box 23 Exemplar on reporting item 15b elements
Table 5 Characteristics of randomised participants (N = 1202) in stages 1 and 2

Section 16. Results (Numbers analysed)

CONSORT 2010/ACE item 16 (clarification): For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups.

Comments—The item should be used in reference to the CONSORT 2010 statement [3, 4] for original details and examples. Here, we give additional clarification for some specific requirements of certain ADs such as population enrichment [83, 146].

Explanation—We clarify that the number of participants by treatment group should be reported for each analysis at both the interim analyses and final analysis whenever a comparative assessment is performed (for example, for efficacy, effectiveness, or safety). Most importantly, the presentation should reflect the key hypotheses considered to address the research questions. For example, population (or patient or biomarker) enrichment ADs can be reported by treatment group for each pre-specified subpopulation and full population depending on key hypotheses tested.

Section 17. Results (Outcomes and estimation)

CONSORT 2010/ACE item 17a (clarification): For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval).

Comments—We expanded the explanatory text to address some specific requirements of certain ADs such as population enrichment [146]. The item should therefore be used in reference to the CONSORT 2010 statement [3] for original details and examples.

Explanation—In randomised trials, we analyse participant outcome data collected after study treatments are administered to address research questions about the beneficial and/or harmful effects of these treatments. In principle, reported results should be in line with the pre-specified estimand(s) and compatible with the research questions or objectives [206, 207]. The CONSORT 2010 statement [3, 4] addresses what authors should report depending on the outcome measures: for both interim and final analyses, group summary measures, the number of participants contributing to each analysis, appropriate measures of the treatment effects (for example, between-group effects for a parallel group randomised trial), and associated uncertainty (such as credible or confidence intervals). Importantly, the presentation is influenced by how the key hypotheses are configured to address the research questions. For some ADs, such as population (or biomarker or patient) enrichment, key hypotheses often relate to whether the study treatments are effective in the whole target population of interest or in specific subpopulations classified by certain characteristics. In such ADs, the reporting of results as detailed in the CONSORT 2010 statement should mirror the hypotheses of interest. That is, we expect the outcome results to be presented by treatment group for the subpopulations and the full target population considered. This helps readers interpret whether the study treatments are beneficial to the target population as a whole or only to specific pre-specified subpopulations.
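
For a binary outcome, the sketch below illustrates the kind of presentation intended: an effect estimate with its 95% confidence interval, reported for a pre-specified subpopulation and for the full target population (Python; all counts are hypothetical, and the simple Wald interval is used purely for illustration rather than as a recommended estimation method for ADs, for which naïve estimators may be biased, as discussed under item 12b).

    import numpy as np
    from scipy.stats import norm

    def risk_difference(events_t, n_t, events_c, n_c, level=0.95):
        """Risk difference (treatment minus control) with a Wald
        confidence interval, chosen purely for simplicity."""
        p_t, p_c = events_t / n_t, events_c / n_c
        se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
        z = norm.ppf(1 - (1 - level) / 2)
        rd = p_t - p_c
        return rd, rd - z * se, rd + z * se

    # Hypothetical counts: (events_t, n_t, events_c, n_c).
    for label, counts in [("Biomarker-positive subpopulation", (40, 100, 25, 100)),
                          ("Full target population", (70, 250, 55, 250))]:
        rd, lo, hi = risk_difference(*counts)
        print(f"{label}: RD = {rd:.3f} (95% CI {lo:.3f} to {hi:.3f})")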

ACE item 17c (new): Report interim results used to inform interim decision-making.

Explanation—Adherence to pre-planned adaptations and decision rules, including their timing and frequency, is essential in AD randomised trials. This can only be assessed when the pre-planned adaptations (item 3b), the adaptive decision rules (item 7b), and the results used to guide the trial adaptations are transparently and adequately reported.

Marked differences in treatment effects between stages may arise (for example, for the reasons discussed in item 15b), making overall interpretation of the results difficult [88, 110, 267, 269,270,271,272]. The presence of heterogeneity calls into question the rationale for combining results from independent stages to produce overall evidence, as is also the case when combining individual studies in a meta-analysis [88, 273]. Although this problem is not unique to AD randomised trials, the consequences of trial adaptations may worsen it [269]. Authors should at least report the relevant interim or stage results that were used to make each adaptation, consistent with items 3b and 7b; for example, interim treatment effects with their uncertainty, the interim conditional power or variability used for SSR, and the trend in the probabilities of allocating participants to a particular treatment group as the trial progresses. Authors should also report interim results of treatment groups or subpopulations that were dropped due to lack of benefit or poor safety. This reduces the reporting bias caused by selective disclosure of only those treatments showing beneficial and/or less harmful effects.
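
As a concrete example of one such interim quantity, the sketch below computes conditional power under the current trend using the standard Brownian-motion approximation (see, for example, [147, 148]). It is a minimal illustration with hypothetical interim values, not a prescribed method.

    from scipy.stats import norm

    def conditional_power(z_interim, info_frac, alpha=0.025, drift=None):
        """Conditional power of a one-sided test given the interim z-statistic
        at information fraction t, using the Brownian-motion approximation
        Z(1) | Z(t) ~ N(Z(t)*sqrt(t) + theta*(1 - t), 1 - t). If no drift
        theta is supplied, the 'current trend' Z(t)/sqrt(t) is assumed."""
        t = info_frac
        theta = z_interim / t ** 0.5 if drift is None else drift
        z_crit = norm.ppf(1 - alpha)
        shortfall = z_crit - z_interim * t ** 0.5 - theta * (1 - t)
        return 1 - norm.cdf(shortfall / (1 - t) ** 0.5)

    # Hypothetical interim result: z = 1.2 observed at 50% of the planned information.
    print(f"Conditional power under the current trend: {conditional_power(1.2, 0.5):.2f}")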

See Box 24 for exemplars.

Box 24 Exemplars on reporting item 17 elements
Fig. 3 Redrawn from Pallmann et al. [22]. Reused in accordance with the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); no changes to the original figure were made

Table 6 Interim results

Section 20. Discussion (Limitations)

CONSORT 2010/ACE item 20 (clarification): Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses.

Comments—No change in wording is made to this item so it should be applied with reference to the CONSORT 2010 statement [3, 4] for original details and examples. Here, we only address additional considerations for ADs.

Explanation—We expect authors to discuss the arguments for and against the implemented study design and its findings. Several journals have guidelines for structuring the discussion to prompt authors to discuss key limitations with possible explanations. The CONSORT 2010 statement [3, 4] addresses general aspects relating to potential sources of bias, imprecision, multiplicity of analyses and implications of unplanned changes to methods or design. For AD randomised trials, further discussion should include the implications of:

▪ Any deviations from the pre-planned adaptations (for example, decision rules that were not enforced or overruled and changes in timing or frequency of interim analyses);

▪ Interim analyses (for example, updating randomisation with inadequate burn-in period);

▪ Protocol amendments on the trial adaptations and results;

▪ Potential sources of bias introduced by interim analyses or decision-making;

▪ Potential bias and imprecision of the treatment effects if naïve estimation methods were used;

▪ Potential heterogeneity in patient characteristics and treatment effects between stages;

▪ Whether outcome data (for example, efficacy and safety data) were sufficient to robustly inform trial adaptations at interim analyses; and

▪ Using adaptation outcome(s) different from the primary outcome(s).

Additionally, authors are encouraged to discuss the observed efficiencies of pre-planned adaptations in addressing the research questions and the lessons learned about using the AD, both negative and positive. This is optional, as it does not directly influence the interpretation of the results, but it enhances much-needed knowledge transfer of innovative trial designs. To this end, authors are also encouraged to consider separate methodology publications in addition to trial results [54, 181].

See Box 25 for exemplars.

Box 25 Exemplars on reporting item 20

Section 21. Discussion (Generalisability)

CONSORT 2010/ACE item 21 (clarification): Generalisability (external validity, applicability) of the trial findings.

Comments—We have not changed the wording of this item so it should be considered in conjunction with the CONSORT 2010 statement [3, 4]. However, there are additional considerations that may influence the generalisability of results from AD randomised trials.

Explanation—Regardless of the trial design, authors should discuss how the results are generalisable to other settings or situations (external validity) and how the design and conduct of the trial minimised or mitigated potential sources of bias (internal validity) [3]. For ADs, there are many factors that may undermine both internal (see item 20 clarifications) and external validity. Trial adaptations are planned with a clear rationale to achieve research goals or objectives. Thus, the applicability of the results may intentionally be restricted to the enrolled target population or to pre-specified subpopulation(s) with certain characteristics (subsets of the target population). In addition, the implemented adaptations and other factors may cause unintended population drift or inconsistencies in the conduct of the trial. Authors should discuss the population to whom the results are applicable, including any threats to internal and external validity, which depend on the trial and the implemented adaptations.

See Box 26 for exemplars.

Box 26 Exemplar on reporting item 21 elements

Section 24. Other information (Statistical analysis plan and other relevant trial documents)

ACE item 24b (new): Where the full statistical analysis plan and other relevant trial documents can be accessed.

Explanation—Pre-specifying details of statistical methods and their execution, including documentation of amendments and when they occurred, is good scientific practice that enhances trial credibility and the reproducibility of methods, results and inference. The SAP is the principal technical document that details the statistical methods for the design of the study; the analysis of the outcomes; aspects that influence the analysis approaches; and the presentation of results consistent with the research questions/objectives and estimands [206, 207], in line with the trial protocol (now item 24a). General guidance on statistical principles for clinical trials, aimed at standardising research practice, exists [274,275,276]. AD trials tend to bring additional statistical complexities and considerations during the design and analyses, depending on the trial adaptations considered. Access to the full SAP, with amendments (if applicable) addressing interim and final analyses, is essential. This can be achieved through several platforms, such as online supplementary material, online repositories, or referencing published material. This enables readers to access additional information relating to the statistical methods that may not be feasible to include in the main report.

Critical details of the trial adaptations (for example, the decision-making criteria or adaptation algorithm and rules) may be intentionally withheld from publicly accessible documents (for example, the protocol) while the trial is ongoing [41, 203]. These details may be documented in a formal document with restricted access and disclosed only when the trial is completed, in order to minimise operational bias (item 11c). In this situation, authors should provide access to the withheld details, together with any amendments made, for transparency and to maintain an audit trail of the pre-planned AD aspects.

For some AD randomised trials, methods to derive statistical properties analytically may not be available. Thus, it becomes necessary to perform simulations under a wide range of plausible scenarios to investigate the operating characteristics of the design (item 7a), impact on estimation bias (item 12b), and appropriateness and consequences of decision-making criteria and rules [154, 277]. In such cases, we encourage authors to reference accessible material used for this purpose (for example, simulation protocol and report, or published related material). Furthermore, it is good scientific practice to reference software, programs or code used for this task to facilitate reproducible research.
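
To make the simulation idea concrete, the following minimal sketch (Python; all design parameters are hypothetical and deliberately un-tuned) uses Monte Carlo simulation to estimate the type I error rate and power of a simple two-stage design with a binding futility rule.

    import numpy as np

    def rejection_probability(theta, n_per_arm_per_stage=100, futility_z=0.0,
                              final_z=1.96, n_sims=100_000, seed=7):
        """Monte Carlo estimate of the rejection probability of a two-stage
        design: stop for futility if the stage 1 z-statistic falls below
        futility_z; otherwise pool both stages and reject if the combined
        z-statistic exceeds final_z. Outcomes are normal with unit variance
        and true mean difference theta."""
        rng = np.random.default_rng(seed)
        se_stage = np.sqrt(2 / n_per_arm_per_stage)   # SE of a stage-wise mean difference
        z1 = rng.normal(theta / se_stage, 1, n_sims)  # stage 1 z-statistic
        z2 = rng.normal(theta / se_stage, 1, n_sims)  # independent stage 2 increment
        z_final = (z1 + z2) / np.sqrt(2)              # pooled z over both stages
        return np.mean((z1 >= futility_z) & (z_final > final_z))

    print(f"Type I error (theta = 0):  {rejection_probability(0.0):.4f}")
    print(f"Power (theta = 0.3):       {rejection_probability(0.3):.4f}")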

The operating characteristics of ADs heavily depend on following the pre-planned adaptations and adaptive decision-making criteria and rules. ADs often come with additional responsibilities for the traditional monitoring committees or require a specialised monitoring committee to provide independent oversight of the trial adaptations (for example, adaptive decision-making or adaptation committee). Thus, it is essential to be transparent about the adaptation decision-making process, roles and responsibilities of the delegated DMC(s), recommendations made by the committee and whether recommendations were adhered to. Authors are encouraged to provide supporting evidence (for example, DMC charter).

See Box 27 for exemplars.

Box 27 Exemplars on reporting item 24b

Conclusions

There is a multidisciplinary desire to improve efficiency in the conduct of randomised trials. ADs allow pre-planned adaptations that offer opportunities to address research questions in randomised trials more efficiently compared to fixed designs. However, ADs can make the design, conduct and analysis of trials more complex. Potential biases can be introduced during the trial in several ways. Consequently, there are additional demands for transparency and reporting to enhance the credibility and interpretability of results from adaptive trials.

This CONSORT extension provides minimum essential reporting requirements that are applicable to pre-planned adaptations in AD randomised trials designed and analysed using frequentist or Bayesian statistical methods. We have also given many exemplars covering different types of ADs to help authors when using this extension. Our consensus process involved stakeholders from the public and private sectors [13, 128]. We hope this extension will facilitate better reporting of AD randomised trials and indirectly improve their design and conduct, as well as the much-needed knowledge transfer.

Availability of data and materials

Not applicable. The development process of this guideline, including anonymised participant data from the Delphi surveys, has been published and is publicly accessible [13].

Abbreviations

ACE: Adaptive designs CONSORT Extension
AD: Adaptive design
CONSORT: Consolidated Standards Of Reporting Trials
E&E: Explanation and elaboration
EQUATOR: Enhancing the QUAlity and Transparency Of health Research
(I)DMC: (Independent) data monitoring committee
GSD: Group sequential design
MAMS: Multi-arm multi-stage design
MeSH: Medical Subject Heading
RAR: Response-adaptive randomisation
SAP: Statistical analysis plan
SSR: Sample size re-estimation/re-assessment/re-calculation

References

  1. Yordanov Y, Dechartres A, Porcher R, Boutron I, Altman DG, Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809. https://doi.org/10.1136/bmj.h809.

  2. Chen YL, Yang KH. Avoidable waste in the production and reporting of evidence. Lancet. 2009;374:786. https://doi.org/10.1016/S0140-6736(09)61591-9.

  3. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869. https://doi.org/10.1136/bmj.c869.

  4. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152:726–32. https://doi.org/10.7326/0003-4819-152-11-201006010-00232.

  5. CONSORT Group. Extensions of the CONSORT statement http://www.consort-statement.org/extensions.

  6. Ivers NM, Taljaard M, Dixon S, et al. Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: review of random sample of 300 trials, 2000-8. BMJ. 2011;343:d5886. https://doi.org/10.1136/bmj.d5886.

  7. Moher D, Jones A, Lepage L, CONSORT Group (Consolidated Standards for Reporting of Trials). Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285:1992–5. https://doi.org/10.1001/jama.285.15.1992.

  8. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263–7. https://doi.org/10.5694/j.1326-5377.2006.tb00557.x.

  9. Blanco D, Biggane AM, Cobo E, MiRoR network. Are CONSORT checklists submitted by authors adequately reflecting what information is actually reported in published papers? Trials. 2018;19:80. https://doi.org/10.1186/s13063-018-2475-0.

  10. Jin Y, Sanger N, Shams I, et al. Does the medical literature remain inadequately described despite having reporting guidelines for 21 years? - a systematic review of reviews: an update. J Multidiscip Healthc. 2018;11:495–510. https://doi.org/10.2147/JMDH.S155103.

  11. Janackovic K, Puljak L. Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals. Trials. 2018;19:591. https://doi.org/10.1186/s13063-018-2976-x.

  12. Goldacre B, Drysdale H, Dale A, et al. COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials. 2019;20:118. https://doi.org/10.1186/s13063-019-3173-2.

  13. Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16:210. https://doi.org/10.1186/s12916-018-1196-2.

  14. FDA. Adaptive designs for medical device clinical studies: draft guidance for industry and food and drug administration staff. 2015. https://www.fda.gov/ucm/groups/fdagov-public/@fdagov-meddev-gen/documents/document/ucm446729.pdf.

  15. Chow S-C, Chang M. Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis. 2008;3:11. https://doi.org/10.1186/1750-1172-3-11.

  16. Dragalin V. Adaptive designs: terminology and classification. Drug Inf J. 2006;40:425–35. https://doi.org/10.1177/216847900604000408.

  17. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J, PhRMA Working Group. Adaptive designs in clinical drug development--an executive summary of the PhRMA Working Group. J Biopharm Stat. 2006;16:275–83, discussion 285–91, 293–8, 311–2. https://doi.org/10.1080/10543400600614742.

  18. Kairalla JA, Coffey CS, Thomann MA, Muller KE. Adaptive trial designs: a review of barriers and opportunities. Trials. 2012;13:145. https://doi.org/10.1186/1745-6215-13-145.

  19. Lewis RJ. The pragmatic clinical trial in a learning health care system. Clin Trials. 2016;13:484–92. https://doi.org/10.1177/1740774516655097.

  20. Curtin F, Heritier S. The role of adaptive trial designs in drug development. Expert Rev Clin Pharmacol. 2017;10:727–36. https://doi.org/10.1080/17512433.2017.1321985.

  21. Park JJ, Thorlund K, Mills EJ. Critical concepts in adaptive clinical trials. Clin Epidemiol. 2018;10:343–51. https://doi.org/10.2147/CLEP.S156708.

  22. Pallmann P, Bedding AW, Choodari-Oskooei B, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29. https://doi.org/10.1186/s12916-018-1017-7.

  23. Jaki T, Wason JMS. Multi-arm multi-stage trials can improve the efficiency of finding effective treatments for stroke: a case study. BMC Cardiovasc Disord. 2018;18:215. https://doi.org/10.1186/s12872-018-0956-4.

  24. Parmar MK, Sydes MR, Cafferty FH, et al. Testing many treatments within a single protocol over 10 years at MRC clinical trials unit at UCL: multi-arm, multi-stage platform, umbrella and basket protocols. Clin Trials. 2017:451–61. https://doi.org/10.1177/1740774517725697.

  25. Porcher R, Lecocq B, Vray M, participants of Round Table N° 2 de Giens XXVI. Adaptive methods: when and how should they be used in clinical trials? Therapie. 2011;66:319–26, 309-17. https://doi.org/10.2515/therapie/2011044.

  26. Dimairo M, Boote J, Julious SA, Nicholl JP, Todd S. Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials. Trials. 2015;16:430. https://doi.org/10.1186/s13063-015-0958-9.

  27. Dimairo M, Julious SA, Todd S, Nicholl JP, Boote J. Cross-sector surveys assessing perceptions of key stakeholders towards barriers, concerns and facilitators to the appropriate use of adaptive designs in confirmatory trials. Trials. 2015;16:585. https://doi.org/10.1186/s13063-015-1119-x.

  28. Meurer WJ, Legocki L, Mawocha S, et al. Attitudes and opinions regarding confirmatory adaptive clinical trials: a mixed methods analysis from the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project. Trials. 2016;17:373. https://doi.org/10.1186/s13063-016-1493-z.

  29. Morgan CC, Huyck S, Jenkins M, et al. Adaptive design: results of 2012 survey on perception and use. Ther Innov Regul Sci. 2014;48:473–81. https://doi.org/10.1177/2168479014522468.

  30. Coffey CS, Levin B, Clark C, et al. Overview, hurdles, and future work in adaptive designs: perspectives from a National Institutes of Health-funded workshop. Clin Trials. 2012;9:671–80. https://doi.org/10.1177/1740774512461859.

  31. Quinlan J, Gaydos B, Maca J, Krams M. Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development. Clin Trials. 2010;7:167–73. https://doi.org/10.1177/1740774510361542.

  32. Coffey CS, Kairalla JA. Adaptive clinical trials: progress and challenges. Drugs R D. 2008;9:229–42. https://doi.org/10.2165/00126839-200809040-00003.

  33. Hartford A, Thomann M, Chen X, et al. Adaptive designs: results of 2016 survey on perception and use. Ther Innov Regul Sci. 2018. https://doi.org/10.1177/2168479018807715.

  34. Chaitman BR, Pepine CJ, Parker JO, Combination Assessment of Ranolazine In Stable Angina (CARISA) Investigators, et al. Effects of ranolazine with atenolol, amlodipine, or diltiazem on exercise tolerance and angina frequency in patients with severe chronic angina: a randomized controlled trial. JAMA. 2004;291:309–16. https://doi.org/10.1001/jama.291.3.309.

  35. Zajicek JP, Hobart JC, Slade A, Barnes D, Mattison PG, MUSEC Research Group. Multiple sclerosis and extract of cannabis: results of the MUSEC trial. J Neurol Neurosurg Psychiatry. 2012;83:1125–32. https://doi.org/10.1136/jnnp-2012-302468.

  36. Miller E, Gallo P, He W, et al. DIA’s adaptive design scientific working group (ADSWG): best practices case studies for “less well-understood” adaptive designs. Ther Innov Regul Sci. 2017;51:77–88. https://doi.org/10.1177/2168479016665434.

  37. Wang S-J, Peng H, Hung HJ. Evaluation of the extent of adaptation to sample size in clinical trials for cardiovascular and CNS diseases. Contemp Clin Trials. 2018;67:31–6. https://doi.org/10.1016/j.cct.2018.02.004.

  38. Chen YH, Li C, Lan KK. Sample size adjustment based on promising interim results and its application in confirmatory clinical trials. Clin Trials. 2015;12:584–95. https://doi.org/10.1177/1740774515594378.

  39. Jennison C, Turnbull BW. Adaptive sample size modification in clinical trials: start small then ask for more? Stat Med. 2015;34:3793–810. https://doi.org/10.1002/sim.6575.

  40. Mehta CR, Pocock SJ. Adaptive increase in sample size when interim results are promising: a practical guide with examples. Stat Med. 2011;30:3267–84. https://doi.org/10.1002/sim.4102.

  41. Chuang-Stein C, Anderson K, Gallo P, et al. Sample size reestimation: a review and recommendations. Drug Inf J. 2006;40:475–84. https://doi.org/10.1177/216847900604000413.

  42. Friede T, Kieser M. A comparison of methods for adaptive sample size adjustment. Stat Med. 2001;20:3861–73. https://doi.org/10.1002/sim.972.

  43. Friede T, Kieser M. Sample size recalculation for binary data in internal pilot study designs. Pharm Stat. 2004;3:269–79. https://doi.org/10.1002/pst.140.

  44. Friede T, Kieser M. Sample size recalculation in internal pilot study designs: a review. Biom J. 2006;48:537–55. https://doi.org/10.1002/bimj.200510238.

  45. Stevely A, Dimairo M, Todd S, et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: a methodological systematic review. PLoS One. 2015;10:e0141104. https://doi.org/10.1371/journal.pone.0141104.

  46. Pritchett Y, Jemiai Y, Chang Y, et al. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease. Clin Trials. 2011;8:165–74. https://doi.org/10.1177/1740774511399128.

  47. Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials. London: Chapman & Hall/CRC; 2000.

  48. Whitehead J. The design and analysis of sequential clinical trials. 2nd ed. Hoboken: Wiley; 2000.

  49. Mehta CR, Tsiatis AA. Flexible sample size considerations using information-based interim monitoring. Drug Inf J. 2001;35:1095–112. https://doi.org/10.1177/009286150103500407.

  50. Herson J, Buyse M, Wittes JT. On stopping a randomized clinical trial for futility. In: Kowalski J, Piantadosi S, editors. Designs for clinical trials: perspectives on current issues. Berlin: Springer; 2012. p. 109–37. https://doi.org/10.1007/978-1-4614-0140-7_5.

  51. Gallo P, Mao L, Shih VH. Alternative views on setting clinical trial futility criteria. J Biopharm Stat. 2014;24:976–93. https://doi.org/10.1080/10543406.2014.932285.

  52. Lachin JM. Futility interim monitoring with control of type I and II error probabilities using the interim Z-value or confidence limit. Clin Trials. 2009;6:565–73. https://doi.org/10.1177/1740774509350327.

  53. Pushpakom SP, Taylor C, Kolamunnage-Dona R, et al. Telmisartan and insulin resistance in HIV (TAILoR): protocol for a dose-ranging phase II randomised open-labelled trial of telmisartan as a strategy for the reduction of insulin resistance in HIV-positive individuals on combination antiretroviral therapy. BMJ Open. 2015;5:e009566. https://doi.org/10.1136/bmjopen-2015-009566.

  54. Sydes MR, Parmar MKB, Mason MD, et al. Flexible trial design in practice - stopping arms for lack-of-benefit and adding research arms mid-trial in STAMPEDE: a multi-arm multi-stage randomized controlled trial. Trials. 2012;13:168. https://doi.org/10.1186/1745-6215-13-168.

  55. Parmar MKB, Barthel FM-S, Sydes M, et al. Speeding up the evaluation of new agents in cancer. J Natl Cancer Inst. 2008;100:1204–14. https://doi.org/10.1093/jnci/djn267.

  56. Cohen DR, Todd S, Gregory WM, Brown JM. Adding a treatment arm to an ongoing clinical trial: a review of methodology and practice. Trials. 2015;16:179. https://doi.org/10.1186/s13063-015-0697-y.

  57. Magirr D, Stallard N, Jaki T. Flexible sequential designs for multi-arm clinical trials. Stat Med. 2014;33:3269–79. https://doi.org/10.1002/sim.6183.

  58. Hommel G. Adaptive modifications of hypotheses after an interim analysis. Biom J. 2001;43:581–9. https://doi.org/10.1002/1521-4036(200109)43:5<581::AID-BIMJ581>3.0.CO;2-J.

  59. Jaki T. Multi-arm clinical trials with treatment selection: what can be gained and at what price? Clin Investig (Lond). 2015;5:393–9. https://doi.org/10.4155/cli.15.13.

  60. Wason J, Magirr D, Law M, Jaki T. Some recommendations for multi-arm multi-stage trials. Stat Methods Med Res. 2016;25:716–27. https://doi.org/10.1177/0962280212465498.

  61. Wason J, Stallard N, Bowden J, Jennison C. A multi-stage drop-the-losers design for multi-arm clinical trials. Stat Methods Med Res. 2017;26:508–24. https://doi.org/10.1177/0962280214550759.

  62. Ghosh P, Liu L, Senchaudhuri P, Gao P, Mehta C. Design and monitoring of multi-arm multi-stage clinical trials. Biometrics. 2017;73:1289–99. https://doi.org/10.1111/biom.12687.

  63. Heritier S, Lô SN, Morgan CC. An adaptive confirmatory trial with interim treatment selection: practical experiences and unbalanced randomization. Stat Med. 2011;30:1541–54. https://doi.org/10.1002/sim.4179.

  64. Posch M, Koenig F, Branson M, Brannath W, Dunger-Baldauf C, Bauer P. Testing and estimation in flexible group sequential designs with adaptive treatment selection. Stat Med. 2005;24:3697–714. https://doi.org/10.1002/sim.2389.

  65. Bauer P, Kieser M. Combining different phases in the development of medical treatments within a single trial. Stat Med. 1999;18:1833–48. https://doi.org/10.1002/(SICI)1097-0258(19990730)18:14<1833::AID-SIM221>3.0.CO;2-3.

  66. Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Stat Med. 2009;28:1181–217. https://doi.org/10.1002/sim.3538.

  67. Giles FJ, Kantarjian HM, Cortes JE, et al. Adaptive randomized study of idarubicin and cytarabine versus troxacitabine and cytarabine versus troxacitabine and idarubicin in untreated patients 50 years or older with adverse karyotype acute myeloid leukemia. J Clin Oncol. 2003;21:1722–7. https://doi.org/10.1200/JCO.2003.11.016.

  68. Grieve AP. Response-adaptive clinical trials: case studies in the medical literature. Pharm Stat. 2017;16:64–86. https://doi.org/10.1002/pst.1778.

  69. Hu F, Rosenberger WF. The theory of response-adaptive randomization in clinical trials. Hoboken: Wiley; 2006. https://doi.org/10.1002/047005588X.

  70. Nowacki AS, Zhao W, Palesch YY. A surrogate-primary replacement algorithm for response-adaptive randomization in stroke clinical trials. Stat Methods Med Res. 2017;26:1078–92. https://doi.org/10.1177/0962280214567142.

  71. Eickhoff JC, Kim K, Beach J, Kolesar JM, Gee JR. A Bayesian adaptive design with biomarkers for targeted therapies. Clin Trials. 2010;7:546–56. https://doi.org/10.1177/1740774510372657.

  72. Williamson SF, Jacko P, Villar SS, Jaki T. A Bayesian adaptive design for clinical trials in rare diseases. Comput Stat Data Anal. 2017;113:136–53. https://doi.org/10.1016/j.csda.2016.09.006.

  73. Berry DA, Eick SG. Adaptive assignment versus balanced randomization in clinical trials: a decision analysis. Stat Med. 1995;14:231–46. https://doi.org/10.1002/sim.4780140302.

  74. Chen YH, Gesser R, Luxembourg A. A seamless phase IIB/III adaptive outcome trial: design rationale and implementation challenges. Clin Trials. 2015;12:84–90. https://doi.org/10.1177/1740774514552110.

  75. Cuffe RL, Lawrence D, Stone A, Vandemeulebroecke M. When is a seamless study desirable? Case studies from different pharmaceutical sponsors. Pharm Stat. 2014;13:229–37. https://doi.org/10.1002/pst.1622.

  76. Donohue JF, Fogarty C, Lötvall J, INHANCE Study Investigators, et al. Once-daily bronchodilators for chronic obstructive pulmonary disease: indacaterol versus tiotropium. Am J Respir Crit Care Med. 2010;182:155–62. https://doi.org/10.1164/rccm.200910-1500OC.

  77. Bretz F, Schmidli H, König F, Racine A, Maurer W. Confirmatory seamless phase II/III clinical trials with hypotheses selection at interim: general concepts. Biom J. 2006;48:623–34. https://doi.org/10.1002/bimj.200510232.

  78. Bauer P, Bretz F, Dragalin V, König F, Wassmer G. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med. 2016;35:325–47. https://doi.org/10.1002/sim.6472.

  79. Koenig F, Brannath W, Bretz F, Posch M. Adaptive Dunnett tests for treatment selection. Stat Med. 2008;27:1612–25. https://doi.org/10.1002/sim.3048.

  80. Antoniou M, Jorgensen AL, Kolamunnage-Dona R. Biomarker-guided adaptive trial designs in phase II and phase III: a methodological review. PLoS One. 2016;11:e0149803. https://doi.org/10.1371/journal.pone.0149803.

  81. Liu S, Lee JJ. An overview of the design and conduct of the BATTLE trials. Chin Clin Oncol. 2015;4:33. https://doi.org/10.3978/j.issn.2304-3865.2015.06.07.

  82. Barker AD, Sigman CC, Kelloff GJ, Hylton NM, Berry DA, Esserman LJ. I-SPY 2: an adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clin Pharmacol Ther. 2009;86:97–100. https://doi.org/10.1038/clpt.2009.68.

  83. Renfro LA, Mallick H, An MW, Sargent DJ, Mandrekar SJ. Clinical trial designs incorporating predictive biomarkers. Cancer Treat Rev. 2016;43:74–82. https://doi.org/10.1016/j.ctrv.2015.12.008.

  84. Ondra T, Dmitrienko A, Friede T, et al. Methods for identification and confirmation of targeted subgroups in clinical trials: a systematic review. J Biopharm Stat. 2016;26:99–119. https://doi.org/10.1080/10543406.2015.1092034.

  85. Chiu Y-D, Koenig F, Posch M, Jaki T. Design and estimation in clinical trials with subpopulation selection. Stat Med. 2018;37:4335–52. https://doi.org/10.1002/sim.7925.

  86. Graf AC, Wassmer G, Friede T, Gera RG, Posch M. Robustness of testing procedures for confirmatory subpopulation analyses based on a continuous biomarker. Stat Methods Med Res. 2019;28:1879–92. https://doi.org/10.1177/0962280218777538.

  87. Joshi A, Zhang J, Fang L. Statistical design for a confirmatory trial with a continuous predictive biomarker: a case study. Contemp Clin Trials. 2017;63:19–29. https://doi.org/10.1016/j.cct.2017.05.010.

  88. Wang SJ, Hung HMJ. Adaptive enrichment with subpopulation selection at interim: methodologies, applications and design considerations. Contemp Clin Trials. 2013;36:673–81. https://doi.org/10.1016/j.cct.2013.09.008.

  89. Hünseler C, Balling G, Röhlig C, Clonidine Study Group, et al. Continuous infusion of clonidine in ventilated newborns and infants: a randomized controlled trial. Pediatr Crit Care Med. 2014;15:511–22. https://doi.org/10.1097/PCC.0000000000000151.

  90. Hommel G, Kropf S. Clinical trials with an adaptive choice of hypotheses. Drug Inf J. 2001;35:1423–9. https://doi.org/10.1177/009286150103500438.

  91. Branson M, Whitehead J. Estimating a treatment effect in survival studies in which patients switch treatment. Stat Med. 2002;21:2449–63. https://doi.org/10.1002/sim.1219.

  92. Shao J, Chang M, Chow S-C. Statistical inference for cancer trials with treatment switching. Stat Med. 2005;24:1783–90. https://doi.org/10.1002/sim.2128.

  93. Skrivanek Z, Gaydos BL, Chien JY, et al. Dose-finding results in an adaptive, seamless, randomized trial of once-weekly dulaglutide combined with metformin in type 2 diabetes patients (AWARD-5). Diabetes Obes Metab. 2014;16:748–56. https://doi.org/10.1111/dom.12305.

  94. Léauté-Labrèze C, Hoeger P, Mazereeuw-Hautier J, et al. A randomized, controlled trial of oral propranolol in infantile hemangioma. N Engl J Med. 2015;372:735–46. https://doi.org/10.1056/NEJMoa1404710.

  95. Mehta CR, Liu L, Theuer C. An adaptive population enrichment phase III trial of TRC105 and pazopanib versus pazopanib alone in patients with advanced angiosarcoma (TAPPAS trial). Ann Oncol. 2019;30:103–8. https://doi.org/10.1093/annonc/mdy464.

  96. Nogueira RG, Jadhav AP, Haussen DC, DAWN Trial Investigators, et al. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med. 2018;378:11–21. https://doi.org/10.1056/NEJMoa1706442.

  97. Collignon O, Koenig F, Koch A, et al. Adaptive designs in clinical trials: from scientific advice to marketing authorisation to the European medicine agency. Trials. 2018;19:642. https://doi.org/10.1186/s13063-018-3012-x.

  98. Lewis RJ, Angus DC, Laterre PF, Selepressin Evaluation Programme for Sepsis-induced Shock-Adaptive Clinical Trial, et al. Rationale and design of an adaptive phase 2b/3 clinical trial of selepressin for adults in septic shock: selepressin evaluation programme for sepsis-induced shock - adaptive clinical trial. Ann Am Thorac Soc. 2018;15:250–7. https://doi.org/10.1513/AnnalsATS.201708-669SD.

  99. Cui L, Hung HM, Wang SJ. Modification of sample size in group sequential clinical trials. Biometrics. 1999;55:853–7. https://doi.org/10.1111/j.0006-341X.1999.00853.x.

  100. Jenkins M, Stone A, Jennison C. An adaptive seamless phase II/III design for oncology trials with subpopulation selection using correlated survival endpoints. Pharm Stat. 2011;10:347–56. https://doi.org/10.1002/pst.472.

  101. Wason JMS, Abraham JE, Baird RD, et al. A Bayesian adaptive design for biomarker trials with linked treatments. Br J Cancer. 2015;113:699–705. https://doi.org/10.1038/bjc.2015.278.

  102. Saville BR, Berry SM. Efficiencies of platform clinical trials: a vision of the future. Clin Trials. 2016;13:358–66. https://doi.org/10.1177/1740774515626362.

  103. Phillips AJ, Keene ON, PSI Adaptive Design Expert Group. Adaptive designs for pivotal trials: discussion points from the PSI adaptive design expert group. Pharm Stat. 2006;5:61–6. https://doi.org/10.1002/pst.206.

  104. Rong Y. Regulations on adaptive design clinical trials. Pharm Regul Aff. 2014;3. https://doi.org/10.4172/2167-7689.1000116.

  105. Bauer P, Brannath W. The advantages and disadvantages of adaptive designs for clinical trials. Drug Discov Today. 2004;9:351–7. https://doi.org/10.1016/S1359-6446(04)03023-5.

  106. Huskins WC, Fowler VG Jr, Evans S. Adaptive designs for clinical trials: application to healthcare epidemiology research. Clin Infect Dis. 2018;66:1140–6. https://doi.org/10.1093/cid/cix907.

  107. Bauer P, Einfalt J. Application of adaptive designs--a review. Biom J. 2006;48:493–506. https://doi.org/10.1002/bimj.200510204.

  108. Elsäßer A, Regnstrom J, Vetter T, et al. Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European medicines agency. Trials. 2014;15:383. https://doi.org/10.1186/1745-6215-15-383.

  109. Food and Drug Administration. Guidance for industry: adaptive design clinical trials for drugs and biologics. 2010. https://www.fda.gov/downloads/Drugs/.../Guidances/ucm201790.pdf.

  110. CHMP. Reflection paper on methodological issues in confirmatory clinical trials planned with an adaptive design. 2007. https://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003616.pdf.

  111. Food and Drug Administration. Adaptive designs for clinical trials of drugs and biologics: guidance for industry. 2019. https://www.fda.gov/media/78495/download.

  112. Yang X, Thompson L, Chu J, et al. Adaptive design practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50:710–7. https://doi.org/10.1177/2168479016656027.

  113. Mistry P, Dunn JA, Marshall A. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines. BMC Med Res Methodol. 2017;17:108. https://doi.org/10.1186/s12874-017-0393-6.

  114. Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17:150. https://doi.org/10.1186/s13063-016-1273-9.

  115. Sato A, Shimura M, Gosho M. Practical characteristics of adaptive design in phase 2 and 3 clinical trials. J Clin Pharm Ther. 2018;43:170–80. https://doi.org/10.1111/jcpt.12617.

  116. Bothwell LE, Avorn J, Khan NF, Kesselheim AS. Adaptive design clinical trials: a review of the literature and ClinicalTrials.gov. BMJ Open. 2018;8:e018320. https://doi.org/10.1136/bmjopen-2017-018320.

  117. Gosho M, Sato Y, Nagashima K, Takahashi S. Trends in study design and the statistical methods employed in a leading general medicine journal. J Clin Pharm Ther. 2018;43:36–44. https://doi.org/10.1111/jcpt.12605.

  118. Cerqueira FP, Jesus AMC, Cotrim MD. Adaptive design: a review of the technical, statistical, and regulatory aspects of implementation in a clinical trial. Ther Innov Regul Sci. 2019. https://doi.org/10.1177/2168479019831240.

  119. Lin M, Lee S, Zhen B, et al. CBER’s experience with adaptive design clinical trials. Ther Innov Regul Sci. 2016;50:195–203. https://doi.org/10.1177/2168479015604181.

  120. Dimairo M. The utility of adaptive designs in publicly funded confirmatory trials. 2016. http://etheses.whiterose.ac.uk/13981/1/DimairoPhDThesis2016WhiteRoseSubmission.pdf.

  121. Detry MA, Lewis RJ, Broglio KR, et al. Standards for the design, conduct, and evaluation of adaptive randomized clinical trials. 2012. https://www.pcori.org/assets/Standards-for-the-Design-Conduct-and-Evaluation-of-Adaptive-Randomized-Clinical-Trials.pdf.

  122. Campbell G. Similarities and differences of Bayesian designs and adaptive designs for medical devices: a regulatory view. Stat Biopharm Res. 2013;5:356–68. https://doi.org/10.1080/19466315.2013.846873.

  123. Gaydos B, Anderson KM, Berry D, et al. Good practices for adaptive clinical trials in pharmaceutical product development. Ther Innov Regul Sci. 2009;43:539–56. https://doi.org/10.1177/009286150904300503.

  124. Chow S-C, Chang M, Pong A. Statistical consideration of adaptive methods in clinical development. J Biopharm Stat. 2005;15:575–91. https://doi.org/10.1081/BIP-200062277.

  125. Chow S-C, Corey R. Benefits, challenges and obstacles of adaptive clinical trial designs. Orphanet J Rare Dis. 2011;6:79. https://doi.org/10.1186/1750-1172-6-79.

  126. Quinlan J, Krams M. Implementing adaptive designs: logistical and operational considerations. Drug Inf J. 2006;40:437–44. https://doi.org/10.1177/216847900604000409.

  127. Wang SJ. Perspectives on the use of adaptive designs in clinical trials. Part I. statistical considerations and issues. J Biopharm Stat. 2010;20:1090–7. https://doi.org/10.1080/10543406.2010.514446.

  128. Dimairo M, Todd S, Julious S, et al. ACE project protocol version 2.3: development of a CONSORT Extension for adaptive clinical trials. EQUATOR Network; 2016. https://www.equator-network.org/wp-content/uploads/2017/12/ACE-Project-Protocol-v2.3.pdf.

  129. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7:e1000217. https://doi.org/10.1371/journal.pmed.1000217.

  130. Rosenberg MJ. The agile approach to adaptive research: optimizing efficiency in clinical development. 1st ed. Hoboken: Wiley; 2010. https://doi.org/10.1002/9780470599686.

  131. Avery KNL, Williamson PR, Gamble C, Members of the Internal Pilot Trials Workshop supported by the Hubs for Trials Methodology Research, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7:e013537. https://doi.org/10.1136/bmjopen-2016-013537.

  132. Juszczak E, Altman DG, Hopewell S, Schulz K. Reporting of multi-arm parallel-group randomized trials: extension of the CONSORT 2010 statement. JAMA. 2019;321:1610–20. https://doi.org/10.1001/jama.2019.3087.

  133. Campbell MK, Piaggio G, Elbourne DR, Altman DG, CONSORT Group. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. https://doi.org/10.1136/bmj.e5661.

  134. Dwan K, Li T, Altman DG, Elbourne D. CONSORT 2010 statement: extension to randomised crossover trials. BMJ. 2019;366:l4378. https://doi.org/10.1136/bmj.l4378.

  135. Piaggio G, Elbourne DR, Pocock SJ, Evans SJ, Altman DG, CONSORT Group. Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308:2594–604. https://doi.org/10.1001/jama.2012.87802.

  136. Hopewell S, Clarke M, Moher D, CONSORT Group, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med. 2008;5:e20. https://doi.org/10.1371/journal.pmed.0050020.

  137. Hopewell S, Clarke M, Moher D, CONSORT Group, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet. 2008;371:281–3. https://doi.org/10.1016/S0140-6736(07)61835-2.

  138. Ioannidis JPA, Evans SJW, Gøtzsche PC, CONSORT Group, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141:781–8. https://doi.org/10.7326/0003-4819-141-10-200411160-00009.

  139. MEDLINE. Adaptive clinical trial MeSH descriptor data 2019. 2019. https://meshb.nlm.nih.gov/record/ui?ui=D000076362.

  140. Backonja M, Williams L, Miao X, Katz N, Chen C. Safety and efficacy of neublastin in painful lumbosacral radiculopathy: a randomized, double-blinded, placebo-controlled phase 2 trial using Bayesian adaptive design (the SPRINT trial). Pain. 2017;158:1802–12. https://doi.org/10.1097/j.pain.0000000000000983.

  141. Barnes PJ, Pocock SJ, Magnussen H, et al. Integrating indacaterol dose selection in a clinical study in COPD using an adaptive seamless design. Pulm Pharmacol Ther. 2010;23:165–71. https://doi.org/10.1016/j.pupt.2010.01.003.

  142. Jones AE, Puskarich MA, Shapiro NI, et al. Effect of levocarnitine vs placebo as an adjunctive treatment for septic shock: the rapid Administration of Carnitine in Sepsis (RACE) randomized clinical Trial. JAMA Netw Open. 2018;1:e186076. https://doi.org/10.1001/jamanetworkopen.2018.6076.

  143. Khalil EAG, Weldegebreal T, Younis BM, et al. Safety and efficacy of single dose versus multiple doses of AmBisome for treatment of visceral leishmaniasis in eastern Africa: a randomised trial. PLoS Negl Trop Dis. 2014;8:e2613. https://doi.org/10.1371/journal.pntd.0002613.

  144. Steg PG, Mehta SR, Pollack CV Jr, TAO Investigators, et al. Anticoagulation with otamixaban and ischemic events in non-ST-segment elevation acute coronary syndromes: the TAO randomized clinical trial. JAMA. 2013;310:1145–55. https://doi.org/10.1001/jama.2013.277165.

  145. McMurray JJV, Packer M, Desai AS, PARADIGM-HF Investigators and Committees, et al. Angiotensin-neprilysin inhibition versus enalapril in heart failure. N Engl J Med. 2014;371:993–1004. https://doi.org/10.1056/NEJMoa1409077.

  146. Rosenblum M, Hanley DF. Adaptive enrichment designs for stroke clinical trials. Stroke. 2017;48:2021–5. https://doi.org/10.1161/STROKEAHA.116.015342.

  147. Lachin JM. A review of methods for futility stopping based on conditional power. Stat Med. 2005;24:2747–64. https://doi.org/10.1002/sim.2151.

  148. Proschan M, Lan KKG, Wittes JT. Power: conditional, unconditional, and predictive. In: Statistical monitoring of clinical trials - a unified approach. Berlin: Springer; 2006. p. 43–66.

  149. Lan KG, Simon R, Halperin M. Stochastically curtailed tests in long–term clinical trials. Seq Anal. 1982;1:37–41. https://doi.org/10.1080/07474948208836014.

  150. Bauer P, Koenig F. The reassessment of trial perspectives from interim data--a critical view. Stat Med. 2006;25:23–36. https://doi.org/10.1002/sim.2180.

  151. Herson J. Predictive probability early termination plans for phase II clinical trials. Biometrics. 1979;35:775–83. https://doi.org/10.2307/2530109.

  152. Choi SC, Pepple PA. Monitoring clinical trials based on predictive probability of significance. Biometrics. 1989;45:317–23. https://doi.org/10.2307/2532056.

  153. Spiegelhalter DJ, Freedman LS, Blackburn PR. Monitoring clinical trials: conditional or predictive power? Control Clin Trials. 1986;7:8–17. https://doi.org/10.1016/0197-2456(86)90003-6.

  154. Skrivanek Z, Berry S, Berry D, et al. Application of adaptive design methodology in development of a long-acting glucagon-like peptide-1 analog (dulaglutide): statistical design and simulations. J Diabetes Sci Technol. 2012;6:1305–18. https://doi.org/10.1177/193229681200600609.

  155. Ouellet D. Benefit-risk assessment: the use of clinical utility index. Expert Opin Drug Saf. 2010;9:289–300. https://doi.org/10.1517/14740330903499265.

  156. Thadhani R, Appelbaum E, Chang Y, et al. Vitamin D receptor activation and left ventricular hypertrophy in advanced kidney disease. Am J Nephrol. 2011;33:139–49. https://doi.org/10.1159/000323551.

  157. Thadhani R, Appelbaum E, Pritchett Y, et al. Vitamin D therapy and cardiac structure and function in patients with chronic kidney disease: the PRIMO randomized controlled trial. JAMA. 2012;307:674–84. https://doi.org/10.1001/jama.2012.120.

  158. Gould AL, Shih WJ. Sample size re-estimation without unblinding for normally distributed outcomes with unknown variance. Commun Stat Theory Methods. 1992;21:2833–53. https://doi.org/10.1080/03610929208830947.

  159. Gould AL. Planning and revising the sample size for a trial. Stat Med. 1995;14:1039–51, discussion 1053-5. https://doi.org/10.1002/sim.4780140922.

  160. Kieser M, Friede T. Blinded sample size reestimation in multiarmed clinical trials. Drug Inf J. 2000;34:455–60. https://doi.org/10.1177/009286150003400214.

  161. Posch M, Proschan MA. Unplanned adaptations before breaking the blind. Stat Med. 2012;31:4146–53. https://doi.org/10.1002/sim.5361.

  162. Chataway J, Nicholas R, Todd S, et al. A novel adaptive design strategy increases the efficiency of clinical trials in secondary progressive multiple sclerosis. Mult Scler. 2011;17:81–8. https://doi.org/10.1177/1352458510382129.

  163. Fleming TR, Powers JH. Biomarkers and surrogate endpoints in clinical trials. Stat Med. 2012;31:2973–84. https://doi.org/10.1002/sim.5403.

  164. Heatley G, Sood P, Goldstein D, MOMENTUM 3 Investigators, et al. Clinical trial design and rationale of the Multicenter Study of MagLev Technology in Patients Undergoing Mechanical Circulatory Support Therapy with HeartMate 3 (MOMENTUM 3) investigational device exemption clinical study protocol. J Heart Lung Transplant. 2016;35:528–36. https://doi.org/10.1016/j.healun.2016.01.021.

  165. Barrington P, Chien JY, Showalter HDH, et al. A 5-week study of the pharmacokinetics and pharmacodynamics of LY2189265, a novel, long-acting glucagon-like peptide-1 analogue, in patients with type 2 diabetes. Diabetes Obes Metab. 2011;13:426–33. https://doi.org/10.1111/j.1463-1326.2011.01364.x.

  166. Geiger MJ, Skrivanek Z, Gaydos B, Chien J, Berry S, Berry D. An adaptive, dose-finding, seamless phase 2/3 study of a long-acting glucagon-like peptide-1 analog (dulaglutide): trial design and baseline characteristics. J Diabetes Sci Technol. 2012;6:1319–27. https://doi.org/10.1177/193229681200600610.

  167. James ND, Sydes MR, Mason MD, STAMPEDE investigators, et al. Celecoxib plus hormone therapy versus hormone therapy alone for hormone-sensitive prostate cancer: first results from the STAMPEDE multiarm, multistage, randomised controlled trial. Lancet Oncol. 2012;13:549–58. https://doi.org/10.1016/S1470-2045(12)70088-8.

  168. Dwan K, Kirkham JJ, Williamson PR, Gamble C. Selective reporting of outcomes in randomised controlled trials in systematic reviews of cystic fibrosis. BMJ Open. 2013;3:e002709. https://doi.org/10.1136/bmjopen-2013-002709.

  169. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3:e3081. https://doi.org/10.1371/journal.pone.0003081.

  170. Lancee M, Lemmens CMC, Kahn RS, Vinkers CH, Luykx JJ. Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs. Transl Psychiatry. 2017;7:e1232. https://doi.org/10.1038/tp.2017.203.

  171. Evans S. When and how can endpoints be changed after initiation of a randomized clinical trial? PLoS Clin Trials. 2007;2:e18. https://doi.org/10.1371/journal.pctr.0020018.

  172. Wason JMS, Mander AP, Thompson SG. Optimal multistage designs for randomised clinical trials with continuous outcomes. Stat Med. 2012;31:301–12. https://doi.org/10.1002/sim.4421.

    Article  PubMed  Google Scholar 

  173. Wason JMS, Mander AP. Minimizing the maximum expected sample size in two-stage phase II clinical trials with continuous outcomes. J Biopharm Stat. 2012;22:836–52. https://doi.org/10.1080/10543406.2010.528104.

    Article  PubMed  PubMed Central  Google Scholar 

  174. Cook JA, Julious SA, Sones W, et al. DELTA2 guidance on choosing the target difference and undertaking and reporting the sample size calculation for a randomised controlled trial. BMJ. 2018;363:k3750. https://doi.org/10.1136/bmj.k3750.

    Article  PubMed  PubMed Central  Google Scholar 

  175. Bell ML. New guidance to improve sample size calculations for trials: eliciting the target difference. Trials. 2018;19:605. https://doi.org/10.1186/s13063-018-2894-y.

    Article  PubMed  PubMed Central  Google Scholar 

  176. Dunnett C. Selection of the best treatment in comparison to a control with an application to a medical trial. In: Santer T, Tamhane A, editors. Design of experiments : ranking and selection. New York: Marcel Dekker; 1984. p. 47–66. https://books.google.co.uk/books?id=1Un6FKdqUg4C&printsec=frontcover#v=onepage&q&f=false.

    Google Scholar 

  177. Magirr D, Jaki T, Whitehead J. A generalized Dunnett test for multi-arm multi-stage clinical studies with treatment selection. Biometrika. 2012;99:494–501. https://doi.org/10.1093/biomet/ass002.

    Article  Google Scholar 

  178. Whitehead J, Jaki T. One- and two-stage design proposals for a phase II trial comparing three active treatments with control using an ordered categorical endpoint. Stat Med. 2009;28:828–47. https://doi.org/10.1002/sim.3508.

    Article  PubMed  Google Scholar 

  179. Jaki T, Magirr D. Designing multi-arm multi-stage studies: R Package ‘MAMS’; 2014.

    Book  Google Scholar 

  180. Hwang IK, Shih WJ, De Cani JS. Group sequential designs using a family of type I error probability spending functions. Stat Med. 1990;9:1439–45. https://doi.org/10.1002/sim.4780091207.

    Article  CAS  PubMed  Google Scholar 

  181. Steg PG, Mehta SR, Pollack CV Jr, et al. Design and rationale of the treatment of acute coronary syndromes with otamixaban trial: a double-blind triple-dummy 2-stage randomized trial comparing otamixaban to unfractionated heparin and eptifibatide in non-ST-segment elevation acute coronary syndromes with a planned early invasive strategy. Am Heart J. 2012;164:817–24.e13. https://doi.org/10.1016/j.ahj.2012.10.001.

    Article  CAS  PubMed  Google Scholar 

  182. Du Y, Wang X, Jack LJ. Simulation study for evaluating the performance of response-adaptive randomization. Contemp Clin Trials. 2015;40:15–25. https://doi.org/10.1016/j.cct.2014.11.006.

    Article  PubMed  Google Scholar 

  183. Lorch U, O’Kane M, Taubel J. Three steps to writing adaptive study protocols in the early phase clinical development of new medicines. BMC Med Res Methodol. 2014;14:84. https://doi.org/10.1186/1471-2288-14-84.

    Article  PubMed  PubMed Central  Google Scholar 

  184. Guetterman TC, Fetters MD, Legocki LJ, et al. Reflections on the adaptive designs accelerating promising trials into treatments (ADAPT-IT) process-findings from a qualitative study. Clin Res Regul Aff. 2015;32:121–30. https://doi.org/10.3109/10601333.2015.1079217.

    Article  PubMed  PubMed Central  Google Scholar 

  185. Pocock SJ. Group sequential methods in the design and analysis of clinical trials. Biometrika. 1977;64:191. https://doi.org/10.1093/biomet/64.2.191.

    Article  Google Scholar 

  186. O’Brien PC, Fleming TR. A multiple testing procedure for clinical trials. Biometrics. 1979;35:549–56. https://doi.org/10.2307/2530245.

    Article  PubMed  Google Scholar 

  187. Gsponer T, Gerber F, Bornkamp B, Ohlssen D, Vandemeulebroecke M, Schmidli H. A practical guide to Bayesian group sequential designs. Pharm Stat. 2014;13:71–80. https://doi.org/10.1002/pst.1593.

    Article  PubMed  Google Scholar 

  188. Emerson SS, Kittelson JM, Gillen DL. Frequentist evaluation of group sequential clinical trial designs. Stat Med. 2007;26:5047–80. https://doi.org/10.1002/sim.2901.

    Article  PubMed  Google Scholar 

  189. Togo K, Iwasaki M. Optimal timing for interim analyses in clinical trials. J Biopharm Stat. 2013;23:1067–80. https://doi.org/10.1080/10543406.2013.813522.

    Article  PubMed  Google Scholar 

  190. Xi D, Gallo P, Ohlssen D. On the optimal timing of futility interim analyses. Stat Biopharm Res. 2017;9:293–301. https://doi.org/10.1080/19466315.2017.1340906.

    Article  Google Scholar 

  191. Kelsen DP, Ginsberg R, Pajak TF, et al. Chemotherapy followed by surgery compared with surgery alone for localized esophageal cancer. N Engl J Med. 1998;339:1979–84. https://doi.org/10.1056/NEJM199812313392704.

    Article  CAS  PubMed  Google Scholar 

  192. Medical Research Council Oesophageal Cancer Working Group. Surgical resection with or without preoperative chemotherapy in oesophageal cancer: a randomised controlled trial. Lancet. 2002;359:1727–33. https://doi.org/10.1016/S0140-6736(02)08651-8.

    Article  Google Scholar 

  193. Bauer P, Köhne K. Evaluation of experiments with adaptive interim analyses. Biometrics. 1994;50:1029–41. https://doi.org/10.2307/2533441.

    Article  CAS  PubMed  Google Scholar 

  194. Stahl M, Walz MK, Riera-Knorrenschild J, et al. Preoperative chemotherapy versus chemoradiotherapy in locally advanced adenocarcinomas of the oesophagogastric junction (POET): long-term results of a controlled randomised trial. Eur J Cancer. 2017;81:183–90. https://doi.org/10.1016/j.ejca.2017.04.027.

    Article  CAS  PubMed  Google Scholar 

  195. Pocock SJ, Clayton TC, Stone GW. Challenging issues in clinical trial design: part 4 of a 4-part series on statistics for clinical trials. J Am Coll Cardiol. 2015;66:2886–98. https://doi.org/10.1016/j.jacc.2015.10.051.

    Article  PubMed  Google Scholar 

  196. Jansen JO, Pallmann P, MacLennan G, Campbell MK, UK-REBOA Trial Investigators. Bayesian clinical trial designs: another option for trauma trials? J Trauma Acute Care Surg. 2017;83:736–41. https://doi.org/10.1097/TA.0000000000001638.

    Article  PubMed  Google Scholar 

  197. Jiang Y, Zhao W, Durkalski-Mauldin V. Impact of adaptation algorithm, timing, and stopping boundaries on the performance of Bayesian response adaptive randomization in confirmative trials with a binary endpoint. Contemp Clin Trials. 2017;62:114–20. https://doi.org/10.1016/j.cct.2017.08.019.

    Article  PubMed  Google Scholar 

  198. Chappell R, Durkalski V, Joffe S. University of Pennsylvania ninth annual conference on statistical issues in clinical trials: where are we with adaptive clinical trial designs? (morning panel discussion). Clin Trials. 2017;14:441–50. https://doi.org/10.1177/1740774517723590.

    Article  PubMed  Google Scholar 

  199. Brown CH, Ten Have TR, Jo B, et al. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25. https://doi.org/10.1146/annurev.publhealth.031308.100223.

    Article  PubMed  PubMed Central  Google Scholar 

  200. Fleming TR, Sharples K, McCall J, Moore A, Rodgers A, Stewart R. Maintaining confidentiality of interim data to enhance trial integrity and credibility. Clin Trials. 2008;5:157–67. https://doi.org/10.1177/1740774508089459.

    Article  PubMed  PubMed Central  Google Scholar 

  201. He W, Gallo P, Miller E, et al. Addressing challenges and opportunities of “less well-understood” adaptive designs. Ther Innov Regul Sci. 2017;51:60–8. https://doi.org/10.1177/2168479016663265.

    Article  PubMed  Google Scholar 

  202. Husten L. Orexigen released interim data without approval of trial leaders: Forbes; 2015. https://www.forbes.com/sites/larryhusten/2015/03/03/orexigen-released-interim-data-without-approval-of-trial-leaders/#74a030de4aef.

  203. Gallo P. Confidentiality and trial integrity issues for adaptive designs. Drug Inf J. 2006;40:445–50. https://doi.org/10.1177/216847900604000410.

    Article  Google Scholar 

  204. Chow S-C, Corey R, Lin M. On the independence of data monitoring committee in adaptive design clinical trials. J Biopharm Stat. 2012;22:853–67. https://doi.org/10.1080/10543406.2012.676536.

    Article  PubMed  Google Scholar 

  205. Herson J. Coordinating data monitoring committees and adaptive clinical trial designs. Drug Inf J. 2008;42:297–301. https://doi.org/10.1177/009286150804200401.

    Article  Google Scholar 

  206. Akacha M, Bretz F, Ohlssen D, et al. Estimands and their role in clinical trials. Stat Biopharm Res. 2017;9:268–71. https://doi.org/10.1080/19466315.2017.1302358.

    Article  Google Scholar 

  207. Akacha M, Bretz F, Ruberg S. Estimands in clinical trials - broadening the perspective. Stat Med. 2017;36:5–19. https://doi.org/10.1002/sim.7033.

    Article  PubMed  Google Scholar 

  208. Gu X, Chen N, Wei C, et al. Bayesian two-stage biomarker-based adaptive design for targeted therapy development. Stat Biosci. 2016;8:99–128. https://doi.org/10.1007/s12561-014-9124-2.

    Article  PubMed  Google Scholar 

  209. Wittes J. Stopping a trial early - and then what? Clin Trials. 2012;9:714–20. https://doi.org/10.1177/1740774512454600.

    Article  PubMed  Google Scholar 

  210. Bassler D, Briel M, Montori VM, STOPIT-2 Study Group, et al. Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis. JAMA. 2010;303:1180–7. https://doi.org/10.1001/jama.2010.310.

    Article  CAS  PubMed  Google Scholar 

  211. Wears RL. Are we there yet? Early stopping in clinical trials. Ann Emerg Med. 2015;65:214–5. https://doi.org/10.1016/j.annemergmed.2014.12.020.

    Article  PubMed  Google Scholar 

  212. Hughes MD, Pocock SJ. Stopping rules and estimation problems in clinical trials. Stat Med. 1988;7:1231–42. https://doi.org/10.1002/sim.4780071204.

    Article  CAS  PubMed  Google Scholar 

  213. Pocock SJ, Hughes MD. Practical problems in interim analyses, with particular regard to estimation. Control Clin Trials. 1989;10(Suppl):209S–21S. https://doi.org/10.1016/0197-2456(89)90059-7.

    Article  CAS  PubMed  Google Scholar 

  214. Walter SD, Han H, Briel M, Guyatt GH. Quantifying the bias in the estimated treatment effect in randomized trials having interim analyses and a rule for early stopping for futility. Stat Med. 2017;36:1506–18. https://doi.org/10.1002/sim.7242.

    Article  CAS  PubMed  Google Scholar 

  215. Wang H, Rosner GL, Goodman SN. Quantifying over-estimation in early stopped clinical trials and the “freezing effect” on subsequent research. Clin Trials. 2016;13:621–31. https://doi.org/10.1177/1740774516649595.

    Article  PubMed  PubMed Central  Google Scholar 

  216. Freidlin B, Korn EL. Stopping clinical trials early for benefit: impact on estimation. Clin Trials. 2009;6:119–25. https://doi.org/10.1177/1740774509102310.

    Article  PubMed  Google Scholar 

  217. Bauer P, Koenig F, Brannath W, Posch M. Selection and bias--two hostile brothers. Stat Med. 2010;29:1–13. https://doi.org/10.1002/sim.3716.

    Article  PubMed  Google Scholar 

  218. Walter SD, Guyatt GH, Bassler D, Briel M, Ramsay T, Han HD. Randomised trials with provision for early stopping for benefit (or harm): the impact on the estimated treatment effect. Stat Med. 2019;38:2524–43. https://doi.org/10.1002/sim.8142.

    Article  CAS  PubMed  Google Scholar 

  219. Flight L, Arshad F, Barnsley R, et al. A review of clinical trials with an adaptive design and health economic analysis. Value Health. 2019;22:391–8. https://doi.org/10.1016/j.jval.2018.11.008.

    Article  PubMed  Google Scholar 

  220. Whitehead J. Supplementary analysis at the conclusion of a sequential clinical trial. Biometrics. 1986;42:461–71. https://doi.org/10.2307/2531197.

    Article  CAS  PubMed  Google Scholar 

  221. Cameron C, Ewara E, Wilson FR, et al. The importance of considering differences in study design in network meta-analysis: an application using anti-tumor necrosis factor drugs for ulcerative colitis. Med Decis Mak. 2017;37:894–904. https://doi.org/10.1177/0272989X17711933.

    Article  Google Scholar 

  222. Mehta CR, Bauer P, Posch M, Brannath W. Repeated confidence intervals for adaptive group sequential trials. Stat Med. 2007;26:5422–33. https://doi.org/10.1002/sim.3062.

    Article  PubMed  Google Scholar 

  223. Brannath W, König F, Bauer P. Estimation in flexible two stage designs. Stat Med. 2006;25:3366–81. https://doi.org/10.1002/sim.2258.

    Article  PubMed  Google Scholar 

  224. Brannath W, Mehta CR, Posch M. Exact confidence bounds following adaptive group sequential tests. Biometrics. 2009;65:539–46. https://doi.org/10.1111/j.1541-0420.2008.01101.x.

    Article  PubMed  Google Scholar 

  225. Gao P, Liu L, Mehta C. Exact inference for adaptive group sequential designs. Stat Med. 2013;32:3991–4005. https://doi.org/10.1002/sim.5847.

    Article  PubMed  Google Scholar 

  226. Kunzmann K, Benner L, Kieser M. Point estimation in adaptive enrichment designs. Stat Med. 2017;36:3935–47. https://doi.org/10.1002/sim.7412.

    Article  PubMed  Google Scholar 

  227. Jennison C, Turnbull BW. Analysis following a sequential test. In: Group sequential methods with applications to clinical trials: Chapman & Hall/CRC; 2000. p. 171–87.

  228. Heritier S, Lloyd CJ, Lô SN. Accurate p-values for adaptive designs with binary endpoints. Stat Med. 2017;36:2643–55. https://doi.org/10.1002/sim.7324.

    Article  PubMed  Google Scholar 

  229. Simon R, Simon N. Inference for multimarker adaptive enrichment trials. Stat Med. 2017;36:4083–93. https://doi.org/10.1002/sim.7422.

    Article  PubMed  Google Scholar 

  230. Kunz CU, Jaki T, Stallard N. An alternative method to analyse the biomarker-strategy design. Stat Med. 2018;37:4636–51. https://doi.org/10.1002/sim.7940.

    Article  PubMed  PubMed Central  Google Scholar 

  231. Hack N, Brannath W. Estimation in adaptive group sequential trials; 2011.

    Google Scholar 

  232. Zhu L, Ni L, Yao B. Group sequential methods and software applications. Am Stat. 2011;65:127–35. https://doi.org/10.1198/tast.2011.10213.

    Article  Google Scholar 

  233. Tymofyeyev Y. A review of available software and capabilities for adaptive designs. In: Practical considerations for adaptive trial design and implementation. Berlin: Springer; 2014. p. 139–55. https://doi.org/10.1007/978-1-4939-1100-4_8.

    Chapter  Google Scholar 

  234. Hack N, Brannath W, Brueckner M. AGSDest: estimation in adaptive group sequential trials. 2019. https://cran.r-project.org/web/packages/AGSDest/.

    Google Scholar 

  235. Fernandes RM, van der Lee JH, Offringa M. A systematic review of the reporting of data monitoring committees’ roles, interim analysis and early termination in pediatric clinical trials. BMC Pediatr. 2009;9:77. https://doi.org/10.1186/1471-2431-9-77.

    Article  PubMed  PubMed Central  Google Scholar 

  236. Choodari-Oskooei B, Parmar MKB, Royston P, Bowden J. Impact of lack-of-benefit stopping rules on treatment effect estimates of two-arm multi-stage (TAMS) trials with time to event outcome. Trials. 2013;14:23. https://doi.org/10.1186/1745-6215-14-23.

    Article  PubMed  PubMed Central  Google Scholar 

  237. Bratton DJ. Design issues and extensions of multi-arm multi-stage clinical trials. 2015. https://discovery.ucl.ac.uk/1459437/.

    Google Scholar 

  238. Wason JMS, Jaki T. Optimal design of multi-arm multi-stage trials. Stat Med. 2012;31:4269–79. https://doi.org/10.1002/sim.5513.

    Article  PubMed  Google Scholar 

  239. Wason J, Stallard N, Bowden J, et al. A multi-stage drop-the-losers design for multi-arm clinical trials. Stat Methods Med Res. 2014. https://doi.org/10.1002/sim.6086.

  240. Lehmacher W, Wassmer G. Adaptive sample size calculations in group sequential trials. Biometrics. 1999;55:1286–90. https://doi.org/10.1111/j.0006-341X.1999.01286.x.

    Article  CAS  PubMed  Google Scholar 

  241. Vandemeulebroecke M. An investigation of two-stage tests. Stat Sin. 2006;16:933–51.

    Google Scholar 

  242. Graf AC, Bauer P, Glimm E, Koenig F. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications. Biom J. 2014;56:614–30. https://doi.org/10.1002/bimj.201300153.

    Article  PubMed  PubMed Central  Google Scholar 

  243. Posch M, Bauer P. Adaptive two stage designs and the conditional error function. Biom J. 1999;41:689–96. https://doi.org/10.1002/(SICI)1521-4036(199910)41:6<689::AID-BIMJ689>3.0.CO;2-P.

    Article  Google Scholar 

  244. Proschan MA, Hunsberger SA. Designed extension of studies based on conditional power. Biometrics. 1995;51:1315–24. https://doi.org/10.2307/2533262.

    Article  CAS  PubMed  Google Scholar 

  245. Brard C, Le Teuff G, Le Deley M-C, et al. Bayesian survival analysis in clinical trials: what methods are used in practice? Clin Trials. 2016. https://doi.org/10.1177/1740774516673362.

  246. Wagenmakers E-J, Gronau QF, Stefan A, et al. A Bayesian perspective on the proposed FDA guidelines for adaptive clinical trials. 2018. https://www.bayesianspectacles.org/a-bayesian-perspective-on-the-proposed-fda-guidelines-for-adaptive-clinical-trials/.

    Google Scholar 

  247. FDA. Guidance, for the use of Bayesian statistics in medical device clinical trials. 2010. https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm071121.pdf.

    Google Scholar 

  248. Whitehead J. Overrunning and underrunning in sequential clinical trials. Control Clin Trials. 1992;13:106–21. https://doi.org/10.1016/0197-2456(92)90017-T.

    Article  CAS  PubMed  Google Scholar 

  249. Hampson LV, Jennison C. Group sequential tests for delayed responses (with discussion). J R Stat Soc Ser B Stat Methodol. 2013;75:3–54. https://doi.org/10.1111/j.1467-9868.2012.01030.x.

    Article  Google Scholar 

  250. Emerson S, Fleming T. Parameter estimation following group sequential hypothesis testing. Biometrika. 1990;77:875–92. https://doi.org/10.1093/biomet/77.4.875.

    Article  Google Scholar 

  251. Tröger W, Galun D, Reif M, Schumann A, Stanković N, Milićević M. Viscum album [L.] extract therapy in patients with locally advanced or metastatic pancreatic cancer: a randomised clinical trial on overall survival. Eur J Cancer. 2013;49:3788–97. https://doi.org/10.1016/j.ejca.2013.06.043.

    Article  PubMed  Google Scholar 

  252. MacArthur RD, Hawkins TN, Brown SJ, et al. Efficacy and safety of crofelemer for noninfectious diarrhea in HIV-seropositive individuals (ADVENT trial): a randomized, double-blind, placebo-controlled, two-stage study. HIV Clin Trials. 2013;14:261–73. https://doi.org/10.1310/hct1406-261.

    Article  CAS  PubMed  Google Scholar 

  253. Brannath W, Zuber E, Branson M, et al. Confirmatory adaptive designs with Bayesian decision tools for a targeted therapy in oncology. Stat Med. 2009;28:1445–63. https://doi.org/10.1002/sim.3559.

    Article  PubMed  Google Scholar 

  254. Marcus R, Eric P, Gabriel K. On closed testing procedures with special reference to ordered analysis of variance. Biometrika. 1976;63:655–60. https://doi.org/10.1093/biomet/63.3.655.

    Article  Google Scholar 

  255. Simes RJ. An improved Bonferroni procedure for multiple tests of significance. Biometrika. 1986;73:751. https://doi.org/10.1093/biomet/73.3.751.

    Article  Google Scholar 

  256. Kim ES, Herbst RS, Wistuba II, et al. The BATTLE trial: personalizing therapy for lung cancer. Cancer Discov. 2011;1:44–53. https://doi.org/10.1158/2159-8274.CD-10-0010.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  257. Pushpakom S, Kolamunnage-Dona R, Taylor C, TAILoR Study Group, et al. TAILoR (TelmisArtan and InsuLin Resistance in Human Immunodeficiency Virus [HIV]): an adaptive-design, dose-ranging phase IIb randomized trial of telmisartan for the reduction of insulin resistance in HIV-positive individuals on combination antiretroviral therapy. Clin Infect Dis. 2019;3:ciz589. https://doi.org/10.1093/cid/ciz589.

    Article  CAS  Google Scholar 

  258. James ND, Sydes MR, Clarke NW, STAMPEDE investigators, et al. Addition of docetaxel, zoledronic acid, or both to first-line long-term hormone therapy in prostate cancer (STAMPEDE): survival results from an adaptive, multiarm, multistage, platform randomised controlled trial. Lancet. 2016;387:1163–77. https://doi.org/10.1016/S0140-6736(15)01037-5.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  259. Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA. 2015;313:1619–20. https://doi.org/10.1001/jama.2015.2316.

    Article  PubMed  Google Scholar 

  260. Gilson C, Chowdhury S, Parmar MKB, Sydes MR, STAMPEDE Investigators. Incorporating Biomarker Stratification into STAMPEDE: an adaptive multi-arm, multi-stage trial platform. Clin Oncol (R Coll Radiol). 2017;29:778–86. https://doi.org/10.1016/j.clon.2017.10.004.

    Article  CAS  Google Scholar 

  261. Pemberton VL, Evans F, Gulin J, et al. Performance and predictors of recruitment success in National Heart, Lung, and Blood Institute’s cardiovascular clinical trials. Clin Trials. 2018;15:444–51. https://doi.org/10.1177/1740774518792271.

    Article  PubMed  Google Scholar 

  262. de Jong MD, Ison MG, Monto AS, et al. Evaluation of intravenous peramivir for treatment of influenza in hospitalized patients. Clin Infect Dis. 2014;59:e172–85. https://doi.org/10.1093/cid/ciu632.

    Article  CAS  PubMed  Google Scholar 

  263. Harvey LA. Statistical testing for baseline differences between randomised groups is not meaningful. Spinal Cord. 2018;56:919. https://doi.org/10.1038/s41393-018-0203-y.

    Article  CAS  PubMed  Google Scholar 

  264. Senn S. Testing for baseline balance in clinical trials. Stat Med. 1994;13:1715–26. https://doi.org/10.1002/sim.4780131703.

    Article  CAS  PubMed  Google Scholar 

  265. de Boer MR, Waterlander WE, Kuijper LDJ, Steenhuis IH, Twisk JW. Testing for baseline differences in randomized controlled trials: an unhealthy research behavior that is hard to eradicate. Int J Behav Nutr Phys Act. 2015;12:4. https://doi.org/10.1186/s12966-015-0162-z.

    Article  PubMed  PubMed Central  Google Scholar 

  266. Altman DG. Comparability of randomised groups. Stat. 1985;34:125. https://doi.org/10.2307/2987510.

    Article  Google Scholar 

  267. Koch A. Confirmatory clinical trials with an adaptive design. Biom J. 2006;48:574–85. https://doi.org/10.1002/bimj.200510239.

    Article  PubMed  Google Scholar 

  268. Chang M, Chow S-C, Pong A. Adaptive design in clinical research: issues, opportunities, and recommendations. J Biopharm Stat. 2006;16:299–309, discussion 311-2. https://doi.org/10.1080/10543400600609718.

    Article  CAS  PubMed  Google Scholar 

  269. Gallo P, Chuang-Stein C. What should be the role of homogeneity testing in adaptive trials? Pharm Stat. 2009;8:1–4. https://doi.org/10.1002/pst.342.

    Article  PubMed  Google Scholar 

  270. Friede T, Henderson R. Exploring changes in treatment effects across design stages in adaptive trials. Pharm Stat. 2009;8:62–72. https://doi.org/10.1002/pst.332.

    Article  PubMed  Google Scholar 

  271. Wang S-J, Brannath W, Brückner M, et al. Unblinded adaptive statistical information design based on clinical endpoint or biomarker. Stat Biopharm Res. 2013;5:293–310. https://doi.org/10.1080/19466315.2013.791639.

    Article  CAS  Google Scholar 

  272. Parker RA. Testing for qualitative interactions between stages in an adaptive study. Stat Med. 2010;29:210–8. https://doi.org/10.1002/sim.3757.

    Article  PubMed  Google Scholar 

  273. Gonnermann A, Framke T, Großhennig A, Koch A. No solution yet for combining two independent studies in the presence of heterogeneity. Stat Med. 2015;34:2476–80. https://doi.org/10.1002/sim.6473.

    Article  PubMed  PubMed Central  Google Scholar 

  274. Gamble C, Krishan A, Stocken D, et al. Guidelines for the content of statistical analysis plans in clinical trials. JAMA. 2017;318:2337–43. https://doi.org/10.1001/jama.2017.18556.

    Article  PubMed  Google Scholar 

  275. DeMets DL, Cook TD, Buhr KA. Guidelines for statistical analysis plans. JAMA. 2017;318:2301–3. https://doi.org/10.1001/jama.2017.18954.

    Article  PubMed  Google Scholar 

  276. ICH. ICH E9: statistical principles for clinical trials. 1998. https://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E9/Step4/E9_Guideline.pdf.

    Google Scholar 

  277. Thorlund K, Haggstrom J, Park JJ, Mills EJ. Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. 2018;360:k698. https://doi.org/10.1136/bmj.k698.

    Article  PubMed  PubMed Central  Google Scholar 

Download references

Acknowledgements

ACE Consensus Group: We are very grateful to the consensus meeting participants who provided recommendations on the reporting items to retain in this guidance and who commented on the structure of this E&E document. All listed members approved the final ACE checklist and were offered the opportunity to review this E&E document. Sally Hopewell joined the group during the write-up stage to oversee the process on behalf of the CONSORT Group following the sad passing of Doug G Altman.

Munyaradzi Dimairo1; Toshimitsu Hamasaki2; Susan Todd3; Christopher J Weir4; Adrian P Mander5; James Wason5,6; Franz Koenig7; Steven A Julious8; Daniel Hind1; Jon Nicholl1; Douglas G Altman9; William J Meurer10; Christopher Cates11; Matthew Sydes12; Yannis Jemiai13; Deborah Ashby14 (chair); Christina Yap15; Frank Waldron-Lynch16; James Roger17; Joan Marsh18; Olivier Collignon19; David J Lawrence20; Catey Bunce21; Tom Parke22; Gus Gazzard23; Elizabeth Coates1; Marc K Walton25; Sally Hopewell9.

1 School of Health and Related Research, University of Sheffield, Sheffield, UK.

2 National Cerebral and Cardiovascular Center, Osaka, Japan.

3 Department of Mathematics and Statistics, University of Reading, Reading, UK.

4 Edinburgh Clinical Trials Unit, Centre for Population Health Sciences, Usher Institute of Population Health Sciences and Informatics, The University of Edinburgh, Edinburgh, UK.

5 MRC Biostatistics Unit, University of Cambridge, School of Clinical Medicine, Cambridge Institute of Public Health, Cambridge, UK.

6 Institute of Health and Society, Newcastle University, UK.

7 Medical University of Vienna, Center for Medical Statistics, Informatics, and Intelligent Systems, Vienna, Austria.

8 Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK.

9 Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK.

10 Departments of Emergency Medicine and Neurology, University of Michigan, Ann Arbor, MI, USA.

11 Cochrane Airways, Population Health Research Institute, St George’s, University of London, London, UK.

12 MRC Clinical Trials Unit, UCL, Institute of Clinical Trials and Methodology, London, UK.

13 Cytel, Cambridge, USA.

14 School of Public Health, Imperial College London, UK.

15 Cancer Research UK Clinical Trials Unit, Institute of Cancer and Genomic Sciences, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK.

16 Novartis Institutes for Biomedical Research, Basel, Switzerland.

17 Institutional address not applicable.

18 The Lancet Psychiatry, London, UK.

19 Luxembourg Institute of Health, Strassen, Luxembourg.

20 Novartis Campus, Basel, Switzerland.

21 School of Population Health and Environmental Sciences, Faculty of Life Sciences and Medicine, King’s College London, London, UK.

22 Berry Consultants, Merchant House, Abingdon, UK.

23 NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, UK.

25 Janssen Research and Development, Ashton, USA.

ACE Steering Committee: Munyaradzi Dimairo, Elizabeth Coates, Philip Pallmann, Susan Todd, Steven A Julious, Thomas Jaki, James Wason, Adrian P Mander, Christopher J Weir, Franz Koenig, Marc K Walton, Katie Biggs, Jon P Nicholl, Toshimitsu Hamasaki, Michael A Proschan, John A Scott, Yuki Ando, Daniel Hind, Douglas G Altman.

External Expert Panel: We would like to thank William Meurer (University of Michigan, Departments of Emergency Medicine and Neurology, Ann Arbor, MI, USA); Yannis Jemiai (Cytel, Cambridge, USA); Stephane Heritier (Monash University, Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Australia); and Christina Yap (Cancer Research UK Clinical Trials Unit, Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK) for their invaluable contributions in reviewing the checklist and working definitions of technical terms.

Administrative support: We thank Sarah Gonzalez for coordination and administrative support throughout the project; Sheffield Clinical Trials Research Unit for their support, particularly Mike Bradburn and Cindy Cooper; and Benjamin Allin and Anja Hollowell for the Delphi surveys technical and administrative support.

Delphi survey participants: We are grateful for the input of several key stakeholders during the Delphi surveys and many who gave informal advice.

Other support: Trish Groves contributed to the consensus workshop, reviewed and approved the finalised ACE checklist. Lingyun Liu provided dummy baseline information for the TAPPAS trial.

Funding

This paper summarises independent research jointly funded by the National Institute for Health Research (NIHR) Clinical Trials Unit Support Funding scheme and the Medical Research Council (MRC) Network of Hubs for Trials Methodology Research (HTMR) (MR/L004933/2/N90).

The funders had no role in the design of the study; the collection, analysis, or interpretation of data; or the writing or approval of the manuscript.

MD, JW, TJ, APM, ST, SAJ, FK, CJW, DH, and JN were co-applicants who secured funding from the NIHR CTU Support Funding scheme. MD, JW, and TJ secured additional funding from the MRC HTMR.

TJ’s contribution was funded in part by an NIHR Senior Research Fellowship (NIHR-SRF-2015-08-001). NHS Lothian, via the Edinburgh Clinical Trials Unit, supported CJW in this work. The University of Sheffield, via the Sheffield Clinical Trials Research Unit and the NIHR CTU Support Funding scheme, supported MD’s contribution. An MRC HTMR grant (MR/L004933/1-R/N/P/B1) partly funded PP’s contribution.

The views expressed are those of the authors and not necessarily those of the National Health Service (NHS), the NIHR, the MRC, the Department of Health and Social Care, the US Food and Drug Administration, or the Pharmaceuticals and Medical Devices Agency, Japan.

Author information

Contributions

The idea originated from MD’s NIHR Doctoral Research Fellowship (DRF-2012-05-182) under the supervision of SAJ, ST, and JPN. The idea was presented, discussed, and contextualised at the 2016 annual workshop of the MRC Network of HTMR Adaptive Designs Working Group (ADWG), which was attended by six members of the Steering Committee (MD, TJ, PP, JW, APM, and CJW). MD, JW, TJ, APM, ST, SAJ, FK, CJW, DH, and JPN conceptualised the study design and applied for funding. All authors contributed to the conduct of the study and the interpretation of the results. MD, ST, PP, CJW, JW, TJ, and SAJ led the write-up of the manuscript. All authors except DGA contributed to the write-up of the manuscript and reviewed and approved this final version. DGA oversaw the development process on behalf of the CONSORT Group and contributed to the initial draft manuscript until his passing, which followed the approval of the final checklists. The Steering Committee is deeply saddened by the passing of DGA, who did not have the opportunity to approve the final manuscript. We dedicate this work to him in memory of his immense contribution to the ACE project, medical statistics, good scientific research practice and reporting, and humanity.

Members of the ACE Consensus Group contributed to the consensus workshop, approved the final ACE checklist, and were offered the opportunity to review the final manuscript.

Corresponding author

Correspondence to Munyaradzi Dimairo.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for the project was granted by the Research Ethics Committee of the School of Health and Related Research (ScHARR) at the University of Sheffield (ref: 012041). All participants in the Delphi surveys conducted during the development of the ACE guideline provided consent online during survey registration.

Consent for publication

Not applicable.

Competing interests

MKW is an employee of Janssen Research and Development, Inc. All other authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Appendix A.

Main ACE checklist available to download.

Additional file 2: Appendix B.

Additional examples of Box 13.

Additional file 3: Appendix C.

Example 2 of Box 14 (Bayesian RAR).

Additional file 4: Appendix D.

Example of a CONSORT flowchart for reporting a 2-stage adaptive design (such as an inferentially seamless design) that uses combination test methods.

Additional file 5: Appendix E.

Example of a CONSORT flowchart for reporting a population enrichment adaptive design (assuming enrichment was done at an interim analysis).

Additional file 6: Appendix F.

Example of a CONSORT flowchart for reporting a population enrichment adaptive design (assuming enrichment was not done at an interim analysis).

Additional file 7: Appendix G.

Example of a CONSORT flowchart for reporting a response-adaptive randomisation adaptive design with frequent randomisation updates.

Additional file 8: Appendix H.

Example of a CONSORT flowchart for reporting a MAMS adaptive design.

Additional file 9: Appendix I.

Box 22 - Dummy baseline table for the TAPPAS trial.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Dimairo, M., Pallmann, P., Wason, J. et al. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials 21, 528 (2020). https://doi.org/10.1186/s13063-020-04334-x
