- Open Access
- Open Peer Review
Synthesizing existing evidence to design future trials: survey of methodologists from European institutions
© The Author(s). 2019
- Received: 6 September 2018
- Accepted: 13 May 2019
- Published: 7 June 2019
‘Conditional trial design’ is a framework for efficiently planning new clinical trials based on a network of relevant existing trials. The framework considers whether new trials are required and how the existing evidence can be used to answer the research question and plan future research. The potential of this approach has not been fully realized.
We conducted an online survey among trial statisticians, methodologists, and users of evidence synthesis research, using referral sampling, to capture opinions about the conditional trial design framework and current practices among clinical researchers. The survey questions addressed the decision of whether a meta-analysis answers the research question, the optimal way to synthesize the available evidence (including the acceptability of network meta-analysis), and the use of evidence synthesis in the planning of new studies.
In total, 76 researchers completed the survey. Two out of three survey participants (65%) were willing to possibly or definitely consider using evidence synthesis to design a future clinical trial and around half of the participants would give priority to such a trial design. The median rating of the frequency of using such a trial design was 0.41 on a scale from 0 (never) to 1 (always). Major barriers to adopting conditional trial design include the current regulatory paradigm and the policies of funding agencies and sponsors.
Participants reported moderate interest in using evidence synthesis methods in the design of future trials. They indicated that a major paradigm shift is required before the use of network meta-analysis is regularly employed in the design of trials.
- Conditional trial design
- Sample size
- Network of interventions
Systematic reviews can identify knowledge gaps that may direct the research agenda toward questions that need further investigation. Knowledge gaps may arise when the available data are insufficient, or when there is no evidence at all that can answer a research question. Once identified, primary research (e.g., trials) may be designed and conducted to fill such gaps.
Such considerations, along with implementation strategies, have appeared in the literature. The Agency for Healthcare Research and Quality developed a framework for determining research gaps using systematic reviews. Methods for informing aspects of trial design based on a pairwise meta-analysis have also been proposed and include powering a future trial based on a relevant existing meta-analysis [2–4] or investigating how a future trial would alter the meta-analytic summary effect obtained thus far [5, 6]. These methods are limited to situations in which the existing evidence consists of trials comparing only two interventions. When the existing evidence forms a network of interventions, the available trials can be synthesized using network meta-analysis. Network meta-analysis is increasingly used in health technology assessment (HTA) to summarize evidence and inform guidelines. However, its potential to inform trial design has not received much attention.
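The idea of powering a future trial from an existing meta-analysis [2–4] can be sketched with a normal approximation: choose the new trial's size so that the updated fixed-effect meta-analysis reaches a target power. The sketch below is illustrative, not the method of any single cited paper; the function names, the two-arm continuous outcome, and the inverse-variance weighting setup are our assumptions.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(y0, v0, theta, sigma, n_new, z_crit=1.96):
    """Approximate power of a fixed-effect meta-analysis after adding one
    new two-arm trial with n_new participants (n_new / 2 per arm).

    y0, v0 -- current pooled effect estimate and its variance
    theta  -- assumed true effect for the new trial
    sigma  -- assumed common standard deviation of the outcome
    """
    v1 = 4.0 * sigma ** 2 / n_new   # variance of the new trial's estimate
    w0, w1 = 1.0 / v0, 1.0 / v1     # inverse-variance weights
    v_upd = 1.0 / (w0 + w1)         # variance of the updated pooled estimate
    # Conditional on the existing data, the updated estimate is normal with:
    mean = (w0 * y0 + w1 * theta) * v_upd
    sd = sqrt(w1) * v_upd
    # Probability the updated z-statistic exceeds the critical value (upper tail)
    return 1.0 - phi((z_crit * sqrt(v_upd) - mean) / sd)

def required_sample_size(y0, v0, theta, sigma, power=0.8):
    """Smallest (even) new-trial size giving the updated analysis the target power."""
    n = 4
    while conditional_power(y0, v0, theta, sigma, n) < power and n < 10 ** 6:
        n += 2
    return n
```

For example, with an inconclusive current pooled estimate (`y0 = 0.2`, variance `0.05`), `required_sample_size(0.2, 0.05, 0.2, 1.0)` returns the smallest new trial that, combined with the existing evidence, is expected to make the updated summary conclusive; all parameter values here are hypothetical.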
Methodological developments that use network meta-analysis as a basis for further research [3, 8] have been recently collated to form a holistic framework for planning future trials based on a network of interventions. The framework, called ‘conditional trial design’, combines considerations relevant to both evidence synthesis and trial design; ‘conditional’ refers to the fact that the design of a new study depends (is conditional) on the existing evidence. The framework consists of three parts. The first part asks whether the existing evidence answers the research question. This part pertains to interpreting meta-analysis results, which is related to deciding whether existing evidence is conclusive, whether multiple testing is needed when a meta-analysis is regularly updated, and how to interpret evidence from multiple outcomes. The second part of the framework is related to how best to use the existing evidence to answer the research question. The third and last part of the framework addresses how to use the existing evidence to plan future research. The conditional trial design requires that the assumptions of network meta-analysis are plausible and that the credibility of the results is high. In the case of violation of the transitivity assumption (that studies comparing different sets of interventions are sufficiently similar in the distribution of effect modifiers for indirect comparisons to be valid), or in the presence of studies with a high risk of bias, the existing network of interventions would not provide reliable evidence and thus should not be used to inform the planning of new studies.
We conducted a survey of views on the feasibility of the conditional trial design among trial statisticians, methodologists (researchers developing methodology), and users of evidence synthesis research. To this aim, the survey included questions relevant to the three parts of the conditional trial design. In particular, our objectives were to capture opinions and current practices regarding: 1) the decision about whether a meta-analysis answers the research question (first part); 2) the acceptability of network meta-analysis as a technique to enhance the evidence and answer the research question (second part); and 3) the use of evidence synthesis in the planning of future clinical research (third part).
Our convenience sample consisted of researchers working in Europe either in nonprofit organizations or in the pharmaceutical industry. We contacted researchers from the World Health Organization (WHO), 13 HTA agencies, 17 pharmaceutical companies or companies that prepare HTA submissions, and all clinical trial units in the UK, Norway, Switzerland, and Germany. The full list of contacted organizations can be found in Additional file 1. We sent a brief description and the link to the survey by email to key personnel within each organization, which included a request to forward it to anyone within their organization who might be interested, or we sent email messages to a mailing list or individuals. We did not track whether an invited person completed the survey, and we sent no reminders.
The first part of the survey concerned current practices in deciding whether a meta-analysis answers the research question at hand. Only participants experienced in evidence synthesis and those who had been involved in deciding about funding clinical research were directed to this part. Certain questions asked participants to choose or report what they actually do in practice, while others asked participants to choose what they think should be done. Topics included the interpretation of meta-analysis results, how multiple outcomes are integrated, and issues of multiple testing in the context of a continuously updated meta-analysis. A separate section covered issues related to the acceptability of network meta-analysis.
The next part of the survey contained questions about the use of evidence synthesis, as pairwise or network meta-analysis, for the design of clinical trials. For all questions in this part, the term clinical trials referred to randomized, post-marketing (e.g., phase IV) controlled clinical trials. Participants experienced in clinical trials and those who declared involvement in funding decisions were directed to this part (Fig. 1). Some of the questions were formulated so that the participants answered them in their capacity as citizens who fund research (such as EU-funded clinical trials or other research funded by national funds through their taxation).
We derived descriptive statistics as frequencies and percentages for participants’ characteristics (affiliation, job role, experience in meta-analysis and clinical trials). Percentages include missing responses in the denominator. Some questions allowed or requested free text answers by participants; we present some illustrative written quotes regarding participants’ willingness to consider a clinical trial design informed by meta-analysis and the biggest barriers to adopting such a design. Where a visual analogue scale was used and for the question of rating clinical research proposals submitted for funding, the median, 25th, and 75th percentiles are presented. As a post-hoc analysis, we used a Pearson’s Chi-squared test to examine whether level of experience with evidence synthesis and clinical trials was related to different views on the acceptability of network meta-analysis and participants’ likelihood to consider the use of conditional trial design. Whenever any expected frequency was less than 1, or at least 20% of cells had expected counts of 5 or less, Fisher’s exact test was used instead of Pearson’s Chi-squared test. The rest of the analyses were planned prospectively. All analyses were performed using Stata 14.1.
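The test-selection rule just described can be expressed as a short check on the expected cell counts. This is our own sketch of the stated rule; the function names are illustrative.

```python
def expected_counts(table):
    """Expected cell counts under independence for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    return [[r * c / n for c in col_totals] for r in row_totals]

def choose_test(table):
    """Fisher's exact test if any expected count is below 1 or at least
    20% of cells have expected counts of 5 or less; Pearson otherwise."""
    expected = [e for row in expected_counts(table) for e in row]
    n_small = sum(1 for e in expected if e <= 5)
    if min(expected) < 1 or n_small / len(expected) >= 0.2:
        return "Fisher's exact test"
    return "Pearson's Chi-squared test"
```

For instance, a sparse table such as `[[1, 2], [3, 30]]` triggers Fisher's exact test, while a well-filled table such as `[[20, 25], [30, 15]]` keeps the Pearson test.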
In total, 76 researchers completed the survey, of whom 29 (38%) were affiliated with a clinical trial unit and 15 (20%) with the pharmaceutical industry. Fifty-three participants (70%) had performed and/or evaluated a systematic review, 46 (61%) had designed a clinical trial, and 36 participants (47%) had been involved in decisions about funding clinical research including reviewing grant applications.
Table 1 Opinions and practices of participants regarding evidence-based planning of future trials

What is your primary affiliation?
- Clinical trials unit
- A funding body

How do you judge whether a summary treatment effect provides conclusive evidence or whether further research is needed (more than one choice allowed)?
- I examine the statistical significance of the summary effect and its CI
- I examine the clinical importance of the summary effect and its CI
- I test whether future studies could change the statistical significance of the summary effect
- I follow the GRADE guidelines for judging imprecision
- Not involved in interpretation of meta-analysis results/other/missing

Do you think that network meta-analysis should be considered as the preferred evidence synthesis method instead of pairwise meta-analysis?
- Yes, network meta-analysis should always be preferred
- No, network meta-analysis should not be considered
- It should be considered only if there are no or few direct studies

According to your experience, results from relevant meta-analyses are considered to (more than one choice allowed):
- Define the alternative effect size in power calculations
- Decide about the intervention in the comparator arm
- Define other parameters involved in sample size calculations
- Define health outcomes to be monitored

What do you think is the biggest barrier towards adopting the conditional trial design in designing trials?
- Lack of training
- Changing the paradigm of funders and researchers
- Lack of good-quality meta-analyses

As a citizen supporting publicly funded research, how would you rank (from 1 being the top priority to 5 being the least) the following proposals tackling the treatments for an important health condition? Consider also the cost of each research proposal (presented in parentheses in arbitrary units).

| Research proposal (cost in arbitrary units) | Median (25th to 75th percentile) |
| --- | --- |
| A well-powered three-arm randomized trial comparing the three most promising interventions (none of which is standard care) (100) | 4.0 (3.0 to 5.0) |
| A well-powered three-arm randomized trial comparing the two most promising interventions and standard treatment (90) | 2.0 (1.0 to 2.0) |
| A well-powered two-arm randomized trial comparing a newly launched treatment and standard treatment (70) | 3.0 (2.0 to 4.0) |
| A large registry involving many countries (40) | 5.0 (3.5 to 5.0) |
| A network meta-analysis comparing all available treatments using existing studies (10) | 1.5 (1.0 to 3.0) |
Does the existing evidence answer the research question?
Among the 76 participants, 68 (89%) had experience in evidence synthesis and answered questions related to the first part of the conditional trial design framework which is relevant to the interpretation of meta-analysis results (Fig. 1).
When asked about judging when a summary treatment effect is conclusive and when further research is needed, 39 of these 68 researchers (57%) examined the clinical importance of the summary effect, while slightly fewer (31) examined the statistical significance of the summary effect (Table 1). Most participants examining the statistical significance of the summary effect also examined its clinical importance (28 participants, 37%).
Participants were asked about adjustment for multiple testing when a meta-analysis is updated with new studies. Twenty-two of the 68 participants (32%) indicated that adjustment for multiple testing is not required for a repeatedly updated meta-analysis, while 18 participants (27%) reported that such an adjustment is required. The rest (28 participants, 41%) either did not respond or indicated that they did not know. Participants were also asked how they interpret evidence from multiple outcomes when deciding between two treatments. Among the 68 participants, 25 (37%) reported involving stakeholders in deciding which outcomes are more important, while 22 participants (32%) used methods described in the recommendations of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group.
How best to use the existing evidence to answer the research question?
The 68 participants who had experience in evidence synthesis were directed to answer questions regarding the second part of the conditional trial design: how to use the existing evidence to answer the research question (Fig. 1).
Asked whether they prefer network meta-analysis as an evidence synthesis method to pairwise meta-analysis, participants indicated a comparatively low preference for network meta-analysis. Among the 68 participants, 15 (22%) preferred network to pairwise meta-analysis. A total of 25 participants (37%) indicated that network meta-analysis should be considered when there are either no or very few direct studies (Table 1). Eight participants suggested other approaches as indicated by two of their responses: “I would look at both direct and indirect analysis” and “I see the evaluation as one process and don’t want to disregard one versus the other”.
How to use the existing evidence to plan future research?
Among the total of 76 participants, 43 researchers experienced in clinical trial design (57%) were directed to questions related to the third part of the conditional trial design, which is relevant to practices and opinions about using meta-analysis to inform aspects of the design of future clinical trials (Fig. 1).
Practices of using meta-analysis in the design of clinical trials
Participants rated their use of evidence synthesis in the design of clinical trials on a visual rating scale from 0 (never) to 1 (always). The median value was 0.44 (25th percentile 0.22, 75th percentile 0.67). A total of 29 participants (67%) reported using meta-analyses of previous trials in the determination of other parameters involved in sample size calculations (such as standard deviations, baseline risk, and so on), 25 participants (58%) considered meta-analyses in defining alternative effect sizes in power calculations, and 22 (51%) used meta-analyses in the determination of health outcomes to be monitored (Table 1).
When asked about the best among five approaches to resolve uncertainty regarding the best pharmaceutical treatment for a given condition, a three-arm randomized trial comparing the two most promising interventions and standard treatment, and a network meta-analysis comparing all treatment alternatives were the most popular options (rating medians 2.0 and 1.5, respectively). The least favorable research design was a large international registry (rating median 5.0, Table 1). The rating frequencies for each research proposal are given in Additional file 3.
Acceptability of sample size calculations based on an existing meta-analysis
Key free text quotes from responses
From respondents who answered “No” or “Possibly” to the question “Would you be willing to consider a conditional trial design next time you plan a trial?”
• “Lots of examples where a large definitive trial has contradicted the results of a meta-analysis of smaller trials”
• “Any meta-analysis is observational research”
• “Because when you finalize the trial, the meta-analysis will be outdated. Your study should be a standalone trial”
• “Not enough faith in the homogeneity/comparability of the studies”
• “The assumptions behind a meta-analysis (homogeneity, no publication bias), are very rarely plausible, so a typical RCT has to offer a chance of providing a definitive conclusion on its own”
• “Clinical trials are perceived as independent pieces of evidence. There would need to be a major shift by regulators, HTA bodies and physicians for companies to design trials in the context of meta-analyses”
• “Usually the context in which I work is of trials supporting applications for a license. Regulators require each study to be ‘significant’ independently of others”
• “Wonder whether it would be convincing to authorities”
• “In the regulatory context, meta-analyses are typically NOT considered for approval decisions, at least not directly. (Typically). I would answer differently for publicly funded studies. A newish suggestion—most of our trials are phase II/III, where things are a little different”
From respondents who replied “Other” to the question “What do you think is the biggest barrier towards adopting conditional trial design in designing trials?”
• “Although trials can be planned to add just enough power to an existing meta-analysis, there is a high risk that such planning fails because of wrong assumptions, differences in study execution, or other reasons”
• “It is flawed and too risky (why give an experimental drug in an underpowered study)”
• “Guidelines from important regulatory and health economic agencies”
• “Lack of dissemination”
• “Skepticism as trials should be powered to stand alone, I would think. All other studies in the MA may not be comparable or of high quality”
• “It’s not necessarily logical”
• “I don’t believe this is an appropriate way to design trials”
Relation between level of experience with clinical trials/evidence synthesis and acceptability of network meta-analysis and conditional trial design
Experienced researchers in evidence synthesis were more likely to have confidence in network meta-analysis. Among the 27 participants with experience in evidence synthesis who indicated that they either can perform network meta-analysis themselves or have been involved in systematic reviews with network meta-analysis, 11 (41%) responded that, in general, network meta-analysis is preferable to pairwise meta-analysis. Among the 41 participants with little or no experience with network meta-analysis, only four (10%) said that network meta-analysis is to be preferred (Pearson’s Chi-squared test P value 0.003, Additional file 3).
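As a consistency check on the reported association (our own sketch; the counts are taken from the text), a pure-Python Pearson test on the implied 2×2 table (11 of 27 experienced vs 4 of 41 less experienced participants preferring network meta-analysis) reproduces the reported P value:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pearson_chi2_2x2(table):
    """Pearson's Chi-squared statistic and p-value for a 2x2 table
    (1 degree of freedom, no continuity correction)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = sum(
        (table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
        for i in range(2) for j in range(2)
    )
    # chi-square(1) upper tail equals the two-sided normal tail at sqrt(chi2)
    p = 2.0 * (1.0 - phi(sqrt(chi2)))
    return chi2, p

# Rows: experienced vs less experienced; columns: prefer NMA vs not
chi2, p = pearson_chi2_2x2([[11, 16], [4, 37]])  # p rounds to 0.003
```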
The willingness to consider the use of an existing meta-analysis to inform sample size calculations of a new study did not materially vary according to researchers’ experience in clinical trials or evidence synthesis (Additional file 3).
In this survey of methodologists based in Europe, participants reported low to moderate use of evidence synthesis methods in the design of future trials. Evidence synthesis is used in the design of around half of trials. The information most used relates to the parameters required for sample size calculations and to outcome definitions. Our results broadly agree with those of Clayton et al., who found that 50% of investigators who responded to their survey had used meta-analysis to inform a future trial. The scope of the survey by Clayton et al. was similar to ours, but it did not focus on issues pertaining to interpreting evidence synthesis and the acceptability of network meta-analysis.
Empirical evidence has shown lower uptake of systematic reviews in planning new trials than the findings in the current survey and the survey by Clayton et al. [11–19]. Clarke et al. assessed reports of randomized trials published in Annals of Internal Medicine, BMJ, JAMA, The Lancet, and the New England Journal of Medicine in the month of May in the years 1997, 2001, 2005, and 2009. According to their findings, only a small proportion of trial reports attempted to integrate their findings with existing evidence [11, 12, 15, 16]. Out of 446 trial protocols submitted to the UK ethics committees in 2009, only four (less than 1%) used a meta-analysis and 92 (21%) used previous studies to define the treatment difference sought. A review of 1523 trials published from 1963 to 2004 showed that fewer than 25% of relevant previous randomized controlled trials were cited by subsequent randomized controlled trials.
Funders of clinical trials often emphasize the importance of using existing evidence in grant applications [14, 22, 23]. Thirty-seven (77%) out of 48 trials funded by the National Institute for Health Research (NIHR) Health Technology Assessment program between 2006 and 2008 referenced a systematic review in the funding application; the percentage was 100% for trials funded in 2013. The interest of funders in research synthesis dates back to the 1990s, when several organizations responsible for funding clinical research started to require systematic reviews of existing research as a prerequisite for considering funding for new trials. But as Clayton et al. point out, it is not clear to what extent and in which way funders expect evidence synthesis to be used. Nasser et al. searched the websites of 11 research funding organizations and, while four of them require systematic reviews to show that new clinical trials are needed, only the NIHR requires reference to relevant systematic reviews. We did not specifically survey bodies that fund clinical trials (such as the NIHR or the Swiss National Science Foundation). A survey of funding agencies, along with a review of their guidance on how trialists should use existing evidence when designing and implementing new trials, would be an important step forward.
Our study has some limitations that render the generalizability of its results questionable. First, the sample size of our survey was 76 participants, which is relatively small; a larger sample would have allowed more precise estimates for the outcomes of interest. Furthermore, using referral or snowball sampling means that we could not estimate the response rate for our survey. Second, we cannot exclude the possibility that the characteristics of participants systematically differed from those who either did not receive the questionnaire or received it but decided not to participate. Such nonresponse selection bias seems likely considering that a relatively high proportion of participants knew about calculating sample size based on a meta-analysis (60%), despite the fact that the methods have only recently been developed [2, 8, 9] and, in our experience, are not widely used. This indicates that the participants were probably a well-informed sample of methodologists who were up to date with recent developments. Moreover, the questionnaire has not been independently validated, and some terms used might have different meanings for researchers with different backgrounds. A follow-up survey on a larger scale, including representatives from funding agencies, could provide more information on the potential of using existing evidence in the design of new studies.
We clarified in the survey that the term “clinical trials” should mean “randomized, post-marketing (e.g., phase IV) controlled clinical trials”. This clarification was made because usually little evidence is available before licensing, which constitutes an important barrier to using the proposed method. However, it might be that trials examining licensed treatments are considered phase III because of their size and scope. Clearer guidance on how comparative effectiveness data can and should be used in the entire process of approval and adoption of new drugs would be of interest [25, 26].
This survey indicates a lack of consensus in aspects related to the interpretation of meta-analysis results. None of the answers to the question regarding interpreting evidence from multiple outcomes was selected by more than about a third of participants. Participants also did not agree on the use of adjustment for multiple testing when a meta-analysis is updated. This lack of consensus is in line with the lack of agreement about using sequential methods in the literature. Opinions range from regularly using sequential meta-analysis [27, 28], to adjusting for repeated updates in specific cases [29–31], to never correcting summary treatment effects using sequential methods. Concerns about the reliability of meta-analysis affect the acceptability of the conditional trial design; we think, however, that such concerns are likely to diminish over time as meta-analysis is increasingly used for decision-making and guideline development. The second main pillar of skepticism towards the conditional trial design is the perception of trials as independent experiments. It will be interesting to see whether this view will be challenged in the light of increasing awareness of research waste.
Resources for health research are limited and thus an economical and ethical allocation of funds for clinical trials requires minimizing human and monetary costs and risks. While certain research funders, clinical trial planners, and journal editors acknowledge the need to consult the existing evidence base before conducting a new trial, in practice these considerations are not concrete and explicit and quantitative methods are rarely used. We propose that clinical trialists explicitly report (e.g., in published protocols) how they will compute the sample size of their planned trials including the way in which they will use existing evidence, for example by defining the alternative effect size, the intervention group risk, or by computing the conditional power of the planned trial. Further research on ways in which evidence synthesis can be efficiently used in the planning of new trials could use, and possibly combine, considerations from value of information analysis, adaptive design methodology, and formal decision analytic methods. Funding agencies and journal editors could contribute to preventing waste by establishing concrete policies on the use of existing evidence when assessing requests for funding or publishing trials.
The authors thank C. Ritter for his valuable editorial assistance and the three reviewers for their helpful comments that greatly improved this paper.
AN is supported by the Swiss National Science Foundation (Grant No. 179158). ME was supported by a special project funding (Grant No. 174281) from the Swiss National Science Foundation. GS received funding from a Horizon 2020 Marie-Curie Individual Fellowship (Grant no. 703254). The sponsors had no role in the design, analysis, or reporting of this study.
GS, AN, and ME conceived the study and designed the survey questionnaire. ST critically revised the survey questionnaire. GS contacted the survey participants. AN designed the survey in Survey Monkey, performed the main analyses, and wrote the first draft of the paper. All authors critically revised the manuscript, interpreted the results, and performed a critical review of the manuscript for intellectual content. GS, AN, and ME produced the final version of the submitted article and all co-authors approved it.
Ethics approval and consent to participate
Consent for publication
The authors declare that they have no competing interests.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Robinson KA, Saldanha IJ, McKoy NA. Frameworks for determining research gaps during systematic reviews. Rockville: Agency for Healthcare Research and Quality (US); 2011.
- Roloff V, Higgins JPT, Sutton AJ. Planning future studies based on the conditional power of a meta-analysis. Stat Med. 2013;32(1):11–24.
- Nikolakopoulou A, Mavridis D, Salanti G. Using conditional power of network meta-analysis (NMA) to inform the design of future clinical trials. Biom J. 2014;56(6):973–90.
- Sutton AJ, Cooper NJ, Jones DR, Lambert PC, Thompson JR, Abrams KR. Evidence-based sample size calculations based upon updated meta-analysis. Stat Med. 2007;26(12):2479–500.
- Langan D, Higgins JPT, Gregory W, Sutton AJ. Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis. J Clin Epidemiol. 2012;65(5):511–9.
- Ferreira ML, Herbert RD, Crowther MJ, Verhagen A, Sutton AJ. When is a further clinical trial justified? BMJ. 2012;345:e5913.
- Kanters S, Ford N, Druyts E, Thorlund K, Mills EJ, Bansback N. Use of network meta-analysis in clinical guidelines. Bull World Health Organ. 2016;94(10):782–4.
- Nikolakopoulou A, Mavridis D, Salanti G. Planning future studies based on the precision of network meta-analysis results. Stat Med. 2016;35(7):978–1000.
- Salanti G, et al. Planning a future randomized clinical trial based on a network of relevant past trials. Trials. 2018;19(1):365. https://doi.org/10.1186/s13063-018-2740-2.
- Clayton GL, et al. The INVEST project: investigating the use of evidence synthesis in the design and analysis of clinical trials. Trials. 2017;18:1.
- Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med. 2007;100(4):187–90.
- Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376(9734):20–1.
- Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2(3):218–29; discussion 229–32.
- Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25(1):12–37.
- Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA. 2002;287(21):2799–801.
- Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998;280(3):280–2.
- Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clin Trials. 2005;2(3):260–4.
- Chalmers I, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.
- Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13(1):50.
- Clark T, Berger U, Mansmann U. Sample size determinations in original research protocols for randomised clinical trials submitted to UK research ethics committees: review. BMJ. 2013;346:f1135.
- Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154(1):50–5.
- Nasser M, et al. What are funders doing to minimise waste in research? Lancet. 2017;389(10073):1006–7.
- Clark T, Davies H, Mansmann U. Five questions that need answering when considering the design of clinical trials. Trials. 2014;15:286.
- Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol. 2015;15:1.
- Didden E-M, et al. Prediction of real-world drug effectiveness prelaunch: case study in rheumatoid arthritis. Med Decis Mak. 2018;38(6):719–29.
- Egger M, Moons KGM, Fletcher C, GetReal Workpackage 4. GetReal: from efficacy in clinical trials to relative effectiveness in the real world. Res Synth Methods. 2016;7(3):278–81.
- Brok J, Thorlund K, Wetterslev J, Gluud C. Apparently conclusive meta-analyses may be inconclusive—trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses. Int J Epidemiol. 2009;38(1):287–98.
- Thorlund K, et al. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses? Int J Epidemiol. 2009;38(1):276–86.
- Higgins JPT, Whitehead A, Simmonds M. Sequential methods for random-effects meta-analysis. Stat Med. 2011;30(9):903–21.
- Nikolakopoulou A, Mavridis D, Egger M, Salanti G. Continuously updated network meta-analysis and statistical monitoring for timely decision-making. Stat Methods Med Res. 2016;27(5):1312–30. https://doi.org/10.1177/0962280216659896.
- Simmonds M, Salanti G, McKenzie J, Elliott J, Living Systematic Review Network. Living systematic reviews: 3. Statistical methods for updating meta-analyses. J Clin Epidemiol. 2017;91:38–46.
- Cochrane Methods 2012. (2012). https://doi.org/10.1002/14651858.CD201201.