
Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists’ views

Abstract

Background

Evidence from previous studies is often used relatively informally in the design of clinical trials: for example, a systematic review to indicate whether a gap in the current evidence base justifies a new trial. External evidence can be used more formally in both trial design and analysis, by explicitly incorporating a synthesis of it in a Bayesian framework. However, it is unclear how common this is in practice or the extent to which it is considered controversial. In this qualitative study, we explored trialists’ attitudes towards, and experiences of, incorporating synthesised external evidence through the Bayesian design or analysis of a trial.

Methods

Semi-structured interviews were conducted with 16 trialists: 13 statisticians and three clinicians. Participants were recruited across several universities and trials units in the United Kingdom using snowball and purposive sampling. Data were analysed using thematic analysis and techniques of constant comparison.

Results

Trialists used existing evidence in many ways in trial design, for example, to justify a gap in the evidence base and inform parameters in sample size calculations. However, no one in our sample reported using such evidence in a Bayesian framework. Participants tended to equate Bayesian analysis with the incorporation of prior information on the intervention effect and were less aware of the potential to incorporate data on other parameters. When introduced to the concepts, many trialists felt they could be making more use of existing data to inform the design and analysis of a trial in particular scenarios. For example, some felt existing data could be used more formally to inform background adverse event rates, rather than relying on clinical opinion as to whether there are potential safety concerns. However, several barriers to implementing these methods in practice were identified, including concerns about the relevance of external data, acceptability of Bayesian methods, lack of confidence in Bayesian methods and software, and practical issues, such as difficulties accessing relevant data.

Conclusions

Despite trialists recognising that more formal use of external evidence could be advantageous over current approaches in some areas and useful as sensitivity analyses, there are still barriers to such use in practice.


Background

The importance of using existing evidence to inform the design and analysis of a randomised controlled trial (RCT) is increasingly recognised by trialists and funders alike [1]. Intuitively, new research should learn from previous, related research and avoid unnecessary repetition and research waste [2, 3]. Evidence syntheses now play an important role in this process [4]. Results from systematic reviews are often used informally in planning new trials, for example, to indicate whether a gap in the current evidence base justifies a new trial [5, 6].

A more formal approach to using existing evidence is to use a Bayesian statistical framework in which previous evidence about a parameter is used to derive a prior distribution [7]. Parameters that might be informed by a synthesis of previous evidence include: the intervention effect, either at the design stage only (in power or sample size calculations [8,9,10,11]) or at the analysis stage [12]; the control group event rate, particularly for rare outcomes [13, 14]; and the intra-class correlation coefficient in a cluster RCT [15]. Furthermore, incorporating external evidence from ‘meta-epidemiological’ studies about the extent of bias typically associated with potential methodological limitations in the new trial, e.g. infeasibility of blinding the patient or personnel [16, 17], can produce a bias-adjusted treatment effect estimate [18], allowing the analyst to assess the robustness of their findings [19]. Although studies suggest evidence synthesis is increasingly used to inform trial design, the extent to which Bayesian approaches are used, and opinions on their use, are unclear [19, 20]. In a recent survey of attendees of the International Clinical Trials Methodology Conference, many responders indicated that they felt increased use of external evidence in trial design and analysis would be desirable [20].
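To make this concrete, below is a minimal illustrative sketch (our own construction, not taken from the studies cited above, with entirely hypothetical numbers) of the simplest version of this idea: external control-arm data are pooled into a conjugate Beta prior for an adverse event rate, which is then updated with the new trial’s control-arm data.

```python
# Minimal sketch: deriving a prior for a control-group adverse event rate from
# external evidence and updating it with new trial data. All numbers are
# hypothetical; naive pooling is used for simplicity, although a meta-analysis
# allowing for between-study heterogeneity would usually be preferred.
from scipy import stats

# Hypothetical external evidence: control arms of three earlier trials.
external_events = [12, 8, 15]
external_patients = [400, 310, 500]

# Pool into a conjugate Beta prior: "prior events" and "prior non-events".
a_prior = sum(external_events)
b_prior = sum(external_patients) - sum(external_events)

# Hypothetical new trial control arm: 4 events in 150 patients.
events, n = 4, 150

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
posterior = stats.beta(a_prior + events, b_prior + n - events)

print(f"Prior mean event rate:     {a_prior / (a_prior + b_prior):.3f}")
print(f"Posterior mean event rate: {posterior.mean():.3f}")
print(f"95% credible interval:     ({posterior.ppf(0.025):.3f}, "
      f"{posterior.ppf(0.975):.3f})")
```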

We undertook a qualitative study to explore trialists’ views on, and any experiences with, incorporation of external evidence through a Bayesian statistical approach. We focused on the use of a synthesis of relevant evidence on particular parameters, with reference to examples which we described, and identification of any barriers to the use of these approaches. We explored which types of external evidence were considered most potentially relevant and useful, and the likely acceptability of such use in practice.

Methods

Ethical approval was obtained from the University of Bristol on 27 April 2017 (Reference number 48101). We report our study according to the Consolidated Criteria for Reporting Qualitative Research (COREQ) [21] (summary table in Table S1).

Recruitment and sampling

We aimed to sample a range of individuals with experience of working in trials, from trials units and universities across multiple locations in England. The following positions were targeted to ensure a diverse sample: methods leads, lead trial statisticians, trialists writing grant applications and leads of NIHR-funded trials. Both clinicians and statisticians were included to capture a range of perspectives. Individuals involved in developing evidence synthesis methods were excluded, as we felt these methodologists may tend to be more supportive of using advanced methods than most trialists [20]. We did not require participants to have experience with, or any knowledge of, Bayesian analysis.

A key informant sampling approach was initially adopted [22], whereby an initial list of potential participants was drawn up from individuals known to the study team (GLC, HEJ, JPTH). The lead researcher (GLC) contacted these individuals by email to explain the study purpose, providing a participant information sheet (PIS), and to ask whether they would be willing to take part in an interview. We did not specifically refer to Bayesian analysis in the PIS. Instead, the PIS explained that we wanted to explore ‘trialists’ views and experiences of analysing trials in the context of the wider evidence base’.

After the initial list of potential participants had been approached, further participants were identified via snowball sampling [22, 23], whereby interviewees suggested potential contacts, supplemented by purposive sampling [24, 25] to ensure we sampled from our entire intended population, for example, including both junior roles, such as trial statisticians, and more senior roles [26]. Recruitment continued until theoretical saturation (the point at which no new themes emerge from further data collection) [27] and maximum variation [28, 29] were judged to have been reached.

Data collection

Interviews were semi-structured to ensure similar areas were covered in each interview, with sufficient flexibility to allow new issues of importance to emerge [30]. There were separate topic guides for clinicians and statisticians. These were very similar, with some questions rephrased for clinicians to focus more on the conceptual ideas of using previous evidence in different scenarios. The topic guides were initially developed with suggestions from all members of the study team and were iteratively modified in light of emerging findings. An example of a topic guide is shown in Figure S1. We began the interviews by trying to elicit which statistical methods trialists had used. We then explored how previous evidence was considered and/or used when designing and analysing a trial. We further explored participants’ views on, and any experiences with, the formal incorporation of existing data via Bayesian approaches. The basic concepts of a Bayesian approach were first explained to participants who were not familiar with them.

The latter part of each interview was used to examine three hypothetical scenarios in which evidence syntheses could be incorporated into a trial. We focused on the use of evidence syntheses to inform the following parameters: (1) the intervention effect, (2) potential bias associated with unavoidable trial limitations and (3) ‘nuisance’ parameters such as baseline rates of adverse events. We also explored the potential barriers to implementing these methods in practice. Interviews were recorded using an encrypted audio recorder.
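To fix ideas for scenario (2), the kind of bias-adjustment model we described to participants can be sketched as follows (a deliberately simplified version in the spirit of Welton et al. [18]; the notation is ours):

$$y \sim N(\theta + b,\ \sigma^2), \qquad b \sim N(\mu_b,\ \kappa^2),$$

where $y$ is the trial’s estimated intervention effect with standard error $\sigma$, $\theta$ is the bias-adjusted effect of interest, and $(\mu_b, \kappa^2)$ summarise meta-epidemiological evidence about the bias associated with the limitation in question (e.g. inability to blind outcome assessors). The bias-adjusted estimate is then $\hat{\theta} = y - \mu_b$ with variance $\sigma^2 + \kappa^2$: adjustment shifts the estimate and widens its uncertainty.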

Data analysis

Interview recordings were transcribed verbatim in full to enable a comprehensive analysis. Transcripts were analysed thematically and inductively by GLC (the lead investigator, who has a background in statistics and experience working as a trial statistician), under the guidance of DE (an experienced qualitative researcher), using techniques of constant comparison, whereby similarities and differences between interviewees were explored [31]. Coding was conducted using the qualitative data analysis software NVivo (Version 11). Codes within transcripts were analytically summarised such that each code could be interpreted on its own [31], collated to explore any emerging patterns and organised into themes. Emerging themes were compared with other codes across the dataset to identify any shared or disparate views amongst particular subgroups [28], such as methods leads, lead statisticians or trialists within the same unit [32].

The first three transcripts were double coded by DE, and a further three transcripts were double coded by a member of the study team with expertise in statistics and evidence synthesis (HEJ). The overall meaning and interpretation of codes were found to be similar, and minor disparities were discussed until there was a consensus. After analysing the first 13 interviews, we felt that maximum variation had been reached [28]. We conducted a further three interviews to check that no new codes emerged directly relating to the key findings.

Results

Participants

Of those individuals approached, only one declined. Amongst the 16 interviewees, three had a clinical background (two of whom were Chief Investigators), while 13 had a statistics background. Four of the 16 (25%) had more than 10 years of experience working in trials. Interviews lasted a mean of 54 minutes (range 37–79). Table 1 provides participant and employment role-related characteristics.

Table 1 Participant and employment role-related characteristics

Analysis

We report our findings in four themes, supported by quotations.

Theme 1: External evidence is used informally in trial design but not in analysis

Participants across all trials units were using evidence syntheses informally in a number of ways to inform trial design, for example, to justify the trial and to inform the choice of outcomes or parameter values for sample size calculations.

Senior stat, P12: "I suppose it’s something that people do informally but not in a structured Bayesian [way]."

Senior stat, P1: "When I was determining a minimal clinically important difference I had, you have various methods to determine a minimal and clinically important difference. I basically did a meta-analysis of all of those results and came up with a value and that’s the value I reported."

Participants reported that the sourcing of previous evidence was often instigated by the clinician, who would share a published paper, usually a systematic review, with the study team. This was typically used to support the need for the trial and to demonstrate equipoise in funding applications.

Ch inv, P5: "Obviously, there’s the evidence of equipoise. There’s the evidence of the knowledge gap."

Ch inv, P3: "And unless you can convince a funder or ethics committee of equipoise to two treatment arms then you won’t get, you can’t do the study. There’s got to be some sort of uncertainty."

All sixteen trialists reported that they consistently used ‘standard’ and ‘simpler’ statistical models, such as logistic, linear and Cox regression, or mixed effects models for repeated measures data, that did not incorporate external evidence through a Bayesian analysis.

Methods lead, P10: "It’s generally in a way just the simplest technique that will get the job done and not overcomplicating it. Calculations get confused enough as it is!"

Some participants indicated that they had used previous studies informally to inform the choice of statistical methods for their analysis.

Methods lead, P4: "I don’t think I’ve ever used previous evidence in the analysis stage other than if I was looking it up before I did the analysis to inform what analysis I might do, but not actually [using previous evidence]."

Participants indicated that any external information on adverse event rates was used only descriptively.

Methods lead, P10: "I rely on the DMC [Data monitoring commitee] quite a lot basically. I don’t think we’ve got good methods for looking at adverse event rates really. It’s often just listings or tabulations."

Senior stat, P1: "I think a lot of the adverse events that I’ve reported, we then compare them to cohort data or kind of a population level data rather than trial specific."

None of our participants had used external evidence to adjust for potential biases in their numerical results. Trialists indicated they would simply describe limitations in their trial design, e.g. inability to blind outcome assessors, in their discussion.

Senior stat, P1: "Yes, I suppose it can do [bias the results] but I’ve never been particularly worried about it.…I haven’t done it [bias adjustment]; not thought about it or even something we discuss as being potentially biased; just see it as a limitation."

Theme 2: Concerns regarding the acceptance of Bayesian methods in practice

Some senior statisticians, particularly methods leads, voiced concerns that trials using Bayesian methods might be less likely to be published or funded.

Methods lead, P4: "Whether it would be accepted by decision makers. I guess if you’re doing anything that’s not the norm, you’d just be a bit scared, even getting it published. Reviewers could be like ‘what on earth have you done? I’ve worked in trials all my life and I’ve never done this."

Some also felt that Chief Investigators, who ultimately sign off the analysis plan, would not understand Bayesian methods and, moreover, would not encourage their use.

Senior stat, P11: "I think actually clinicians and things are more familiar with the frequentist approach rather than Bayesian, and actually it can be more difficult when you say, ‘I’ve used Bayesian methods,’ and they think, ‘Oh, what have you done?’"

There was also a reluctance to change current methods.

Senior stat, P1: "But I think you’d have to do a lot, change a lot of peoples mind set to be able to make them do this type of analysis compared with the analysis we already do."

All 16 participants also indicated that they felt it was difficult to trust external evidence and expressed concerns that incorporating it in their analysis could bias their own trial results. For example, participants were concerned about likely differences in the population between the external evidence and their own trial.

Senior stat, P11: "I am always a bit uncertain with meta-analysis about how you can group together different trials, because they are different trials. They don’t use the same patient groups and there are different intricacies in there. I suppose it depends on the call to the evidence and if there were reasons why that cohort were different, or the outcomes were different, or the intervention was different."

Theme 3: Practical challenges of use

Throughout the interviews, it was clear there were practical challenges that participants felt they (and/or the wider trials unit) would face if they wanted to use external evidence formally in practice. One of the most common issues concerned logistics: the administrative burden associated with accessing external data and the corresponding need to protect the anonymity of research participants.

Trial stat, P8: "I suppose if there was, if there was consistency in the way the studies were reported and there was a way, a simple way of collecting all of the high-quality evidence together very quickly, then that would obviously be a big help but yes, I suppose that’s a bit of a pipe dream really."

Methods lead, P10: "If it’s publicly funded you need to make the data available and that seems reasonable. But there’s still always an administrative exercise in getting through approvals and getting that, and for somebody to create a dataset that can be shared without risking identifiable data."

Most participants thought a systematic review was the most obvious source of external data. However, most indicated that their trials units did not have direct access to a systematic review team, and they viewed systematic reviewers as having a different skill set from their own.

Ch Inv, P5: "One of the problems is there’s probably a shortage of systematic review capacity. So, finding systematic reviewers is really tough actually."

More senior trial members identified concerns regarding how much extra time would be needed to implement Bayesian methods and the implicit costs associated with this.

Ch Inv, P3: "I’m trying to get funding for a study now to do this comparison, I can’t easily spend loads of money having a statistician spending ages trying to make a brilliantly efficient trial design..."

Participants frequently suggested that, for these methods to be used in practice, there would need to be guidelines and requirements from funders.

Trial stat, P8: "So, I think that would be a helpful if there was, I mean certainly if there was some sort of guidance that had been produced elsewhere."

Methods lead, P10: "I think that’s partly about real world examples that you can look at and see how somebody did it and then it’s also about software and knowing how to implement it even if you wanted to."

Many statisticians also did not feel confident in using Bayesian methods or Bayesian statistical software. They expressed concerns about the time and financial pressures associated with having to learn these techniques.

Senior stat, P11: "I personally don’t go anywhere near them [Bayesian methods]. I think I did do a course in Bayesian stuff, but I just don’t think I work that way and I don’t feel comfortable using Bayesian methodology, so I personally would shy away from it."

Trial stat, P9: "It [Bayesian analysis] was a black-box moment of it went into the system and came out and I didn’t really know what had gone on in between [laughs]. Very bad statistician!"

Methods lead, P4: "I think a lot of it is accessibility of the software because Stata it’s just very straightforward, WinBUGS it’s not. So, I think that’s a massive hurdle. If you could do it in Stata people would probably do it."

Theme 4: Perceived advantages of making use of existing evidence

Despite the barriers described in themes 2 and 3, many of the trialists expressed enthusiasm for the concept of making more use of existing data, in particular as secondary or sensitivity analyses.

Senior stat, P16: "OK, as a statistician, you’re always taught the more data you have, the better. The more information you have the better. So why – if you’ve got the information there – would you not use it?"

Senior stat, P1: "… I think it would take a lot, as a said before, a lot for people to change the specific analysis they were going to do. So I think this would be a subsequent or secondary analysis that people would do but it would be interesting."

Many thought that making more use of existing data would be advantageous, as a lot of time and money is invested in trials and it is important not to waste this investment.

Senior stat, P1: "We don’t want to do a trial that wastes a) time and b) money, so if we had existing evidence which would cut down time and money then I think we should do it to start with."

Senior stat, P11: "I guess, because you do collect a heck of a lot of data for each trial and obviously not everyone has registry databases like ourselves, I don’t know if there is some way that there will always be like an evolving cycle of data. …. I think I’ve heard it talked about the trial in a certain area you always collect certain variables and then those variables could be uploaded to a dataset and then it actually creates a big one. Everyone’s trial data gets compiled together and then you do have a big database that you could then use to inform sample size calculations and other things like that."

We observed that when trialists thought of Bayesian methods, they were generally thinking only about prior information on the treatment effect and appeared unaware of available methods to make use of data on other parameters. In discussing the potential to use external evidence to inform other types of parameters (see the ‘Methods’ section), several trialists expressed interest in the idea of making better use of existing safety data so that rare events could be picked up faster. When asked about their views on using data from a similar population to predict the expected adverse event rate in the control arm, many felt this could be advantageous. This was also similar to what some trialists were already doing informally.

Ch Inv, P3: "So, we do make use of it [external evidence on adverse event rates] but obviously in a suboptimal way and I can imagine that doing this kind of approach for adverse events for example. would offer greater safety would allow safety signal to become obvious in my study earlier maybe so therefore better."

The concept of powering a new trial based on its ability to impact an existing meta-analysis [8, 10, 11, 33] was unfamiliar to all participants. After we briefly explained the concept, many thought it was an attractive idea that could make a trial more efficient; a numerical sketch follows the quotations below.

Methods lead, P4: "If you’re wanting to change practice, your one trial is not going to change practice. The body of evidence [meta-analysis] is going to change practice."

Senior stat, P1: "You’re not gonna sort of waste time and money showing an effect size in a single trial when you might be able to do it in a combination with existing studies. I think that’s quite sensible, but I guess it’s a case-by-case basis."

Trial stat, P8: "I think it’s quite sensible probably to power it based on, you know, making a change to that if it means that you, you know, you’re gonna recruit less participants."
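To illustrate the idea numerically, the sketch below is our own construction under simplifying assumptions (fixed-effect pooling, a mean-difference outcome with known standard deviation, two equal arms, entirely hypothetical numbers); see Sutton et al. [33] and Roloff et al. [8] for the methods themselves. It searches for the smallest per-arm sample size such that the updated meta-analysis, rather than the new trial alone, is likely to be conclusive.

```python
# Minimal sketch: powering a new trial by its expected impact on an updated
# fixed-effect meta-analysis. Assumptions (ours): mean-difference outcome with
# known SD, two equal arms, existing pooled estimate treated as fixed.
import numpy as np
from scipy.stats import norm

mu0, se0 = 0.15, 0.12    # existing pooled estimate and its standard error
delta = 0.30             # assumed true effect for the new trial
sigma = 1.0              # known outcome standard deviation
alpha, target = 0.05, 0.80
z_crit = norm.ppf(1 - alpha / 2)
w0 = 1 / se0**2          # fixed-effect weight of the existing meta-analysis

def power_updated_ma(n_per_arm):
    """P(updated pooled estimate is significant), given the existing data."""
    w1 = n_per_arm / (2 * sigma**2)     # weight of the new trial
    W = w0 + w1
    mean = (w0 * mu0 + w1 * delta) / W  # expected updated pooled estimate
    sd = np.sqrt(w1) / W                # its sd (only the new trial is random)
    threshold = z_crit / np.sqrt(W)     # significance boundary for pooled mean
    return norm.sf((threshold - mean) / sd)  # effect assumed positive

n = 10
while power_updated_ma(n) < target:
    n += 10
print(f"About {n} participants per arm give {target:.0%} power that the "
      "updated meta-analysis is conclusive")
```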

Discussion

We found that trialists were using existing evidence in many ways, including to justify a gap in the evidence base and to inform parameters in sample size calculations. However, none of the trialists in our sample had experience of explicitly incorporating prior information in either trial design or analysis through the use of Bayesian statistical methods. Our study showed that trialists felt they could be making more use of existing data to inform the design and analysis of a clinical trial in particular scenarios, such as in secondary or sensitivity analyses. We observed that when trialists thought of Bayesian methods, they appeared to be thinking only about the use of prior information on the treatment effect and may be unaware of available methods to make use of data on other parameters.

Several participants found the idea of formal use of evidence on other parameters appealing and thought improvements could be made to current methods. The use of informative priors based on external evidence in sample size calculations, and to assess whether adverse event rates were higher than baseline, was perceived as attractive. Similarly, although trialists did not routinely consider how their trial results would influence a future meta-analysis, many thought this would be useful to investigate when conducting the sample size calculation. In contrast, no participant expressed enthusiasm for the concept of ‘bias adjustment’ based on external evidence.

Participants also identified barriers to implementing these methods in practice. Trialists expressed concerns about the relevance or quality of external data and about how its incorporation could affect their own trial. We also found that trialists did not feel confident in the use or interpretation of Bayesian methods, and they identified practical issues including difficulty accessing relevant data, anonymisation requirements and the extra time these entail.

Our finding that the relevance of prior information is a key concern to trialists is consistent with the results of a recent survey of trial methodologists [20] and has been an area of considerable discussion [34]. In that survey, the greatest barrier to the use of existing evidence was time constraints. Although we did not identify time constraints as an overarching theme, some participants mentioned this issue as one of the practical challenges of using external evidence in trials. A more detailed exploration in our study revealed that the extra time needed to conduct a systematic review was a concern, given that systematic review teams are often not integrated into clinical trial teams. We also found that trialists had difficulty accessing and collating other external data, whether aggregate or individual participant data. Some statisticians noted the time that would be needed to learn new methods and software. Lilford [35] argues that the assumption of equipoise in randomised trials is misleading to the patients invited to participate in a new trial: more often than not, some evidence exists before a trial, for example on similar treatments in the same disease area or on the same treatments in other populations. The findings of our qualitative study are consistent with this: trialists recognised that previous studies gave some indication of the potential intervention effect and that it is the accumulation of evidence that is likely to change practice. However, they did not explicitly incorporate this information into trial design or analysis. Our study also appears consistent with Brocklehurst et al. [36] in the overarching finding that it remains unclear to trialists (including investigators) how external evidence should be considered, and at precisely which stages of a trial.

Given that the statisticians in our sample did not feel confident in the use of, and/or had concerns about the acceptability of, Bayesian methods, more training and specific methodological guidelines on the use of Bayesian approaches in trials may be beneficial. In particular, guidance could raise awareness of Bayesian approaches that incorporate external evidence on parameters other than the treatment effect, given that our participants knew little about options in this area. Tutorial papers, user-written packages in generic rather than specialist software, and provision of example code may increase accessibility. We note that Bayesian approaches are becoming easier to implement in practice via, for example, the development of core R packages [37]. Development of an easily accessible repository of relevant data, ideally individual participant data, would help facilitate the use of external evidence in practice. There have been many calls for such a platform [38].
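As an indication of what such example code might look like, the following minimal sketch is ours (hypothetical numbers, and a normal approximation on the log odds ratio scale rather than any particular package’s interface). It combines an informative prior derived from an external synthesis with a new trial’s estimate:

```python
# Minimal sketch: Bayesian analysis of a trial's log odds ratio with an
# informative prior from an external evidence synthesis, using the
# normal-normal conjugate approximation. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

prior_mean, prior_se = -0.20, 0.25    # prior: pooled log-OR from external synthesis
trial_logor, trial_se = -0.35, 0.30   # likelihood: new trial's estimated log-OR

# Precision-weighted conjugate update.
w_prior, w_trial = 1 / prior_se**2, 1 / trial_se**2
post_mean = (w_prior * prior_mean + w_trial * trial_logor) / (w_prior + w_trial)
post_se = np.sqrt(1 / (w_prior + w_trial))

lo, hi = norm.ppf([0.025, 0.975], loc=post_mean, scale=post_se)
print(f"Posterior OR: {np.exp(post_mean):.2f} "
      f"(95% credible interval {np.exp(lo):.2f} to {np.exp(hi):.2f})")
print(f"P(OR < 1) = {norm.cdf(0, loc=post_mean, scale=post_se):.2f}")
```

Running a conventional analysis alongside (equivalent to a flat prior) would show directly how much the external evidence changes the conclusion, which is the kind of sensitivity analysis participants suggested.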

One limitation of this study is that snowball sampling was used to identify participants after initial key contacts in each group were sampled from colleagues known to the study team. This potentially limits the generalisability of the findings to all trialists. However, as data collection continued, sampling became increasingly purposive, with a view to achieving a sample of maximum variation, to ensure insights were captured from a range of informants operating in different contexts [29]. We succeeded in sampling from each group of our intended population [29, 39, 40], including both statisticians and clinicians in a range of senior and junior positions. We also acknowledge that our sample size of 16 participants may be limited; however, qualitative studies are typically small in order to generate rich and in-depth insights and accounts of participants’ views and experiences [40, 41]. Given that the predominant approach to clinical trial design is within a frequentist paradigm, and that interviewing only researchers already committed to Bayesian methods would not have answered our intended research question, the term ‘Bayesian’ was not used in the participant information sheet. As such, it is possible that we missed people who had a potentially strong view on such methods. However, we felt this was preferable in order to sample a wider range of participants. Given that none of our participants had experience of using Bayesian methods to incorporate prior information, an interesting further study might involve identifying and interviewing trialists who do have such experience. That study might explore how and why these trialists have used Bayesian approaches and how any barriers might have been overcome.

Conclusions

In conclusion, trialists recognise that more formal use of external evidence could be advantageous over current approaches in some areas, particularly to inform parameters other than the treatment effect, for which very limited information may be available from the trial itself, and could be useful in sensitivity analyses. Trialists do, however, note that there are still many barriers to such use in practice. Clear guidance, user-friendly software and access to a repository of relevant data might increase uptake of Bayesian approaches in practice.

Availability of data and materials

N/A

Abbreviations

RCT: Randomised controlled trial; NIHR: National Institute for Health Research; PIS: Participant information sheet

References

  1. Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol. 2015;15(1):108. https://doi.org/10.1186/s12874-015-0102-2.

  2. Clarke M. Doing new research? Don’t forget the old - nobody should do a trial without reviewing what is known. PLoS Med. 2004;1(2):100–2. https://doi.org/10.1371/journal.pmed.0010035.

  3. Goudie AC, Sutton AJ, Jones DR, Donald A. Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials. J Clin Epidemiol. 2010;63(9):983–91. https://doi.org/10.1016/j.jclinepi.2010.01.022.

  4. Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13(1):50. https://doi.org/10.1186/1471-2288-13-50.

  5. Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clin Trials. 2005;2(3):260–4. https://doi.org/10.1191/1740774505cn090oa.

  6. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154(1):50–5. https://doi.org/10.7326/0003-4819-154-1-201101040-00007.

  7. Berry DA. Bayesian clinical trials. Nat Rev Drug Discov. 2006;5(1):27–36. https://doi.org/10.1038/nrd1927.

  8. Roloff V, Higgins JPT, Sutton AJ. Planning future studies based on the conditional power of a meta-analysis. Stat Med. 2013;32(1):11–24. https://doi.org/10.1002/sim.5524.

  9. Sutton A, Cooper N, Abrams K. Evidence based sample size calculations for future trials based on results of current meta-analyses. Control Clin Trials. 2003;24:88S.

  10. Spiegelhalter DJ. Incorporating Bayesian ideas into health-care evaluation. Stat Sci. 2004;19(1):156–74. https://doi.org/10.1214/088342304000000080.

  11. Jones HE, Ades AE, Sutton AJ, Welton NJ. Use of a random effects meta-analysis in the design and analysis of a new clinical trial. Stat Med. 2018;37(30):4665–79.

  12. Tierney JF, Pignon JP, Gueffyier F, Clarke M, Askie L, Vale CL, et al. How individual participant data meta-analyses have influenced trial design, conduct, and analysis. J Clin Epidemiol. 2015;68(11):1325–35. https://doi.org/10.1016/j.jclinepi.2015.05.024.

  13. Peto R, Emberson J, Landray M, Baigent C, Collins R, Clare R, et al. Analyses of cancer data from three Ezetimibe trials. N Engl J Med. 2008;359(13):1357–66. https://doi.org/10.1056/NEJMsa0806603.

  14. Hampson LV, Whitehead J, Eleftheriou D, Brogan P. Bayesian methods for the design and interpretation of clinical trials in very rare diseases. Stat Med. 2014;33(24):4186–201. https://doi.org/10.1002/sim.6225.

  15. Turner RA, Omar RZ, Thompson SG. Constructing intervals for the intracluster correlation coefficient using Bayesian modelling, and application in cluster randomized trials. Stat Med. 2006;25(9):1443–56. https://doi.org/10.1002/sim.2304.

  16. Page MJ, Higgins JPT, Clayton G, Sterne JAC, Hrobjartsson A, Savovic J. Empirical evidence of study design biases in randomized trials: systematic review of meta-epidemiological studies. PLoS One. 2016;11(7):e0159267.

  17. Savovic J, Jones HE, Altman DG, Harris RJ, Juni P, Pildal J, et al. Influence of reported study design characteristics on intervention effect estimates from randomised controlled trials: combined analysis of meta-epidemiological studies. Health Technol Assess. 2012;16(35).

  18. Welton NJ, Ades AE, Carlin JB, Altman DG, Sterne JAC. Models for potentially biased evidence in meta-analysis using empirically based priors. J R Stat Soc Ser A. 2009;172(1):119–36. https://doi.org/10.1111/j.1467-985X.2008.00548.x.

  19. Moye LA. Bayesians in clinical trials: asleep at the switch. Stat Med. 2008;27(4):469–82. https://doi.org/10.1002/sim.2928.

  20. Clayton GL, Smith IL, Higgins JPT, Mihaylova B, Thorpe B, Cicero R, et al. The INVEST project: investigating the use of evidence synthesis in the design and analysis of clinical trials. Trials. 2017;18(1):219. https://doi.org/10.1186/s13063-017-1955-y.

  21. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042.

  22. Dempster M. A research guide for health and clinical psychology. Basingstoke: Palgrave Macmillan; 2011. https://doi.org/10.1007/978-0-230-34477-8.

  23. Goodman LA. Snowball sampling. Ann Math Stat. 1961;32(1):148–70. https://doi.org/10.1214/aoms/1177705148.

  24. Mays N, Pope C. Rigour and qualitative research. Br Med J. 1995;311(6997):109–12. https://doi.org/10.1136/bmj.311.6997.109.

  25. Ritchie J, Lewis J, McNaughton Nicholls C, Ormston R. Qualitative research practice: a guide for social science students and researchers. 2nd ed. Los Angeles: SAGE; 2014.

  26. Braun V, Clarke V. Successful qualitative research: a practical guide for beginners. London: SAGE; 2013.

  27. Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83. https://doi.org/10.1002/nur.4770180211.

  28. Connelly LM, Peltzer JN. Underdeveloped themes in qualitative research: relationship with interviews and analysis. Clin Nurse Spec. 2016;30(1):51–7.

  29. Patton MQ. Two decades of developments in qualitative inquiry: a personal, experiential perspective. Qual Soc Work. 2002;1(3):261–83.

  30. Snowdon C. Qualitative and mixed methods research in trials. Trials. 2015;16(1):558. https://doi.org/10.1186/s13063-015-1084-4.

  31. Clarke V, Braun V. Teaching thematic analysis. Psychologist. 2013;26(2):120–3.

  32. Sandelowski M, Leeman J. Writing usable qualitative health research findings. Qual Health Res. 2012;22(10):1404–13. https://doi.org/10.1177/1049732312450368.

  33. Sutton AJ, Cooper NJ, Jones DR, Lambert PC, Thompson JR, Abrams KR. Evidence-based sample size calculations based upon updated meta-analysis. Stat Med. 2007;26(12):2479–500. https://doi.org/10.1002/sim.2704.

  34. Greenland S. Response: Bayesian perspectives for epidemiological research. Int J Epidemiol. 2006;35(3):777–8. https://doi.org/10.1093/ije/dyl081.

  35. Lilford RJ. Ethics of clinical trials from a Bayesian and decision analytic perspective: whose equipoise is it anyway? Br Med J. 2003;326(7396):980–1. https://doi.org/10.1136/bmj.326.7396.980.

  36. Brocklehurst P, Elbourne D, Alfirevic Z. Role of external evidence in monitoring clinical trials: experience from a perinatal trial. Br Med J. 2000;320(7240):995–8. https://doi.org/10.1136/bmj.320.7240.995.

  37. Gabry J, Simpson D, Vehtari A, Betancourt M, Gelman A. Visualization in Bayesian workflow. J R Stat Soc Ser A. 2019;182(2):389–402. https://doi.org/10.1111/rssa.12378.

  38. Tudur Smith C, Hopkins C, Sydes MR, Woolfall K, Clarke M, Murray G, et al. How should individual participant data (IPD) from publicly funded clinical trials be shared? BMC Med. 2015;13(1):298. https://doi.org/10.1186/s12916-015-0532-z.

  39. Green J, Thorogood N. Qualitative methods for health research. 3rd ed. Los Angeles: SAGE; 2014.

  40. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.

  41. Kuper A, Reeves S, Levinson W. Qualitative research: an introduction to reading and appraising qualitative research. BMJ. 2008;337:a288.


Acknowledgements

We thank the trialists and clinicians who took part in this study and Bristol Transcription Services who transcribed twelve audio recordings.

Funding

GLC was funded by a PhD studentship from the Medical Research Council (MRC) Hubs for Trials Methodology Research. DE was funded by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. HEJ was supported by an MRC Career Development Award in Biostatistics (MR/M014533/1). JPTH is an NIHR senior investigator (NF-SI-0617-10145), was supported by the National Institute for Health Research (NIHR) Applied Research Collaboration West (ARC West) at University Hospitals Bristol and Weston NHS Foundation Trust and the NIHR Bristol Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol, and is a member of the MRC Integrative Epidemiology Unit at the University of Bristol.

The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR or the UK Department of Health and Social Care.

Author information


Contributions

GLC, HEJ, JPTH and DE were responsible for the original study design. GLC and HEJ were responsible for securing the ethics and regulatory approvals. GLC recruited the participants, collected the data and conducted the qualitative analysis of the findings. DE provided qualitative support and training to GLC, helped develop the topic guides and helped with the analysis of the raw data. All authors were responsible for the interpretation of the findings, preparation, reading and final approval of the manuscript. GLC was the principal investigator for the project.

Corresponding author

Correspondence to Gemma L. Clayton.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained from the University of Bristol on 27 April 2017 (reference number 48101).

Consent for publication

N/A

Competing interests

All authors declare that they have no competing interests.


Supplementary Information

Additional file 1: Table S1.

COREQ checklist. Figure S1. Example of topic guide.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Clayton, G.L., Elliott, D., Higgins, J.P.T. et al. Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists’ views. Trials 22, 789 (2021). https://doi.org/10.1186/s13063-021-05759-8

