Incorporating external evidence in trial-based cost-effectiveness analyses: the use of resampling methods
 Mohsen Sadatsafavi^{1, 2, 3},
 Carlo Marra^{2},
 Shawn Aaron^{4} and
 Stirling Bryan^{3, 5}
DOI: 10.1186/1745-6215-15-201
© Sadatsafavi et al.; licensee BioMed Central Ltd. 2014
Received: 25 September 2013
Accepted: 19 May 2014
Published: 3 June 2014
Abstract
Background
Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in an RCT-based CEA is the bootstrap. The objective of the present study was to extend the bootstrap method of RCT-based CEA to incorporate external evidence.
Methods
We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions.
Results
In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between the external evidence and the outcomes of interest, and is generally easier to implement. A drawback of this approach is its potential computational inefficiency compared to parametric Bayesian methods.
Conclusions
The bootstrap method of RCT-based CEA can be extended to incorporate external evidence while preserving its appealing features, such as no requirement for parametric modeling of the cost and effectiveness outcomes.
Keywords
Cost-benefit analysis; Bayes theorem; Clinical trial; Statistics, nonparametric
Background
Randomized controlled trials (RCTs), especially ‘pragmatic’ RCTs that measure the effectiveness of interventions in realistic settings, provide an attractive opportunity to generate information on cost-effectiveness[1]. In the context of such an RCT, many aspects of treatment, from clinical outcomes to adverse events and costs, are measured at the individual level, and these can be used to formulate an efficient policy based on cost-effectiveness principles. A growing number of trials incorporate economic endpoints at the design stage, and there are established guidelines for conducting a cost-effectiveness analysis (CEA) alongside an RCT[2, 3].
The statistic of interest in a CEA is the incremental cost-effectiveness ratio (ICER), which is defined as the difference in cost (∆C) between two competing treatments over the difference in their health outcome (effectiveness) (∆E). With patient-specific cost and health outcomes at hand, estimating the population value of the ICER from an observed sample becomes a classical statistical inference problem. However, given the awkward statistical properties of cost data and of some health outcomes such as quality-adjusted life years (QALYs), and issues around parametric inference on ratio statistics, many investigators choose resampling methods for quantifying the sampling variation around costs, health outcomes, and the ICER[4]. In parallel-arm RCTs, this can be performed by obtaining a bootstrap sample within each arm of the trial and calculating the mean cost and effectiveness within each arm from the bootstrap sample; repeating this step many times provides a random sample from the joint distribution of arm-specific cost and effectiveness outcomes. This sample can then be used to make inference on (for example, calculate a confidence or credible interval for) the ICER[5].
Recently, this framework for evaluating the cost and outcomes of health technologies has received some criticism[6–8]. Specifically, critics argue that decisions on the cost-effectiveness of competing treatments should be based on all the available evidence, not just that obtained from a single RCT[8]. In this context, evidence synthesis is the practice of combining multiple sources of evidence (from other RCTs, expert opinion, and case histories) to inform the treatment decision, a task that is quantitatively performed using Bayes’ rule[9].
A conventional analysis of a clinical trial often involves making inference primarily on the effect size and secondarily on other aspects of treatment such as safety or compliance. These measures are conceptually distinct enough to be analyzed and reported separately, and trialists have a full arsenal of standard statistical methods at their disposal for such analyses. Evidence synthesis is often conducted separately, usually through quantitative meta-analysis, after the results of several studies are available. An economist, on the other hand, does not have the luxury of dissecting RCT results into different components, as cost-effectiveness is a function of all aspects of an intervention. As such, evidence external to the trial on any aspect of treatment has bearing on the results of the CEA. In addition, when an RCT is used as a vehicle for the CEA, the incorporation of external evidence must be part of the analysis. Results of a CEA have direct policy implications, and the economist cannot defer evidence synthesis to any subsequent stage[8].
For trial-based CEAs, if external evidence on cost or effectiveness is available, the investigator can use standard parametric Bayesian methods to combine this information with the trial results[9]. This has been the dominant paradigm in the Bayesian analysis of RCT-based CEAs[10–14]. However, prior information on cost and on typical effectiveness outcomes such as QALYs is rarely available, and when it is, it is often inappropriate to transfer to other settings[15, 16]. This is because such outcomes are, to a large extent, affected by the specific settings of the jurisdiction in which they are measured (such as unit prices for medical resources). On the other hand, evidence on the aspects of the intervention that relate to the pathophysiology of the underlying health condition and the biologic impact of treatment, such as the effect size of treatment or the rate of adverse events, is less affected by specific settings and is therefore more transferable[17]. This puts the investigator in a difficult situation for an RCT-based CEA, as inference is made directly on cost and effectiveness using the observed sample, but evidence is available on some other aspects of treatment. One way to overcome this challenge is to create a parametric model that connects the cost-effectiveness outcomes with the parameters for which external evidence is available, and to use Bayesian analysis, for example through Markov chain Monte Carlo (MCMC) sampling techniques[18]. But such a model must connect several parameters through link functions, regression equations, and error terms. This involves a multitude of parametric assumptions, and there is always the danger of model misspecification[19, 20]. In addition, even with the advent of generic statistical software for Bayesian analysis, implementing such a model and comprehensive model diagnostics is not an easy undertaking.
For an investigator using resampling methods for the CEA who wishes to incorporate external evidence in the analysis, this paradigm shift to parametric modeling can be a challenge.
In this proof-of-concept study, we propose and illustrate simple modifications of the bootstrap approach for RCT-based CEAs that enable Bayesian evidence synthesis. Our proposed method requires a parametric specification of the external evidence while avoiding parametric assumptions on the cost-effectiveness outcomes and their relation with the external evidence. The remainder of the paper is structured as follows: after outlining the context, a Bayesian interpretation of the bootstrap is presented. Next, the theory of incorporating external evidence into such a sampling scheme is explained. A case study featuring a real-world RCT is used to demonstrate the applicability and face validity of the proposed method. A discussion of the various aspects of the new method and its strengths and weaknesses compared to parametric approaches concludes the paper.
Methods
Context
Let θ = {θ_i, θ_e} be the set of parameters to be estimated from the data of an RCT and some external evidence. It consists of two subsets: θ_i, the parameter(s) of interest for which there is no external evidence, and θ_e, some parameters for which external evidence is available. Typically, θ_i includes cost and effectiveness outcomes, and θ_e consists of some biological measures of treatment such as the treatment effect. Let D represent the individual-level data of the current parallel-arm RCT, fully available to the investigator. We assume the population of interest for inference is the same as the population from which D is obtained, a fundamental assumption in any RCT-based CEA.
Bayesian bootstrap
Following Bayes’ rule, the posterior distribution of θ after observing the trial data D is

P(θ|D) ∝ π(θ) P(D|θ),   (Equation 1)

omitting a normalizing constant that is a function of D but not of θ. Here π(θ) is our prior distribution on θ, P(D|θ) is the likelihood of the current data, and P(θ|D) is the posterior distribution having observed the trial data D. If the prior and posterior distributions are from a parametric family indexed by a set of distribution parameters, then a fully parametric model can be used to draw inference on P(θ|D). However, one can perform such Bayesian inference nonparametrically: Rubin showed that if we assume a noninformative Dirichlet prior for D itself (regardless of which parameter is to be estimated), then we can draw directly from P(θ|D) using a simple process called the Bayesian bootstrap[21]. In the Bayesian bootstrap of a dataset D consisting of n independent observations, a probability vector P = (p_1, …, p_n) is generated by randomly drawing from Dirichlet(n; 1, …, 1). The probability distribution that puts mass p_i on the i-th observation in D can be considered a random draw from the ‘distribution of the distribution’ that generated D. Let D* represent a bootstrapped sample of D generated in this way; then, by the argument above, θ*, the value of θ measured in this sample, is a random draw from P(θ|D)[21].
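As a minimal illustration of this process (not the paper’s own code; assuming Python with NumPy, and toy data standing in for trial outcomes), one Bayesian-bootstrap draw of a mean looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap_mean(x, rng):
    """One posterior draw of the mean of x under Rubin's Bayesian bootstrap."""
    p = rng.dirichlet(np.ones(len(x)))  # flat Dirichlet(1, ..., 1) weights
    return float(np.dot(p, x))          # weighted mean under those weights

# Toy cost data for one trial arm (hypothetical, right-skewed like real costs)
costs = rng.gamma(shape=2.0, scale=1500.0, size=100)
draws = np.array([bayesian_bootstrap_mean(costs, rng) for _ in range(2000)])
# draws is now a sample from the posterior distribution of the arm's mean cost
```

Repeating the Dirichlet draw M times yields M posterior draws of any functional of the data, not just the mean.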
Ordinary bootstrap as an approximation of the Bayesian bootstrap
The ordinary bootstrap can also be seen as assigning a probability vector to the data, except that the vector is generated from a scaled multinomial distribution[22]. Such a process does not mathematically correspond to formal Bayesian inference. Nevertheless, the similarity in both operation and results to the Bayesian bootstrap has led some investigators to interpret the ordinary bootstrap in a Bayesian way[23]. For example, the widely popular nonparametric imputation of missing data uses the ordinary bootstrap as an approximation to the Bayesian bootstrap[22, 24]. Indeed, it has already been shown that the ordinary and Bayesian bootstrap methods generate very similar results in nonparametric value-of-information analysis of RCT data[21]. Given this, for the rest of this work we use the Bayesian and ordinary bootstraps interchangeably.
CEA without the incorporation of external evidence
 1. For i = 1, …, M, where M is the number of bootstraps:
    a. Generate D*, a (Bayesian) bootstrap sample, with bootstrapping performed within each arm of the trial.
    b. Calculate θ* from D*.
 2. Store the value of θ* and jump to step 1.
This approach generates M random draws from the posterior distribution of θ having observed the RCT data. This is indeed the widely popular bootstrap method of RCT-based CEA[4]. An estimator for the ICER from the bootstrapped data can be obtained by calculating the ratio of the mean cost over the mean effectiveness from the bootstrap samples[4]. Various methods can be used to construct a credible interval around this value from the bootstrapped samples[4, 25]. These samples can also be used to present uncertainty in the form of a cost-effectiveness plane or a cost-effectiveness acceptability curve (CEAC)[26].
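The steps above can be sketched as follows (a toy Python illustration rather than the original analysis code; the two-arm data are simulated and hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_cea(cost0, eff0, cost1, eff1, n_boot, rng):
    """Ordinary bootstrap of a two-arm trial-based CEA.

    Resamples patients within each arm, returning n_boot draws of
    (delta_cost, delta_effect) for arm 1 versus arm 0.
    """
    out = np.empty((n_boot, 2))
    n0, n1 = len(cost0), len(cost1)
    for i in range(n_boot):
        idx0 = rng.integers(0, n0, n0)  # resample arm 0 with replacement
        idx1 = rng.integers(0, n1, n1)  # resample arm 1 with replacement
        out[i, 0] = cost1[idx1].mean() - cost0[idx0].mean()
        out[i, 1] = eff1[idx1].mean() - eff0[idx0].mean()
    return out

# Hypothetical data: arm 1 costs more and yields slightly more QALYs
cost0 = rng.gamma(2.0, 1300.0, 150); eff0 = rng.normal(0.70, 0.05, 150)
cost1 = rng.gamma(2.0, 2000.0, 150); eff1 = rng.normal(0.71, 0.05, 150)
draws = bootstrap_cea(cost0, eff0, cost1, eff1, 4000, rng)
icer = draws[:, 0].mean() / draws[:, 1].mean()  # ratio-of-means ICER
```

The `draws` array can then be plotted on the cost-effectiveness plane or summarized as a CEAC.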
Incorporating external evidence
Let D_e be some external data providing evidence on θ_e. While the external data are not fully available to the investigator, evidence is most typically available in the form of the external likelihood P(D_e|θ_e), for example, recovered from the reported maximum likelihood estimate and confidence bounds of the treatment effect in a previously published study. We require D and D_e to be independent samples. This is a typical and fundamental assumption in evidence synthesis, for example in a meta-analysis of treatment effect from multiple trials. By our definition of θ_i and θ_e, we know that the external likelihood only provides information on θ_e (the information on θ_i is either not collected or is not reported by the investigators of the external study). As such, the external likelihood is a marginal likelihood for θ_e and hence is not a function of θ_i. We also note that sometimes external evidence is obtained through a more subjective process, such as elicitation of expert opinion. In such cases, D_e becomes an abstract entity and P(D_e|θ_e) can be seen as a ‘weight’ function representing the degree of plausibility of θ_e against external knowledge.
The posterior distribution after observing both sources of evidence is

P(θ|D, D_e) ∝ π(θ) P(D, D_e|θ) = π(θ) P(D|θ) P(D_e|θ_i, θ_e) = π(θ) P(D|θ) P(D_e|θ_e) ∝ P(θ|D) P(D_e|θ_e).   (Equation 2)

In the above derivations, in the first step we have applied Bayes’ rule; the second step factorizes the likelihood given the independence of the external and current data; and the third step is based on the fact that the external data provide no information about θ_i (that is, P(D_e|θ_i, θ_e) is not a function of θ_i), so the likelihood term P(D_e|θ_i, θ_e) reduces to P(D_e|θ_e).
Sampling from the posterior distribution
Suppose that a random sample can be generated from an ‘easy’ distribution g, but we are actually interested in obtaining a sample from a ‘difficult’ distribution h. How can we use the samples from g to obtain samples from h? Two popular methods for converting samples from g to h are rejection sampling[27] and importance sampling[28]; both are based on applying weights proportional to the density ratio h/g to each observation from g. In the present context, g = P(θ|D) and h = P(θ|D, D_e); the weights are, according to (Equation 2), proportional to P(D_e|θ_e). That is, to obtain samples from P(θ|D, D_e), each θ* sampled from P(θ|D) through bootstrapping needs to be weighted by P(D_e|θ_e*). To operationalize this, we propose two approaches based on the rejection and importance sampling schemes. The reader can refer to Smith and Gelfand for an elegant elaboration on these two sampling schemes (along with the derivations)[27].
Rejection sampling
 1. For i = 1, …, M, where M is the desired size of the sample:
    a. Generate D*, a (Bayesian) bootstrap sample of D, with bootstrapping performed separately within each arm of the trial.
    b. Calculate the parameters θ* = {θ_i*, θ_e*} from this sample.
    c. Calculate P* = P(D_e|θ_e*), the weight of θ_e* according to the external evidence.
    d. Randomly draw u from a uniform distribution on the interval [0, 1]. If u > P*, then discard the bootstrap sample and jump to step a.
 2. Store the value of θ* and jump to step 1.
This approach generates M random draws from the posterior distribution of θ having observed the RCT data and the external evidence. All the subsequent steps of the CEA, such as calculating the average cost and effectiveness outcomes, interval estimation, and drawing the cost-effectiveness plane and the CEAC, remain unchanged. Of note, this algorithm requires that the P* be valid probabilities bounded between 0 and 1. As such, the external likelihood should be scaled (e.g., divided by max_{θ_e} P(D_e|θ_e)).
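A hedged sketch of this rejection scheme, with toy data and a hypothetical normal external likelihood standing in for P(D_e|θ_e), already scaled so its maximum is 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical external evidence on theta_e: normal likelihood centered at
# mu with spread sigma, scaled to a maximum of 1 so it is a valid weight.
mu, sigma = -0.3, 0.2
def weight(theta_e):
    return np.exp(-0.5 * ((theta_e - mu) / sigma) ** 2)

def one_bootstrap_theta(x, rng):
    """theta_e* from one ordinary bootstrap sample of the toy data x."""
    return x[rng.integers(0, len(x), len(x))].mean()

x = rng.normal(0.0, 1.0, 200)  # toy trial data; here theta_e is its mean
accepted = []
while len(accepted) < 1000:
    theta = one_bootstrap_theta(x, rng)
    if rng.uniform() <= weight(theta):  # keep with probability P*
        accepted.append(theta)
accepted = np.array(accepted)
# accepted now approximates the posterior of theta_e given both sources
```

Because the external likelihood here favors values below the trial estimate, the accepted draws are pulled toward mu relative to the plain bootstrap distribution.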
Importance sampling
As an alternative to probabilistically accepting or rejecting bootstrap samples, one can assign the weights directly to each bootstrap sample[27]. That is, one proceeds by obtaining the desired number of bootstraps, calculating θ_e* in each sample, and assigning a weight proportional to P(D_e|θ_e*) to each bootstrap. All subsequent calculations must incorporate these weights (for example, the ICER will be the ratio of the weighted mean of costs over the weighted mean of effectiveness).
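The same toy setup under importance sampling, where every bootstrap draw is kept and weighted instead (illustrative only; mu and sigma are hypothetical, not estimates from any study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data and a hypothetical normal external likelihood on theta_e
mu, sigma = -0.3, 0.2
x = rng.normal(0.0, 1.0, 200)  # trial data; here theta_e is its mean
n = len(x)

# Ordinary bootstrap draws of theta_e from P(theta | D)
thetas = np.array([x[rng.integers(0, n, n)].mean() for _ in range(5000)])

w = np.exp(-0.5 * ((thetas - mu) / sigma) ** 2)  # external-likelihood weights
w /= w.sum()                                     # normalize to sum to one

posterior_mean = float(np.dot(w, thetas))  # weighted mean replaces plain mean
```

Unlike rejection sampling, no draws are discarded, but every downstream summary (means, intervals, the CEAC) must use the weights.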
Regularity conditions
Fundamental to the proposed sampling scheme is that the joint likelihood of D and D_e can be factorized into two independent likelihoods. The onus is on the investigator to ensure this condition is satisfied, at least to a good approximation. This can be context-specific. A few scenarios that violate this assumption are when D and D_e have overlapping samples, when D_e is an estimate from a meta-analysis of studies that included the current study D, or when D_e represents experts’ opinion about the treatment effect and that opinion is already influenced by the results of the current study (the hindsight bias[29]).
In addition, the general regularity conditions required for rejection and importance sampling should hold[27]. In particular, since P(θ|D) is most often continuous (or, for the ordinary bootstrap, takes many discrete values), the external likelihood P(D_e|θ) should also be continuous; otherwise the chance of samples from P(θ|D) hitting non-zero areas of P(D_e|θ) will be vanishingly small. Next, θ_e should be identifiable (unique) within each D*. This assumption holds for the most typical forms of external evidence such as rates or measures of relative risk[30]. Further, P(D_e|θ) should be bounded. If P(D_e|θ) has an infinite maximum, for example, if it is proportional to the density function of a beta distribution with either of its parameters less than one, the proposed sampling schemes might fail. Such distributions are, however, mainly used as noninformative priors and seldom represent external evidence in realistic scenarios. On the other hand, mixed-type distributions such as the so-called lump-and-smear priors, which put a point mass on the value of the parameter consistent with the null hypothesis ([31] page 161), have unbounded density functions and cannot readily be used in the proposed sampling methods.
We used data from a real-world RCT to show the practical aspects of implementing the proposed algorithms. Ethics approval was obtained from the Ottawa Hospital Research Ethics Board (#200262301H) and Vancouver Coastal Health Authority (#C030275).
Results
An illustrative example
This case study demonstrates the operational aspects of implementing the algorithm and is not intended as a comprehensive evidence synthesis to inform policy.
The case study is based on the OPTIMAL trial, a multicenter study evaluating the benefits of combination pharmacological therapy in preventing respiratory exacerbations in patients with chronic obstructive pulmonary disease (COPD)[32, 33]. Pharmacological treatment of COPD, typically with inhaled medications, is often required to keep the symptoms under control and reduce the risk of exacerbations. Sometimes patients receive combinations of treatments of different classes in an attempt to bring the disease under control. However, there is a lack of evidence on whether such combination therapies are effective. The OPTIMAL trial was designed to estimate the comparative efficacy and cost-effectiveness of single and combination therapies in COPD. It included 449 patients randomized into three treatment groups: T1: monotherapy with an inhaled anticholinergic (tiotropium, N = 156); T2: double therapy with an inhaled anticholinergic plus an inhaled beta-agonist (tiotropium + salmeterol, N = 148); and T3: triple therapy with an inhaled anticholinergic, an inhaled beta-agonist, and an inhaled corticosteroid (tiotropium + fluticasone + salmeterol, N = 145). The primary outcome measure of the RCT was the proportion of patients who experienced at least one respiratory exacerbation by the end of the follow-up period (52 weeks). This outcome was not significantly different across the three arms: the odds ratio (OR) for the risk of having at least one exacerbation by the end of the follow-up period was 1.03 (95% CI, 0.63 to 1.67) for T2 versus T1 and 0.84 (95% CI, 0.47 to 1.49) for T3 versus T1 (a lower OR indicates a better outcome). Because the T2 arm in the OPTIMAL trial was dominated (associated with higher costs and worse effectiveness outcomes) in the original CEA, and for the sake of brevity, in this case study we restrict the analysis to a comparison between T3 and T1.
Details of the original CEA are reported elsewhere[34]. Data on both resource use and quality of life were collected at the individual level during the trial and were used to carry out the CEA. The main outcome of the CEA was the incremental cost per QALY gained for T3 versus T1 (that is, the difference in mean costs over the difference in mean QALYs). Since individual-level resource use and effectiveness outcomes were available, the CEA was based on direct inference on their distribution. No external information was incorporated in the original CEA.
External evidence
The external evidence on the effect size was represented as a normal likelihood on the log rate ratio (log-RR) of exacerbations,

P(D_e|θ_e) ∝ exp(−(θ_e − μ)² / (2σ²)),

with μ and σ corresponding to the mean and standard deviation of the normal distribution. We note that the uncertainty around the log-RR from the external evidence, represented by the above probability distribution, stems from two sources: the finite sample of the external study, and our assumption on between-study variability. Overall, the RR representing the external evidence is much more in favor of combination therapy than the RR observed in the OPTIMAL trial. As such, we a priori expect the incorporation of external evidence to improve the cost-effectiveness outcomes in favor of T3.
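In practice, μ and σ for such a likelihood are often recovered from a published point estimate and its 95% confidence interval. A sketch with hypothetical numbers (not the values used in this case study):

```python
import math

# Hypothetical published result: RR 0.75 (95% CI 0.60 to 0.94)
rr, lo, hi = 0.75, 0.60, 0.94

mu = math.log(rr)                                   # mean of normal on log-RR
sigma = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # sd from the CI width

def ext_likelihood(log_rr):
    """Weight of a bootstrap log-RR draw, scaled so its maximum is 1."""
    return math.exp(-0.5 * ((log_rr - mu) / sigma) ** 2)
```

Scaling the likelihood to a maximum of 1 makes it directly usable as the acceptance probability P* in the rejection sampler.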
 1. For i = 1, 2, …, M:
    a. Generate D*, a (Bayesian) bootstrap sample within each of the three arms of the RCT.
    b. Impute the missing values in costs, utilities, and exacerbations in D*.
    c. Calculate θ*_{T3,T1}, the log-RR of exacerbation during the follow-up period for T3 versus T1, from the bootstrapped sample.
    d. Calculate P* = P(D_e|θ*_{T3,T1}) using the distribution constructed for the external evidence.
    e. Randomly draw u from a uniform distribution on the interval [0, 1]. If u > P*, then discard the bootstrapped sample and jump to step a.
    f. Calculate mean costs, exacerbations, and QALYs for each arm from D*.
 2. Store the average values for costs, exacerbation rates, and QALYs; then jump to step 1.
The simulation was stopped after 10,000 accepted bootstraps for the rejection sampling method incorporating the external evidence were generated. To obtain the results using the importance sampling method, we used the same set of bootstraps generated in the above algorithm, including all the accepted and rejected bootstraps.
In addition to the ICER, we also reported the expected values of the cost and health outcomes for each trial arm, and plotted the CEAC, without and with the incorporation of the external evidence. The CEAC between two treatments gives the probability that one treatment is cost-effective compared to the other at a given value of the decision-maker’s willingness-to-pay (λ) for one unit of the health outcome[26]. The statistical code for this case study is provided in Additional file 1.
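A CEAC can be computed from posterior draws by evaluating, at each λ, the proportion of draws with a positive incremental net monetary benefit. A toy sketch with simulated draws (not the OPTIMAL data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical posterior draws of (delta cost, delta QALY), e.g. from bootstrap
d_cost = rng.normal(1400.0, 700.0, 10000)
d_qaly = rng.normal(0.006, 0.008, 10000)

def ceac(d_cost, d_qaly, lambdas):
    """P(cost-effective) at each willingness-to-pay: lambda*dE - dC > 0."""
    return np.array([np.mean(lam * d_qaly - d_cost > 0) for lam in lambdas])

lambdas = np.arange(0, 500001, 50000)
probs = ceac(d_cost, d_qaly, lambdas)  # one probability per lambda value
```

With weighted (importance-sampling) draws, `np.mean` would be replaced by a weighted average using the same weights as for the ICER.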
Results of the case study
Outcomes of the OPTIMAL CEA without and with the incorporation of external evidence*

| | | T1 | T3 | Difference (T3 − T1) | ICER |
|---|---|---|---|---|---|
| No external evidence | | | | | |
| Bayesian bootstrap | Costs | 2649 (466) | 4074 (547) | 1425 (721) | 250,329 |
| | QALY | 0.7071 (0.0075) | 0.7128 (0.0093) | 0.0057 (0.0087) | |
| Ordinary bootstrap | Costs | 2650 (467) | 4077 (551) | 1427 (721) | 251,171 |
| | QALY | 0.7071 (0.0075) | 0.7128 (0.0093) | 0.0057 (0.0087) | |
| With external evidence | | | | | |
| Bayesian bootstrap | Costs | 2753 (492) | 3959 (510) | 1205 (709) | 121,260 |
| | QALY | 0.7053 (0.0074) | 0.7152 (0.0092) | 0.0099 (0.0085) | |
| Ordinary bootstrap | Costs | 2742 (477) | 3966 (536) | 1225 (709) | 126,387 |
| | QALY | 0.7054 (0.0074) | 0.7151 (0.0092) | 0.0098 (0.0084) | |
As this table demonstrates, the incorporation of external evidence shifted the outcomes of the T3 arm in the favorable direction (lower costs and higher QALYs), and shifted the outcomes of the T1 arm in the opposite direction. This is an expected finding given the strong evidence in favor of T3 for the effect size of T3 versus T1 from the external source.
Discussion
At present, when an economic evaluation is conducted alongside a single RCT, evidence synthesis is not an integral part of the analysis. In our opinion, this is partly because parametric Bayesian modeling, the hitherto only available method, results in problem-specific and complex statistical models. In this work we propose simple and intuitive algorithms for the incorporation of external evidence in RCT-based CEAs that use bootstrapping to draw inference. Rejection and importance sampling, which form the basis of the proposed method, are popular paradigms in which sampling from a ‘difficult’ distribution is replaced by sampling from a proposal (or instrumental) distribution[40]. Here, sampling from P(θ|D, D_e) is performed via P(θ|D), and the latter can easily be sampled through (Bayesian) bootstrapping.
In synthesizing evidence for RCT-based CEAs, a carefully crafted parametric model with a comprehensive analysis of model convergence and of the sensitivity of results to parametric assumptions has indisputable strengths over resampling approaches, including the higher computational efficiency of MCMC or likelihood-based methods and the ability to synthesize and propagate all evidence in a single analytical framework[41, 42]. Nevertheless, important advantages make the proposed resampling methods a competitive option. They are intuitive and easy extensions of the popular bootstrap method of RCT-based CEAs; they do not require specialist software or in-depth content expertise to implement. Beyond such practical advantages, the proposed resampling methods connect the parameters for which external evidence is available to the cost and effectiveness outcomes without an explicit model, which is a requirement of parametric Bayesian approaches.
Our paper provides a conceptual framework, and further research into the theory, as well as the practical issues in using this method, should follow. The apparent simplicity of the bootstrap may conceal the assumptions being made, especially with small datasets[21, 43]. Furthermore, if the external evidence and the RCT data substantially differ in the information they provide (that is, the prior and data are in conflict)[44], or when there are multiple parameters for which external evidence is available, the sampling methods will become inefficient.
Further research is needed to improve sampling efficiency and to incorporate external evidence in other designs such as cluster or crossover RCTs. Importantly, the theoretical construct of the proposed method does not necessarily restrict it to RCT-based CEAs. A similar concept can be used to reconcile evaluations based on observational data with external evidence. This will inevitably invoke questions about the applicability of different metrics of the effect size in non-randomized studies (for example, the average treatment effect versus the average treatment effect for the treated), and the validity of the bootstrap as the sampling method (for example, in a propensity-score-matched cohort). In addition, further empirical research is required to evaluate the real-world applicability and feasibility of the method and to demonstrate its comparative performance against conventional methods of evidence synthesis (for example, parametric Bayesian analysis using MCMC).
This paper deliberately stays away from the debate on whether to incorporate external evidence in a given situation and focuses on the ‘how to’ question. The ‘whether to’ question is context-specific, and great care is required for the sensible use of external evidence in each setting. In the case study, for example, the substantial discrepancy in results between the external and current RCTs (with regard to the efficacy of triple therapy versus monotherapy) should more than anything generate misgivings about the suitability of borrowing evidence from that external source. However, the case study was undertaken as a step toward proof of concept, applicability, and face validity of the proposed methods. It is not a withdrawal from the deep considerations required for sensible evidence synthesis.
Conclusions
Faced with the escalating costs of RCTs and the requirement by many decision-making bodies for formal economic evaluation of emerging health technologies, trialists and health economists are hard-pressed to generate as much relevant information for policymakers as possible. As such, and despite criticisms, it appears that RCT-based CEAs are here to stay. The incorporation of external evidence helps optimize adoption decisions. Aside from their theoretical contribution, if their real-world applicability is proven, the proposed methods can provide the large camp of analysts using the bootstrap for RCT-based CEAs with a statistically sound, easily implementable tool for this purpose.
Abbreviations
CEA: Cost-effectiveness analysis
CEAC: Cost-effectiveness acceptability curve
COPD: Chronic obstructive pulmonary disease
ICER: Incremental cost-effectiveness ratio
MCMC: Markov chain Monte Carlo
OR: Odds ratio
RCT: Randomized controlled trial
RR: Rate ratio
QALY: Quality-adjusted life year
Declarations
Acknowledgments
This study was part of MS's PhD research, which was funded by a graduate fellowship award from the Canadian Institutes of Health Research. The authors would like to thank Dr Craig Mitton (University of British Columbia) and Dr Lawrence McCandless (Simon Fraser University) for their valuable advice, and Ms Stephanie Harvard and Ms Jenny Leese for editorial assistance.
Authors’ Affiliations
References
1. Drummond M: Introducing economic and quality of life measurements into clinical studies. Ann Med. 2001, 33: 344-349. 10.3109/07853890109002088.
2. Glick H, Doshi J, Sonnad S, Polsky D: Economic Evaluation in Clinical Trials. 2007, New York: Oxford University Press.
3. Ramsey S, Willke R, Briggs A, Brown R, Buxton M, Chawla A, Cook J, Glick H, Liljas B, Petitti D, Reed S: Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value Health. 2005, 8: 521-33. 10.1111/j.1524-4733.2005.00045.x.
4. Briggs A, Wonderling D, Mooney C: Pulling cost-effectiveness analysis up by its bootstraps: a non-parametric approach to confidence interval estimation. Health Econ. 1997, 6: 327-340. 10.1002/(SICI)1099-1050(199707)6:4<327::AID-HEC282>3.0.CO;2-W.
5. Drummond M, O'Brien B, Stoddart G, Torrance G: Methods for the Economic Evaluation of Health Care Programmes. 2005, United Kingdom: Oxford University Press.
6. Buxton MJ, Drummond MF, Van Hout BA, Prince RL, Sheldon TA, Szucs T, Vray M: Modelling in economic evaluation: an unavoidable fact of life. Health Econ. 1997, 6: 217-227. 10.1002/(SICI)1099-1050(199705)6:3<217::AID-HEC267>3.0.CO;2-W.
7. Brennan A, Akehurst R: Modelling in health economic evaluation. What is its place? What is its value?. Pharmacoeconomics. 2000, 17: 445-459. 10.2165/00019053-200017050-00004.
8. Sculpher M, Claxton K, Drummond M, McCabe C: Whither trial-based economic evaluation for health care decision making?. Health Econ. 2006, 15: 677-687. 10.1002/hec.1093.
9. Spiegelhalter D, Freedman L, Parmar M: Bayesian approaches to randomized trials. Journal of the Royal Statistical Society Series A (Statistics in Society). 1994, 157: 357-416. 10.2307/2983527.
10. O'Hagan A, Stevens JW, Montmartin J: Bayesian cost-effectiveness analysis from clinical trial data. Stat Med. 2001, 20: 733-753. 10.1002/sim.861.
11. Briggs A: A Bayesian approach to stochastic cost-effectiveness analysis. An illustration and application to blood pressure control in type 2 diabetes. Int J Technol Assess Health Care. 2001, 17: 69-82. 10.1017/S0266462301104071.
12. Heitjan D, Moskowitz A, Whang W: Bayesian estimation of cost-effectiveness ratios from clinical trials. Health Econ. 1999, 8: 191-201. 10.1002/(SICI)1099-1050(199905)8:3<191::AID-HEC409>3.0.CO;2-R.
13. Heitjan D, Li H: Bayesian estimation of cost-effectiveness: an importance-sampling approach. Health Econ. 2004, 13: 191-198. 10.1002/hec.825.
14. Al M, Van Hout B: A Bayesian approach to economic analyses of clinical trials: the case of stenting versus balloon angioplasty. Health Econ. 2000, 9: 599-609. 10.1002/1099-1050(200010)9:7<599::AID-HEC530>3.0.CO;2-#.
15. O'Brien B: A tale of two (or more) cities: geographic transferability of pharmacoeconomic data. Am J Manag Care. 1997, 3 (Suppl): S33-39.
16. Cook JR, Drummond M, Glick H, Heyse JF: Assessing the appropriateness of combining economic data from multinational clinical trials. Stat Med. 2003, 22: 1955-1976. 10.1002/sim.1389.
17. Drummond M, Barbieri M, Cook J, Glick H, Lis J, Malik F, Reed S, Rutten F, Sculpher M, Severens J: Transferability of economic evaluations across jurisdictions: ISPOR Good Research Practices Task Force report. Value Health. 2009, 12: 409-418. 10.1111/j.1524-4733.2008.00489.x.
18. Lunn D, Thomas A, Best N, Spiegelhalter D: WinBUGS – a Bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing. 2000, 10: 325-337. 10.1023/A:1008929526011.
19. Mihaylova B, Briggs A, O'Hagan A, Thompson S: Review of statistical methods for analysing healthcare resources and costs. Health Econ. 2011, 20: 897-916. 10.1002/hec.1653.
20. Thompson S, Nixon R: How sensitive are cost-effectiveness analyses to choice of parametric distributions?. Med Decis Making. 2005, 25: 416-423. 10.1177/0272989X05276862.
21. Rubin D: The Bayesian bootstrap. Ann Statist. 1981, 9: 130-134. 10.1214/aos/1176345338.
22. Rubin D: Multiple Imputation for Nonresponse in Surveys. 1987, New York: John Wiley.
23. Lo A: A large sample study of the Bayesian bootstrap. Ann Statist. 1987, 15: 360-375. 10.1214/aos/1176350271.
24. Schafer J: Multiple imputation: a primer. Statistical Methods in Medical Research. 1999, 8: 3-15. 10.1191/096228099671525676.
25. Polsky D, Glick HA, Willke R, Schulman K: Confidence intervals for cost-effectiveness ratios: a comparison of four methods. Health Econ. 1997, 6: 243-252. 10.1002/(SICI)1099-1050(199705)6:3<243::AID-HEC269>3.0.CO;2-Z.
26. Fenwick E, Claxton K, Sculpher M: Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ. 2001, 10: 779-787. 10.1002/hec.635.
27. Smith A, Gelfand A: Bayesian statistics without tears: a sampling-resampling perspective. The American Statistician. 1992, 46: 84-88.
28. Von Neumann J: Various techniques used in connection with random digits. Nat Bureau Stand Appl Math Ser. 1951, 12: 36-38.
29. Roese NJ, Vohs KD: Hindsight bias. Perspectives on Psychological Science. 2012, 7: 411-426. 10.1177/1745691612454303.
30. Lehmann EL, Casella G: Theory of Point Estimation. 1998, New York: Springer.
31. Spiegelhalter D, Abrams K, Myles J: Bayesian Approaches to Clinical Trials and Health Care Evaluation. 2004, Chichester: John Wiley & Sons.
32. Aaron S, Vandemheen K, Fergusson D, FitzGerald M, Maltais F, Bourbeau J, Goldstein R, McIvor A, Balter M, O'Donnell D: The Canadian Optimal Therapy of COPD Trial: design, organization and patient recruitment. Can Respir J. 2004, 11: 581-585.
33. Aaron S, Vandemheen K, Fergusson D, Maltais F, Bourbeau J, Goldstein R, Balter M, O'Donnell D, McIvor A, Sharma S, Bishop G, Anthony J, Cowie R, Field S, Hirsch A, Hernandez P, Rivington R, Road J, Hoffstein V, Hodder R, Marciniuk D, McCormack D, Fox G, Cox G, Prins H, Ford G, Bleskie D, Doucette S, Mayers I, Chapman K: Tiotropium in combination with placebo, salmeterol, or fluticasone-salmeterol for treatment of chronic obstructive pulmonary disease: a randomized trial. Ann Intern Med. 2007, 146: 545-555. 10.7326/0003-4819-146-8-200704170-00152.
34. Najafzadeh M, Marra C, Sadatsafavi M, Aaron S, Sullivan S, Vandemheen K, Jones P, FitzGerald J: Cost effectiveness of therapy with combinations of long acting bronchodilators and inhaled steroids for treatment of COPD. Thorax. 2008, 63: 962-967. 10.1136/thx.2007.089557.
35. Mills EJ, Druyts E, Ghement I, Puhan MA: Pharmacotherapies for chronic obstructive pulmonary disease: a multiple treatment comparison meta-analysis. Clin Epidemiol. 2011, 3: 107-129.
36. Ernst P, Gonzalez AV, Brassard P, Suissa S: Inhaled corticosteroid use in chronic obstructive pulmonary disease and the risk of hospitalization for pneumonia. Am J Respir Crit Care Med. 2007, 176: 162-166. 10.1164/rccm.200611-1630OC.
37. Spitzer WO, Suissa S, Ernst P, Horwitz RI, Habbick B, Cockcroft D, Boivin JF, McNutt M, Buist AS, Rebuck AS: The use of beta-agonists and the risk of death and near death from asthma. N Engl J Med. 1992, 326: 501-506. 10.1056/NEJM199202203260801.
38. Welte T, Miravitlles M, Hernandez P, Eriksson G, Peterson S, Polanowski T, Kessler R: Efficacy and tolerability of budesonide/formoterol added to tiotropium in patients with chronic obstructive pulmonary disease. Am J Respir Crit Care Med. 2009, 180: 741-750. 10.1164/rccm.200904-0492OC.
39. Ades A, Lu G, Higgins J: The interpretation of random-effects meta-analysis in decision models. Med Decis Making. 2005, 25: 646-654. 10.1177/0272989X05282643.
40. Robert C, Casella G: Monte Carlo Statistical Methods. 2004, New York: Springer.
41. Cooper N, Sutton A, Abrams K, Turner D, Wailoo A: Comprehensive decision analytical modelling in economic evaluation: a Bayesian approach. Health Econ. 2004, 13: 203-226. 10.1002/hec.804.
42. Ades A, Sculpher M, Sutton A, Abrams K, Cooper N, Welton N, Lu G: Bayesian methods for evidence synthesis in cost-effectiveness analysis. Pharmacoeconomics. 2006, 24: 1-19. 10.2165/00019053-200624010-00001.
43. Beran R: The impact of the bootstrap on statistical algorithms and theory. Statistical Science. 2003, 18: 175-184. 10.1214/ss/1063994972.
44. Hoch J, Briggs A, Willan AR: Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ. 2002, 11: 415-430. 10.1002/hec.678.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.