 Methodology
 Open Access
Sample size determination for a binary response in a superiority clinical trial using a hybrid classical and Bayesian procedure
Trials volume 18, Article number: 83 (2017)
Abstract
Background
When designing studies that have a binary outcome as the primary endpoint, the hypothesized proportion of patients in each population experiencing the endpoint of interest (i.e., π _{1},π _{2}) plays an important role in sample size and power calculations. Point estimates for π _{1} and π _{2} are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed.
Methods
This paper presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of π _{1} and π _{2} into the study’s power calculation. Conditional expected power (CEP), which averages the traditional power curve using the prior distributions of π _{1} and π _{2} as the averaging weight conditional on the presence of a positive treatment effect (i.e., π _{2}>π _{1}), is used, and the sample size is found that equates the prespecified frequentist power (1−β) and the conditional expected power of the trial.
Results
Notional scenarios are evaluated to compare the probability of achieving a target value of power with a trial design based on traditional power and a design based on CEP. We show that if there is uncertainty in the study parameters and a distribution of plausible values for π _{1} and π _{2}, the performance of the CEP design is more consistent and robust than traditional designs based on point estimates for the study parameters. Traditional sample size calculations based on point estimates for the hypothesized study parameters tend to underestimate the required sample size needed to account for the uncertainty in the parameters. The greatest marginal benefit of the proposed method is achieved when the uncertainty in the parameters is not large.
Conclusions
Through this procedure, we are able to formally integrate prior information on the uncertainty and variability of the study parameters into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the sample size that is necessary to achieve a high level of CEP given the available prior information helps protect against misspecification of hypothesized treatment effect and provides a substantiated estimate that forms the basis for discussion about the study’s feasibility during the design phase.
Background
When designing a study that has a binary outcome as the primary endpoint, the hypothesized proportion of patients in each population experiencing the endpoint of interest (i.e., π _{1},π _{2}) plays an important role in sample size determination. In a two-arm study comparing two independent proportions, π _{2}−π _{1} represents the true hypothesized difference between groups, sometimes also known as the minimal relevant difference [1]. While the treatment effect may also be parameterized equivalently using an odds ratio or relative risk, when appropriate, the most frequently used sample size formula expresses the treatment effect using the difference between groups [2, 3]. In the case of proportions, the variance of the difference depends on the individual hypothesized values for the population parameters π _{1} and π _{2} under the alternative hypothesis. Thus, the sample size required to detect a particular difference of interest is affected by both the magnitude of the difference and the individual hypothesized values.
Traditional sample size formulas incorporate beliefs about π _{1} and π _{2} through single point estimates [1]. However, there is often uncertainty in these hypothesized proportions and, thus, a distribution of plausible values that should be considered when determining sample size. Misspecification of these hypothesized proportions in the sample size calculation may lead to an underpowered study, or one that has a low probability of detecting a smaller and potentially clinically relevant difference when such a difference exists [4]. Alternatively, if there is strong evidence in favor of a large difference, a study may be overpowered to detect a small hypothesized difference. Thus, a method for determining sample size that formally uses prior information on the distribution of study design parameters can mitigate the risk that the power calculation will be overly optimistic or overly conservative.
Similar difficulty surrounding the choice of study parameters for a continuous endpoint with known variance [5] and for a continuous endpoint with unknown variance [6] has been discussed previously. We have presented methods that formally incorporate the distribution of prior information on both the treatment effect and the variability of the endpoint into sample size determination. In this paper, we extend these methods to a binary endpoint by using a “hybrid classical and Bayesian” [7] technique based on conditional expected power (CEP) [8] to account for the uncertainty in study parameters π _{1} and π _{2} when determining the sample size of a superiority clinical trial. Unlike traditional power, which is calculated assuming the truth of a point alternative hypothesis (π _{2}−π _{1}=Δ _{ A }) for given values of π _{1} and π _{2}, CEP conditions on the truth of a composite alternative of superiority (e.g., π _{2}−π _{1}>0 or π _{2}>π _{1}). CEP formally incorporates available prior information on both π _{1} and π _{2} into the power calculations by averaging the traditional power curve using the product of the prior distribution of π _{1} and the conditional prior distribution of π _{2}, p(π _{2} | π _{2}>π _{1}), as the averaging weight. Based on the available prior information, the sample size that yields the desired level of CEP can be used when estimating the required sample size of the study.
While there has been much research in the area of Bayesian sample size determination [9–12], the hybrid classical and Bayesian method presented here aligns more with the ideas found in traditional frequentist sample size determination. Unlike traditional frequentist methods, however, we do not assume that the true parameters under the alternative hypothesis are known. This assumption rarely holds; typically, parameter values are estimated from early phase or pilot studies, studies of the intervention in different populations, or studies of similar agents in the current population [13, 14]. Thus, there is uncertainty surrounding the estimation of these population parameters and natural prior distributions of plausible values of these parameters that should be incorporated into the assessment of a trial’s power. Our method incorporates knowledge on the magnitude and uncertainty in the parameters into the traditional frequentist notion of power through explicit prior distributions on these unknown parameters to give CEP. As discussed in the “Methods” Section, CEP is not only well behaved, but it allows us to maintain a definition of power that intuitively converges to the traditional definition. Bayesian methodology is used only during the study design to allow prior information, through the prior distributions, to inform a choice for the sample size. Traditional type I and type II error rates, which have been accepted in practice, are maintained, and inferences are based on the likelihood of the data. The probability of achieving a target value of power using this method is compared to the performance of a traditional design. It is our hope that this formal method for incorporating prior knowledge into the study design will form the basis of thoughtful discussion about the feasibility of the study in order to reduce the number of poorly designed, underpowered studies that are conducted.
Methods
CEP for dichotomous outcome
Suppose that the study endpoint is dichotomous so that the probability (risk) of experiencing the event of interest in group 2 (the experimental treatment group), π _{2}, is compared to that in group 1 (the control group), π _{1}. The responses (i.e., the number of successes) in each group follow a binomial distribution. Assume that after n observations in each independent group, or N=2n total observations, the two-sample Z-test of proportions is performed to test the null hypothesis H _{0}:π _{2}=π _{1} (i.e., π _{2}−π _{1}=Δ=0) versus the two-sided alternative hypothesis H _{1}:π _{2}≠π _{1} (i.e., π _{2}−π _{1}=Δ≠0), where π _{2}>π _{1} indicates benefit of the experimental treatment over the control. The test is based on the test statistic T=p _{2}−p _{1}, or the difference in the proportion of successes in each sample. Under H _{0}:π _{2}=π _{1}=π, \(T \overset {\cdot }{\sim } N(0,\sigma _{0})\) in large samples, where σ _{0} is the standard deviation of the normal distribution. Assuming equal sample sizes n in each group gives \(\sigma _{0} = \sqrt {2 \pi (1-\pi)/n }\), where π=(π _{1}+π _{2})/2. In this setting, H _{0} is rejected at the α-level of significance if \(T \geq z_{1-\alpha /2} \, \hat {\sigma }_{0}\), where \(z_{1-\alpha /2}\) is the critical value with lower tail area 1−α/2 of the standard normal distribution and π is estimated by p=(p _{1}+p _{2})/2 in \(\hat {\sigma }_{0}\). A positive conclusion, D _{1}, occurs if \(Z = T/\hat {\sigma }_{0} \geq z_{1-\alpha /2}\).
Under \(H_{1}: \pi _{2}-\pi _{1} = \Delta _{A}\), \(T \overset {\cdot }{\sim } N(\Delta _{A}, \sigma _{1})\), where \(\sigma _{1} = \sqrt {(\pi _{2} (1-\pi _{2}) + \pi _{1} (1-\pi _{1}))/n}\). Thus, the traditional power of this test to detect the hypothesized difference corresponding to values of π _{1} and π _{2} under H _{1} is

$$P(D_{1} \mid \pi_{2},\pi_{1}) = \Phi\left[\frac{|\pi_{2}-\pi_{1}| - z_{1-\alpha/2}\sqrt{2\pi(1-\pi)/n}}{\sqrt{\left(\pi_{2}(1-\pi_{2})+\pi_{1}(1-\pi_{1})\right)/n}}\right], \qquad (1)$$
where Φ[ ·] is the standard normal cumulative distribution function. Since the traditional power curve is discontinuous at π _{2}=π _{1} for a two-sided test, we assume a successful outcome, or π _{2}>π _{1}, when calculating power; thus, |π _{2}−π _{1}|=π _{2}−π _{1} in (1). One may plot the power function for fixed N and π _{1} over values of π _{2}, or equivalently over values of π _{2}−π _{1}, to give the traditional power curve. Figure 1 shows the traditional power surfaces for N=48 and for N=80 with hypothesized values of π _{2}=0.7 and π _{1}=0.3. Power curves for fixed π _{2}=0.7 and variable π _{1} and for fixed π _{1}=0.3 and variable π _{2} are highlighted. Sample size is chosen to give high traditional power (e.g., 0.80≤1−β≤0.90) to detect an effect at least as large as the hypothesized difference for π _{2} and π _{1} by solving (1) for N [2]:

$$N = 2n = \frac{2\left[z_{1-\alpha/2}\sqrt{2\pi(1-\pi)} + z_{1-\beta}\sqrt{\pi_{1}(1-\pi_{1})+\pi_{2}(1-\pi_{2})}\right]^{2}}{(\pi_{2}-\pi_{1})^{2}}. \qquad (2)$$
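As a concrete check of the sample size formula, the following sketch (function name ours, standard library only) evaluates the uncorrected two-proportion formula and rounds the per-group size up to an integer.

```python
from math import ceil, sqrt
from statistics import NormalDist

def traditional_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Total sample size N = 2n from the uncorrected two-proportion formula (2)."""
    z = NormalDist().inv_cdf
    p_bar = (p1 + p2) / 2.0
    num = (z(1 - alpha / 2) * sqrt(2 * p_bar * (1 - p_bar))
           + z(power) * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n = ceil(num / (p2 - p1) ** 2)  # per-group size, rounded up
    return 2 * n

print(traditional_sample_size(0.3, 0.7))  # 48, the value used in the example scenario
```

For the hypothesized values π _{1}=0.3 and π _{2}=0.7 this reproduces the total sample size N=48 discussed in the “Results” Section.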
The traditional power curve does not account for the uncertainty associated with the unknown population parameters π _{2} and π _{1} and does not indicate if the planned sample size is adequate given this uncertainty. Average or expected power (EP) was developed as a way to use the distribution of prior beliefs about the unknown parameters to provide an overall predictive probability of a positive conclusion [8, 9, 15–24]. EP, also known as assurance [20], probability of study success [23], or Bayesian predictive power [24], averages the traditional power curve using the prior distributions for the unknown parameters to weight the average without restricting the prior distributions to assume treatment superiority. In the case of a binomial response, assuming π _{1} and π _{2} are independent yields a special case of the general multivariate formulation which allows the joint distribution p(π _{1},π _{2}) to be factored into the product of the two prior distributions p(π _{1}) and p(π _{2}). Thus, the traditional power curve P(D _{1} | π _{2},π _{1}) is averaged using the product of the prior distributions for π _{2} and π _{1}, p(π _{2}) and p(π _{1}), respectively, as the averaging weight [8], which gives the following formulation for EP:

$$EP = P(D_{1}) = \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=0}^{1} P(D_{1} \mid \pi_{2},\pi_{1})\, p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}.$$
Expected power conditional on the experimental treatment’s superiority, π _{2}>π _{1}, is known as conditional expected power (CEP) [8]. Unlike EP, CEP is found by using the conditional prior distribution for π _{2}, p(π _{2} | π _{2}>π _{1}), in the averaging weight. Since this conditional prior is now dependent on π _{1} and equals zero when π _{2}≤π _{1}, to ensure integration to 1 when P(π _{1}>π _{2})>0, the conditional prior is scaled by the normalization factor P(π _{2}>π _{1})^{−1}, or the inverse probability of the experimental treatment’s superiority. This gives the following formulation for CEP:

$$CEP = P(D_{1} \mid \pi_{2}>\pi_{1}) = \frac{1}{P(\pi_{2}>\pi_{1})} \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=\pi_{1}}^{1} P(D_{1} \mid \pi_{2},\pi_{1})\, p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}, \qquad (3)$$
where

$$P(\pi_{2}>\pi_{1}) = \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=\pi_{1}}^{1} p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}. \qquad (4)$$
The unconditional prior distributions p(π _{1}) and p(π _{2}) are defined such that π _{1}∉ [0,1]⇒p(π _{1})=0 and π _{2}∉ [ 0,1]⇒p(π _{2})=0 (e.g., beta or uniform(0,1) distributions).
Combining (1) and (3) gives the following equation for CEP:

$$CEP = \frac{1}{P(\pi_{2}>\pi_{1})} \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=\pi_{1}}^{1} \Phi\left[\frac{(\pi_{2}-\pi_{1}) - z_{1-\alpha/2}\sqrt{2\pi(1-\pi)/n}}{\sqrt{\left(\pi_{2}(1-\pi_{2})+\pi_{1}(1-\pi_{1})\right)/n}}\right] p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}. \qquad (5)$$
Note that any appropriate sample size and power formulas may be used to evaluate CEP in (5). For example, continuity-corrected versions of (2) or the arcsine approximation [25, 26] may be used instead of (2) to determine sample size, while related power formulas may be used instead of (1) in the CEP calculations.
To evaluate CEP under a proposed design, find N in (2) for the hypothesized values of π _{1} and π _{2}, significance level α, and traditional power level 1−β. Numerical integration may then be used to evaluate CEP (5) for the assumed prior distributions p(π _{1}) and p(π _{2}). If CEP for the proposed design is less than 1−β, the study is expected to be underpowered under the treatment superiority assumption, and if the CEP is greater than 1−β, the study is expected to be overpowered. To ensure that the study is expected to be appropriately powered under the treatment superiority assumption, an iterative search procedure can be used to find the value of the sample size N in (5) that gives CEP equal to the threshold of traditional power 1−β. The value of N that achieves this desired level is denoted N ^{∗}. As in traditional power, we would like the probability of detecting a difference when a positive difference exists to be high (i.e., 0.80≤1−β≤0.90). Pseudocode 1 outlines the steps for this process.
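The iterative search can be rendered in Python as follows. This is our illustrative sketch, not the authors' code: beta priors are assumed, the integrals in (5) are approximated on a midpoint grid (much coarser than the 0.0001 step used in the paper, so values are approximate), and all function names are ours.

```python
import math
from statistics import NormalDist

ND = NormalDist()

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x, via log-gamma for numerical stability."""
    if not 0.0 < x < 1.0:
        return 0.0
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def power(p1, p2, N, alpha=0.05):
    """Traditional power (1) of the two-sample Z-test with total size N = 2n."""
    n = N / 2.0
    p_bar = (p1 + p2) / 2.0
    s0 = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    s1 = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return ND.cdf(((p2 - p1) - ND.inv_cdf(1 - alpha / 2) * s0) / s1)

def cep(N, prior1, prior2, alpha=0.05, step=0.005):
    """CEP (5): power averaged over the conditional prior on the region pi2 > pi1."""
    grid = [(i + 0.5) * step for i in range(int(round(1 / step)))]
    pdf1 = [beta_pdf(x, *prior1) for x in grid]
    pdf2 = [beta_pdf(x, *prior2) for x in grid]
    num = den = 0.0
    for i, p1 in enumerate(grid):
        if pdf1[i] == 0.0:
            continue
        for j in range(i + 1, len(grid)):  # only points with pi2 > pi1
            w = pdf1[i] * pdf2[j]
            den += w
            num += w * power(p1, grid[j], N, alpha)
    return num / den  # the step**2 weights cancel in the ratio

def cep_sample_size(prior1, prior2, target=0.80, start=2):
    """Smallest even N whose CEP reaches the target power (the paper's N*)."""
    N = start
    while cep(N, prior1, prior2) < target:
        N += 2
    return N
```

For the example priors of the “Results” Section, π _{1}∼Beta(6.62,14.11) and π _{2}∼Beta(14.11,6.62), `cep(48, (6.62, 14.11), (14.11, 6.62))` lands near the reported CEP of 67.8%, and `cep_sample_size((6.62, 14.11), (14.11, 6.62), start=48)` lands near the reported N ^{∗}=80; the coarse grid shifts the numbers slightly.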
If the prior distributions put all their mass at a single positive point, essentially becoming a traditional point alternative hypothesis, EP and CEP reduce to the traditional formulation of power. However, for prior distributions where P(π _{1}>π _{2})>0, CEP will be greater than EP, with CEP approaching 1 and EP approaching P(π _{2}>π _{1}) as N→∞.
When there is no doubt of a beneficial effect (i.e., P(π _{2}>π _{1})=1), CEP equals EP.
Previous work in this area almost exclusively uses expected power P(D _{1}) to account for uncertainty in study design parameters [8, 9, 15–24], and finds the sample size that gives the desired level of P(D _{1}). Our preference for using CEP as opposed to EP to inform the design of a study is twofold. First, EP gives the predictive probability of a positive conclusion, regardless of the truth of the alternative hypothesis. CEP, however, is conceptually analogous to traditional power in that it is conditional on the truth of the benefit of the experimental treatment, which provides a more familiar framework for setting the desired level of CEP for a study. Second, if P(π _{1}>π _{2})>0, then EP will not approach 1 as the sample size goes to infinity because \({\lim }_{N\to \infty } P(D_{1})=1-P(\pi _{1}>\pi _{2})\). CEP, however, is conditioned on π _{2}>π _{1}, so it approaches 1 as the sample size increases since \({\lim }_{N\to \infty } P(D_{1} \mid \pi _{2} > \pi _{1}) = \frac {1-P(\pi _{1}>\pi _{2})}{P(\pi _{2}>\pi _{1})}=1\). Thus, CEP is also more mathematically analogous to traditional power in that the probability of correctly reaching a positive conclusion is assured as the sample size goes to infinity.
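This limiting behavior can be checked numerically. In the hypothetical sketch below (our construction, not from the paper), both arms receive a symmetric Beta(2,2) prior, so P(π _{2}>π _{1})=0.5 exactly; Monte Carlo averaging of the upper-tail power at a very large sample size shows EP plateauing near 0.5 while CEP, the average restricted to draws with π _{2}>π _{1}, approaches 1.

```python
import math
import random
from statistics import NormalDist

ND = NormalDist()
Z_CRIT = ND.inv_cdf(0.975)  # two-sided alpha = 0.05

def upper_tail_power(p1, p2, n):
    """Probability of a positive conclusion D1 (upper-tail rejection) at per-group size n."""
    p_bar = (p1 + p2) / 2.0
    s0 = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    s1 = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return ND.cdf(((p2 - p1) - Z_CRIT * s0) / s1)

random.seed(1)
# Symmetric Beta(2, 2) priors on both arms, so P(pi2 > pi1) = 0.5.
draws = [(random.betavariate(2, 2), random.betavariate(2, 2)) for _ in range(20000)]

n = 500_000  # a very large per-group sample size, to approach the limit
powers = [upper_tail_power(p1, p2, n) for (p1, p2) in draws]
ep = sum(powers) / len(powers)  # EP plateaus near P(pi2 > pi1) = 0.5
sup = [pw for (p1, p2), pw in zip(draws, powers) if p2 > p1]
cep = sum(sup) / len(sup)       # CEP keeps climbing toward 1
```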
Prior distributions
The prior distributions p(π _{1}) and p(π _{2}) reflect the current knowledge about the response rate in each treatment group before the trial is conducted. In the design phase of a clinical trial, a review of the literature is often performed. This collection of prior evidence forms a natural foundation for specifying the prior distributions. Historical data are commonly pooled using traditional meta-analysis techniques to calculate an overall point estimate [27, 28]; however, a Bayesian random-effects meta-analysis [29–31] may be more appropriate when the goal is to hypothesize a prior distribution. The priors can also incorporate the pre-trial consensus of experts in the field [9] or Phase II trial data [22]. Translating and combining prior evidence and opinions to form a prior distribution is often regarded as the most challenging part of using a Bayesian framework [7], and several works [32–35] describe techniques for eliciting a prior distribution.
A beta distribution, which is defined on the interval [0,1], can be used to describe initial beliefs about the parameters π _{1} and π _{2}. If π _{ j }∼Beta(a,b), then

$$p(\pi_{j}) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \pi_{j}^{a-1} (1-\pi_{j})^{b-1},$$
where shape parameters a>0 and b>0. The mean, variance, and mode of the prior distribution are given by μ=a/(a+b), τ ^{2}=ab/((a+b)^{2}(a+b+1)), and m=(a−1)/(a+b−2) for a,b>1, respectively. For fixed μ, larger values of a and b decrease τ ^{2}. One may choose the shape parameters a and b by fixing the mean and variance of the distribution at the values μ and τ ^{2}, which yields a=μ ^{2}(1−μ)/τ ^{2}−μ and b=a(1−μ)/μ. For skewed distributions, one may wish to describe central tendency using the mode m rather than the mean. Under a traditional design, the difference in modes, m _{2}−m _{1}, is a natural estimate for the hypothesized difference in proportions. When fixing m and τ ^{2}, the corresponding value of b may be found by solving the general cubic equation Ab ^{3}+Bb ^{2}+Cb+D=0, with coefficients
$$A=\tau^{2}, \qquad B=\tau^{2}(4-7m) - m(1-m)^{2}, \qquad C=(1-2m)\left[\tau^{2}(5-8m) - (1-m)^{2}\right], \qquad D=\tau^{2}(1-2m)^{2}(2-3m).$$
The corresponding value of a is given by \(a=\frac {2m-mb-1}{m-1}\). (Table 2 in the Appendix reports the values of a and b for given m and τ ^{2}.) Notice that for a given variance τ ^{2}, the value of a when the mode is m equals the value of b when the mode is 1−m. Thus, when m=0.5, a=b.
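Both parameterizations can be sketched numerically (our code, with our function names). The mean/variance case is closed-form; for the mode/variance case we bisect on b, using the fact that for a fixed mode the variance shrinks as b grows, rather than solving the cubic directly.

```python
def shape_from_mean_var(mu, tau2):
    """Beta shape parameters (a, b) with mean mu and variance tau2."""
    a = mu ** 2 * (1 - mu) / tau2 - mu
    return a, a * (1 - mu) / mu

def shape_from_mode_var(m, tau2, lo=1.000001, hi=1e6):
    """Beta shape parameters (a, b) with mode m and variance tau2 (requires a, b > 1).
    For a fixed mode, the variance decreases as b grows, so bisect on b."""
    def variance(b):
        a = (2 * m - m * b - 1) / (m - 1)  # mode relation solved for a
        s = a + b
        return a * b / (s * s * (s + 1))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if variance(mid) > tau2:
            lo = mid  # variance still too large: need a bigger b
        else:
            hi = mid
    b = (lo + hi) / 2.0
    return (2 * m - m * b - 1) / (m - 1), b
```

For mode 0.3 and variance 0.01, `shape_from_mode_var` recovers (a, b) ≈ (6.62, 14.11), the prior used in the example scenario of the “Results” Section; swapping the mode to 0.7 mirrors the parameters, as noted above.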
A uniform prior distribution may also be assumed for π _{ j } with limits within the interval [0,1]. The uniform prior has lower bound a and upper bound b, or π _{ j }∼U(a,b), and is constant over the range [a,b]. The prior is centered at μ=(a+b)/2 with variance τ ^{2}=(b−a)^{2}/12. The noninformative prior distribution that assumes no values of π _{ j } are more probable than any others is U(0,1)≡Beta(1,1). One may also restrict the range of the uniform distribution to focus on smaller ranges for π _{1} and π _{2}. Rather than setting the lower and upper bounds of the uniform, one may set the mean 0<μ<1 and variance \(\tau ^{2} < \frac {\min (\mu ^{2}, (1-\mu)^{2})}{3}\) of the prior distribution, which gives lower bound \(a = \mu - \sqrt {3 \, \tau ^{2}} \) and upper bound \(b = \mu + \sqrt {3 \, \tau ^{2}}\). Again, under a traditional design, the difference in means μ _{2}−μ _{1} is a natural estimate for the hypothesized difference in proportions when presented with uniform prior evidence. (Table 3 in the Appendix reports the values of a and b for given μ and τ ^{2}.) Notice that restrictions on the variance exist for certain means in order to maintain bounds within [0,1].
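The mean/variance parameterization of the uniform prior is a two-line computation; the sketch below (function name ours) also enforces the feasibility restriction on τ ^{2} mentioned above.

```python
import math

def uniform_bounds(mu, tau2):
    """Endpoints of the U(a, b) prior with mean mu and variance tau2.
    Feasible when tau2 <= min(mu**2, (1 - mu)**2) / 3, which keeps [a, b] inside [0, 1]."""
    h = math.sqrt(3 * tau2)  # half-width of the interval
    a, b = mu - h, mu + h
    if a < 0 or b > 1:
        raise ValueError("variance too large for this mean")
    return a, b
```

At the boundary case μ=0.5 and τ ^{2}=1/12 this recovers the noninformative U(0,1) prior.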
Results
The procedures described in the “Methods” Section were applied to a set of notional scenarios to compare traditionally designed studies to those designed using CEP. The integration step of Pseudocode 1 was approximated using Riemann sums with step size 0.0001.
An example scenario assumed beta-distributed priors for π _{1} and π _{2}, such that π _{1}∼Beta(6.62,14.11) and π _{2}∼Beta(14.11,6.62). For this scenario, a traditionally designed study would select a sample size of N=48 based on (2) to achieve 80% power and a two-sided type I error of 5%, with hypothesized values of π _{1}=mode(Beta(6.62,14.11))=0.3 and π _{2}=mode(Beta(14.11,6.62))=0.7. However, based on the assumed prior distributions, a study with a sample size of 48 could achieve less than 80% power when π _{1}≠0.3 or π _{2}≠0.7. In fact, based on (5), the study with sample size N=48 would give CEP=67.8%. Figure 2 a displays the joint distribution of π _{1} and π _{2}, conditional on π _{2}>π _{1}, and highlights the region where power would be less than 80% under a traditional design when the sample size is N=48. For this scenario, the study with sample size N=48 would achieve power less than the target value in more than 56% of instances when π _{2}>π _{1}.
For the same scenario, a CEP-designed study would select a sample size of N ^{∗}=80 based on Pseudocode 1 to achieve 80% CEP with a two-sided type I error of 5%. Figure 2 b displays the joint distribution of π _{1} and π _{2}, conditional on π _{2}>π _{1}, and highlights the region where power would be less than 80% under a CEP design when the sample size is N ^{∗}=80. For this scenario, the study with sample size N ^{∗}=80 would achieve power less than the target value in approximately 33% of instances when π _{2}>π _{1}. Note that the intersection of the two regions corresponds to values of π _{1} and π _{2} that give power from (1) equal to 80% with sample size N=80.
The probability of achieving power at least equal to the target value, conditional on the experimental treatment’s superiority (π _{2}>π _{1}), is here termed the performance of the design. While CEP provides a point estimate of power under the treatment superiority assumption, performance indicates how robust the design is. The performance of the design is given by

$$\text{Performance} = \frac{1}{P(\pi_{2}>\pi_{1})} \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=\pi_{1}}^{1} I(\pi_{1},\pi_{2})\, p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}, \qquad (6)$$

where

$$I(\pi_{1},\pi_{2}) = \begin{cases} 1 & \text{if } P(D_{1} \mid \pi_{2},\pi_{1}) \geq 1-\beta, \\ 0 & \text{otherwise.} \end{cases}$$
Thus, the traditionally designed study from the example scenario produced a performance of (100−56)%=44%, while the CEP design, which explicitly accounts for uncertainty, produced a more robust performance of (100−33)%=67%. However, this increase in performance required an increase in sample size from N=48 to N ^{∗}=80. The increase in performance divided by the increase in sample size is here termed the marginal benefit for the scenario due to CEP. The marginal benefit for the example scenario due to CEP is given by (67−44)%/(80−48)=0.71%. If there is no uncertainty in the design parameters, then there would be no marginal benefit due to CEP, since the probability of achieving less than the target power would be assumed 0 for a traditionally designed study and the CEP-designed study would give N ^{∗}=N. On the other hand, if the uncertainty in the design parameters is very large, the marginal benefit may approach 0, since the CEP-designed study could give N ^{∗}≫N with limited increase in performance. This is important to consider, since a very small marginal benefit could make it impractical to achieve a desired value for CEP or a desired threshold of performance.
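The performance comparison for the example scenario can be reproduced with a short grid approximation (our sketch; names ours): the indicator integral follows the definition of performance above, and the marginal benefit is the performance gain per added subject.

```python
import math
from statistics import NormalDist

ND = NormalDist()
Z_CRIT = ND.inv_cdf(0.975)  # two-sided alpha = 0.05

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x."""
    if not 0.0 < x < 1.0:
        return 0.0
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def power(p1, p2, N):
    """Traditional power (1) for total sample size N = 2n."""
    n = N / 2.0
    p_bar = (p1 + p2) / 2.0
    s0 = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    s1 = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return ND.cdf(((p2 - p1) - Z_CRIT * s0) / s1)

def performance(N, prior1, prior2, target=0.80, step=0.005):
    """Conditional prior mass on pi2 > pi1 where power meets the target."""
    hit = mass = 0.0
    grid = [(i + 0.5) * step for i in range(int(round(1 / step)))]
    for p1 in grid:
        w1 = beta_pdf(p1, *prior1)
        if w1 == 0.0:
            continue
        for p2 in grid:
            if p2 <= p1:
                continue
            w = w1 * beta_pdf(p2, *prior2)
            mass += w
            if power(p1, p2, N) >= target:
                hit += w
    return hit / mass

PRIORS = ((6.62, 14.11), (14.11, 6.62))      # example scenario priors
perf_traditional = performance(48, *PRIORS)  # paper reports about 44%
perf_cep = performance(80, *PRIORS)          # paper reports about 67%
marginal_benefit = (perf_cep - perf_traditional) / (80 - 48)
```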
Since the performance and marginal benefit result from the prior distributions of π _{1} and π _{2}, several notional scenarios were evaluated to explore the relationship between prior distributions, CEP, and performance. Tables 4, 5 and 6 in the Appendix display the results of several scenarios that assumed beta-distributed priors for π _{1} and π _{2}. The mode and variance of p(π _{ j }), j=1,2, are denoted m _{ j } and \(\tau ^{2}_{j}\), respectively. The procedure for generating the results from Table 4 in the Appendix, for which \(\tau ^{2}_{1}=\tau ^{2}_{2}\), is given below:

1. The modes, m _{1} and m _{2}, and common variance \(\tau ^{2}_{1}=\tau ^{2}_{2}\) were used to hypothesize a beta prior distribution for π _{1} and π _{2}, respectively.

2. For each pair of prior distributions (p(π _{1}), p(π _{2})) considered:

   (a) Traditional sample size is found using (2) by setting the hypothesized values of π _{1} and π _{2} equal to the mode of each prior, m _{1} and m _{2}, respectively. Two-sided type I error α=0.05 and traditional power 1−β=0.80 are assumed. Traditional sample size is denoted \(\hat {N}\). If \(\hat {N}\) is odd, the sample size is increased by 1 to provide equal sample sizes in both groups.

   (b) The CEP of the traditional design is found using (5), with \(N=\hat {N}\), two-sided α=0.05, and 1−β=0.80.

   (c) The performance of the traditional design is found using (6), with \(N=\hat {N}\), two-sided α=0.05, and 1−β=0.80.

   (d) The smallest sample size for which CEP evaluates to ≥1−β is found using Pseudocode 1 and is denoted N ^{∗}. If N ^{∗} is odd, the sample size is increased by 1 to provide equal sample sizes in both groups.

   (e) The probability of a positive treatment effect, P(π _{2}>π _{1}), is found using (4) with Riemann sum integral approximations.

   (f) The conditional expected difference, E(π _{2}−π _{1} | π _{2}>π _{1}), is found using Riemann sum integral approximations of

   $$E(\pi_{2}-\pi_{1} \mid \pi_{2}>\pi_{1}) = \frac{1}{P(\pi_{2}>\pi_{1})} \int_{\pi_{1}=0}^{1} \int_{\pi_{2}=\pi_{1}}^{1} (\pi_{2}-\pi_{1})\, p(\pi_{1})\, p(\pi_{2})\, d\pi_{2}\, d\pi_{1}.$$

   (g) The performance of the CEP design is found using (6), with N=N ^{∗}, two-sided α=0.05, and 1−β=0.80.

   (h) The marginal benefit due to CEP for the scenario is found by dividing the difference between the CEP design performance and the traditional design performance by the difference between the CEP sample size and the traditional sample size, \(N^{*}-\hat {N}\).
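Steps (e) and (f) of the procedure can be sketched with plain Riemann sums (our code; names ours). With U(0,1) priors on both arms, the approximation recovers the exact values P(π _{2}>π _{1})=1/2 and E(π _{2}−π _{1} | π _{2}>π _{1})=1/3, the limit quoted for uniform priors in the text.

```python
def riemann_conditional(pdf1, pdf2, step=0.002):
    """Riemann-sum approximations of P(pi2 > pi1), Eq. (4), and of the
    conditional expected difference E(pi2 - pi1 | pi2 > pi1)."""
    grid = [(i + 0.5) * step for i in range(int(round(1 / step)))]
    mass = num = 0.0
    for p1 in grid:
        w1 = pdf1(p1) * step
        for p2 in grid:
            if p2 > p1:  # restrict to the superiority region
                w = w1 * pdf2(p2) * step
                mass += w
                num += w * (p2 - p1)
    return mass, num / mass

# U(0,1) priors on both arms: the exact values are 1/2 and 1/3.
p_superior, cond_diff = riemann_conditional(lambda x: 1.0, lambda x: 1.0)
```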
Table 4 in the Appendix shows that when m _{2}−m _{1}>1/3, the performance of the traditional design decreases as \(\tau _{1}^{2}=\tau _{2}^{2}\) increases. This is explained by the fact that the conditional expected difference is less than the hypothesized difference that was used in the traditional design sample size calculation. This occurs for m _{2}−m _{1}>1/3 since both prior distributions approach U(0,1) as \(\tau _{1}^{2}=\tau _{2}^{2}\) increases, and E(π _{2}−π _{1} | π _{2}>π _{1})=1/3 for π _{1},π _{2}∼U(0,1). Thus, when m _{2}−m _{1}<1/3, the performance of the traditional design increases as \(\tau _{1}^{2}=\tau _{2}^{2}\) increases, since the hypothesized difference is less than the limit of the conditional expected difference. When m _{2}−m _{1} is smaller than E(π _{2}−π _{1} | π _{2}>π _{1}), CEP will be high for a traditional design with hypothesized difference m _{2}−m _{1}, since it is designed to detect a difference smaller than the expected difference.
The procedure was also applied to scenarios where \(\tau _{1}^{2} = 0.001\) and \(\tau _{2}^{2} > 0.001\) (Table 5 in the Appendix) and scenarios where \(\tau _{1}^{2} = 0.08\) and \(\tau _{2}^{2} < 0.08\) (Table 6 in the Appendix), corresponding to small and large uncertainty, respectively, in the proportion experiencing the outcome in the control group. Table 5 in the Appendix shows that the performance of the traditional design is similar to the performance seen in Table 4 in the Appendix. However, when \(\tau _{1}^{2}\) is fixed at 0.001, E(π _{2}−π _{1} | π _{2}>π _{1}) begins near m _{2}−m _{1} and approaches (1−m _{1})/2 as \(\tau _{2}^{2}\) increases because p(π _{2} | π _{2}>π _{1}) is approaching U(m _{1},1). Thus, when m _{2}−m _{1}>(1−m _{1})/2, the performance of the traditional design decreases as \(\tau _{2}^{2}\) increases, and when m _{2}−m _{1}<(1−m _{1})/2, the performance of the traditional design increases as \(\tau _{2}^{2}\) increases.
When \(\tau _{1}^{2}\) is fixed at 0.08, E(π _{2}−π _{1} | π _{2}>π _{1}) approaches 1/3 from m _{2}/2. If E(π _{2}−π _{1} | π _{2}>π _{1}) increases towards 1/3 as \(\tau _{2}^{2}\) increases, then the performance of the traditional design will increase. If E(π _{2}−π _{1} | π _{2}>π _{1}) decreases towards 1/3 as \(\tau _{2}^{2}\) increases, then the performance of the traditional design will decrease. If m _{2}/2>1/3, then the performance of the traditional design will decrease as \(\tau _{2}^{2}\) increases. This happens because, as \(\tau _{2}^{2}\) increases, the conditional expected difference decreases from m _{2}/2 towards 1/3. The behavior of the traditional design is summarized in Table 1.
Excursions with uniform priors were performed. Table 7 in the Appendix shows that the performance of a traditional design under a uniform prior is similar to the performance observed in Table 4 in the Appendix. However, fewer trends are visible because the parameters of the uniform distribution are more restricted than the parameters of the beta distribution.
As expected, the performance of the CEP design changes minimally as \(\tau _{1}^{2}=\tau _{2}^{2}\) increases, since N ^{∗} is chosen to explicitly account for changes in \(\tau _{1}^{2}=\tau _{2}^{2}\). Note that N ^{∗} is directly tied to E(π _{2}−π _{1} | π _{2}>π _{1}): N ^{∗} increases as the conditional expected difference decreases, and N ^{∗} decreases as the conditional expected difference increases. This occurs because increasing the variability can increase the conditional expected difference if the resulting conditional priors give more relative weight to larger differences and less relative weight to smaller differences compared to the unconditional priors. This is more likely to occur when m _{1} is large, since increasing the variability when m _{1} is large will make smaller values of π _{1} more likely due to the condition that π _{2}>π _{1}. Similarly, when m _{2} is small, larger values of π _{2} are more likely under the assumption that π _{2}>π _{1}.
The marginal benefit due to CEP is greatest for small values of \(\tau _{1}^{2}=\tau _{2}^{2}\). This is because the relative difference between \(\hat {N}\) and N ^{∗} is smallest when the uncertainty is low (i.e., when the traditional assumptions closely approximate the CEP assumptions). However, the marginal benefit due to CEP decreases minimally or remains constant as the uncertainty increases because the difference in performance is always less than 1, while the difference in sample size, \(N^{*}-\hat {N}\), can be greater than 200 in some cases. Furthermore, as \(\tau _{1}^{2}=\tau _{2}^{2}\) increases, the performance of the traditional design can improve even though \(\hat {N}\) remains constant, while N ^{∗} may have to increase to maintain the performance of the CEP design.
When \(\tau _{1}^{2}\) is fixed at 0.001, the performance of the CEP design remains stable at approximately 0.7. However, the marginal benefit is greater with fixed, low uncertainty in π _{1} compared with the changing uncertainty in Table 4 in the Appendix. The sample size required to achieve CEP of 1−β with fixed \(\tau _{1}^{2}\) is reduced compared to scenarios with changing \(\tau _{1}^{2}\). This is because uncertainty in the control group is small, which indicates that reducing the uncertainty in the control parameter can increase the benefit of CEP to the study.
When \(\tau _{1}^{2}\) is fixed at 0.08, the performance of the CEP design remains stable at approximately 0.71. However, the marginal benefit is very small because N ^{∗} is always greater than that in Table 4 or Table 5 in the Appendix due to the larger uncertainty in π _{1}. Again, this demonstrates that it is beneficial to minimize the uncertainty in π _{1} to increase the marginal benefit.
Note that for small differences in m _{2}−m _{1} and any large variance, the CEP design can reduce the sample size from the value determined from a traditional design. The reason is that increased uncertainty under the treatment superiority assumption increases the likelihood of differences greater than m _{2}−m _{1}.
Discussion
Many underpowered clinical trials are conducted with limited justification for the chosen study parameters used to determine the required sample size [36, 37] with scientific, economic, and ethical implications [36, 38]. While sample size calculations based on traditional power assume no uncertainty in the study parameters, the hybrid classical and Bayesian procedure presented here formally accounts for the uncertainty in the study parameters by incorporating the prior distributions for π _{1} and π _{2} into the calculation of conditional expected power (CEP). This method allows available evidence on both the magnitude and the variability surrounding the parameters to play a formal role in determining study power and sample size.
In this paper, we explored several notional scenarios to compare the performance of the CEP design to that of a design based on traditional power. We show that if there is uncertainty in the study parameters and a distribution of plausible values for π _{1} and π _{2}, the performance of the CEP design is more consistent and robust than that of traditional designs based on point estimates for the study parameters. Traditional sample size calculations based on point estimates for the hypothesized study parameters tend to underestimate the required sample size needed to account for the uncertainty in the parameters.
The scenarios demonstrate that reducing uncertainty in the control parameter π _{1} can lead to greater benefit from the CEP-designed study, because the relative difference between \(\hat {N}\) and N ^{∗} is smallest when uncertainty is low. Therefore, it is worthwhile to use historical information to reduce the variability in the control group proportion rather than focusing only on the prior for the experimental treatment group. Nonetheless, when there is significant overlap between the prior distributions and a small hypothesized difference m _{2}−m _{1}, traditional study designs can be overpowered under the treatment superiority assumption compared to the CEP design, and the CEP design would result in a smaller sample size. This happens because increased uncertainty under the treatment superiority assumption increases the relative likelihood of differences greater than m _{2}−m _{1}.
In the scenarios we evaluated, the performance of the traditional design was highly dependent on the prior distributions but exhibited predictable behavior. The CEP design, however, consistently generated performance near 70% across all scenarios. This indicates that power greater than the target 1−β would not be uncommon for a CEP design. This raises the question of whether 1−β is an appropriate target for CEP, since it could apparently lead to overpowered studies. To avoid this issue, one may use a lower target for CEP or instead design the study using a target value of performance and use our iterative N ^{∗} search to find the design that achieves acceptable performance.
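To make the procedure concrete, the CEP calculation and the iterative N ^{∗} search can be sketched in a few lines of Python. This is an illustrative Monte Carlo implementation, not the authors' code: the function names are hypothetical, the priors are assumed to be beta distributions, and the standard normal-approximation power formula for two proportions stands in for whichever power function a given trial would use.

```python
import numpy as np
from scipy.stats import norm

def classical_power(p1, p2, n, alpha=0.05):
    """Normal-approximation power of the two-sided two-sample test of
    proportions with n subjects per arm (pooled variance under H0)."""
    z = norm.ppf(1 - alpha / 2)
    pbar = (p1 + p2) / 2
    num = np.abs(p2 - p1) * np.sqrt(n) - z * np.sqrt(2 * pbar * (1 - pbar))
    den = np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return norm.cdf(num / den)

def cep(n, a1, b1, a2, b2, alpha=0.05, draws=100_000, seed=1):
    """Conditional expected power: average the power curve over beta
    priors on (pi_1, pi_2), conditional on pi_2 > pi_1."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(a1, b1, draws)
    p2 = rng.beta(a2, b2, draws)
    keep = p2 > p1                      # treatment superiority assumption
    return classical_power(p1[keep], p2[keep], n, alpha).mean()

def n_star(target, a1, b1, a2, b2, alpha=0.05, draws=100_000, seed=1):
    """Smallest per-arm n whose CEP reaches the target (e.g., 1 - beta).
    CEP is increasing in n, so a bisection search suffices."""
    lo, hi = 2, 100_000
    while lo < hi:
        mid = (lo + hi) // 2
        if cep(mid, a1, b1, a2, b2, alpha, draws, seed) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Because the seed is fixed, every call to `cep` reuses the same prior draws, which keeps the bisection well defined; in practice one would draw once and pass the conditioned samples into the search.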
Additionally, when comparing the method based on CEP to similar methods based on expected power, the sample size from a CEP design will always be less than or equal to the sample size required to achieve equivalent EP. While pure Bayesian methods of sample size determination that compute prior effective sample size to count the information contained in the prior towards the current study will generally yield a smaller sample size than traditional frequentist methods [10], the method presented here does not assume that prior information will be incorporated into the final analysis.
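The EP-versus-CEP relationship can be checked numerically. The self-contained snippet below, using the same illustrative normal-approximation power formula and made-up beta priors, shows that conditioning on π _{2}>π _{1} discards the null- and reversed-effect draws, so CEP exceeds EP at any fixed n; this is why the sample size achieving a given CEP never exceeds the one achieving the same EP.

```python
import numpy as np
from scipy.stats import norm

def power(p1, p2, n, alpha=0.05):
    # Normal-approximation power for a two-sided test of two proportions.
    z = norm.ppf(1 - alpha / 2)
    pbar = (p1 + p2) / 2
    num = np.abs(p2 - p1) * np.sqrt(n) - z * np.sqrt(2 * pbar * (1 - pbar))
    return norm.cdf(num / np.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))

rng = np.random.default_rng(0)
p1 = rng.beta(12, 28, 200_000)      # control prior, mean 0.30
p2 = rng.beta(18, 22, 200_000)      # treatment prior, mean 0.45
pw = power(p1, p2, 100)             # power at n = 100 per arm
ep = pw.mean()                      # expected power: unconditional average
cep = pw[p2 > p1].mean()            # CEP: average given pi_2 > pi_1
```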
Conclusions
The hybrid classical and Bayesian procedure presented here integrates available prior information about the study design parameters into the calculation of study sample size for a binary endpoint. This method allows prior information on both the magnitude and uncertainty surrounding the parameters π _{1} and π _{2} to inform the design of the current study through the use of conditional expected power. When there is a distribution of plausible study parameters, the design based on conditional expected power tends to outperform the traditional design. Note that if the determined sample size N ^{∗} is greater than what can be feasibly recruited in the proposed trial, this may indicate excessive uncertainty about the study parameters and should encourage serious discussion concerning the advisability of the study. Thus, we do not recommend that N ^{∗} be blindly used as the final study sample size, but we hope that this method encourages a careful synthesis of the prior information and motivates thoughtful discussion about the feasibility of the study in order to reduce the number of poorly designed, underpowered studies that are conducted.
Appendix
Table 2 presents the values of the shape parameters [a, b] of the beta distribution corresponding to given values of the mean m and variance τ ^{2}. Table 3 reports the values of the minimum and maximum parameters [a, b] of the uniform distribution corresponding to given values of the mean μ and variance τ ^{2}.
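The parameter values in these tables can be recovered by moment matching. The sketch below uses the standard method-of-moments formulas (our own illustration, not code from the paper) to map a mean and variance to beta shape parameters or uniform endpoints:

```python
def beta_params(m, tau2):
    """Beta(a, b) with mean m and variance tau2; requires tau2 < m(1 - m)."""
    k = m * (1 - m) / tau2 - 1       # k = a + b
    return m * k, (1 - m) * k

def uniform_params(mu, tau2):
    """Uniform(a, b) with mean mu and variance tau2 = (b - a)^2 / 12."""
    h = (3 * tau2) ** 0.5            # half-width of the interval
    return mu - h, mu + h
```

For instance, `beta_params(0.3, 0.01)` gives (6.0, 14.0): a Beta(6, 14) prior has mean 0.30 and variance 0.01.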
Abbreviations
CEP: Conditional expected power
EP: Expected power
References
Lachin JM. Introduction to sample size determination and power analysis for clinical trials. Control Clin Trials. 1981; 2(2):93–113.
Fleiss J. Statistical methods for rates and proportions. New York: John Wiley & Sons; 1973.
Donner A. Approaches to sample size estimation in the design of clinical trials: a review. Stat Med. 1984; 3(3):199–214.
Halpern SD. Adding nails to the coffin of underpowered trials. J Rheumatol. 2005; 32(11):2065.
Ciarleglio MM, Arendt CD, Makuch RW, Peduzzi PN. Selection of the treatment effect for sample size determination in a superiority clinical trial using a hybrid classical and Bayesian procedure. Contemp Clin Trials. 2015; 41:160–71.
Ciarleglio MM, Arendt CD, Peduzzi PN. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure. Clin Trials. 2016; 13(3):275–85.
Spiegelhalter DJ, Abrams KR, Myles JP. Bayesian approaches to clinical trials and healthcare evaluation. 1st ed. Chichester: Wiley; 2004.
Brown BW, Herson J, Neely Atkinson E, Elizabeth Rozell M. Projection from previous studies: a Bayesian and frequentist compromise. Control Clin Trials. 1987; 8(1):29–44.
Spiegelhalter DJ, Freedman LS. A predictive approach to selecting the size of a clinical trial, based on subjective clinical opinion. Stat Med. 1986; 5(1):1–13.
Joseph L, Belisle P. Bayesian sample size determination for normal means and differences between normal means. J R Stat Soc Series B Stat Methodol. 1997; 46(2):209–26.
Lee SJ, Zelen M. Clinical trials and sample size considerations: another perspective. Stat Sci. 2000; 15(2):95–110.
Inoue LY, Berry DA, Parmigiani G. Relationship between Bayesian and frequentist sample size determination. Am Stat. 2005; 59(1):79–87.
Lenth RV. Some practical guidelines for effective sample size determination. Am Stat. 2001; 55(3):187–93.
Wittes J. Sample size calculations for randomized controlled trials. Epidemiol Rev. 2002; 24(1):39–53.
Moussa MA. Exact, conditional, and predictive power in planning clinical trials. Control Clin Trials. 1989; 10(4):378–85.
Spiegelhalter DJ, Freedman LS, Parmar MK. Applying Bayesian ideas in drug development and clinical trials. Stat Med. 1993; 12(15–16):1501–11.
Spiegelhalter DJ, Freedman LS, Parmar MK. Bayesian approaches to randomized trials. J R Stat Soc Ser A Stat Soc. 1994; 157(3):357–416.
Gillett R. An average power criterion for sample size estimation. Statistician. 1994; 43(3):389–94.
Shih JH. Sample size calculation for complex clinical trials with survival endpoints. Control Clin Trials. 1995; 16(6):395–407.
O’Hagan A, Stevens JW, Campbell MJ. Assurance in clinical trial design. Pharm Stat. 2005; 4(3):187–201.
Chuang-Stein C. Sample size and the probability of a successful trial. Pharm Stat. 2006; 5(4):305–9.
Lan KG, Wittes JT. Some thoughts on sample size: a Bayesian-frequentist hybrid approach. Clin Trials. 2012; 9(5):561–9.
Wang Y, Fu H, Kulkarni P, Kaiser C. Evaluating and utilizing probability of study success in clinical development. Clin Trials. 2013; 10:407–13.
Rufibach K, Burger HU, Abt M. Bayesian predictive power: choice of prior and some recommendations for its use as probability of success in drug development. Pharm Stat. 2016; 15(5):438–46.
Sillitto G. Note on approximations to the power function of the 2 × 2 comparative trial. Biometrika. 1949; 36(3/4):347–52.
Cochran WG, Cox GM. Experimental designs. New York: John Wiley & Sons; 1957.
DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986; 7(3):177–88.
Thompson SG, Sharp SJ. Explaining heterogeneity in meta-analysis: a comparison of methods. Stat Med. 1999; 18(20):2693–708.
Smith TC, Spiegelhalter DJ, Thomas A. Bayesian approaches to random-effects meta-analysis: a comparative study. Stat Med. 1995; 14(24):2685–99.
Neuenschwander B, CapkunNiggli G, Branson M, Spiegelhalter DJ. Summarizing historical information on controls in clinical trials. Clin Trials. 2010; 7(1):5–18.
Schmidli H, Gsteiger S, Roychoudhury S, O’Hagan A, Spiegelhalter D, Neuenschwander B. Robust meta-analytic-predictive priors in clinical trials with historical control information. Biometrics. 2014; 70(4):1023–32.
Freedman L, Spiegelhalter D. The assessment of the subjective opinion and its use in relation to stopping rules for clinical trials. Statistician. 1983; 32:153–60.
Chaloner K, Church T, Louis TA, Matts JP. Graphical elicitation of a prior distribution for a clinical trial. Statistician. 1993; 42:341–53.
Chaloner K. The elicitation of prior distributions In: Berry DA, Stangl DK, editors. Case studies in Bayesian biostatistics. New York: Dekker: 1996. p. 141–56.
Chaloner K, Rhame FS. Quantifying and documenting prior beliefs in clinical trials. Stat Med. 2001; 20(4):581–600.
Halpern SD, Karlawish JHT, Berlin JA. The continuing unethical conduct of underpowered clinical trials. JAMA. 2002; 288(3):358–62.
Aberegg SK, Richards DR, O’Brien JM. Delta inflation: a bias in the design of randomized controlled trials in critical care medicine. Crit Care. 2010; 14:R77.
Freedman B. Scientific value and validity as ethical requirements for research: a proposed explication. IRB Rev Hum Subjects Res. 1987; 9(6):7–10.
Acknowledgements
The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government.
Authors’ contributions
MMC developed the concept and performed the literature search, simulations, data analysis, interpretation of results, and manuscript writing. CDA performed simulations and data analysis, interpretation of results, and manuscript writing. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Ciarleglio, M.M., Arendt, C.D. Sample size determination for a binary response in a superiority clinical trial using a hybrid classical and Bayesian procedure. Trials 18, 83 (2017). https://doi.org/10.1186/s13063-017-1791-0
DOI: https://doi.org/10.1186/s13063-017-1791-0
Keywords
 Sample size
 Clinical trial
 Proportions
 Binary endpoint
 Conditional expected power
 Hybrid classical-Bayesian