Commentary on ‘accelerating clinical development of HIV vaccine strategies: methodological challenges and considerations in constructing an optimized multi-arm phase I/II trial design’
© Dixon; licensee BioMed Central Ltd. 2014
Received: 12 February 2014
Accepted: 3 March 2014
Published: 13 March 2014
This well-written manuscript has led me to reflect on the present state of the art of designing early-phase trials, not necessarily limited to trials of investigational vaccines. Standard trial designs are inadequate for this trial, as the authors demonstrate through their careful literature search and review. In fact, more and more research plans seem to need some departure from standard designs. Perhaps the traditional paradigm of choosing a design that (a) has been ‘credentialed’ by publication in a peer-reviewed methodology journal, and (b) matches the investigators’ actual research objectives as closely as possible, even if not precisely, is obsolete.
Richert et al. illustrate a new paradigm, which may well be their real contribution. They summarize what is already known about the various effects of the candidate vaccine strategies. They carefully state what new knowledge they seek. They describe the proposed trial with all its specifications and assumptions, including those needed for them to study the design’s statistical properties. They describe the simulation study they performed in enough detail that others could reproduce it, and they tabulate the results. In fact, not only could others undertake to reproduce their results; it is also clear how to proceed to study other specifications and assumptions.
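The kind of simulation study described above can be sketched in a few lines. The following is a minimal illustration, not the authors’ actual simulation: it estimates, by Monte Carlo, the probability that a Simon-style two-stage design (of the type introduced in the cited Simon 1989 paper) declares a treatment promising under an assumed true response rate. All design parameters (`n1`, `r1`, `n`, `r`) are arbitrary illustrative values, not taken from any published design.

```python
import random

def simulate_two_stage(p, n1=10, r1=1, n=29, r=5, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a Simon-style
    two-stage design declares the treatment promising, given true
    response rate p. Design parameters are illustrative only."""
    rng = random.Random(seed)
    go = 0
    for _ in range(trials):
        # Stage 1: enroll n1 patients; stop for futility if too few respond.
        stage1 = sum(rng.random() < p for _ in range(n1))
        if stage1 <= r1:
            continue
        # Stage 2: enroll the remaining n - n1 patients.
        total = stage1 + sum(rng.random() < p for _ in range(n - n1))
        if total > r:
            go += 1
    return go / trials
```

Running this under the null response rate and under a hoped-for alternative rate gives simulated type I error and power, the kind of operating characteristics a reviewer could then examine, and vary, for any proposed design.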
What are the implications of following the new paradigm rather than the old one? Two come to mind readily. With regard to peer review of the clinical trial, evaluation of the design under the old paradigm would very often end with the observation that the proposed study employs a well-established plan as published by Gehan, Simon, or Thall and Cheng (and so on). Under the new paradigm, that would almost never suffice, and competent, serious review by a statistical scientist would be needed. I note that, in the U.S. at least, Institutional Review Boards (IRBs) (ethics review) must address the scientific validity of each project, but many IRBs lack statistical expertise. The situation may be better in the context of reviewing funding applications, although peer reviewers rarely see full protocols in final form.
Another implication is a dramatic decline in articles on the experimental design of trials in the statistical and trial-methodology literature. Each new trial would follow the paradigm, but the particulars would be essentially unique. This is a less worrisome consequence, since professional statisticians can presumably find other ways to qualify for career advancement.
I received no support for this work.
- Gehan EA: The determination of the number of patients required in a preliminary and a follow-up trial of a new chemotherapeutic agent. J Chronic Dis. 1961, 13: 346-353. 10.1016/0021-9681(61)90060-1.
- Simon R: Optimal two-stage designs for phase II clinical trials. Control Clin Trials. 1989, 10: 1-10. 10.1016/0197-2456(89)90015-9.
- Thall PF, Cheng SC: Optimal two-stage designs for clinical trials based on safety and efficacy. Stat Med. 2001, 20: 1023-1032. 10.1002/sim.717.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.