Our data suggest that, in registered HIV and HCV trials, regardless of publication status, randomization is not used significantly differently by industry and is not a clear manifestation of industry bias. The goal of mandated public trial registration is to reduce bias by allowing increased critical evaluation of trial design and by encouraging full outcome reporting. Ours is one of only a few recent analyses to examine trends in registered trials to assess whether registration may be meeting this goal, a line of inquiry facilitated by the CTTI-initiated transformation of ClinicalTrials.gov into a research dataset. While some authors suggest that industry trials may manipulate study methodology to report a desired outcome or achieve other goals, others suggest that industry trials may actually meet the same or higher standards of methodological quality because of closer scrutiny [5, 11, 24, 25].
Randomized clinical trials are the ‘gold standard’ for clinical research, but bias can be introduced even within randomized trials, and this should be further scrutinized when comparing ID trials with different sponsors. Previous authors have evaluated the use of blinding, selection of sample size, choice of comparator, analytic methods, partial versus full reporting of outcomes, and agreement between results and conclusions in industry-sponsored trials, providing some evidence of industry bias [6, 14, 26–28]. Lathyris et al., for example, found that companies are more likely to choose only their own products as comparators rather than conducting more medically appropriate head-to-head trials with drugs from different companies, and other studies describe pharmaceutical companies choosing placebo or a suboptimal agent when an effective comparator exists.
Randomized HIV and HCV trials may also not be the best option in situations where the number of patients under study is too small and other designs, such as crossover designs, are more appropriate. Randomization is furthermore not essential in situations of clinical ‘equipoise’, when the optimal standard of care is unclear. Study design is also only one step in the clinical trial inception and implementation cascade, and there are multiple other points at which bias can be introduced. We chose to focus on randomization as a methodologic choice, but prior studies have shown that sponsor bias may influence other downstream steps in the process [5, 6, 10, 12–14]. The final action most susceptible to sponsor bias is physicians’ use of evidence-based medicine after study publication, which has been the major target of the Patient Protection and Affordable Care Act’s Physician Payment Sunshine provision.
The major limitations of our study were that all trial data were self-reported by study sponsors, rather than verified by an objective third party, and that unregistered trials were not included, as in other analyses using the ClinicalTrials.gov database. The registry also provides no means of assessing the strength of the randomization process in any study. We furthermore measured industry involvement in trials by sponsorship, rather than by industry funding or author affiliation with industry, since the latter are not directly reported in ClinicalTrials.gov. Because many non-industry-sponsored trials may receive industry funding at some level and therefore be susceptible to some degree of industry bias, it is possible that we underestimated the association between industry involvement and use of randomization.
Our study also focused on trials registered with ClinicalTrials.gov, one of several registries where trials can be reported, which also include the International Standard Randomized Control Trial Number Register (http://isrctn.org), the World Health Organization (WHO) International Clinical Trials Registry (http://www.who.int/trialsearch/), and the corporate trial registries and databases maintained by drug manufacturers. ClinicalTrials.gov is the largest US-based registry, however, and our findings may therefore be generalizable to all registered HIV and HCV interventional trials.
Given that trials cannot be registered without completion of all mandatory data elements and are required to conform to relevant national health regulations, we had few missing data. For the one variable with a substantial percentage of missing data, utilization of a data monitoring committee (DMC), our analysis showed that this field was more likely to go unreported in industry-sponsored trials. This could be explained by industry trials making other arrangements for safety monitoring besides a DMC, such as use of a contract ethics review board.
We expected to see a stronger association between industry sponsorship and decreased use of randomization in phase 4 HIV and HCV trials (compared with phase 2 and 3 trials), since these are conducted after FDA drug approval and therefore generally receive less governmental oversight. Phase 4 trials registered on ClinicalTrials.gov have also previously been shown to report less use of blinding and randomization overall. Effect measure modification was not significant in our analysis, however. This could be because post-approval HIV/AIDS and HCV trials sponsored by industry are more likely than trials in other disease conditions to collaborate with academic institutions and consortia, or are under more public scrutiny overall.