Data sources
Data were obtained from three sources: Drugs@FDA [36], ClinicalTrials.gov, and PubMed’s listing of MEDLINE-indexed journals. Drugs@FDA is a public database maintained by the FDA, providing access to regulatory actions and documents issued for each drug approved by the agency. ClinicalTrials.gov is a public clinical trial registry database maintained by the National Library of Medicine at the NIH [25]. PubMed’s list of MEDLINE-indexed journals includes more than 5500 biomedical journals.
Novel therapeutics approved for treating neurological and psychiatric disorders, 2005–2014
The Center for Drug Evaluation and Research, which is part of the FDA, provides annual reports summarizing all NDAs approved in each year [36, 37]. We downloaded the reports from 2005 to 2014 when available and otherwise searched Drugs@FDA directly, identifying the NDAs approved to treat neurologic and psychiatric disorders. Our study sample began with drugs approved in 2005 to align with our prior work [38] and because an earlier seminal study on the topic examined all antidepressants approved through 2004 [7]; we excluded drugs approved after December 2014 to ensure that at least 24 months had passed between each drug’s approval date and March 2017, when we concluded the final search for registration records, reported results, and publications. For each NDA, we recorded its indication, orphan status, priority review status, accelerated approval status, sponsor, and approval date.
Efficacy trials supporting FDA new neuropsychiatric drug approval
As described in a comprehensive tutorial on using Drugs@FDA [39], we downloaded the relevant FDA files for each NDA from Drugs@FDA, including the approval letters, summary reviews, clinical reviews, and statistical reviews. Within these files, we searched for clinical trials evaluating the efficacy of the drugs under review. We included only trials whose results the FDA discussed and characterized, based on the assumption that these trials influenced the FDA’s decision to approve the study drug for the proposed indication. We excluded ongoing trials, phase I/safety-only trials, expanded access trials, terminated and withdrawn trials without enrollment, and trials evaluating indications different from that for which the drug was originally approved. We also excluded failed trials. For each included trial, we recorded the following characteristics: pivotal status, phase, sponsor, study sites, trial length, randomization, blinding, type of control, description of the treatments, investigational drug arms, enrollment, and primary efficacy endpoints. A pivotal study is defined by the FDA as “a definitive study in which evidence is gathered to support the safety and effectiveness evaluation of the medical product for its intended use” [40]. Pivotal status was frequently assigned prospectively by the FDA, occasionally assigned retrospectively by the FDA, and at times not assigned at all, in which case it was determined using a previously described method [38].
Determination of FDAAA status
The FDAAA, as enacted in 2007, clarified that its new requirements would apply to trials initiated after September 27, 2007, as well as to trials initiated earlier but still ongoing as of December 26, 2007. Accordingly, FDAAA-applicable trials were categorized as post-FDAAA, while trials initiated on or before September 27, 2007, and completed before December 26, 2007, were categorized as pre-FDAAA.
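As an illustration only, the cut-off logic above can be expressed as a short sketch; the function, field names, and example dates below are hypothetical and were not part of the study’s data collection.

```python
from datetime import date

# Cut-off dates described in the text.
FDAAA_START_CUTOFF = date(2007, 9, 27)    # trials initiated after this date are covered
FDAAA_ONGOING_CUTOFF = date(2007, 12, 26)  # trials still ongoing on this date are covered


def fdaaa_status(start_date: date, completion_date: date) -> str:
    """Categorize a trial as pre- or post-FDAAA based on the statutory cut-offs."""
    if start_date > FDAAA_START_CUTOFF:
        return "post-FDAAA"
    if completion_date >= FDAAA_ONGOING_CUTOFF:  # initiated earlier but still ongoing
        return "post-FDAAA"
    return "pre-FDAAA"


# Example: a trial started in 2006 but completed in 2008 was still ongoing on
# December 26, 2007, so it is categorized as post-FDAAA.
print(fdaaa_status(date(2006, 5, 1), date(2008, 3, 15)))  # post-FDAAA
```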
Determination of registration and results reporting status on ClinicalTrials.gov
To determine whether trials were registered and reported results on ClinicalTrials.gov, one investigator (CXZ) performed the initial search using the following terms and their combinations: generic or brand names of the study drugs, drug indications, trial IDs, trial acronyms, numbers of participants randomized, comparators, and study time frames. For trials that could not be matched with any registration record, a second investigator (JEB) independently performed a second round of searches; no new records were identified.
Determination of publication status
To determine whether trials were published, we searched PubMed for full-length publications using the same terms as for the registration records. Among identified publications, abstracts and conference reports were excluded. Publications reporting multiple trials, such as reviews and meta-analyses, were also excluded unless the results of each trial were analyzed and discussed individually at the level of detail one would expect from a full-length publication. When the search terms returned too many similar entries in PubMed, we used Google Scholar to narrow the results. Google Scholar has the advantage of searching the full text of publications hosted across a variety of online databases and platforms, whereas for many journals, especially those requiring paid access, PubMed searches only titles and abstracts. We provide more detail on our search strategy in Additional file 1.
Interpretation of trial results: publication vs FDA
Trials were classified as positive, negative, or equivocal based on the FDA’s interpretation of the results, as described in Additional file 1. The classification was based on whether the primary outcome(s) achieved statistical significance, taking into consideration the summary statements made by the FDA medical reviewers regarding whether the findings supported the efficacy claim of the study drug. Published trial results were categorized similarly, based on whether the primary outcomes achieved statistical significance according to the authors’ analysis, taking into consideration the authors’ conclusions in the abstract. Trials with equivocal or negative results were grouped together as non-positive trials for purposes of calculating publication bias.
Validating the published interpretations
We validated the study investigators’ interpretation of the trial results in each publication against the interpretation made by the FDA medical reviewers in the FDA approval package, which served as the gold standard. Conclusions in both the abstract and the main text of each publication were validated. The two were considered in agreement if both interpretations fell into the same category (positive, negative, or equivocal) and no major contradictions existed between the two statements. As an example of a contradiction between the two sources, the published interpretation of trial 02 of milnacipran (Savella) concluded that “both doses (100 and 200 mg/d) were associated with significant improvements in pain and other symptoms” [41]. This was considered different from the statement made by the FDA in the summary review documents, which stated that “[the] analysis of the ‘pain only’ responders does not indicate that there is a significant effect of MLN (Savella) on pain….(treatment effect) was driven by the patient global response outcome rather than the pain or function outcome…when studied in isolation, statistically significant treatment effects for pain and function were not demonstrated” [42]. Given the interpretive nature of this comparison, two additional investigators (JEB and JSR) reviewed all instances of disagreement between the FDA’s and the publication’s interpretations.
Calculating the degree of publication bias
We calculated and compared two measures of publication bias between pre- and post-FDAAA trials. First, we estimated the relative risk of publication for positive vs non-positive trials in each period. Second, we estimated the relative risk of publication without misleading interpretations for positive vs non-positive trials in each period. For each measure, publication bias was then quantified as the ratio of relative risks (RRR), pre-FDAAA vs post-FDAAA.
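As a minimal sketch of this calculation, the relative risk of publication within each period and the resulting RRR can be computed as follows; the counts below are placeholders for illustration only, not study data.

```python
def relative_risk(published_pos: int, total_pos: int,
                  published_nonpos: int, total_nonpos: int) -> float:
    """Relative risk of publication for positive vs non-positive trials."""
    return (published_pos / total_pos) / (published_nonpos / total_nonpos)


# Hypothetical counts of published trials over total trials, by result and period.
rr_pre = relative_risk(published_pos=20, total_pos=22,
                       published_nonpos=5, total_nonpos=10)
rr_post = relative_risk(published_pos=18, total_pos=20,
                        published_nonpos=9, total_nonpos=11)

# Publication bias expressed as the ratio of relative risks, pre- vs post-FDAAA.
rrr = rr_pre / rr_post
print(f"RR pre = {rr_pre:.2f}, RR post = {rr_post:.2f}, RRR = {rrr:.2f}")
```

An RRR greater than 1 would indicate stronger selective publication of positive trials in the pre-FDAAA period than in the post-FDAAA period; the second measure applies the same calculation after restricting to publications without misleading interpretations.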
Data validation
Registration status, results reporting status, publication status, and publication-FDA interpretation agreement were validated as described above. For quality control and data validation, a second investigator (JEB) re-collected all data elements obtained for a random 10% sample of the included new drug approvals, using an online randomization tool [43] to randomly select 4 of the 37 drugs. Among the 676 unique data elements collected by the two investigators, the rate of agreement was 99.6%, and disagreements were resolved through consensus.
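For illustration, the sampling and agreement-rate calculation could be reproduced as in the sketch below; the drug identifiers and disagreement count are placeholders (the study used an online randomization tool [43], not this code).

```python
import random

# Placeholder identifiers for the 37 included new drug approvals.
drugs = [f"drug_{i:02d}" for i in range(1, 38)]

# Randomly select 4 of the 37 drugs (~10%) for independent re-collection.
validation_sample = random.sample(drugs, 4)

# Suppose 676 unique data elements were compared and a handful disagreed
# (3 is a placeholder chosen only so the rate rounds to 99.6%).
total_elements, disagreements = 676, 3
agreement_rate = (total_elements - disagreements) / total_elements
print(validation_sample, f"agreement = {agreement_rate:.1%}")
```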
Data analysis
We used descriptive statistics to characterize the proportions of trials that were registered and reported results on ClinicalTrials.gov. We used two-tailed Fisher exact tests to compare these proportions between pre- and post-FDAAA trials. Analyses were performed using the Epi Info Companion App for iOS, version 3.1.1 (Centers for Disease Control and Prevention [CDC]; Atlanta, GA) [44], as well as MedCalc online statistical software [45], supplemented by an online program written by Hutchon [46] to calculate the RRRs for both measures of publication bias.
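The analyses themselves were run in the software named above; as an equivalent open-source illustration of the two-tailed Fisher exact test (not the tool used in the study, and with placeholder counts rather than study data), one could write:

```python
from scipy.stats import fisher_exact

# 2x2 table of hypothetical counts:
#              registered   not registered
table = [
    [10, 8],   # pre-FDAAA trials
    [25, 2],   # post-FDAAA trials
]

# Two-tailed Fisher exact test comparing registration proportions between periods.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```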