Consistency of trial reporting between ClinicalTrials.gov and corresponding publications: one decade after FDAAA

Abstract

The FDA Amendments Act (FDAAA) required that information for certain clinical trials, such as details about study design features and endpoints, as well as results, be publicly reported in ClinicalTrials.gov. We conducted a cross-sectional analysis of phase III trials with primary results published between January 1, 2016, and June 30, 2017, in high-impact journals and found that 74% contained at least one discrepancy between the results reported in ClinicalTrials.gov and the corresponding publication. Our findings underscore the need to monitor the consistency of clinical trial information and results across sources; a checklist may give investigators and editors a systematic procedure for verifying accurate reporting.

Background/aims

The 2007 Food and Drug Administration Amendments Act (FDAAA) [1] required that information for clinical trials, such as details about study sponsors, design features, sample eligibility criteria, and study endpoints, as well as study results, be publicly reported on ClinicalTrials.gov, a clinical trial registry managed by the National Library of Medicine. Previous studies identified inconsistencies between trial information and the results reported in ClinicalTrials.gov and their corresponding publications [2, 3]. A decade after these initial studies, we sought to characterize the consistency of information and result reporting between ClinicalTrials.gov and corresponding publications for trials published in high-impact journals.

Methods

Using PubMed, we identified all phase III clinical trials with primary results published between January 1, 2016, and June 30, 2017, in journals with a 2016 Journal Impact Factor of 10 or greater (Supplementary Table 1), linked to an NCTID, and with results reported in ClinicalTrials.gov (Supplementary Figure; list provided in Supplementary Table 2). For all trials, we compared the information and the results reported in the publication and in ClinicalTrials.gov for the following study features: cohort characteristics (completion rate, age, sex, race/ethnicity), intervention details, primary efficacy endpoints, and serious adverse events. These four study features were examined because they were (1) objectively comparable between the two sources and (2), in our estimation, the most important when weighing the design, significance, and interpretation of a trial.
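For readers who wish to reproduce this kind of cross-source comparison, registry records can be retrieved programmatically. Below is a minimal sketch, assuming the current ClinicalTrials.gov v2 REST API (which postdates this study and is not the workflow the authors describe); the NCT ID shown is hypothetical.

```python
import json
import urllib.request

def fetch_registry_record(nct_id: str) -> dict:
    """Fetch the full ClinicalTrials.gov record for one trial via the v2 REST API."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Hypothetical NCT ID, for illustration only.
record = fetch_registry_record("NCT00000000")

# When results have been posted, participant flow, outcome measures, and
# adverse events appear under the resultsSection key of the returned JSON.
results = record.get("resultsSection", {})
```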

For cohort characteristics, information was deemed concordant if the types of properties reported and the values for each were the same between sources. For intervention details, information was deemed concordant when the dosage, time course, and frequency of the intervention matched. For primary efficacy endpoints, information was deemed concordant when the measure description and the reported results matched. For serious adverse events, information was deemed concordant when the event description and reported results matched. Study features (cohort characteristics, trial intervention details, primary efficacy endpoints, serious adverse events) could not be compared between the two sources when they were reported in different formats: for example, when the age distribution was reported as the number of participants in certain age ranges (18–30, 31–45, ...) in one source but as a mean age in the other, or when adverse events were stratified as serious vs. non-serious in one source but reported in aggregate in the other. We conducted a cross-sectional analysis, characterizing the rate of reporting and the consistency of the information and results reported for all study features between the two sources using descriptive statistics; all analyses were performed using Excel (version 16.24) and RStudio (version 1.1.447).
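As an illustration of the comparison logic described above (not the authors' actual procedure), the following sketch classifies a single study feature as concordant, discordant, or not comparable, assuming each source's value has already been abstracted into a shared format label and value:

```python
from typing import Optional

def classify_feature(registry: Optional[dict], publication: Optional[dict]) -> str:
    """Three-way classification of one study feature between the two sources.

    Each value is a small dict such as {"format": "mean_age", "value": 54.2};
    mismatched formats (e.g., mean age vs. age ranges) block comparison.
    """
    if registry is None or publication is None:
        return "not reported in both sources"
    if registry["format"] != publication["format"]:
        return "could not be compared"
    if registry["value"] == publication["value"]:
        return "concordant"
    return "discordant"

# Example: mean age reported in both sources but with different values.
print(classify_feature({"format": "mean_age", "value": 54.2},
                       {"format": "mean_age", "value": 55.1}))  # -> discordant
```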

Results

There were 94 phase III clinical trials published in high-impact journals that had results reported on ClinicalTrials.gov; 89 (95%) were funded by industry, 4 (4%) were funded by government institutions, and 1 (1%) was funded by other academic/nonprofit institutions. Trials were most commonly published in The New England Journal of Medicine (n = 28; 30%), Annals of the Rheumatic Diseases (n = 14; 15%), and Lancet Oncology (n = 14; 15%).

Among the 94 trials, the cohort characteristics of completion rate, age, and sex were reported in both sources for 95 to 100% of trials, whereas race/ethnicity was reported in both sources for 35 (37%) trials (Table 1). For trials where completion rate, age, or sex was reported in both sources, these characteristics could not be compared or were discordant for 24–26% of trials. For trials where race/ethnicity was reported in both sources, the race/ethnicity distribution could not be compared or was discordant for 12 (34%). Intervention details were reported in both sources for 94 (100%) trials but could not be compared or were discordant for 11 (12%).

Table 1 Concordance of result information between ClinicalTrials.gov and corresponding publications for phase III trials published in high-impact journals between January 1, 2016, and June 30, 2017

Primary efficacy endpoints were reported by both sources for 94 (100%) trials, of which 4 (4%) could not be compared and 20 (21%) were discordant. Among these 20 discordant studies, 1 reported a different endpoint between the two sources and 19 reported different results between the two sources for the same endpoint. Serious adverse events were reported by both sources for 93 (99%) trials, of which 28 (30%) could not be compared and 24 (26%) were discordant. Serious adverse event results most often could not be compared because the two sources reported their results in different formats or stratifications; for example, the publication might report serious adverse events broken down by type while ClinicalTrials.gov reported an aggregate number, or vice versa. Overall, excluding race/ethnicity, 74% of trials had at least one discrepancy in information and result reporting (discordant or could not be compared) between ClinicalTrials.gov and the corresponding publication across all study features.

Conclusion

A decade after the FDAAA, 74% of phase III trials published in high-impact journals contained some discrepancy between the information and results reported in ClinicalTrials.gov and those in the corresponding publication, marking only a slight decrease from studies of a decade prior [2, 3]. Such high rates of discordance suggest that the challenges of providing clear and consistent trial information and results across public sources still need to be addressed. Most concerning were the inconsistencies observed in the reporting of primary efficacy endpoint results and serious adverse events. Although the magnitude of these discrepancies was often small, such as minor differences in cohort characteristics (e.g., in reported mean age), they raise questions as to which source is correct. Potential explanations include that the publication may have reported on a differently defined cohort than the original trial, or that the trial was published before additional study observations accrued or after statistical analyses were refined, without ClinicalTrials.gov being subsequently updated. Although inconsistencies between registered and published primary outcomes have recently been observed among a broad sample of clinical trials [4], our findings are particularly concerning because we focused on phase III trials published in high-impact journals, the trials that likely have the greatest influence on clinical care and are used in clinical practice guidelines.

Our study was limited to an 18-month sample of phase III trials that were registered and reported results in ClinicalTrials.gov and were published in high-impact journals. Thus, our study likely examined the trials following best practices with respect to result reporting, making our estimates of reporting discrepancies conservative: investigators and sponsors who report results to ClinicalTrials.gov, in addition to publishing their study in the highest impact journals, are more likely to adhere to best practices than those who fail to report results to ClinicalTrials.gov at all. Nevertheless, these findings underscore the need to monitor the concordance of clinical trial information and results across these sources. We propose a three-pronged approach to harmonized result reporting: a checklist for investigators to use to ensure congruent reporting before submission to a journal, an acknowledgement in the submitted manuscript of any differences the investigators recognize, and a post-submission check by the journal editors. Sponsors and investigators face several challenges to accurate and consistent result reporting, including high research staff turnover, a lack of staff dedicated to monitoring result reporting at many academic institutions and smaller companies, and poor knowledge of FDA and NIH reporting requirements. A checklist, similar to those applied in surgical settings, may provide a systematic procedure for investigators to monitor accurate reporting to ClinicalTrials.gov throughout the trial process. Additionally, investigators who recognize differences between the results in their manuscript and those in ClinicalTrials.gov should explain these in the study publication. Finally, journal editors, upon receiving a submission, should request that the trial sponsors provide a link to the corresponding ClinicalTrials.gov entry and an itemized confirmation of consistency for the most important trial features (such as the four examined in this study).
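The article proposes such a checklist without specifying its contents; a hypothetical sketch of what a pre-submission concordance checklist covering the four study features might look like (item wording is ours, not the authors'):

```python
# Hypothetical pre-submission checklist covering the four study features
# examined in this analysis; the item wording is illustrative only.
CHECKLIST = [
    "Cohort characteristics (completion rate, age, sex, race/ethnicity) match ClinicalTrials.gov",
    "Intervention dosage, time course, and frequency match ClinicalTrials.gov",
    "Primary efficacy endpoint definition and results match ClinicalTrials.gov",
    "Serious adverse event descriptions and counts match ClinicalTrials.gov",
    "Any remaining differences are acknowledged in the manuscript",
]

def print_checklist(confirmed: set) -> None:
    """Print each item with a checked or open box."""
    for i, item in enumerate(CHECKLIST):
        mark = "[x]" if i in confirmed else "[ ]"
        print(f"{mark} {item}")

print_checklist(confirmed={0, 1, 2})  # two items still outstanding
```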

Availability of data and materials

The datasets generated and/or analyzed during the current study were derived from PubMed and the ClinicalTrials.gov repository, https://clinicaltrials.gov/. The list of journals is provided in Supplementary Table 1, and the list of articles is provided in Supplementary Table 2.

Abbreviations

FDAAA:

The Food and Drug Administration Amendments Act

References

  1. Food and Drug Administration Amendments Act of 2007, Public Law No. 110–85, § 801 (2007).

  2. Becker JE, Krumholz HM, Ben-Josef G, Ross JS. Reporting of results in ClinicalTrials.gov and high-impact journals. JAMA. 2014;311(10):1063–5. https://doi.org/10.1001/jama.2013.285634.

  3. Hartung DM, Zarin DA, Guise J, McDonagh M, Paynter R, Helfand M. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med. 2014;160:477–83. https://doi.org/10.7326/M13-0480.

  4. Chen T, Li C, Qin R, et al. Comparison of clinical trial changes in primary outcome and reported intervention effect size between trial registration and publication. JAMA Netw Open. 2019;2(7):e197242. https://doi.org/10.1001/jamanetworkopen.2019.7242.


Acknowledgements

Not applicable

Funding

This project was not supported by any external grants or funds. Mr. Talebi was supported by the Harvard Global Health Institute Cordeiro Fellowship, which had no part in the study design or execution.

Author information

Contributions

RT—conceptualization, methodology, investigation, data curation, formal analysis, visualization, writing of the original draft, review, editing, and funding acquisition. RR—conceptualization, supervision, validation, writing, review, and editing. JSR—conceptualization, supervision, validation, writing, review, and editing. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Joseph S. Ross.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

In the past 36 months, JSR received research support through Yale University from Medtronic, Inc. and the Food and Drug Administration (FDA) to develop methods for postmarket surveillance of medical devices (U01FD004585), from the Centers for Medicare and Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting (HHSM-500-2013-13018I), and from the Blue Cross Blue Shield Association to better understand medical technology evaluation. JSR currently receives research support through Yale University from Johnson and Johnson to develop methods of clinical trial data sharing, from the Food and Drug Administration to establish the Yale-Mayo Clinic Center for Excellence in Regulatory Science and Innovation (CERSI) program (U01FD005938), from the Medical Device Innovation Consortium as part of the National Evaluation System for Health Technology (NEST), from the Agency for Healthcare Research and Quality (R01HS022882), from the National Heart, Lung, and Blood Institute of the National Institutes of Health (NIH) (R01HS025164), and from the Laura and John Arnold Foundation to establish the Good Pharma Scorecard at Bioethics International and to establish the Collaboration for Research Integrity and Transparency (CRIT) at Yale.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Table S1.

Alphabetized list of journals with a 2016 Journal Impact Factor of 10 or greater. Table S2. List of articles reporting the primary results of Phase III clinical trials published between January 1st, 2016 and June 30th, 2017 in journals with a 2016 Journal Impact Factor of 10 or greater, linked to an NCTID, and with results reported in ClinicalTrials.gov. Figure S1. Sample Cohort Flow Diagram.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Talebi, R., Redberg, R.F. & Ross, J.S. Consistency of trial reporting between ClinicalTrials.gov and corresponding publications: one decade after FDAAA. Trials 21, 675 (2020). https://doi.org/10.1186/s13063-020-04603-9
