- Research
- Open access
From protocol to published report: a study of consistency in the reporting of academic drug trials
Trials volume 17, Article number: 100 (2016)
Abstract
Background
Unacknowledged inconsistencies in the reporting of clinical trials undermine the validity of the results of the trials. Little is known about inconsistency in the reporting of academic clinical drug trials. Therefore, we investigated the prevalence of consistency between protocols and published reports of academic clinical drug trials.
Methods
A comparison was made between study protocols and their corresponding published reports. We assessed the overall consistency, which was defined as the absence of discrepancy regarding study type (categorized as either exploratory or confirmatory), primary objective, primary endpoint, and – for confirmatory trials only – hypothesis and sample size calculation. We used logistic regression, χ2, and Fisher’s exact test.
Results
A total of 282 applications for academic clinical drug trials were submitted to the Danish Health and Medicines Authority in 1999, 2001, and 2003, 95 of which fulfilled the eligibility criteria and had at least one corresponding published report reporting data on trial subjects. Overall consistency was observed in 39 % of the trials (95 % CI: 29 to 49 %). Randomized controlled trials (RCTs) constituted 72 % (95 % CI: 63 to 81 %) of the sample, and 87 % (95 % CI: 80 to 94 %) of the trials were hospital based.
Conclusions
Overall consistency between protocols and their corresponding published reports was low. Motivators for the inconsistencies are unknown but do not seem restricted to economic incentives.
Background
The obligation to make results of clinical trials available to the public is stated in the Declaration of Helsinki [1]. Furthermore, the validity of trial conclusions depends on the use of stringent scientific methods as well as transparency in the reporting of the results.
Unacknowledged discrepancies between protocols and their corresponding published reports may undermine the validity of the scientific effort [2], produce unfounded conclusions, and lead to the unnecessary repetition of trials with identical hypotheses and loss of generated knowledge [3]. Particularly in the case of large, long-lasting clinical trials that are unlikely to be repeated, such inconsistencies may jeopardize the risk-benefit ratio of the investigational drug. At the regulatory level, changes made during the conduct of a trial may invalidate the risk-benefit assessment that led to the initial approval of the trial.
A recent Cochrane review [4] found that published reports of randomized controlled clinical trials (RCTs) frequently differ from their corresponding protocols or trial registry data, for example, in the primary outcomes [5–11] and sample size calculations [7, 8, 12, 13], for interventions such as surgery, cosmetics, drugs, and healthcare counseling. Discrepancies have also been found in the reporting of drug trials [10, 13–18]. Previous results mainly reflect the reporting of commercial trials (range: 61 % to 100 % of study samples [7, 12, 14, 17, 19, 20]) or highly selected cohorts of publicly funded phase III oncology trials and HIV RCTs [13, 15, 21]. However, all reviews indicated similar problems. Discrepancies have also been found in published reports of government-funded RCTs from various clinical specialties [8]. Whether these problems extend to non-commercial drug trials, here called academic drug trials, is unknown. These trials are unrelated to drug companies or similar economic influences and are conducted in an array of clinical specialties. We therefore investigated the prevalence of consistency between protocols and corresponding published reports of Danish academic clinical drug trials.
Methods
The study sample consisted of all approved academic clinical drug trial applications submitted to the Danish Health and Medicines Authority in 1999, 2001, and 2003, a cohort that has been described previously [22]. Trials were classified as academic when the data as well as the publication rights were the property of publicly employed researchers and no pharmaceutical company name appeared on the first page of the protocol. Trials whose sponsor resided outside Denmark were excluded, whereas 39 previously examined trials were included [23].
Screening
For each trial, the corresponding published reports were identified from May to September 2009 by a systematic PubMed search. The follow-up time was at least 5 years. The search terms were based on selected data from electronic files at the Danish Health and Medicines Authority: sponsor’s name, protocol title, investigational medicinal products, and a brief description of the study, if available. A published report was defined as any article reporting data on the trial subjects. PhD theses, conference abstracts, reviews, and published reports not reporting data on trial subjects were excluded, as these are not indexed, are not sufficiently detailed, or do not contain trial results.
The names of the submitting sponsors were extracted from all included trials, and contact information was updated by searching Google or a registry of Danish physicians. Sponsors were contacted by e-mail or letter to confirm or correct the identified corresponding published report(s) or the lack thereof. Two reminders were sent in case of no response.
Data collection
Data were extracted from the protocols, including correspondence and amendments, and from the corresponding published reports. Pre-specified definitions of consistency and discrepancy of the composite variables were developed and tested. Data were extracted by LB, and uncertainties were discussed with LGP and TC. A continuous decision log ensured reproducibility.
To avoid confusion, main outcomes denote those of our study, whereas primary endpoints denote those of the trials in the study sample. The main outcome was overall consistency between protocols and their corresponding published reports, defined a priori as consistency on all of the following variables: study type (exploratory/confirmatory), primary objective, and primary endpoint, and – for pairs of confirmatory protocols and corresponding confirmatory published reports – also consistency in the hypothesis and the sample size calculation. We also calculated the number of discrepancies per trial and the prevalence of discrepancy regarding each of the component variables.
If a published report showed discrepancy on a given variable but provided transparency, by either clearly stating the deviation from the protocol or referencing a previous published report that describes the study in accordance with the protocol, the variable was considered consistent.
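To make the composite definition concrete, the following is a minimal sketch of how overall consistency could be evaluated for a single protocol/published report pair. It assumes each component judgment is already available as a boolean that incorporates the transparency rule above; the function and variable names are illustrative and not part of the study's actual (manual) assessment procedure.

```python
# Illustrative sketch only: the study used manual assessment, not code.
# Each *_ok flag is assumed to already apply the transparency rule described above.
def overall_consistency(study_type_ok: bool,
                        primary_objective_ok: bool,
                        primary_endpoint_ok: bool,
                        protocol_confirmatory: bool,
                        report_confirmatory: bool,
                        hypothesis_ok: bool = False,
                        sample_size_ok: bool = False) -> bool:
    # Consistency on study type, primary objective, and primary endpoint is
    # required for every protocol/published report pair.
    core = study_type_ok and primary_objective_ok and primary_endpoint_ok
    # For pairs in which both the protocol and the published report are
    # confirmatory, the hypothesis and the sample size calculation must also
    # be consistent.
    if protocol_confirmatory and report_confirmatory:
        return core and hypothesis_ok and sample_size_ok
    return core
```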
The variables were defined as indicated below.
Discrepancy in the study type
We defined a confirmatory protocol/published report as one describing a study that tests a pre-specified hypothesis supported by a formal sample size calculation. Studies with a primary confirmatory analysis and secondary exploratory analyses were considered confirmatory. All other studies were considered exploratory. A published report was categorized as discrepant if its study type differed from the study type derived from the protocol.
Discrepancy in the primary objective
The primary objective was defined as an objective explicitly defined as such. If there was no explicitly defined primary objective, the objective related to the primary endpoint was considered primary. In the special case of protocols consisting of more than one explicitly defined primary objective, consistency was determined as follows: 1) A published report stating the same or some of the protocol-specified primary objectives was considered consistent with the protocol. 2) A published report stating a non-protocol-specified primary objective was considered discrepant regardless of the consistency of other primary objectives. A published report only reporting secondary objectives and not stating the protocol-specified primary objective was considered consistent with the protocol only if a published report reporting or stating the primary objective was referenced (that is, providing transparency in the published report).
Discrepancy in the primary endpoint
The primary endpoint(s) was (were) defined as the one or two endpoints that were explicitly defined as primary. If no primary endpoint was explicitly defined, the endpoint used in the sample size calculation was considered as primary. If more than two endpoints were explicitly defined as primary, the protocol was considered to have no primary endpoints. In case of within-protocol or within-published report inconsistency, only the primary endpoint(s) substantiated in the body text was considered as primary. If one of two published report-specified primary endpoints differed from the primary endpoint(s) specified in the protocol, the published report was considered discrepant.
Pairs of confirmatory protocols and confirmatory published reports were also reviewed regarding discrepancy in the hypothesis and sample size calculation.
Discrepancy in the hypothesis
Hypotheses from the protocol and published report were compared. In the absence of an explicitly defined hypothesis, we formulated a hypothesis based on the sample size calculation as well as the rationale of the study (for example, “A better than B”). In case of a within-protocol inconsistency, the formulated hypothesis was based on the sample size calculation. For example, a protocol with a research question suited for an equivalence or noninferiority trial, but statistically designed to demonstrate superiority, was considered a superiority trial.
Discrepancy in the sample size calculation
The sample size calculation was considered discrepant if either the calculated sample size or any of the available components from the calculation differed between the protocol and the published report. It was also considered as a discrepancy if a sample size calculation was stated in the protocol but missing from the published report. The achieved sample size was not taken into account.
Data analysis and statistics
The sample size calculation was based on expected frequencies of overall consistency of 40 % or 62 % of the trials. A sample size of 100 trials was chosen because the inclusion of at least 92 trials would yield a standard error of the proportion (SEP) multiplied by zα/2 of less than 0.1 (that is, a 95 % confidence interval half-width below 0.1). Data were registered in a Microsoft Access database with audit trail and analyzed in SAS 9.2 using χ2 tests, Fisher’s exact tests, and logistic regression. P values < 0.05 were considered statistically significant. Kappa values were analyzed with GraphPad QuickCalcs (http://graphpad.com/quickcalcs). Multivariate logistic regression was planned but not conducted because only a few of the pre-specified variables for the regression showed an association with overall consistency in 2 × 2 tables. We conducted post hoc logistic regression analyses that accounted for the association between published reports of the same protocol. This was done using a repeated measures statement, with published reports as the unit of analysis.
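As an illustration of the precision reasoning behind the target of roughly 100 trials (not a reproduction of the authors' exact computation), the sketch below assumes the criterion refers to the half-width of a Wald-type 95 % confidence interval, zα/2 × SEP, evaluated at the two expected frequencies stated above and at the minimum of 92 analyzable trials.

```python
from math import sqrt

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a Wald-type 95 % CI: z times the standard error of the proportion."""
    sep = sqrt(p * (1 - p) / n)
    return z * sep

# Expected frequencies of overall consistency used in the planning (40 % and 62 %),
# evaluated at the minimum of 92 analyzable trials.
for p in (0.40, 0.62):
    print(f"p = {p:.2f}, n = 92: half-width = {ci_half_width(p, 92):.3f}")
# Prints roughly 0.100 and 0.099, i.e. a precision of about ±10 percentage points.
```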
Intra-rater agreement during data collection was determined from the test-retest of five protocols and 16 corresponding published reports assessed within an interval of 6 months. The variables were assumed independent of each other. Study types, primary objectives, and primary endpoints were extracted from 21 documents (five protocols and 16 published reports). Hypotheses and sample sizes were extracted from 10 documents (four protocols and six published reports). Overall, 77 of the 83 data points showed perfect agreement. The six disagreements were distributed as follows: primary endpoint (2/21), hypothesis (1/10), primary objective (1/21), and sample size calculation (2/10). No disagreements were found regarding trial type (exploratory/confirmatory).
Results
A total of 282 applications for academic drug trials were submitted to the Danish Health and Medicines Authority in the period, 117 of which had at least one corresponding published report and were included for assessment (Fig. 1). During data collection, 22 trials were excluded for the following reasons: the investigator was not a resident of Denmark (n = 10), the published report did not correspond to the protocol (n = 10; in a few cases despite the investigator’s verification), the published report contained no data on trial subjects (n = 1), or the year of application was other than 1999, 2001, or 2003 (n = 1). The minimum follow-up time from protocol approval to screening was at least 5 years, and the publication rate was 40 % (95/237). The final sample consisted of 95 approved clinical drug trials comprising 95 protocols and 143 corresponding published reports (median: one published report per trial, range: one to eight). Of those, 42 (46 %) protocols described an exploratory trial, whereas the remaining 53 (54 %) were confirmatory. Characteristics of the exploratory and confirmatory protocols are shown in Table 1, and characteristics of the subgroup of controlled exploratory and confirmatory trials in Table 2.
Most of the trials, 73 % (95 % CI: 63 to 81 %), were randomized controlled trials (RCTs). The majority had one or two pre-specified primary endpoints (77 %, 95 % CI: 68 to 85 %) and pre-specified statistical methods (76 %, 95 % CI: 67 to 84 %). Most sponsors were employed in hospitals within the Capital Region of Denmark (59 %, 95 % CI: 49 to 69 %) and were receiving, applying for, or going to apply for grants from external sources (73 %, 95 % CI: 64 to 83 %).
Overall consistency was observed in 39 % of the trials (95 % CI: 29 to 49 %, Table 3). The frequency was lower among confirmatory trials compared to exploratory trials (30 % versus 50 %). In comparison, overall consistency was observed in 49 % of the published reports (95 % CI: 41 to 57 %). Confirmatory published reports were less likely to show overall consistency compared to exploratory published reports (adjusted OR 0.37, 95 % CI: 0.17 to 0.83, Table 4).
The individual discrepancies are shown in Tables 3 and 4. The most prevalent was the primary endpoint discrepancy (41 % of the trials, 95 % CI: 31 to 51 %). Similarly, primary endpoint discrepancy was observed in 33 % of the published reports (95 % CI: 25 to 41 %). In neither of the analyses did the occurrence of primary endpoint discrepancy seem to be associated with the study type (exploratory or confirmatory).
Of the 58 trials with at least one discrepancy, 23 trials were associated with one discrepancy, and 35 trials with two or more discrepancies. Half of the published reports (73/143) showed discrepancy. Of these 73 published reports, 35 had one discrepancy, 31 had two, and seven had three discrepancies. None had more than three discrepancies.
Agreement on the published report status between the survey and the literature search was estimated from the 183 trials with a conclusive survey response. The Kappa statistic κ = 0.782 indicated good agreement (95 % CI: 0.692 to 0.872, agreement for 91 % of the trials).
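For readers unfamiliar with the statistic, the following sketch shows how Cohen's kappa is computed from a 2 × 2 agreement table. The cell counts are hypothetical (the paper does not report them) and are chosen only so that the observed agreement (91 %) and κ (≈0.78) roughly match the values reported above.

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2 x 2 agreement table.

    table[i][j] = number of trials classified as i by the sponsor survey and
    as j by the literature search (0 = published report found, 1 = none found).
    """
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(2)) / n
    row_totals = [sum(table[i]) for i in range(2)]
    col_totals = [sum(table[i][j] for i in range(2)) for j in range(2)]
    p_chance = sum(row_totals[i] * col_totals[i] for i in range(2)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts for the 183 trials with a conclusive survey response.
example = [[48, 8],
           [9, 118]]
print(round(cohens_kappa(example), 3))  # 0.782, with 166/183 (91 %) raw agreement
```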
Discussion
In this review and follow-up of academic drug trials in Denmark, we found overall consistency between the approved protocol and resulting published reports in 39 % (95 % CI: 29 to 49 %) of the trials. The assessment of overall consistency included the following composite variables: primary objective, primary endpoint, type of study, hypothesis, and reporting of the power calculation. The most prevalent discrepancy concerned the primary endpoint (41 %), but the type of study and the primary objective also differed frequently, in 23 % and 20 % of the trials, respectively. The publication rate of 40 % is comparable to our findings in a similar cohort (33 %) [22].
Few studies of the reporting of clinical trials have been conducted on academic drug trials [13, 15, 21] or even academic drug/non-drug trials [9]. To our knowledge, this is the first investigation of protocol-published report consistency in academic drug trials across medical specialties. To provide data as solid as possible, we used predefined eligibility criteria based on the ownership of trial data and publication rights rather than the source(s) of funding, we included RCTs and non-RCTs from all medical specialties, and we categorized the trials by the nature of their research question as exploratory or confirmatory. Furthermore, we constructed a composite outcome, overall consistency, which took into account some of the differences between exploratory and confirmatory trials. The bias from selection of trials was minimized because we had access to all trials approved in Denmark.
Previous studies on the reporting of drug trials [10, 13–18] primarily included commercial trials. However, our study demonstrates that drug trials that cannot be assumed to involve heavy economic interests show a similar lack of consistency. This is in agreement with the findings of Chan et al. [8] in a cohort of government-funded drug/non-drug RCTs. The observed discrepancies are of such a magnitude that we believe our results represent a real problem in the reporting of academic clinical drug trials. Previously, the focus has primarily been on RCTs, which require a well-defined design for the testing of specific hypotheses [24]. However, control groups and randomization are also used in exploratory trials. In our study sample, 20/69 (29 %) of the randomized trials were categorized as exploratory, whereas confirmatory studies constituted 4/26 (15 %) of the nonrandomized trials. This suggests that limitations apply to the use of randomization as an inclusion criterion in the evaluation of methodological quality from a perspective of evidence-based medicine.
To ensure consistency and reproducibility when using a single assessor (LB), we implemented quality assurance and control measures, such as the development and testing of clear definitions of consistency and discrepancy, the keeping of a decision log during data collection, and the test-retest of the assessment of the outcome variables. The intra-rater agreement was assessed and reported using the pre-specified unit of measurement and analysis plan. Alternatively, intra-rater agreement could have been analyzed using protocols as the unit of measurement. This would have required a larger sample for the assessment but would also have provided a better estimate.
In a few cases, difficulties in collecting data from the protocols were due to contradictory definitions of the primary endpoint, which points to other problems in the writing of protocols as well as in the approval procedures of the competent authorities. Since this was not predefined in our study, we did not investigate it further.
Discrepancies within a published report or between published reports of the same trial may be associated. The adjusted and unadjusted post hoc logistic regressions at the published report level did not change our primary results. Discrepancies were frequent in published reports regardless of the study type of the underlying protocol (exploratory/confirmatory) and the study type derived from the published report. We found a discrepancy in half of the 143 published reports, each carrying a risk of the study results being misinterpreted due to inadequate or misleading information. We did not assess whether the discrepancies were associated with the direction of the trial results. Nevertheless, a risk of bias exists, and this bias may be carried forward into future research and clinical decisions.
We have not found previous reports on the transformation of exploratory protocols into confirmatory published reports, a topic that is highly important because the study type implies certain strengths and limitations in the interpretation of the study results [25, 26]. Such discrepancy, found in 14 % of exploratory and 30 % of confirmatory trials, is problematic. Similarly, the discrepancy in primary endpoints in 41 % of academic clinical drug trials is critical but consistent with earlier findings (33 % to 62 % of RCTs [7, 8, 10]). Discrepancy regarding the primary objective was less frequent but was associated with discrepancy of the primary endpoint (P < 0.0001, χ2, data not shown).
Published reports neither stating nor referencing the protocol-specified (or any other) sample size calculation, and therefore defined as exploratory, were found in 46 % of the confirmatory trials; this figure is comparable to the 53 % reported by Chan [20] and the 59 % by Mhaskar [15]. The transparent reporting of the sample size calculation allows the reader to assess the power and pre-specified relevant clinical benefit of the study [27].
We studied a cohort of trials approved until 2003. Since then, the Clinical Trials Directive has been introduced, and the International Committee of Medical Journal Editors (ICMJE) has facilitated transparency in the reporting of research by requiring clinical trials to be registered in a publicly available database [28]. As of 2011, information on all clinical drug trials approved since May 2004 by the European medicines authorities is publicly available [29, 30]. The information is limited but is uploaded directly by the authorities at the time of approval, thus serving to reduce the problem of retrospective registration. Such resources provide a valuable tool to journal editors and reviewers if kept up to date with accurate information and if used actively during journal review. However, the occurrence of missing or unclear registry data [31, 32] and the fact that many trials are registered retrospectively [33] substantiate the continued use of protocols as the primary source of trial characteristics [34]. Evidence of discrepancies even between trial registries and published reports [4] indicates that a dedicated effort is still required by researchers as well as journal editors and reviewers to promote transparency in the reporting of clinical research. Studies assessing the consistency in the reporting of clinical trials should be conducted at regular intervals to ensure continued improvement in reporting.
In Denmark, academic trials constitute a third of clinical drug trials [22], and their high proportion of confirmatory trials makes a considerable contribution to the accumulation of clinical evidence. The impact on clinical practice of academic versus commercial trials is unknown, but differences may exist. Commercial trials probably have their main impact on the registration of drugs, whereas academic drug trials mostly have other impacts.
The reasons for discrepancy between protocols and published reports are unknown and may be complex, but motivators other than economic driving forces seem to be involved. Previous studies point to reasons such as a lack of clinical importance, lack of statistical significance, and unawareness of the consequences of not reporting all outcomes and protocol changes [8, 35].
Conclusion
In this study, we predefined overall consistency between protocols and published reports as the primary focus and found it to be low for academic clinical drug trials. The discrepancies pose an invisible threat to the validity of trial conclusions. These results indicate a general need for improving the consistency between protocols and the resulting published reports, particularly regarding the definition of the primary endpoint and of the trial as exploratory or confirmatory. Further studies are needed to assess improvements in the reporting of clinical trials over time.
Abbreviations
- CI: confidence interval
- GCP: good clinical practice
- HIV: human immunodeficiency virus
- ICMJE: International Committee of Medical Journal Editors
- OR: odds ratio
- RCT: randomized controlled clinical trial
- SEP: standard error of proportion
References
World Medical Association. WMA Declaration of Helsinki - Ethical principles for medical research involving human subjects. www.wma.net/en/30publications/10policies/b3/index.html. [serial online] 2008; Accessed 12 May 2013.
Chan AW, Upshur R, Singh JA, Ghersi D, Chapuis F, Altman DG. Research protocols: waiving confidentiality for the greater good. BMJ. 2006;332:1086–9.
Dickersin K, Chalmers I. Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. www.jameslindlibrary.org/illustrating/articles/recognising-investigating-and-dealing-with-incomplete-and-biase. [serial online] 2013; Available from: The James Lind Library. Accessed 13 May 2013.
Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev. 2011;1:MR000031.
Al-Marzouki S, Roberts I, Evans S, Marshall T. Selective reporting in clinical trials: analysis of trial protocols accepted by The Lancet. Lancet. 2008;372:201.
Blumle A, Antes G, Schumacher M, Just H, von Elm E. Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others. J Med Ethics. 2008;34:e20.
Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–65.
Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004;171:735–40.
Hahn S, Williamson PR, Hutton JL. Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract. 2002;8:353–9.
Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome Reporting in Industry-Sponsored Trials of Gabapentin for Off-Label Use. N Engl J Med. 2009;361:1963–71.
von Elm E, Rollin A, Blumle A, Huwiler K, Witschi M, Egger M. Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee. Swiss Med Wkly. 2008;138:197–203.
Pich J, Carne X, Arnaiz JA, Gomez B, Trilla A, Rodes J. Role of a research ethics committee in follow-up and publication of results. Lancet. 2003;361:1015–6.
Soares HP, Daniels S, Kumar A, et al. Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group. BMJ. 2004;328:22–4.
Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)ased medicine--selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ. 2003;326:1171–3.
Mhaskar R, Djulbegovic B, Magazin A, Soares HP, Kumar A. Published methodological quality of randomized controlled trials does not reflect the actual quality assessed in protocols. J Clin Epidemiol. 2012;65:602–9.
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358:252–60.
Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med. 2008;5:e217.
von Elm E, Rollin A, Blumle A, Senessie C, Low N, Egger M. Selective reporting of outcomes of drug trials? Comparison of study protocols and published articles [abstract]. XIV Cochrane Colloquium; 2006 October 23-26; Dublin, Ireland 2006; 47.
Pildal J, Chan AW, Hrobjartsson A, Forfang E, Altman DG, Gotzsche PC. Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study. BMJ. 2005;330:1049.
Chan AW, Hrobjartsson A, Jorgensen KJ, Gotzsche PC, Altman DG. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ. 2008;337:a2299.
Gandhi M, Ameli N, Bacchetti P, et al. Eligibility criteria for HIV clinical trials and generalizability of results: the gap between published reports and study protocols. AIDS. 2005;19:1885–96.
Berendt L, Hakansson C, Bach KF, et al. Effect of European Clinical Trials Directive on academic drug trials in Denmark: retrospective study of applications to the Danish Medicines Agency 1993-2006. BMJ. 2008;336:33–5.
Berendt L, Hakansson C, Bach KF, et al. Methodological characteristics of academic clinical drug trials--a retrospective cohort study of applications to the Danish Medicines Agency 1993-2005. Br J Clin Pharmacol. 2010;70:729–35.
Eccles M, Freemantle N, Mason J. North of England evidence based guidelines development project: methods of developing guidelines for efficient drug use in primary care. BMJ. 1998;316:1232–5.
Vandenbroucke JP. Observational research, randomised trials, and two views of medical science. PLoS Med. 2008;5:e67.
Sheiner LB. Learning versus confirming in clinical drug development. Clin Pharmacol Ther. 1997;61:275–91.
Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med. 2010;7:e1000251.
International Committee of Medical Journal Editors. Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Publishing and Editorial Issues Related to Publication in Biomedical Journals: Obligation to Register Clinical Trials. www.icmje.org/. [serial online] 2012; Accessed 13 May 2013.
European Medicines Agency. EU Clinical Trials Register. www.clinicaltrialsregister.eu. [serial online] 2013; Accessed 12 August 2013.
WHO. International Clinical Trials Registry Platform (ICTRP). www.who.int/ictrp/en/. [serial online] 2013; Accessed 12 August 2013.
Moja L, Moschetti I, Nurbhai M, et al. Compliance of clinical trial registries with the World Health Organization minimum data set: a survey. Trials. 2009;10:56.
Huic M, Marusic M, Marusic A. Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy. PLoS One. 2011;6:e25258.
Faure H, Hrynaszkiewicz I. The ISRCTN Register: achievements and challenges 8 years on. J Evid Based Med. 2011;4:188–92.
Reveiz L, Chan AW, Krleza-Jeric K, et al. Reporting of Methodologic Information on Trial Registries for Quality Assessment: A Study of Trial Records Retrieved from the WHO Search Portal. PLoS One. 2010;5:e12484.
Smyth RM, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ. 2011;342:c7153.
Acknowledgements
The study was funded by the GCP Unit at Copenhagen University Hospital, Danish Health and Medicines Authority, Laboratory of Clinical Pharmacology at Rigshospitalet, Faculty of Health Sciences at University of Copenhagen, and the Lundbeck Foundation. The Lundbeck Foundation had no influence on the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, and approval of the manuscript.
LB had full access to all data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Competing interests
LB was employed by the GCP unit at Copenhagen University Hospital, Copenhagen, Denmark, and the Danish Health and Medicines Authority, Copenhagen, Denmark, during the conduct and reporting of this study. LB is currently employed by Novo Nordisk A/S. Novo Nordisk A/S was not involved in the study or the reporting of it.
Authors’ contributions
LB and HEP were responsible for the study concept. LB, KD, KFB, TC, LGP, and HEP designed the study. LB, LGP (supervision), and TC (supervision) were responsible for data collection. LB, KD, TC, LGP, KFB, and HEP were responsible for the analysis and interpretation. LB drafted the manuscript. KD, TC, LGP, KFB, and HEP critically revised the manuscript. LB and KD were responsible for the fundraising. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Berendt, L., Callréus, T., Petersen, L.G. et al. From protocol to published report: a study of consistency in the reporting of academic drug trials. Trials 17, 100 (2016). https://doi.org/10.1186/s13063-016-1189-4