Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials

Abstract

Background

Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on reporting of primary outcome(s) and statistical analyses within reports of randomised trials.

Methods

Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey about changes made to their manuscript as part of the peer review process. We aimed to assess: (1) the nature and extent of changes made as part of the peer review process to the reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics.

Results

Of 893 corresponding authors who were invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); the median sample size was 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238) of reports; of these, the direction of treatment effect was statistically significant in 49%. On a 1–10 Likert scale, most authors responded that they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial’s primary outcome. However, 27% (n = 69) reported they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or avoid rejection (n = 30; 44%). Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the primary outcome, or funding source. Thirty-six percent (n = 94) of authors had been asked to include additional analyses that had not been included in the original manuscript; in 77% (n = 72) of these, the analyses were not pre-specified in the protocol. Twenty-three percent (n = 60) had been asked to modify their overall conclusion, usually (n = 53; 88%) to provide a more cautious one.

Conclusion

Overall, most changes made as a result of the peer review process improved the published manuscript; there was little evidence of a negative impact in terms of post hoc changes to the primary outcome. However, some requested changes, such as unplanned additional analyses, might be considered inappropriate and should be discouraged.

Background

Peer review is considered fundamental to the integration of new research findings into the scientific community [1]. Journal editors rely on the views of independent experts (peer reviewers) in making decisions on the publication of submitted manuscripts [2] and peer review is widely considered necessary for evaluating the credibility of published research. Worldwide, peer review costs an estimated £1.9 billion annually and accounts for around one quarter of the overall costs of scholarly publishing and distribution [3].

Despite this huge investment and widespread acceptance of the peer review process by the scientific community, little is known about its impact on the quality of reporting of published research [4, 5]. Studies have shown that peer reviewers often fail to detect errors [6,7,8], improve the completeness of reporting [9], or decrease the distortion of study results [10]. One such study found that although peer reviewers often detect important deficiencies in the reporting of the methods and results of randomised trials, they miss more omissions than they detect [9]. That study showed that, on average, peer reviewers requested few changes. Most changes were deemed by the authors to have had a positive impact on the reporting of the final publication; for example, clarification of the primary and secondary outcomes or the toning down of conclusions to reflect the results. However, some changes requested by peer reviewers were deemed inappropriate and could have a negative impact on reporting of the final publication, such as the addition of unplanned post hoc analyses.

In this study, we investigated the influence of the peer review process within biomedical journals on the reporting of primary outcome(s) and statistical analyses within published reports of randomised trials. In particular, we examined how often relevant post hoc changes of the primary outcome(s) and/or the primary statistical analysis were requested, how often the authors followed these requests, and whether the frequency of these suggestions was related to specific journal or trial characteristics.

Methods

Sample selection

We searched the US National Library of Medicine’s PubMed database over a 1-year period to identify all primary reports of randomised trials published between May 2014 and April 2015 in the six general medical journals and 12 specialty journals with the highest ISI impact factor in 2012. The six general medical journals were the New England Journal of Medicine, Lancet, JAMA, BMJ, PLoS Medicine and Annals of Internal Medicine. The top 12 specialty journals (Table 1) were identified from the major ISI Web of Knowledge journal citation reports medical subject categories. We restricted inclusion to journals that had published at least 50 articles in 2013 with the Publication Type term ‘Randomized Controlled Trial’ (based on a PubMed search on 7 April 2014) (see Additional file 1).

Table 1 Identification of primary reports of randomised controlled trials (RCTs) (May 2014 to April 2015) and number of authors responding to the online survey

We developed a search strategy based on a modified version of the Cochrane highly sensitive search strategy (see Additional file 2). The search was run monthly to identify reports of randomised trials published in the previous month (e.g. the search was performed in the third week of June 2014 for all articles published in the print version of each journal in May 2014). One reviewer (SH) screened the titles and abstracts of all retrieved reports to exclude any obvious reports of ineligible trials. A copy of the full article was then obtained for all non-excluded reports, and assessed by two reviewers to determine if it met the inclusion criteria.
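The monthly, date-restricted search described above can be sketched as a small helper that appends a one-month publication-date window to a base query, using PubMed's `[dp]` (date of publication) field tag and date-range syntax. The base term and function name here are illustrative only, not the study's actual (modified Cochrane) search strategy:

```python
from datetime import date, timedelta

def monthly_search_term(base_term: str, year: int, month: int) -> str:
    """Restrict a PubMed query to articles published in a given month,
    using the [dp] (date of publication) field and a date range."""
    start = date(year, month, 1)
    # First day of the following month, minus one day = last day of this month
    next_first = date(year + (month == 12), month % 12 + 1, 1)
    end = next_first - timedelta(days=1)
    return f'({base_term}) AND ("{start:%Y/%m/%d}"[dp] : "{end:%Y/%m/%d}"[dp])'

# e.g. the search run in June 2014 for articles published in May 2014:
term = monthly_search_term("randomized controlled trial[pt]", 2014, 5)
```

In practice the full strategy would combine many more terms, but the date-window construction is the part that changes from month to month.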

Eligibility criteria

We included all primary reports (that is, those reporting the main study outcome) of randomised trials, defined as a prospective study assessing healthcare interventions in human participants who were randomly allocated to study groups. We included all studies of parallel-group, crossover, cluster, factorial and split-body design. We excluded protocols of randomised trials, secondary analyses, systematic reviews, methodological studies, pilot studies and early phase (phase 1) trials.

Online survey

We sent an invitation email (see Additional file 3) to the corresponding author of each eligible report of a randomised trial identified from our search. These authors were asked to participate in an online survey investigating the type of changes made to manuscripts of randomised trials as part of the peer review process. If they agreed to participate, an Internet link included in the invitation email gave them access to the online survey. Participants were asked to confirm that they agreed to participate before being able to complete the online survey. The online survey was tested by members of the study team to ensure comprehension of the survey questions.

The survey consisted of short questions asking authors to rate, on a 1–10 Likert scale, the overall handling of their manuscript by the journal and the quality of the peer review process, and to report whether the editor or peer reviewers had asked them to change any aspects of their study in a way that deviated from what was planned in their trial protocol. They were also asked specific questions about changes to the primary outcome measure(s) and/or the analysis of the primary outcome, additional analyses not included in the original manuscript, and whether they had been asked to modify or change their overall conclusions. Authors were asked to comment on the nature of the changes requested in free-text boxes (see Additional file 4).

The invitation emails were sent out in a batch each month, approximately 6 weeks after publication of the journal articles. We inserted the reference to their trial publication in the invitation email so that we could link author responses to individual journal articles. Participants were informed that all responses would be treated in the strictest confidence and that we would not identify any individual responses. A reminder email was sent 2 weeks after the original to those who did not respond, and a second reminder 2 weeks after that.

Data extraction and analysis

Key information was extracted from each eligible report of a randomised trial for which the corresponding author had completed the online survey. After piloting of the data extraction form, data extraction was carried out by one reviewer (OA); any uncertainties were resolved by discussion with a second reviewer (SH). We extracted information on the trial design, single or multicentre status, number of study arms, number of participants, disease area, type of intervention, specification of the primary outcome and direction of the observed effect, trial registration, and source of funding. We summarised the characteristics of the primary reports of randomised trials using proportions, medians and interquartile ranges (IQR). Similarly, authors’ responses to the online survey were summarised using proportions, numbers, means, standard deviations, medians and interquartile ranges.
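As a minimal sketch of the descriptive summaries described above, the proportions, median and IQR can be computed with the Python standard library. The values below are purely illustrative, not the study's data:

```python
from statistics import median, quantiles

# Illustrative values only, not the study's extracted data
sample_sizes = [138, 200, 325, 700, 1010]
multicentre = [True, True, False, True, True]

med = median(sample_sizes)
q1, _, q3 = quantiles(sample_sizes, n=4)   # quartiles; the IQR is (q1, q3)
prop_multicentre = sum(multicentre) / len(multicentre)  # proportion multicentre
```

Reporting the median with its IQR, rather than mean and SD, matches the skewed distribution typical of trial sample sizes.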

The primary analysis focussed on the responses to the online survey and the nature and extent of changes made to manuscripts by authors as part of the peer review process; in particular, in relation to the reporting of the primary outcome(s) and statistical analyses, how often the authors followed peer reviewer requests, and their main motivation for doing so. We assessed whether requested changes or modifications to the primary outcome(s) and statistical analysis were related to the type of journal (general/specialty), type of intervention (drug/non-drug), source of funding (industry/non-industry), or result of the primary outcome (significant/non-significant) using the chi-squared test or Fisher’s exact test as appropriate.
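Each of the comparisons described above (e.g. general versus specialty journal by whether a change was requested) is a test of association in a 2×2 table. A dependency-free sketch of the Pearson chi-squared statistic for such a table is below; in practice one would use `scipy.stats.chi2_contingency` (or `fisher_exact` for small expected counts), and the counts here are made up for illustration:

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Illustrative table: rows = general vs specialty journal,
# columns = change requested vs not requested (invented counts)
stat = chi2_2x2(10, 20, 20, 10)
```

The statistic is then compared against the chi-squared distribution with 1 degree of freedom to obtain a P value.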

We also carried out a semi-qualitative analysis of the free-text information provided by authors on changes and clarifications requested by peer reviewers and the extent to which authors responded to these requests. Two reviewers (CW, KL) first developed preliminary codes based on the questions posed and a screening of the free-text material. The free-text material was then systematically assigned to codes; where necessary, codes were revised or new codes were added.
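A mechanical first pass of the code-assignment step might look like the following keyword-based sketch. The codebook entries below are hypothetical; the study's actual codes were developed by the reviewers from the survey questions and material:

```python
# Hypothetical codebook mapping keywords to preliminary codes
CODEBOOK = {
    "missing data": "imputation",
    "imputation": "imputation",
    "subgroup": "subgroup_analysis",
    "sensitivity": "sensitivity_analysis",
    "adjustment": "covariate_adjustment",
}

def assign_codes(response: str) -> set:
    """Assign every matching preliminary code to a free-text response.
    An empty result flags the response for manual review and,
    if needed, addition of a new code."""
    text = response.lower()
    return {code for keyword, code in CODEBOOK.items() if keyword in text}
```

In the study this assignment was done by hand, with codes revised iteratively; a sketch like this only illustrates the codebook structure, not a replacement for reviewer judgement.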

Results

The search identified 2106 possible reports of randomised trials published in the top six general medical journals and top 12 specialty journals between May 2014 and April 2015. After screening the full-text articles we identified 893 primary reports of randomised trials, for which 258 (29%) authors completed the online survey (Table 1). The response rate by journal ranged from 11% (Journal of the American College of Cardiology) to 56% (PLoS Medicine).

Characteristics of primary reports of randomised trials

Table 2 shows the general characteristics of the 258 reports of randomised trials whose authors completed the online survey. The majority of trials were multicentre (n = 191; 74%), parallel group (n = 225; 87%), with two study groups (n = 202; 78%). The median sample size was 325 participants (IQR 138 to 1010). About half of the trials assessed drug interventions (n = 127; 49%), 23% (n = 59) surgical or procedural interventions and 16% (n = 40) behavioural or educational interventions. The primary outcome was clearly defined in 92% (n = 238) of trial reports, of which the estimated treatment effect was reported as statistically significant (P < 0.05) in 49% (n = 116). Most trials reported the trial registration number (n = 235; 91%) and around half (n = 141; 55%) gave a journal reference for the trial protocol; more than half (n = 159; 62%) were non-industry funded.

Table 2 Characteristics of primary reports of randomised trials (n = 258)

Response to online survey by authors of primary reports of randomised trials

Tables 3 and 4 summarise the authors’ responses to the online survey, and the nature and extent of changes made to manuscripts by authors as part of the peer review process in relation to the reporting of the primary outcome(s) and statistical analyses. The majority of authors responded that they were satisfied with the handling of their manuscript by the journal (mean 8.6, SD 1.5) and the quality of peer review (mean 8.5, SD 1.5). Fourteen percent (n = 36) of authors responded that the editor or peer reviewers asked them to change an aspect of their study in a way that deviated from what was planned in the trial protocol.

Table 3 Authors’ responses to online survey (n = 258)
Table 4 Type of change or clarification requested to primary outcome and/or statistical analysis

Only eight authors (3%) said that the editor or peer reviewers asked them to change or clarify the trial’s primary outcome, but about a quarter (n = 69; 27%) reported that they had been asked to change or clarify the statistical analysis of the primary outcome. Most of those who were asked to change or clarify the trial’s primary outcome or statistical analysis responded that they had fulfilled the request. The main motivation for making the change to the statistical analysis was either to improve the statistical methods (n = 38; 55%) or to avoid rejection of the paper (n = 30; 44%). Overall, there was no evidence of an association between authors being asked to change or clarify the trial’s primary outcome and/or statistical analysis and whether the trial was published in a general or specialty journal, investigated a drug or non-drug intervention, or was solely/partly industry funded versus non-industry funded (Table 5). The primary outcome was more likely to be statistically significant where authors responded that they had been asked to change or clarify the trial’s primary outcome; however, the number of responses was very small (six out of eight responses; 86%). Conversely, requests for changes or clarification of the statistical analysis of the primary outcome were more common (40 out of 69 responses; 60%) for trials with non-significant primary outcomes.

Table 5 Association between whether editors or peer reviewers asked for changes or clarifications to the primary outcome(s) and/or statistical analysis and specific journal and trial characteristics

One third (n = 94; 36%) of authors responded that they were asked to include additional analyses. Again, most authors (n = 83; 88%) had complied with the request, the main motivation being to avoid rejection of the paper (n = 49; 52%). Most of the published articles (n = 72; 77%) did not indicate that the additional analyses had not been pre-specified in the protocol. Finally, around a quarter (n = 60; 23%) of authors were asked to modify their overall conclusion, in most cases (n = 53; 88%) to provide a more cautious conclusion. Again, the majority fulfilled the request, the main motivations being to improve reporting of the trial (n = 32; 53%) or to avoid rejection of the paper (n = 29; 48%).

Textual analysis of author comments to the online survey questions

Examination of the free-text information provided by the eight authors who answered that they were asked ‘to change or clarify the trial’s primary outcome measure(s)’ showed that only two referred to true ‘changes’. One author reported that editors and a peer reviewer “asked us to present just one primary outcome, despite our protocol specifying two” (trial 434); another that a reviewer asked that “we change the primary outcome from average depression severity during time in trial to depression severity at the end of treatment” (trial 451). In both cases the authors successfully argued against these requests. Two further authors reported issues that were actually changes to the statistical analysis of the primary outcome (methods for imputation of missing data and adjustment; trials 478 and 584), and two were asked to provide non-pre-specified sensitivity or additional analyses (trials 599 and 328). In the two remaining cases, the short information provided suggested that the submitted manuscript had insufficiently defined the primary outcome (trials 396 and 513). True changes requested in the statistical analysis of the primary outcome most often concerned the addition or modification of methods to impute missing data (n = 14) or of the statistical model or test (n = 11), and less frequently concerned analysis populations or adjustment issues. More often, however, editors and reviewers requested clarification of statistical methods or presentation (see Table 6 for examples of free-text responses).

Table 6 Example of free-text answers provided by authors regarding changes or clarifications of the statistical analysis of the primary outcome

Free-text responses to the question on additional analyses requested were typically short. The request most often mentioned explicitly (n = 29) was for additional subgroup analyses. A further 20 authors simply said they were asked for additional analyses. A smaller number of requests can be summarised as for sensitivity analyses to check the robustness of findings, or for presentation of additional data. Twenty authors had been asked to draw more cautious conclusions; a further 20 reported a variety of specific changes which did not make conclusions more positive or negative. Four others were asked to draw stronger negative conclusions, one to draw more positive conclusions in a trial not finding expected differences; the remainder did not give a reason.

Discussion

Summary of main findings

This study provides a unique opportunity to investigate the influence of the peer review process within high-impact medical and specialty journals on the reporting of primary outcome(s) and statistical analyses in reports of randomised trials. Textual analysis of author comments from the online survey provides insight into the types of changes or modifications made by authors as part of the peer review process and the rationale behind these changes.

In our study, most authors who took part in the online survey responded positively regarding the overall handling and overall quality of peer review of their manuscript. We found evidence of journal editors or peer reviewers asking authors to change or clarify the trial’s primary outcome in only eight of the 258 (3%) published reports whose authors responded to the online survey, which is encouraging given concerns regarding bias associated with selective reporting of primary outcomes in favour of significant outcomes [11]. We found some evidence of authors being asked to make changes to the statistical analysis of the primary outcome; however, in most cases these requests were in fact for clarifications of the existing methods, the main motivation being either to improve reporting of the statistical methods or to avoid rejection by the journal. Overall, we found no association between authors being asked to make these changes and the type of journal, intervention, significance of the primary outcome, or funding source. Some authors reported being asked to modify their overall conclusion, in most cases to provide a more cautious conclusion avoiding spin in the interpretation of the trial results [10].

However, some changes requested as part of the peer review process were more concerning: authors were asked to include additional analyses, such as subgroup analyses, that had not been included in the original manuscript, the majority of which were not pre-specified in the trial protocol. This is concerning because these additional analyses could be driven by existing knowledge of the data, or the interests of the reader, rather than the primary focus of the study [12]. While the Consolidated Standards of Reporting Trials (CONSORT) Statement does not preclude additional analyses being performed, it stresses the importance of distinguishing those which were pre-specified in the trial protocol from those which are exploratory [13]. It is, therefore, concerning that most of the reported changes to analyses were not reported as post hoc.

Comparison with other studies

We are not aware of other studies specifically looking at the impact of the peer review process on the reporting of the primary outcome and statistical analyses of randomised trials. Other studies on the impact of peer review have predominantly focussed on the editorial process, such as the use of reporting checklists, blinding of peer reviewers [14, 15], or the implementation of training strategies for peer reviewers [16]. A number of studies have looked at the selective reporting of primary outcomes [11] or the switching of outcomes between either the trial registry [17,18,19] or the trial protocol [20] and the published manuscript. The COMPare study [21] systematically tracked the switching of outcomes in 67 clinical trials published in the top five medical journals (October 2015 to January 2016), comparing trial outcomes reported in the trial registry or protocol with those reported in the published article. Of the 67 trials, they identified 354 pre-specified outcomes that were not reported in the published manuscript and 357 new non-pre-specified outcomes that were silently added. The COMPare study looked at all trial outcomes and did not differentiate between primary and secondary outcomes.

Limitations

Our study has several limitations. First, despite sending two reminder emails, our response rate to the online survey was relatively low at 29%, and we therefore do not know about the experiences of those who did not respond. One might expect either that those with more negative experiences were more likely to respond to the survey, or that those who experienced a positive spin after peer review were less likely to respond. Yet our survey showed that most authors were happy with the overall quality and handling of their manuscript as part of the peer review process. Conversely, it is possible that our findings present an overly positive picture, as we focussed on authors whose manuscripts had recently been accepted by a high-impact journal. Open peer review, whereby the peer reviewers’ comments are published alongside the article, would allow a more complete assessment of this problem without the challenge of low response rates.

Due to the nature of our study, we do not have information on the influence of peer review on the reporting of primary outcome(s) and statistical analyses for those manuscripts which were subsequently rejected by the journal. Finally, our study focussed specifically on high-impact journals, where one might expect the journal editors and peer reviewers to be more experienced and more likely to identify potential problems. The extent to which these findings are generalisable to other journals, with potentially less experienced editors and peer reviewers, is unclear.

Conclusion

Overall, we found little evidence of a negative impact of the peer review process in terms of selective reporting of primary outcome(s). Most changes requested as part of the peer review process resulted in improvements to the trial manuscript, such as improving clarity of the statistical methods and providing more cautious conclusions. However, some requested changes could be deemed inappropriate and could have a negative impact on reporting in the final publication, such as the request to add non-pre-specified additional analyses.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

References

  1. Rennie R. Editorial peer review: its development and rationale. In: Godlee F, Jefferson T, editors. Peer Review in Health Sciences. 2nd edition. London: BMJ Books; 2003. p. 1-13.

  2. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;(2):MR000016.

  3. Public Library of Science. Peer review—optimizing practices for online scholarly communication. In: House of Commons Science and Technology Committee, editor. Peer Review in Scientific Publications, Eighth Report of Session 2010–2012. London: The Stationery Office Limited; 2011. p. 174–8.

  4. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 2016;14(1):85.

  5. Chauvin A, Ravaud P, Baron G, Barnes C, Boutron I. The most important tasks for peer reviewers evaluating a randomized controlled trial are not congruent with the tasks most often requested by journal editors. BMC Med. 2015;13:158.

  6. Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998;32(3 Pt 1):310–7.

  7. Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, Tierney WM. Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS One. 2010;5(4):e10072.

  8. Yaffe MB. Re-reviewing peer review. Sci Signaling. 2009;2(85):eg11.

  9. Hopewell S, Collins GS, Boutron I, Yu LM, Cook J, Shanyinde M, et al. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ (Clinical research ed). 2014;349:g4145.

  10. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010;303(20):2058–64.

  11. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–65.

  12. Sun X, Briel M, Busse JW, You JJ, Akl EA, Mejza F, et al. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ (Clinical research ed). 2012;344:e1553.

  13. Moher D, Hopewell S, Schulz KF, Montori V, Gotzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ (Clinical research ed). 2010;340:c869.

  14. Alam M, Kim NA, Havey J, Rademaker A, Ratner D, Tregre B, et al. Blinded vs. unblinded peer review of manuscripts submitted to a dermatology journal: a randomized multi-rater study. Br J Dermatol. 2011;165(3):563–7.

  15. Cho MK, Justice AC, Winker MA, Berlin JA, Waeckerle JF, Callaham ML, et al. Masking author identity in peer review: what factors influence masking success? PEER Investigators. JAMA. 1998;280(3):243–5.

  16. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ (Clinical research ed). 2004;328(7441):673.

  17. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302(9):977–84.

  18. Hannink G, Gooszen HG, Rovers MM. Comparison of registered and published primary outcomes in randomized clinical trials of surgical interventions. Ann Surg. 2013;257(5):818–23.

  19. Rosenthal R, Dwan K. Comparison of randomized controlled trial registry entries and content of reports in surgery journals. Ann Surg. 2013;257(6):1007–15.

  20. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS One. 2013;8(7):e66844.

  21. Goldacre B, Drysdale H, Powell-Smith A, et al. The COMpare Trials Project. 2016. http://compare-trials.org/. Accessed 27 Nov 2017.

Acknowledgements

We are grateful to those authors who contributed to the online survey as part of this research study.

Funding

This study received no external funding.

Availability of data and materials

No additional data available.

Author information

Contributions

SH, DA, CL and KL were involved in the design, implementation and analysis of the study and in writing the final manuscript. KI, SK and OA were involved in the implementation of the study and in commenting on drafts of the final manuscript. SH is responsible for the overall content as guarantor. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sally Hopewell.

Ethics declarations

Authors’ information

The lead author (the manuscript’s guarantor) affirms that the manuscript is an honest, accurate and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Ethical approval and consent to participate

Ethics approval was obtained from the University of Oxford Central University Research Ethics Committee MSD-IDREC-C1-2014-098.

Consent for publication

Not applicable.

Competing interests

Prof. Douglas G Altman is an Editor-in-Chief for Trials. All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Journal impact factor. (DOCX 36 kb)

Additional file 2:

Search strategy for the PubMed database available from the US National Library of Medicine, National Institutes of Health. (DOCX 33 kb)

Additional file 3:

Invitation email to survey participants. (DOCX 67 kb)

Additional file 4:

Survey questions. (DOCX 35 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Hopewell, S., Witt, C.M., Linde, K. et al. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials. Trials 19, 30 (2018). https://doi.org/10.1186/s13063-017-2395-4
