- Open Access
- Open Peer Review
Why prudence is needed when interpreting articles reporting clinical trial results in mental health
© The Author(s). 2017
- Received: 26 October 2016
- Accepted: 13 March 2017
- Published: 28 March 2017
The reliability of clinical trial results is undermined by reporting bias, which manifests primarily as publication bias and outcome reporting bias.
Mental health trials’ specific features
Mental health trials are prone to two methodological deficiencies: (1) the use of small numbers of participants, which facilitates false positive findings and exaggerated effect sizes, and (2) the obligatory use of psychometric scales, which require subjective assessments. Both deficiencies contribute to the publication of unreliable results. Considerable reporting bias has been found in the safety and efficacy findings of psychotherapy and pharmacotherapy trials. Reporting bias can be carried forward to meta-analyses, a key source for clinical practice guidelines. The net result is frequent overestimation of treatment effects, which could affect patients' and clinicians' informed decisions.
Mechanisms to prevent outcome reporting bias
Prospective registration of trials and publication of all results are the two major methods to reduce reporting bias. Prospective registration allows checking whether registered trials are eventually published (helping to prevent publication bias) and, if they are, whether the outcomes and analyses specified before trial commencement are the ones actually reported (helping to detect selective outcome reporting). Unfortunately, the rate of registration of mental health intervention trials is low and the registrations are frequently of poor quality.
Clinicians should be prudent when interpreting the results of published trials and of some meta-analyses, such as those conducted by scientists working for the sponsor company or those including only published trials. Prescribers, however, can be confident when prescribing drugs according to the summary of product characteristics, since regulatory agencies have access to all clinical trial results.
- Clinical trials
- Publication bias
- Outcome reporting bias
It is well known that the reliability of published clinical trial results is far from optimal [1, 2]. Reporting bias is a structural problem present in both publicly funded and privately sponsored trials. It is primarily manifested as publication bias (many trials are not published at all, mainly because they did not yield positive results) and outcome reporting bias or selective outcome reporting (only outcomes or analyses yielding positive results are published). Both types of bias typically result in an overestimation of the benefits and an underestimation of the risks. This could easily lead to erroneous therapeutic decisions by clinicians and patients.
What is not so well known is that the two disciplines dealing with mental health, psychology and psychiatry, sit at the very top of the ranking of all natural and social sciences with regard to publication bias: 91.5% of all articles report positive results. This finding was already described more than 50 years ago, when Sterling showed that 97% of all published studies in psychology rejected the null hypothesis. More recently, an unusually high prevalence of psychology studies reporting p values just below 0.05 has been observed: the number of studies with a p value between 0.045 and 0.05 is much higher than expected. Furthermore, in 9% of psychology trials the reported p values are inconsistent with the reported test statistic.
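This last kind of inconsistency can be detected mechanically by recomputing the p value from the reported test statistic, much as automated screening tools do. A minimal sketch using only the Python standard library follows; the "reported" figures are invented for illustration and do not come from any cited study:

```python
from statistics import NormalDist

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p value implied by a reported z statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical reported result: "z = 1.80, p = .04".
reported_p = 0.04
recomputed_p = two_sided_p_from_z(1.80)

# The recomputed p value (about 0.072) disagrees with the reported
# one and, unlike it, does not cross the conventional 0.05 threshold.
inconsistent = abs(recomputed_p - reported_p) > 0.01
```

For t statistics the same check needs the t distribution with the reported degrees of freedom, but the principle of cross-checking statistic against p value is identical.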
Clinical trials assessing mental health interventions are prone to two major methodological deficiencies that, ultimately, contribute to the publication of unreliable results. The first relates to the low number of trial participants: the low statistical power associated with small samples facilitates false positive results and exaggerated effect sizes [7, 8].
A search of ClinicalTrials.gov conducted on 9 December 2015 showed that 909 psychotherapy trials were registered; the median number of participants among 91 trials randomly chosen from those 909 was 120. In contrast, the median number of patients in 43 pivotal trials of drugs for psychiatric conditions approved in the US between 2005 and 2012 was 432, i.e., 3.6 times higher. In addition, a recent study found that more than 50% of 100 psychology trials could not be replicated; furthermore, the mean effect size among replicated studies was half of that described in the original articles. Of note, a low number of trial participants can also yield false negative results that, if published, could mislead clinicians.
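The mechanism by which small samples inflate published effects can be illustrated with a short simulation: if only "statistically significant" trials reach print (as publication bias tends to ensure), underpowered trials overestimate the true effect far more than well-powered ones. The true effect of 0.3, the sample sizes and the known-variance z-test below are illustrative assumptions, not figures from the studies cited:

```python
import random
from statistics import fmean

def mean_significant_effect(n_per_arm: int, true_effect: float = 0.3,
                            n_trials: int = 2000, seed: int = 1) -> float:
    """Simulate two-arm trials with outcome SD fixed at 1 and return the
    mean observed effect among trials reaching 'significance'
    (|z| > 1.96, known-variance approximation)."""
    rng = random.Random(seed)
    se = (2 / n_per_arm) ** 0.5          # SE of the difference in means
    significant = []
    for _ in range(n_trials):
        control = fmean(rng.gauss(0.0, 1.0) for _ in range(n_per_arm))
        treated = fmean(rng.gauss(true_effect, 1.0) for _ in range(n_per_arm))
        diff = treated - control
        if abs(diff / se) > 1.96:        # this trial 'gets published'
            significant.append(diff)
    return fmean(significant)

small = mean_significant_effect(n_per_arm=20)    # underpowered
large = mean_significant_effect(n_per_arm=220)   # well powered
```

With 20 participants per arm the significant trials overstate the true effect of 0.3 by roughly a factor of two, whereas with 220 per arm the overstatement is marginal; the filter of significance, not any flaw in the individual analyses, produces the inflation.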
The second deficiency refers to the use of outcomes requiring a certain degree of interpretation, such as psychometric scales: since these require subjective assessment, they are prone to remarkable variability depending on the analytical option implemented [7, 8]. In a trial with a small sample size, the magnitude of the estimated effect can thus vary (the so-called 'vibration effect') depending on factors such as the principal endpoint (psychometric scale) chosen, the use of adjustments for certain confounders, and the choice among alternative statistical approaches.
Publication bias is common in mental health trials: considerable publication bias has been found in clinical trials of treatments for psychiatric conditions. In major depression, a 25–29% reduction in the effect size of psychotherapy was observed when the results of unpublished trials were added to those of published trials [13, 14]. Publication bias and selective outcome reporting have also been described in drug trials for major depression, anxiety disorders and schizophrenia, in which sponsor companies decided to publish mainly positive trials, outcomes and analyses.
Safety information is poorly reported in both drug and psychological intervention trials. Outcome reporting bias of key safety results is common in trials of antidepressants and antipsychotics: when articles were compared with the corresponding clinical trial registries, only 57% of serious adverse events, 38% of deaths and 47% of suicides were reported in the articles. In psychotherapy the picture is remarkably worse, since harms of the interventions are rarely reported at all: reporting of possible or actual adverse events is 9 to 20 times more likely in pharmacotherapy trials than in psychotherapy trials. All these biases have a considerable impact on the benefits and risks reported for mental health interventions.
The relevance of reporting bias is much greater when biases are carried forward to meta-analyses, a key source for clinical practice guidelines. For disorders with a number of commercially available drugs but few head-to-head comparative trials, it is common to conduct network meta-analyses, in which investigators aim to rank all medicines through direct and, mostly, indirect comparisons between the available drugs. In a network meta-analysis of trials of antidepressants versus placebo, publication bias modified the ranking order of the first three drugs once unpublished trials were taken into account. This was not the case in a network meta-analysis of antipsychotics, most likely due to the use of head-to-head comparative trials and the fact that publication bias is less common in antipsychotic trials than in antidepressant trials.
There are two major methods to reduce reporting bias in trial results. One is to prospectively register the trial, systematic review or meta-analysis in a public registry before it starts. Trials can be registered in a number of registries that accept both trials of medicines and of psychotherapy, such as ClinicalTrials.gov or ISRCTN, whereas systematic reviews can be registered in the International Prospective Register of Systematic Reviews (PROSPERO). The other method is to publish all the results obtained. Both are described in the Consolidated Standards of Reporting Trials (CONSORT) Statement for trials and in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for meta-analyses. However, these reporting guidelines are rarely required by psychiatry journals. Prospective trial registration allows checking whether registered trials are eventually published (helping to prevent publication bias) and, if they are, whether the outcomes and analyses specified before trial commencement are the ones actually reported (helping to detect selective outcome reporting). These two requirements are mandatory for medicine trials in the European Union and the US; this, however, is not the case for nonregulated interventions such as psychotherapy. This has recently changed, but only for psychotherapy trials funded by the US National Institutes of Health: in January 2017 a new policy on trial registration and publication of results came into force, so these psychotherapy trials will have to be registered and their results published. From an ethical perspective, the Declaration of Helsinki has also required these two practices since 2008.
Unfortunately, the rate of trial registration is low – only 25% of psychiatry journals require preregistration of trials as a condition of publication – and, in many cases, registration is of poor quality. Among the top five psychiatry journals, all of which require preregistration, only 33% of 181 trials published in 2009–2013 were registered before the onset of participant enrollment, and only 14% were, in addition, free of outcome reporting bias. In another analysis of 170 trials on depression published between 2011 and 2013, only 33% of trials assessing drugs and 20% of trials assessing cognitive behavior therapy were appropriately registered (i.e., before study start and with fully specified outcomes). With regard to psychotherapy, a recent systematic review of 112 randomized clinical trials published between 2010 and 2014 showed that only 18% were prospectively registered and only 5% were free of outcome reporting bias.
Prescribers should interpret the results of clinical trials and meta-analyses with caution: publication in a prestigious journal does not prevent selective reporting of outcomes. It would seem reasonable to expect journal editors to implement rigorous quality control mechanisms to prevent outcome reporting bias. Because such mechanisms are not yet in place, clinicians should maintain a certain degree of skepticism towards all clinical trial results, irrespective of the type of intervention assessed. Clinicians cannot be expected to compare the information included in an article with that provided in the trial registry. Similarly, although there are a number of methods to explore whether a meta-analysis presents any type of bias, these are not feasible for the vast majority of clinicians. Prescribers should be especially skeptical when reading the results of meta-analyses in which (1) scientists of the sponsor company were involved, or (2) no unpublished trials were included, since the latter usually implies a variable impact on the direction and size of the therapeutic effect.
It should be highlighted, however, that since regulatory agencies have access to the results of all clinical trials of medicines, the clinician can be confident when writing a prescription that follows the authorized summary of product characteristics. The situation is very different for trials on off-label indications, where selective outcome reporting is common, and for trials conducted not to amend the approved indication, posology or target population of a drug but to inform prescribing habits (e.g., comparative effectiveness trials), which are not subject to in-depth regulatory agency review: articles reporting these two types of trial undergo only the peer-review process, which has been shown not to reject manuscripts whose content diverges from their registry entries. Clinicians should be even more skeptical when reading psychotherapy trials, which, being unregulated, could easily present outcome reporting bias for both benefits and harms, hindering the benefit/risk assessment.
The only way to ensure the absence (or minimization) of outcome reporting bias is to implement better quality-control procedures during the editorial process, such as thorough cross-checking between the manuscript and the protocol or registry entry. Until this happens, clinicians should be prudent when interpreting the results of published trials and of some systematic reviews and meta-analyses, given the fairly frequent presence of outcome reporting bias, which tends to overestimate treatment effects in mental health trials. As a nonregulated intervention, psychotherapy is especially prone to this problem. Pharmacotherapy is also subject to outcome reporting bias, but since regulatory agencies have access to the results of pivotal trials, the summary of product characteristics is a fair description of how a drug can be correctly prescribed.
We thank Dr. Eric Turner (Departments of Psychiatry and Pharmacology, Oregon Health and Science University, Portland, OR, USA) for helpful comments on a previous version of this paper.
This work required no funding.
RD-R conceived the idea and wrote the first draft of the manuscript. JB and PC made substantial revisions for intellectual content. All authors approved the final version of the manuscript and are accountable for all aspects included in it.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ. 2010;340:c365.
2. Saini P, Loke YK, Gamble C, Altman DG, Williamson PR, Kirkham JJ. Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ. 2014;349:g6501.
3. Fanelli D. “Positive” results increase down the hierarchy of the sciences. PLoS One. 2010;5:e10068.
4. Sterling T. Publication decisions and their possible effects on inferences drawn from tests of significance, or vice versa. J Am Stat Assoc. 1959;285:30–4.
5. Masicampo EJ, Lalande DR. A peculiar prevalence of p values just below .05. Q J Exp Psychol (Hove). 2012;65:2271–9.
6. Krawczyk M. The search for significance: a few peculiarities in the distribution of P values in experimental psychology literature. PLoS One. 2015;10:e0127872.
7. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.
8. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14:365–76.
9. US National Institutes of Health. ClinicalTrials.gov. https://clinicaltrials.gov/. Accessed 15 Feb 2017.
10. Downing NS, Aminawung JA, Shah ND, Krumholz HM, Ross JS. Clinical trial evidence supporting FDA approval of novel therapeutic agents, 2005-2012. JAMA. 2014;311:368–77.
11. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349:aac4716.
12. Ioannidis JPA. Why most discovered true associations are inflated. Epidemiology. 2008;19:640–8.
13. Cuijpers P, Smit F, Bohlmeijer E, Hollon SD, Andersson G. Efficacy of cognitive-behavioural therapy and other psychological treatments for adult depression: meta-analytic study of publication bias. Br J Psychiatry. 2010;196:173–8.
14. Driessen E, Hollon SD, Bockting CL, Cuijpers P, Turner EH. Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLoS One. 2015;10:e0137864.
15. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358:252–60.
16. Roest AM, de Jonge P, Williams CD, de Vries YA, Schoevers RA, Turner EH. Reporting bias in clinical trials investigating the efficacy of second-generation antidepressants in the treatment of anxiety disorders: a report of 2 meta-analyses. JAMA Psychiatry. 2015;72:500–10.
17. Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Med. 2012;9:e1001189.
18. Hughes S, Cohen D, Jaggi R. Differences in reporting serious adverse events in industry sponsored clinical trial registries and journal articles on antidepressant and antipsychotic drugs: a cross sectional study. BMJ Open. 2014;4:e005535.
19. Vaughan B, Goldstein MH, Alikakos M, Cohen LJ, Serby MJ. Frequency of reporting of adverse events in randomized controlled trials of psychotherapy vs. psychopharmacotherapy. Compr Psychiatry. 2014;55:849–55.
20. Trinquart L, Abbe A, Ravaud P. Impact of reporting bias in network meta-analysis of antidepressant placebo-controlled trials. PLoS One. 2012;7:e35219.
21. Mavridis D, Efthimiou O, Leucht S, Salanti G. Publication bias and small-study effects magnified effectiveness of antipsychotics but their relative ranking remained invariant. J Clin Epidemiol. 2015;69:161–9.
22. BioMed Central. ISRCTN registry. http://www.isrctn.com/. Accessed 15 Feb 2017.
23. UK National Institute for Health Research. PROSPERO. International prospective register of systematic reviews. https://www.crd.york.ac.uk/PROSPERO/. Accessed 15 Feb 2017.
24. CONSORT. Transparent reporting of trials. CONSORT Statement. http://www.consort-statement.org/. Accessed 15 Feb 2017.
25. EQUATOR Network. Enhancing the QUAlity and Transparency Of health Research. http://www.equator-network.org/reporting-guidelines/prisma/. Accessed 15 Feb 2017.
26. Knuppel H, Metz C, Meerpohl JJ, Strech D. How psychiatry journals support the unbiased translation of clinical research. A cross-sectional study of editorial policies. PLoS One. 2013;8:e75995.
27. Dal-Ré R, Bracken MB, Ioannidis JP. Call to improve transparency of trials of non-regulated interventions. BMJ. 2015;350:h1323.
28. National Institutes of Health. NIH policy on the dissemination of NIH-funded clinical trial information. Notice Number: NOT-OD-16-149. Release date: 16 September 2016. http://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-149.html. Accessed 15 Feb 2017.
29. World Medical Association. The Declaration of Helsinki. http://www.wma.net/es/30publications/10policies/b3/17c.pdf. Accessed 15 Feb 2017.
30. Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One. 2015;10:e0133718.
31. Shinohara K, Tajika A, Imai H, Takeshima N, Hayasaka Y, Furukawa TA. Protocol registration and selective outcome reporting in recent psychiatry trials: new antidepressants and cognitive behavioral therapies. Acta Psychiatr Scand. 2015;132:489–98.
32. Bradley HA, Rucklidge JJ, Mulder RT. A systematic review of trial registration and selective outcome reporting in psychotherapy randomized controlled trials. Acta Psychiatr Scand. 2017;135:66–75.
33. Dal-Ré R, Caplan AL. Journal editors’ impasse with outcome reporting bias. Eur J Clin Invest. 2015;45:895–8.
34. Mavridis D, Salanti G. How to assess publication bias: funnel plot, trim-and-fill method and selection models. Evid Based Ment Health. 2014;17:30.
35. Ebrahim S, Bance S, Athale A, Malachowski C, Ioannidis JP. Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. J Clin Epidemiol. 2016;70:155–63.
36. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202.
37. Vedula SS, Li T, Dickersin K. Differences in reporting of analyses in internal company documents versus published trial reports: comparisons in industry-sponsored trials in off-label uses of gabapentin. PLoS Med. 2013;10:e1001378.
38. van Lent M, IntHout J, Out HJ. Differences between information in registries and articles did not influence publication acceptance. J Clin Epidemiol. 2015;68:1059–67.
39. Ioannidis J, Caplan AL, Dal-Ré R. Outcome reporting bias in clinical trials: why monitoring matters. BMJ. 2017;356:j408.