Adherence to the International Committee of Medical Journal Editors’ (ICMJE) prospective registration policy and implications for outcome integrity: a cross-sectional analysis of trials published in high-impact specialty society journals
Trials, volume 19, Article number: 448 (2018)
Registration of clinical trials is critical for promoting transparency and integrity in medical research; however, trials must be registered in a prospective fashion to deter unaccounted protocol modifications or selection of alternate outcomes that may enhance favorability of reported findings. We assessed adherence to the International Committee of Medical Journal Editors’ (ICMJE) prospective registration policy and identified the frequency of registrations occurring after potential observation of primary outcome data among trials published in the highest-impact journals associated with US professional medical societies. Additionally, we examined whether trials that are unregistered or registered after potential observation of primary outcome data were more likely to report favorable findings.
We conducted a retrospective, cross-sectional analysis of the 50 most recently published clinical trials that reported primary results in each of the ten highest-impact US medical specialty society journals between 1 January 2010 and 31 December 2015. We used descriptive statistics to characterize the proportions of trials that were: registered; registered retrospectively; registered retrospectively potentially after initial ascertainment of primary outcomes; and reporting favorable findings, overall and stratified by journal and trial characteristics. Chi-squared analyses were performed to assess differences in registration by journal and trial characteristics.
We reviewed 6869 original research reports published between 1 January 2010 and 31 December 2015 to identify a total of 486 trials across 472 publications. Of these 486 trials, 47 (10%) were unregistered. Among 439 registered trials, 340 (77%) were registered prospectively and 99 (23%) retrospectively. Sixty-seven (68%) of these 99 retrospectively registered trials, or 15% of all 439 registered trials, were registered after potential observation of primary outcome data ascertained among participants enrolled at inception. Industry-funded trials, those with enrollment sites in the US, as well as those assessing FDA-regulated interventions each had lower rates of retrospective registration. Unregistered trials were more likely to report favorable findings than were registered trials (89% vs. 64%; relative risk (RR) = 1.38, 95% confidence interval (CI) = 1.20–1.58; p = 0.004), irrespective of registration timing.
Adherence to the ICMJE’s prospective registration policy remains sub-standard, even in the highest-impact journals associated with US professional medical societies. These journals frequently published unregistered trials and trials registered after potential observation of primary outcome data.
Registration of clinical trials is critical for promoting transparency and integrity in medical research, helping to ensure a complete and unbiased record of all clinical trials [1,2,3]. Registration alone, however, is insufficient, as trials must be registered in a prospective fashion to help mitigate selective reporting, which may include addition or removal of outcome measures, preferential publication of statistically significant findings, and modification of which outcome measures were pre-specified as primary.
The International Committee of Medical Journal Editors (ICMJE) adopted a policy to encourage prospective trial registration, mandating that all clinical trials beginning enrollment in or after July 2005 be registered prospectively, at or before the time of first patient enrollment, as a condition for publication in its member journals. Since implementation of this policy, trial registration at ClinicalTrials.gov, the largest international clinical trial registry, and at other trial registries has increased substantially. However, a small but notable number of trials remain unregistered, including those that are published [7,8,9,10,11,12,13].
Moreover, despite increasing rates of registration, prospective registration of trials is still lacking [7, 8, 11,12,13,14,15,16]. Although more than 2900 journals support general ICMJE manuscript publication guidelines, a 2014 survey found that journal editors do not consistently adhere to ICMJE’s prospective trial registration policy [18, 19]. Previous research suggests that even in the highest-impact general medical journals, 28% of published trials were registered retrospectively. In some cases, registration occurred late enough to raise concerns about whether the specified primary outcome measure had been modified after trial inception, as retrospective registration may provide opportunity for unaccounted protocol modifications or selection of alternate outcomes to enhance the favorability of reported findings.
While previous studies of journal adherence to the ICMJE prospective trial registration policy have thus far either focused on the highest-impact general medical journals or sampled within field-specific journals [7, 8, 11,12,13,14,15], little is known about registration of trials published among high-impact specialty society journals. Specialty society journals, administered by professional organizations (e.g., the American Society of Clinical Oncology), tend to represent the views of their constituent specialists. They publish trials that are of great interest to their respective communities and that influence clinical practice. Although specialty journals typically have lower impact factors than the highest-impact general medical journals, they are often a preferred source of clinical information and guidelines for specialists.
To assess trial registration within specialty society journal publications, we constructed a large sample of clinical trials recently published in ten high-impact US specialty society medical journals. We evaluated adherence to ICMJE prospective trial registration policy; identified instances of registration after potential observation of primary outcome data; and determined characteristics associated with prospective registration. We additionally compared registered and published primary outcomes among prospectively and retrospectively registered trials and identified predictors of primary outcome non-discordance. Finally, we examined whether trials that were unregistered or registered after potential observation of primary outcome data were more likely to report favorable study findings.
See Additional file 1 for the study protocol.
We identified US specialty society medical journals using a list of US-based medical professional organizations associated with any of the specialties registered with the American Board of Medical Specialties. We searched for additional journals using SCImago Journal & Country Rank listings, adding to our list any journals associated with a US-based medical specialty organization. We selected the ten journals with the highest impact factors after excluding general practice journals and journals that do not publish clinical trials. For each journal in our sample, we verified endorsement of trial registration, as indicated by a statement on the journal’s website or listing of the journal on the ICMJE’s catalog of journals that follow its recommendations, as a condition for inclusion.
Clinical trial sample selection
We reviewed original research articles, including brief reports and communications but not research letters or correspondences, to identify the 50 most recent primary publications of clinical trials in each journal, beginning with articles published in print journal issues in December 2015 and continuing in reverse chronology as far back as January 2010. We used the table of contents of each print journal issue to identify articles for possible inclusion. Clinical trials were systematically identified by screening the article’s abstract and, if necessary, the Methods section, for statements meeting the World Health Organization’s (WHO) definition of a clinical trial, also used by ICMJE, namely any study that “prospectively assigns people or a group of people to an intervention, with or without concurrent comparison or control groups, to study the cause-and-effect relationship between a health-related intervention and a health outcome”.
We limited our sample to publications reporting the findings of a trial’s primary outcome(s), which is most pertinent to the information contained within its registration, and excluded trials reporting secondary analyses of previously published results, secondary outcomes only, or interim analyses of primary outcomes. We further excluded publications describing phase I trials, as these studies typically do not assess effectiveness and have minimal impact, if any, on clinical practice. We additionally excluded trials beginning prior to July 2005, since trials preceding the ICMJE policy were unlikely to be prospectively registered.
From trial publications, we extracted information on journal, intervention, allocation, manuscript submission date, enrollment start date, primary outcome(s) with associated findings, and registration number(s) corresponding to the trial(s) reported. To account for the possibility of duplicate registrations, we searched the WHO International Clinical Trials Registry Platform (ICTRP), which aggregates registrations across registries endorsed by WHO, and in turn endorsed by ICMJE, using the reported registration numbers and additionally reviewed registrations for alternate identifiers to determine the earliest registration for each trial. For publications not reporting registration information, we searched the WHO ICTRP platform using search terms pertaining to intervention, first author, senior author, and sponsor to identify unpublished registrations, cross-referencing potential matches against sample size and enrollment criteria. We contacted corresponding authors of unmatched trials for registration information before concluding that the published trial was unregistered.
Using the earliest registration for each trial, we collected registration date, primary outcome submission date, primary completion date, start date, primary outcome(s) at initial registration, enrollment, phase, location, and funding source. We supplemented information on the latter four elements from trial publications when missing from the registry. Among trials we determined to have been first registered at ClinicalTrials.gov, we additionally collected date of original primary outcome submission, which the registry uniquely lists separately from the trial registration date. We categorized intervention, funding source, location, enrollment, and allocation as outlined in Table 1 for use in pre-specified stratified analyses. We considered interventions involving drugs, devices, or biologicals as regulated by the Food and Drug Administration (FDA).
Data abstractions were performed in tandem by ADG and JAA. Consistency and accuracy were verified through a 10% random sample validation of each investigator’s collections. A third author (JDW) repeated all searches for trials that were determined to be unregistered, supplementing with additional searches of the National Institutes of Health (NIH) funding database using grant funding identifiers when listed in publications. All disagreements were resolved by consensus with input from the senior investigator (JSR).
Main outcome measures
We first determined whether each trial was registered by ensuring that a corresponding registration record could be located. For registered trials, we next ascertained timeliness of registration by determining whether the trial was registered within 30 days of its enrollment start date. Trials registered more than 30 days after enrollment initiation were categorized as “retrospective.” Although ICMJE policy mandates registration at or before the time of first patient enrollment, we allowed a 30-day grace period between enrollment initiation and registration to account for potential flexibility on the part of journal editors with regard to registration timeliness. Further, because the FDA Amendments Act (FDAAA) of 2007 legally mandates registration within 21 days of first patient enrollment for studies of therapeutics and devices regulated by FDA, a 30-day grace period was chosen to accommodate inconsistencies between legal and journal requirements. Month-based dates were recorded as the last day of the corresponding month (i.e., September 2012 was transcribed as 30 September 2012) to conservatively classify registrations as “retrospective.” We elected to use enrollment start dates reported in registries rather than those reported in publications in our determinations of registration timeliness, as enrollment start date may not be consistently reported in publications, while, in registries, it is a mandatory registration element and, hence, less easily excluded or otherwise misrepresented.
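The timeliness rule described above reduces to simple date arithmetic. The following is a hypothetical sketch of that classification logic (the study's actual abstraction was performed manually, and function names here are illustrative):

```python
from datetime import date, timedelta
import calendar

GRACE_PERIOD_DAYS = 30  # grace window beyond the registry-reported enrollment start date

def month_end(year: int, month: int) -> date:
    """Transcribe a month-only date as the last day of that month, the
    conservative convention used when only e.g. 'September 2012' is reported."""
    return date(year, month, calendar.monthrange(year, month)[1])

def is_retrospective(registration_date: date, enrollment_start: date) -> bool:
    """A trial is 'retrospective' if registered more than 30 days
    after its registry-reported enrollment start date."""
    return registration_date > enrollment_start + timedelta(days=GRACE_PERIOD_DAYS)

# 'September 2012' is transcribed as 30 September 2012
start = month_end(2012, 9)
assert start == date(2012, 9, 30)

# Registered 29 October 2012: within the 30-day grace period -> prospective
assert not is_retrospective(date(2012, 10, 29), start)
# Registered 15 December 2012: beyond the grace period -> retrospective
assert is_retrospective(date(2012, 12, 15), start)
```

The month-end convention errs in the trial's favor: any registration classified as retrospective under this rule would also be retrospective under any earlier day within the reported month.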
Among trials registered retrospectively, we established whether registration might have occurred after initial ascertainment of the primary outcome, and hence potentially permitted unaccounted protocol modifications after interim analyses, by comparing the trial’s registration date against the date on which the primary outcome would have been collected for the trial’s first enrolled participant(s). For example, a trial with a primary outcome assessing serum creatinine levels at 6 weeks that registered in November 2012 but that began enrolling patients in February 2012 would have been retrospectively registered after observation of the primary outcome among participants enrolled at the trial’s initiation. In this case, investigators could have in theory observed primary outcome data for participants that enrolled at trial initiation and modified the trial based on data collected from the trial’s first participants. Without information on trial-specific recruitment, however, we cannot comment on how robust or useful this initial primary outcome data may have been in an interim analysis informing post hoc modifications, though we can comment on the theoretical potential for an interim analysis to have occurred in this scenario. In instances where multiple time frames were designated as primary, we based our calculations on the shortest primary outcome time frame specified in the registry. If no time frame was listed in association with the registered primary outcome, we used the time frame described in the trial’s publication. We noted cases where the nature of the primary outcome (e.g., median survival) did not permit this determination.
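The worked example above (enrollment beginning February 2012, a 6-week primary outcome, and registration in November 2012) can be expressed as a hypothetical comparison of dates; the function name and month-end transcription here are illustrative assumptions consistent with the conventions described in this section:

```python
from datetime import date, timedelta

def registered_after_potential_observation(registration_date: date,
                                           enrollment_start: date,
                                           outcome_timeframe_days: int) -> bool:
    """True if the trial was registered after the date on which the primary
    outcome could first have been ascertained for the earliest enrolled
    participant(s): enrollment start plus the shortest primary outcome time frame."""
    first_possible_observation = enrollment_start + timedelta(days=outcome_timeframe_days)
    return registration_date > first_possible_observation

# Worked example from the text: enrollment began February 2012 (month-end
# convention: 29 February 2012), primary outcome at 6 weeks, registration in
# November 2012 (30 November 2012).
enrollment = date(2012, 2, 29)
registration = date(2012, 11, 30)
assert registered_after_potential_observation(registration, enrollment, 6 * 7)
```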
We next compared primary outcomes at initial registration against those specified in publications, excluding any primary outcomes pertaining specifically to safety or tolerability. We classified primary outcomes as discordant or non-discordant. Primary outcome comparisons were performed in a systematic manner across three sequential domains: number of primary outcome(s), definition(s) of primary outcomes, and specified time frame(s) for primary outcome(s) ascertainment. Specifically, we began by first comparing the number of primary outcomes in the initial registration record against the number of outcomes explicitly stated as primary in the publication. If the number of primary outcomes differed, the primary outcomes were classified as discordant. If the number of primary outcomes was the same, we compared the definition of the primary outcome (e.g., all-cause mortality vs. recurrent VTE). If the primary outcomes differed in how they were defined, they were classified as discordant. Because primary outcomes may be registered in varying levels of detail, it may not be possible to know with certainty whether registered and published primary outcomes are truly concordant. Accordingly, we classified registered and published primary outcome definitions as discordant on the basis of explicit known differences in what was specified, rather than focusing on differences in level of specification. For example, if a trial registration specified a primary outcome definition of “anxiety level” and the publication reported a primary outcome definition of “Hamilton Anxiety Rating Scale,” we classified these outcome definitions as non-discordant rather than discordant. In doing so, we sought to avoid classifying primary outcomes as discordant based simply on differences in the level of detail provided between registrations and published reports.
In cases where the number of registered and published primary outcomes was the same but greater than one, registered and published primary outcome definitions were paired and compared on the basis of clinical judgment with input from the senior investigator. If primary outcome definitions did not differ, we next compared the time frame, if available, at which the primary endpoints were ascertained. If the explicitly stated time frames differed (i.e., 3-month mortality vs. 30-day mortality), these outcomes were classified as discordant. We noted cases where registered outcomes were too poorly specified (e.g., vague study of “efficacy of intervention”) to permit comparison. Trials without a designated primary outcome specified in the publication were excluded from outcome comparison analyses.
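The three-domain comparison cascade described above can be sketched as a simple decision procedure. This is a minimal illustration, not the authors' instrument: in the study, definition matching relied on clinical judgment rather than the string equality used here, and vague or unpaired outcomes were handled separately:

```python
def compare_primary_outcomes(registered, published):
    """Sequential three-domain comparison of primary outcomes.
    Each argument is a list of (definition, timeframe) tuples; timeframe may
    be None when not explicitly stated. Returns 'discordant' or 'non-discordant'."""
    # Domain 1: number of primary outcomes
    if len(registered) != len(published):
        return "discordant"
    for (reg_def, reg_time), (pub_def, pub_time) in zip(registered, published):
        # Domain 2: outcome definitions (explicit known differences only)
        if reg_def != pub_def:
            return "discordant"
        # Domain 3: explicitly stated time frames, when both are available
        if reg_time is not None and pub_time is not None and reg_time != pub_time:
            return "discordant"
    return "non-discordant"

# Same definition, explicitly differing time frames -> discordant
assert compare_primary_outcomes(
    [("all-cause mortality", "30 days")],
    [("all-cause mortality", "3 months")]) == "discordant"

# Same definition, one time frame unstated -> no explicit difference -> non-discordant
assert compare_primary_outcomes(
    [("all-cause mortality", None)],
    [("all-cause mortality", "30 days")]) == "non-discordant"
```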
Finally, we categorized each trial on the basis of its primary outcome findings whenever formal hypothesis testing had been conducted or inferences could be made regarding the statistical significance of reported findings (i.e., inferential studies). By examining trial publications, we determined whether the trial’s findings indicated, based on the reported primary outcome(s), that a study intervention was statistically significantly better (i.e., positive), statistically significantly worse (i.e., negative), or not statistically significantly different (i.e., not significant) than a designated comparison (placebo group, active group, or predefined threshold) and classified the overall trial accordingly. For trials that assessed a non-inferiority hypothesis, we considered establishment of non-inferiority to represent a “positive” result and failure to establish non-inferiority a “not significant” result. In instances where more than one primary outcome was reported, we categorized trials with at least one significant primary outcome as “positive” or “negative” on the basis of the statistically significant outcome; trials with mixed findings (some positive and some negative primary outcomes) were classified by prioritizing the results of clinical outcomes over surrogate markers. Trials with mixed results among all-clinical or all-surrogate primary outcomes were arbitrated based on the relative importance of the significant outcomes in question. For trials that did not specifically designate a primary outcome, any outcomes reported in the trial’s abstract were considered primary and the overall study was categorized using the scheme described previously. Trials that presented analyses in a solely descriptive manner or that lacked a designated comparison against which to judge the statistical significance of reported results were noted as “non-inferential” and excluded from analyses of association.
Trials categorized as “positive” were judged to report overall favorable findings, whereas those categorized as “negative” or “not significant” were judged to report overall unfavorable findings.
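The classification scheme for multi-outcome trials can be summarized as the following hypothetical sketch. It models only the mechanical parts of the rule; ties requiring arbitration by relative importance, which the study resolved by judgment, are flagged rather than decided:

```python
def classify_findings(outcomes):
    """Classify an inferential trial from its primary outcome results.
    `outcomes` is a list of (result, is_clinical) tuples, where result is
    'positive', 'negative', or 'not significant', and is_clinical marks
    clinical (vs. surrogate) outcomes."""
    significant = [(r, c) for r, c in outcomes if r != "not significant"]
    if not significant:
        return "not significant"
    results = {r for r, _ in significant}
    if len(results) == 1:  # all significant outcomes agree
        return results.pop()
    # Mixed findings: prioritize clinical outcomes over surrogate markers
    clinical = [r for r, c in significant if c]
    if clinical and len(set(clinical)) == 1:
        return clinical[0]
    return "requires arbitration"  # resolved by judgment in the study

assert classify_findings([("positive", True)]) == "positive"
# At least one significant outcome drives the classification
assert classify_findings([("not significant", True), ("positive", False)]) == "positive"
# Mixed positive/negative: the clinical outcome takes priority
assert classify_findings([("positive", False), ("negative", True)]) == "negative"
```

Trials classified as "positive" map to favorable findings; "negative" and "not significant" map to unfavorable findings.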
We used descriptive statistics to characterize the proportion of trials registered, the proportion registered retrospectively, as well as the proportion registered after initial primary outcome ascertainment, overall, and stratified by specialty journal and trial characteristics. We additionally determined the proportion of trials with non-discordant registered and published primary outcome measures and the proportion reporting favorable findings, overall, and stratified by journal, registration timeliness, intervention, funding source, location, allocation, and enrollment. We used chi-squared testing to assess differences in registration and registration timeliness by journal and by each of the aforementioned trial characteristics. We also used chi-squared testing to assess differences in primary outcome concordance and study findings by journal, trial characteristics, and timeliness of registration. All statistical hypothesis testing was performed using chi-squared analyses with a two-sided type I error level of 0.006 to account for multiple comparisons. Accordingly, all reported p values correspond to chi-squared testing. We additionally report relative risks (RR) with 95% confidence intervals (CI) for all chi-squared comparisons involving two binary variables. Statistical analyses were performed using JMP Pro Version 11.2.0 (SAS Institute, Inc.).
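The relative risks and confidence intervals reported below follow the standard log-scale (Wald) computation. The authors performed their analyses in JMP; this Python sketch is illustrative only, reproducing the reported comparison of favorable findings among unregistered (31 of 35) versus registered (251 of 390) trials:

```python
import math

def relative_risk(a, n1, c, n2, z=1.96):
    """Relative risk of the event in group 1 vs group 2, with a 95% CI
    computed on the log scale (Wald method).
    a/n1 = events/total in group 1; c/n2 = events/total in group 2."""
    rr = (a / n1) / (c / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Favorable findings: 31 of 35 unregistered vs 251 of 390 registered trials
rr, lower, upper = relative_risk(31, 35, 251, 390)
assert round(rr, 2) == 1.38       # RR = 1.38, as reported
assert round(lower, 2) == 1.20    # 95% CI lower bound
assert round(upper, 2) == 1.58    # 95% CI upper bound
```

Note that a 95% CI excluding 1 does not by itself establish significance here, since the pre-specified type I error level was 0.006 to account for multiple comparisons.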
We reviewed 6869 original research reports published in the period between 1 January 2010 and 31 December 2015 to identify the 50 most recent primary trial publications in each of ten high-impact specialty journals (Fig. 1). Two journals (Annals of Neurology, n = 37; Journal of the American Society of Nephrology, n = 35) published fewer than 50 eligible primary trial publications in this period. After excluding publications describing phase I trials (n = 44) and trials initiating enrollment prior to July 2005 (n = 60), there were 472 publications reporting the primary results of 486 clinical trials (14 articles described multiple trials).
Characteristics of eligible trials
Among our final sample of 486 trials, 76% (n = 372) were randomized studies (Table 1). Eighty-one percent (n = 392) assessed interventions involving drugs, devices, or vaccines/biologicals. Forty-four percent received industry funding (n = 216), and just over half recruited patients at one or more sites located in the US (n = 250; 51%). Phase II designations were most frequent (n = 190; 39%). Median enrollment across all trials was 127 participants (interquartile range (IQR), 49–300). Eighty-nine percent (n = 433) of trials were published since 2013. The median impact factor among journals in our sample was 12.24 (range, 8.5–21.0).
Forty-seven (10%) of the 486 trials were not registered. Two trials (0.4%) reported registration numbers for which a matching registration could not be located. Of 439 registered trials, 33 (8%) did not report a trial registration number in their publication, requiring further searching to identify corresponding registration records. All registered trials (n = 439, 100%) were registered in registries endorsed by ICMJE, though duplicate registrations across more than one registry were not uncommon (n = 81; 18%). Eighty-seven percent of registered trials (n = 383) were registered at ClinicalTrials.gov, which accounted for 79% of initial trial registrations (n = 346).
Specialty journals differed in their rates of trial registration (Table 2) (p < 0.001). Annals of Neurology published the greatest proportion of unregistered trials (43%; 16 of 37), accounting for 34% of trials without registration; in comparison, all trials published in the Journal of Clinical Oncology and Blood, both of which primarily publish oncology trials, were registered. Registration was more frequent among trials involving drugs, devices, or vaccines/biologicals (361 of 392; 92%) compared to those involving other intervention types (78 of 94; 83%), though this did not reach statistical significance (RR = 1.11, 95% CI = 1.01–1.22; p = 0.007). Randomization, larger trial size, enrollment sites in the US, and industry funding were each additionally associated with higher rates of registration (Table 3).
Timeliness of registration
Among the 439 registered trials, 99 (23%) were registered retrospectively (i.e., more than 30 days after beginning patient enrollment) based on the enrollment start date reported in the registry. The median delay in registration was 8 months (IQR, 5–19; range, 1–88). Sixty-seven (68%) of the 99 retrospectively registered trials, or 15% of all 439 registered trials, were registered after potential observation of primary outcome data ascertained among participants enrolled at inception (Table 4). Of 302 trials with a registered primary completion date, 7 (2%) were registered after reported completion of data collection for the trial’s primary outcomes. Two (2%) of 88 retrospectively registered trials that listed a manuscript submission date were found to have registered after submission of the manuscript to the publishing journal, though this does not account for retrospective registrations that may have occurred after prior submissions to other journals that had an earlier opportunity to consider the trial for publication. Only one (1%) of 99 retrospectively registered trials acknowledged late registration in its publication, attributing the delay to principal investigator oversight and offering access to the original study protocol upon request.
Journals did not differ significantly in their rates of overall prospective registration (p = 0.21), but did differ in their rates of registration before initial primary outcome ascertainment (p = 0.004) (Table 2). Trials involving industry funding, enrollment sites in the US, and assessing drugs, devices, or vaccines/biologicals each had higher rates of prospective registration as compared to those without industry funding, enrolling at only non-US sites, and assessing non-regulated interventions (Table 3).
Primary outcome concordance
Among the 439 registered trials, 15 (3.4%) failed to register a primary outcome at initial registration, though 14 of these 15 published a primary outcome. Twenty-six trials, nearly all of which (n = 25; 96%) registered a primary outcome at initial registration, did not designate one in their publications. Of 413 registered trials designating at least one primary outcome in their publications, 60% (n = 249) published primary outcomes that were non-discordant from those specified at initial registration. Twenty-six percent (n = 109) published primary outcomes discordant from those initially registered. Seventy-eight (72%) of these 109 discrepancies were based on either the number or definition of primary outcomes, whereas 31 (28%) were based on the specified time frame of primary outcome ascertainment. The remaining 13% (55 of 413) registered initial primary outcomes that were too poorly specified to permit comparison with published outcomes. Among the 346 trials registered first at ClinicalTrials.gov, 19 (5%) trials listed original primary outcome measures that were submitted at least 30 days after the reported registration date. Seven (37%) of these 19 were trials whose registration was already retrospective.
Among the 249 trials with non-discordant published and registered primary outcomes, 80% (n = 198) were registered prospectively; 20% (51 of 249) were registered retrospectively. Neither overall prospective registration (RR = 1.06, 95% CI = 0.95–1.18; p = 0.31) nor registration prior to initial primary outcome ascertainment (RR = 1.05, 95% CI = 0.96–1.14; p = 0.29) was associated with non-discordance between registered and published primary outcomes. Even so, just one of seven trials determined to have been registered after their primary completion date published outcomes non-discordant with those initially registered, despite the significant delay in registration.
Favorability of trial findings
Among the 486 trials in our sample, 425 (87%) reported primary outcome findings from which inferences about the statistical significance of reported outcomes could be drawn; 61 trials (13%) were non-inferential, including descriptive or single-arm studies without a specified comparator, and could not be judged accordingly. Sixty-six percent (n = 282) of the 425 inferential trials reported favorable primary outcome findings. Of 143 (34%) trials reporting unfavorable primary outcome findings, most (n = 135; 94%) reported findings that were not significant, while eight (6%) reported negative findings. Unregistered trials were more likely to report favorable findings (31 of 35; 89%) than were registered trials (251 of 390; 64%) (RR = 1.38, 95% CI = 1.20–1.58; p = 0.004), irrespective of registration timing. Reporting of favorable findings appeared more frequent among trials potentially vulnerable to unaccounted primary outcome modifications (73 of 96; 76%), which included those that were unregistered and those registered after initial primary outcome ascertainment, compared to those registered prior to initial primary outcome ascertainment (206 of 321; 64%), but the difference did not reach statistical significance at our corrected threshold (RR = 1.18, 95% CI = 1.03–1.36; p = 0.03).
Our study of clinical trials recently published in ten high-impact specialty society journals, all requiring trial registration, found that 10% of published trials were unregistered. Moreover, among registered trials, nearly one quarter were registered retrospectively. Of these, more than two thirds, or 15% of all registered trials, were registered after potential observation of primary outcome data, affording opportunity for unaccounted protocol modifications informed by premature analyses of observed primary outcome data. Irrespective of registration timing, post-registration modifications to primary outcomes were frequent, as 26% of trials published primary outcomes that differed from those specified at initial registration. Finally, unregistered trials reported favorable findings at a higher rate than registered trials. The publication of unregistered trials and trials registered after initial primary outcome ascertainment raises concerns about selective reporting and the integrity of reported outcomes, as these trials are vulnerable to potential changes obscured from public record.
Despite policies to improve registration rates [25, 28, 29], publication of unregistered trials persists. Our study demonstrates that even the highest-impact journals associated with US professional medical societies publish unregistered trials, albeit some more frequently than others. Consistent with earlier studies [7,8,9,10,11,12,13], our findings suggest that, more than a decade since implementation of policies designed to promote universal registration, continued efforts are needed to ensure that all trials are registered, even among those that are published. Registration was more frequent among trials assessing FDA-regulated interventions as compared to trials evaluating non-regulated interventions, such as behavioral and procedural interventions, as prior research has suggested. Unlike trials of FDA-regulated interventions, trials evaluating non-regulated interventions are not subject to legal requirements, which may explain observed differences in registration rates. Inconsistencies between legal and ICMJE registration policies may in fact serve as a potential source of confusion to investigators. We additionally noted higher registration rates among larger trials and those receiving industry support. As each of the specialty society journals we assessed requires trial registration, our results indicate that some journals do not consistently adhere to their own registration policies. Prior work indicates that journals may in fact relax their own registration requirements for various reasons, including reluctance to penalize otherwise sound research, apprehension about losing manuscripts to rival journals, and misconceptions about the applicability of registration policies. Regardless of the rationale, publication of unregistered trials risks dissemination of trials lacking accountability and potentially influenced by selective reporting.
Our study and prior work examining cardiovascular clinical trials demonstrate that unregistered trials more frequently report favorable findings, though a recent study examining a large sample of unselected trials found only a marginal association. Nevertheless, stricter adherence to registration policies may help prevent the publication of trials that are selectively reporting results, biasing the medical literature.
While registering trials can help mitigate selective reporting, registration must occur prospectively, in accordance with ICMJE policy, to effectively detect and deter biased reporting. Despite the importance of prospective registration, nearly one in four trials in our sample was published after having been registered retrospectively. Furthermore, the majority of late registrations were delayed so long after enrollment of the trials' first participants that investigators could have had the opportunity to amend primary outcomes after conducting interim analyses. For trials registered after ascertainment of outcomes, it is nearly impossible to determine the degree to which published reports diverge from the original protocol, given the potential for modifications occurring covertly before registration. While the frequency of post-registration outcome modifications does not appear to depend on the timeliness of trial registration, we cannot comment on the frequency and effects of pre-registration protocol modifications beyond identifying situations in which they could have occurred.
Our findings are consistent with prior research that prospective registration is more frequent among certain trial types, including those involving FDA-regulated interventions and those receiving industry support [8, 15]. Compared with existing studies [8, 15, 32], however, retrospective registration was overall less frequent in our sample. Notwithstanding the possibility that specialty society journals are in better overall adherence, there are several methodological explanations for this observation: our use of each trial's earliest registration record, which is not always the one reported in publications; our application of a 30-day grace period between enrollment initiation and registration; and our conservative treatment of month-based reporting of enrollment start dates, which ensured that true prospective registrations were not misclassified. Only one prior study has assessed the timeliness of registration as it relates to its potential effect on reported outcomes, specifically within the context of the six highest-impact general medical journals. While late registration was less frequent among specialty society journals than among the general medical journals assessed in that study (23% vs. 28%), we observed a higher proportion of late registrations that potentially permitted outcome modification informed by interim analyses (15% vs. 8%). Our findings are further consistent with prior research on selective outcome reporting, indicating that discrepancies between registered and published primary outcomes persist among clinical trials [33,34,35]. A systematic review of studies comparing protocol or registry entries against publications of randomized controlled trials found that 4–50% of trial reports contained evidence of selective reporting involving primary outcomes, a range in line with our observation that 26% of trials published primary outcomes that differed from those specified at initial registration.
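The classification rules described above (earliest registration record, 30-day grace period, conservative resolution of month-only enrollment dates, and flagging of registrations occurring after primary outcome data could have been observed) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the study's actual analysis code; the function names and the three-category output are our own.

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)  # grace period applied in the study

def parse_enrollment_start(year, month, day=None):
    """Resolve an enrollment start date. When only a month is reported,
    conservatively assume the last day of that month, so borderline
    trials are not misclassified as retrospectively registered."""
    if day is not None:
        return date(year, month, day)
    if month == 12:
        return date(year, 12, 31)
    # Last day of the month = first day of next month minus one day
    return date(year, month + 1, 1) - timedelta(days=1)

def classify_registration(enrollment_start, registration_date,
                          first_outcome_observed=None):
    """Classify registration timing into one of three categories:
      'prospective'                 - registered within the grace period
      'retrospective'               - registered late, but before primary
                                      outcome data could have been observed
      'retrospective_post_outcome'  - registered after the earliest date on
                                      which primary outcome data could have
                                      been ascertained (if known)."""
    if registration_date <= enrollment_start + GRACE_PERIOD:
        return "prospective"
    if (first_outcome_observed is not None
            and registration_date > first_outcome_observed):
        return "retrospective_post_outcome"
    return "retrospective"
```

For example, a trial reporting only "March 2012" as its enrollment start and registered on 20 March 2012 is classified as prospective, since the conservative start date of 31 March 2012 plus the 30-day grace period covers the registration date.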
No prior study has examined the association between registration timeliness and selective reporting in terms of discordance between registered and published primary outcomes. Our findings suggest that the frequency of primary outcome discordance does not differ between prospectively and retrospectively registered trials.
Implications of study findings
Because journals control the dissemination of research, they are well positioned to help ensure the integrity of published material, which includes adequate and prospective registration of published trials. Specialty society journals, in particular, bear a significant responsibility to this end, as they publish trials of great interest and potential influence to their targeted clinical readerships. As part of the peer-review process, journals generally require the disclosure of trial registration information, though discrepancies between registered and reported material do not appear to influence the decision to accept or reject manuscripts, suggesting that editors may not scrutinize, or may choose to disregard, discrepancies. If oversight is, in fact, the driver, greater attention paid to trial registration during editorial review may reduce the rate at which potentially biased trials are published, including those that are unregistered or retrospectively registered.
However, while ICMJE policy advocates barring retrospectively registered trials from publication, it acknowledges that editors may judge for themselves the circumstances surrounding late registration and its potential bearing on reported outcomes. Accordingly, our findings may instead stem from editors deliberately choosing to publish non-compliant trials, which they may do for reasons suggested previously. A survey of editors from journals endorsing ICMJE guidelines found that two thirds would consider publishing retrospectively registered trials, though just 13% indicated that this consideration would be situation-dependent. For journal editors weighing the decision to publish such trials, ascertaining whether registration was sufficiently delayed to have potentially biased the reported results may help guide decisions regarding appropriate exceptions. The significance of study findings should be carefully weighed in the decision to accept or reject, given the potential for bias among unregistered or retrospectively registered trials. If journals elect to publish these trials, steps should be taken to ensure that original trial protocols, approved by and obtained directly from Institutional Review Boards, are made publicly available. Additionally, as ICMJE policy suggests, publication of non-compliant trials should be accompanied by published statements explaining why registration did not occur or was delayed and why journal editors nonetheless judged the trial fit for publication. Just one retrospectively registered trial in our sample addressed its delayed registration, offering to make its original protocol available upon request. While routine posting of original protocols for all trials, regardless of registration compliance, may mitigate concerns regarding biased reporting, such practices are infrequent.
Among journals in our sample, only the Journal of Clinical Oncology requires submission and publication of trial protocols, albeit only for phase II and III trials.
Our study has limitations. First, the ICMJE definition of a clinical trial is subject to interpretation, particularly in terms of what constitutes a “health-related intervention” and a “health outcome.” The ICMJE adopted an expanded clinical trial definition in 2007 clarifying the scope of these terms. Nevertheless, confusion regarding the applicability of registration requirements for interventional clinical studies may exist among investigators and journal editors. While the ICMJE believes that investigators should err towards prospectively registering all interventional studies of human subjects in cases of uncertainty, subjectivity in classifying studies as “clinical trials” may have influenced our observed frequency of unregistered trials, particularly in cases where the applicability of the ICMJE definition is not apparent. Second, our analysis does not represent a perfect audit of ICMJE registration policy, given our concession of a 30-day grace period and exclusion of phase I studies. Nevertheless, we aimed to capture the spirit of the policy rather than the strict letter of the law, to account for potential flexibility on the part of journals in the case of minimally delayed registrations. Third, our sample by design comprised clinical trials recently published in select high-impact specialty society journals; accordingly, our findings may not be representative of overall trial registration patterns or of all specialty society journals. Nevertheless, we selected the highest-impact specialty journals, the most prestigious in their respective fields, which are expected to adhere to the highest standards of trial registration practice. Fourth, our cross-sectional analysis did not examine potential improvements in trial registration within journals over time, nor did it account for the fact that journals may have adopted the ICMJE’s registration policy at different time points since its implementation.
Even so, the earliest trials in our sample were published in January 2010, nearly half a decade after the policy went into effect, with 89% of sampled trials published in the 3-year span since 2013. Finally, our study assessed only the frequency of modifications to primary outcomes, though selective reporting may also manifest through post-registration protocol modifications to other elements of trial design, including secondary outcomes and sample size, which we did not examine. Moreover, we were only able to comment on the possibility that retrospective registration invited unaccounted interim analyses or pre-registration protocol modifications, not on whether such analyses or modifications actually occurred. Such information could be ascertained only through examination of original trial protocols, which are often unavailable and lack complete information. Additionally, how informative interim analyses are depends, in some cases, on the trial’s pace of participant accrual, details of which are also generally not readily accessible.
Our large study of clinical trials published in ten high-impact specialty society journals demonstrates that registration of trials continues to fall short of the ICMJE’s standards necessary to ensure a complete and unbiased evidence base. Ten percent of published trials were unregistered. Moreover, nearly a quarter of registered trials were registered late, the majority of these after potential observation of primary outcome data, affording investigators the chance to implement modifications informed by collected data. Unregistered trials reported favorable study findings at a higher rate than registered trials, raising concerns that lack of accountability may exert undue influence on reported findings. While journals should generally avoid publishing improperly registered trials, exceptions should be acknowledged, justified, and accompanied by an evaluation and public posting of the study’s original protocol. Greater adherence to the ICMJE’s prospective trial registration policy may help reduce the publication of studies failing to meet proper standards and improve the integrity of published trial findings.
FDA: US Food and Drug Administration
ICMJE: International Committee of Medical Journal Editors
ICTRP: International Clinical Trials Registry Platform
WHO: World Health Organization
Chen R, Desai NR, Ross JS, Zhang W, Chau KH, Wayda B, Murugiah K, Lu DY, Mittal A, Krumholz HM. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016;352:i637.
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252–60.
Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009;1:1–28.
Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–65.
De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJ, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351(12):1250–1.
Zarin DA, Tse T, Ide NC. Trial registration at ClinicalTrials.gov between May and October 2005. N Engl J Med. 2005;353(26):2779–87.
Rayhill ML, Sharon R, Burch R, Loder E. Registration status and outcome reporting of trials published in core headache medicine journals. Neurology. 2015;85(20):1789–94.
Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One. 2015;10(8):e0133718.
Miller JE, Korn D, Ross JS. Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012. BMJ Open. 2015;5(11):e009758.
Dal-Re R, Bracken MB, Ioannidis JP. Call to improve transparency of trials of non-regulated interventions. BMJ. 2015;350:h1323.
Mann E, Nguyen N, Fleischer S, Meyer G. Compliance with trial registration in five core journals of clinical geriatrics: a survey of original publications on randomised controlled trials from 2008 to 2012. Age Ageing. 2014;43(6):872–6.
Huser V, Cimino JJ. Evaluating adherence to the International Committee of Medical Journal Editors’ policy of mandatory, timely clinical trial registration. J Am Med Inform Assoc. 2013;20(e1):e169–74.
Jones CW, Platts-Mills TF. Quality of registration for clinical trials published in emergency medicine journals. Ann Emerg Med. 2012;60(4):458–464.e451.
Harriman SL, Patel J. When are clinical trials registered? An analysis of prospective versus retrospective registration. Trials. 2016;17:187.
Dal-Re R, Ross JS, Marusic A. Compliance with prospective trial registration guidance remained low in high-impact journals and has implications for primary end point reporting. J Clin Epidemiol. 2016;75:100–7.
Gill CJ. How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the ClinicalTrials.gov database. BMJ Open. 2012;2(4) https://doi.org/10.1136/bmjopen-2012-001186.
Journals Following the ICMJE Recommendations. http://www.icmje.org/journals-following-the-icmje-recommendations/.
Hooft L, Korevaar DA, Molenaar N, Bossuyt PM, Scholten RJ. Endorsement of ICMJE’s clinical trial registration policy: a survey among journal editors. Neth J Med. 2014;72(7):349–55.
Wager E, Williams P. “Hardly worth the effort?” Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study. BMJ. 2013;347:f5248.
McKibbon KA, Haynes RB, McKinlay RJ, Lokker C. Which journals do primary care physicians and specialists access from an online service? J Med Libr Assoc. 2007;95(3):246–54.
Jones TH, Hanney S, Buxton MJ. The role of the National General Medical Journal: surveys of which journals UK clinicians read to inform their clinical practice. Medicina clinica. 2008;131(Suppl 5):30–5.
Specialty and Subspecialty Certificates. http://www.abms.org/member-boards/specialty-subspecialty-certificates/.
Scimago Journal & Country Rank. http://www.scimagojr.com/journalrank.php.
InCites Journal Citation Reports. https://jcr.incites.thomsonreuters.com/.
International Committee of Medical Journal Editors. Clinical Trial Registration. http://icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html.
National Institutes of Health (NIH) Research Portfolio Online Reporting Tools (RePORTER) Database. https://projectreporter.nih.gov/reporter.cfm.
Rudd MD, Bryan CJ, Wertenberger EG, Peterson AL, Young-McCaughan S, Mintz J, Williams SR, Arne KA, Breitbach J, Delano K, et al. Brief cognitive-behavioral therapy effects on post-treatment suicide attempts in a military sample: results of a randomized clinical trial with 2-year follow-up. Am J Psychiatry. 2015;172(5):441–9.
Food and Drug Administration Amendments Act of 2007. In: vol. 21 USC 301. United States of America; 2007.
European Commission. Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations, and administrative provisions of the member states relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2001:121:0034:0044:en:PDF.
Emdin C, Odutayo A, Hsiao A, Shakir M, Hopewell S, Rahimi K, Altman DG. Association of cardiovascular trial registration with positive study findings: Epidemiological Study of Randomized Trials (ESORT). JAMA Intern Med. 2015;175(2):304–7.
Odutayo A, Emdin CA, Hsiao AJ, Shakir M, Copsey B, Dutton S, Chiocchia V, Schlussel M, Dutton P, Roberts C, et al. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials-ESORT). BMJ. 2017;356:j917.
Zarin DA, Tse T, Williams RJ, Rajakannan T. Update on trial registration 11 years after the ICMJE policy was established. N Engl J Med. 2017;376(4):383–91.
Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302(9):977–84.
Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS One. 2013;8(7):e66844.
Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med. 2009;361(20):1963–71.
Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev. 2011;1:MR000031.
van Lent M, IntHout J, Out HJ. Differences between information in registries and articles did not influence publication acceptance. J Clin Epidemiol. 2015;68(9):1059–67.
Iqbal SA, Wallach JD, Khoury MJ, Schully SD, Ioannidis JP. Reproducible research practices and transparency across the biomedical literature. PLoS Biol. 2016;14(1):e1002333.
Journal of Clinical Oncology. Manuscript Guidelines. http://ascopubs.org/jco/site/ifc/manuscript-guidelines.xhtml.
International Committee of Medical Journal Editors. Clinical Trials Registration. http://icmje.org/about-icmje/faqs/clinical-trials-registration/.
The authors would like to thank Alexander R. Bazazi, MD, PhD and Joseph Herrold, MD, MPH for their efforts in drafting earlier versions of the study protocol. Drs. Bazazi and Herrold were not compensated for their contributions to this work.
This work was supported in part by the Laura and John Arnold Foundation to support the Collaboration for Research Integrity and Transparency (CRIT) at Yale, including JDW, GG, and JSR. JEM also receives support from the Laura and John Arnold Foundation. The Foundation played no role in the design of the study, analysis or interpretation of findings, or drafting the manuscript and did not review or approve the manuscript prior to submission. The authors assume full responsibility for the accuracy and completeness of the ideas presented.
Availability of data and materials
Requests for the statistical code and the dataset can be made to the corresponding author at firstname.lastname@example.org. The dataset will be made available via a publicly accessible repository on publication at the Dryad Digital Repository (datadryad.org).
Ethics approval and consent to participate
Consent for publication
ADG receives support through the Yale University School of Medicine Office of Student Research. In addition, JSR receives support through Yale University from Johnson and Johnson to develop methods of clinical trial data sharing, from the Centers of Medicare and Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting, from Medtronic, Inc. and the Food and Drug Administration (FDA) to develop methods for post-market surveillance of medical devices, from the FDA as part of the Centers for Excellence in Regulatory Science and Innovation (CERSI) program, and from the Blue Cross Blue Shield Association to better understand medical technology evaluation.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Study protocol. This file contains the original study protocol in addition to a listing of protocol amendments. (DOCX 156 kb)