
The SPIRIT Checklist—lessons from the experience of SPIRIT protocol editors


Crystal-clear RCT protocols are of paramount importance. The reader needs to understand the trial methodology easily and know what is pre-planned. They need to know that procedures are in place for, for instance, protocol breaches and protocol amendments, loss to follow-up and missing data, and how solicited and spontaneously reported adverse events are dealt with. This plan matters both for the trial itself and for the results that will be published when the data are analysed. After all, individuals have consented to participate in these trials, and their time and their well-being matter. The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement provides guidance to structure RCT protocols and ensures all essential information is included. Sadly, not all trialists follow the guidance, and sometimes the information is misunderstood. Drawing on our experience peer-reviewing for Trials over the last 2 years, we have prepared information to assist authors, peer reviewers, editors, and other current and future SPIRIT protocol editors in using the SPIRIT guidance and understanding its importance.



The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement was published in 2013 as “evidence-based recommendations” for the minimum information that should be provided to describe a randomised controlled trial (RCT) protocol [1]. Trials, like many journals, endorsed and adopted the checklist, requiring that unstructured protocols published in Trials must be accompanied by a complete SPIRIT checklist.

In September 2019, due to inconsistency in the standard of protocols submitted and the large number of submissions, Trials piloted the use of dedicated SPIRIT protocol editors to review, with a specific focus on the clarity and comprehensiveness of SPIRIT reporting, those submissions that claimed to have already undergone peer review as part of their funding application. As a result of the pilot’s success, the project was expanded, and there are now 18 SPIRIT protocol editors working to improve the standard of protocols published in Trials. Often, reviews by these editors note missing information that has not been picked up during routine peer review.

In November of 2019, an alternative submission type was introduced which follows a structured template that includes all SPIRIT items and does not require an associated checklist. The use of the Trials-structured protocol can improve the flow of protocols and ensure that all information is included, as well as enabling readers to easily search for specific items in a protocol [2]. This framework is particularly useful to readers for items which can be lost in the middle of some protocols which have few headings or are written narratively, such as item 8 (specific trial design) and item 14 (sample size).

The SPIRIT Checklist has now been translated into Chinese, French, Italian, Japanese, Korean, and Spanish, so although Trials is an English language publication, authors have an opportunity to read an accepted translation and understand exactly what each of the SPIRIT items entails [3]. Additionally, many extensions have been developed for SPIRIT to accommodate the differences in requirements for various subspecialties of medicine and subtypes of trials (Table 1).

Table 1 Extensions to the SPIRIT Checklist

Although the checklist was published in 2013, as with many reporting checklists such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Consolidated Standards of Reporting Trials (CONSORT), adherence and compliance among publications have seen only moderate improvement. A recent methodological study compared the overall proportion of checklist items adequately reported in RCT protocols published before and after the SPIRIT statement, in 2012 and 2019, respectively [10]. The investigators found that an average of 57% of items were adequately reported in 2019 protocols, compared with 48% in protocols from 2012 [10]. While this is a mean improvement of 9 percentage points across the 55 items of the SPIRIT checklist, the results suggest that after 6 years, not even two-thirds of all items are adequately reported; moreover, none of the 150 protocols from 2019 addressed all items [10]. Understanding the features of protocols associated with non-adherence and how reporting standards may be improved remains an area of active research interest [11].

With regard to the SPIRIT reporting experience at Trials specifically, an editorial was published in 2017 [12] that addressed the following questions: “What is expected in a protocol submission?”, “When to submit a protocol for publication”, “What is the purpose of peer review of protocol submissions to Trials journal?”, and “Can we improve the process?” The information and advice in the 2017 editorial are useful supplements to the information in the original SPIRIT statement and its associated explanation and elaboration documents [1, 12, 13]. The editorial made four suggestions to improve the peer review process of protocol submissions to Trials: (1) that protocol authors optimise the quality of their reporting and adhere to journal guidelines for submission, (2) that editors and peer reviewers of the journal familiarise themselves with all the journal guidelines, (3) that more contributions be made from the trials community as editors and reviewers, and (4) that peer reviewers continue to provide constructive comments to improve the quality of reporting [12].

We believe that the various SPIRIT guidance documents and editorials, as well as the extensions, should be considered complementary and required reading for any protocol authors, regardless of whether the protocol is submitted for publication or not. The SPIRIT explanation and elaboration documents provide detailed descriptions, rationale, and examples for all items that are important in describing the design and conduct of a trial, in general. The extensions provide additional insight and recommendations about the items that are unique to certain trial designs and are not covered by the primary SPIRIT documentation. The editorial by Li et al. provides insight into the need for transparency and accountability in reporting trial design and suggests a path towards reaching these goals. Lastly, this current article provides additional guidance on some SPIRIT items that are commonly misinterpreted or missed entirely to hopefully improve trialists’ and editors’ understanding of how to make sure a protocol does not inappropriately ignore relevant aspects of the trial’s design and conduct.

We (i.e. the authors of this editorial) are designated protocol editors with the journal and have between ourselves submitted over 1876 reviews for 1110 unique trial protocols submitted to Trials since the fall of 2019. Each of us has received extensive training in trial design and analysis methods through our various degrees and work experiences, including with the Johns Hopkins Center for Clinical Trials and Evidence Synthesis, Birmingham University Medical and Dental School, Cambridge University Department of Veterinary Medicine, the Cochrane Collaboration, the University of Dundee and Tayside Clinical Trials Unit, and the Edinburgh Clinical Trials Unit. Additionally, we have all received training and mentoring from senior Trials editors regarding the rationale and implementation of the SPIRIT Checklist, and we all have had Good Clinical Practice training. We believe our training and the number of reviews completed give us a unique perspective on common issues and opportunities for improvement in the reporting of trial protocols.

The aims of this article are to describe common errors in the submission of protocols and to make suggestions to improve the quality of the submitted protocols, informed by our experience of reviewing submissions to Trials. This information should be useful to authors, peer reviewers, editors, and other current and future SPIRIT protocol editors.

What are the most common errors in SPIRIT Checklists?

In order to determine which SPIRIT items require special attention, we independently listed the 12 items which we each believed to be the most commonly inappropriately or inadequately addressed and requiring a comment. From this informal poll, we took any overlap in our listed items as those requiring special clarification for protocol authors. The SPIRIT Explanation and Elaboration document contains a detailed explanation of why each of these items is necessary and how the information is useful [1]. Rather than repeating what is already written and recommended reading for authors, we present some of our own insights into these commonly inadequately addressed items: why they are often unaddressed and how authors may approach them.

Item 5d—Composition, roles, and responsibilities of the coordinating centre, steering committee, endpoint adjudication committee, data management team, and other individuals or groups overseeing the trial, if applicable.

While not every trial has a need for multiple groups involved in trial oversight, such as a data monitoring committee, endpoint adjudication committee, or even an official steering committee, there needs to be someone, or some group, tasked with managing the trial. This item is often left incomplete or as “not applicable” because authors assume it does not need explanation if they do not have any formal committees. In fact, if there are no such formal groups involved in trial oversight, it is just as important for the protocol to describe who is in charge of the trial and making all relevant decisions, in what capacity they are acting as well as their roles and responsibilities, and why it was deemed not necessary to create any of the aforementioned formal committees. It may be that the trial investigators are handling all aspects of the trial management, from monitoring enrolment and training of study staff to checking the data quality, but if this is the case, it needs to be clearly stated and the lack of other groups rationalised.

Item 8—Description of trial design including type of trial (e.g., parallel group, crossover, factorial, single group), allocation ratio, and framework (e.g., superiority, equivalence, noninferiority, exploratory).

The most common omission in this item is the failure to specify the framework of the trial. While most randomised controlled trials use a superiority framework (i.e. they aim to show that one treatment is superior to another or to a placebo), some trials use a non-inferiority framework (showing that a new treatment is not unacceptably worse than an existing treatment) or an equivalence framework (showing that a new treatment is equivalent to an existing treatment). The choice of framework has important implications for many aspects of the trial’s design, including the hypotheses, the expected effect sizes, the sample size, analytical considerations such as the handling of missing data, and the interpretation of the statistical results. A detailed discussion is beyond the scope of this article, but further information can be found in numerous published articles such as Stefanos et al. [14].
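As a minimal illustration of why the framework matters analytically, the following sketch (our own, not taken from the SPIRIT documents; all numbers are hypothetical) contrasts the superiority and non-inferiority null hypotheses for a difference in proportions, using a normal approximation:

```python
from math import sqrt

def z_tests(p_new, p_std, n, margin):
    """Normal-approximation z statistics for a two-arm trial with equal
    arms of size n and observed success proportions p_new and p_std."""
    d = p_new - p_std
    se = sqrt(p_new * (1 - p_new) / n + p_std * (1 - p_std) / n)
    z_sup = d / se            # superiority:      H0 is d = 0
    z_ni = (d + margin) / se  # non-inferiority:  H0 is d <= -margin
    return z_sup, z_ni

# Hypothetical data: 82% vs 80% success in arms of 200, with a
# pre-specified non-inferiority margin of 10 percentage points.
z_sup, z_ni = z_tests(0.82, 0.80, 200, 0.10)
# Against a critical value of 1.96, the same data can fail to show
# superiority yet still demonstrate non-inferiority.
print(round(z_sup, 2), round(z_ni, 2))
```

The point of the sketch is that the hypothesis, the critical comparison, and hence the required sample size all change with the framework, which is why item 8 asks for it explicitly.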

Item 12—Primary, secondary, and other outcomes, including the specific measurement variable (e.g., systolic blood pressure), analysis metric (e.g., change from baseline, final value, time to event), method of aggregation (e.g., median, proportion), and time point for each outcome. Explanation of the clinical relevance of chosen efficacy and harm outcomes is strongly recommended.

Common omissions in this item include the analysis metric (e.g. comparison at a specific time point, comparison of the change from baseline) and the method of aggregation (e.g. comparison of the mean/median or the proportion who experience a dichotomised outcome). Complete specification of the planned outcomes is important because failing to address these elements can lead to ambiguity in the interpretation of the expected outcome. Another common and important error concerning outcome specification is the nomination of multiple primary outcomes without accounting for this in the statistical plan. This multiplicity greatly increases the probability that a significant result is due to random chance. Multiple primary outcomes can appear in a protocol both when different measurement variables are nominated and when the same measurement variable is nominated at multiple time points. While multiple primary outcomes often go unadjusted, even in published research [15], trial investigators should carefully consider whether they have multiple primary outcomes of equal importance, and adjust their analyses accordingly, or whether there is a single designated primary outcome followed by multiple secondary outcomes.
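The inflation from unadjusted multiplicity can be quantified directly. This short sketch (illustrative only, assuming independent outcomes each tested at alpha = 0.05) computes the family-wise error rate and the corresponding Bonferroni-corrected per-test level:

```python
# Family-wise error rate (FWER) when k independent outcomes are each
# tested at alpha = 0.05 with no adjustment, versus the Bonferroni
# per-test level that would preserve a 5% FWER.
alpha = 0.05
for k in (1, 2, 5):
    fwer_unadj = 1 - (1 - alpha) ** k  # chance of >= 1 false positive
    bonferroni = alpha / k             # adjusted per-test significance level
    print(k, round(fwer_unadj, 3), bonferroni)
```

With five unadjusted primary outcomes, the chance of at least one spurious "significant" finding under the null rises from 5% to roughly 23%, which is why the statistical plan must account for any multiplicity.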

Item 14—Estimated number of participants needed to achieve study objectives and how it was determined, including clinical and statistical assumptions supporting any sample size calculations.

While most trial protocols do include a section stating their sample size and some of the assumptions accompanying the final number (e.g. power and alpha), many protocols fail to include all the elements necessary for estimating the sample size or to provide the rationale and sources supporting the assumed detectable effect size. It is not enough to state the final sample size for the trial. The authors must state the software and hypothesis test used to generate the sample size, together with all parameters used in its calculation; provide sources (or a rationale if no sources exist) for the estimates of effect; state any additional assumptions for non-two-arm parallel designs (e.g. the intracluster correlation coefficient for cluster trials, or a clinically relevant non-inferiority margin for non-inferiority trials); clearly specify which outcome is being used to inform the estimation (with justification if it is not the primary outcome); and note whether the final estimate accounts for potential loss to follow-up. Protocol authors often fail to include all of these necessary components. A good rule of thumb is to ensure that the estimate can be reproduced (or at least approximated) from what is given in the protocol. Additionally, if a trial protocol does not have a formal sample size estimation (e.g. some phase II trials), it is still important that the authors provide their reasoning and support for the target sample size.
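To make the "reproducible estimate" rule of thumb concrete, a protocol that reports its inputs allows a reader to repeat a calculation like the one below (a hypothetical sketch using the standard normal-approximation formula for comparing two means; none of these numbers come from any specific trial):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Per-arm sample size for a two-arm parallel trial comparing means
    (normal approximation), inflated for anticipated loss to follow-up."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return ceil(n / (1 - dropout))

# Hypothetical inputs: detect a 5 mmHg difference in systolic BP
# (SD 12 mmHg), two-sided alpha 0.05, 90% power, 10% dropout.
# Every one of these inputs belongs in the item 14 description.
print(n_per_arm(delta=5, sd=12, power=0.90, dropout=0.10))
```

If any one of the inputs (effect size, SD, alpha, power, dropout allowance) is missing from the protocol, the stated sample size cannot be checked, which is precisely the gap item 14 is meant to close.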

Item 20c—Definition of analysis population relating to protocol non-adherence (e.g., as randomised analysis), and any statistical methods to handle missing data (e.g., multiple imputation).

Many protocols include the name for their analysis population(s), such as intention-to-treat; however, it is very common that protocols fail to define exactly which participants are included in each analysis population. It is important to define the analysis populations in the context of the trial because the name itself may be used incorrectly. For example, an analysis is often called “ITT” when it is actually a modified ITT, that is, it applies some additional inclusion criteria on top of being randomised such that it is no longer a pure as-randomised analysis (e.g. “We included all participants who attended at least three out of four follow-up visits in the groups to which they were assigned”). Specificity helps readers know exactly what was planned for the trial. Additionally, protocols must specify the planned methods for handling any missing data during the analyses.

Item 21b—Description of any interim analyses and stopping guidelines, including who will have access to these interim results and make the final decision to terminate the trial.

Interim analyses allow the early termination of trials because of unacceptable harms (i.e. adverse events), evidence of futility, or even overwhelming evidence of efficacy, meaning it would be unethical to deny the control arm the effective treatment. However, unplanned interim analyses risk damaging the trial integrity, for example, by breaking the blinding, and may also risk an unjustified rejection of the null hypothesis (i.e. a type I error). Stopping guidelines must be carefully formulated to take into account the risk of making a decision to stop the trial based on incomplete data. Further information on interim analyses and stopping guidelines can be found in Kumar and Chakroborty [16].
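The type I error inflation from unplanned repeated looks can be seen in a small simulation. The sketch below (our own illustration, not from the cited reference) tests accumulating null data at two looks, each at an unadjusted two-sided 5% level; the realised error rate rises toward the well-known value of roughly 8%:

```python
import random

random.seed(1)

def trial_rejects(n_per_look=50, looks=2, z_crit=1.96):
    """Simulate one two-arm trial under the null (no true effect),
    testing the accumulating data at each look at an unadjusted 5% level."""
    a, b = [], []
    for _ in range(looks):
        a += [random.gauss(0, 1) for _ in range(n_per_look)]
        b += [random.gauss(0, 1) for _ in range(n_per_look)]
        n = len(a)
        z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5  # SE of mean difference
        if abs(z) > z_crit:
            return True  # trial would be stopped and declared "significant"
    return False

sims = 10000
fwer = sum(trial_rejects() for _ in range(sims)) / sims
# With two unadjusted looks the realised type I error exceeds the nominal
# 5%; pre-planned stopping boundaries (e.g. O'Brien-Fleming) restore it.
print(round(fwer, 3))
```

This is exactly why item 21b asks for stopping guidelines to be pre-specified, along with who sees the interim results and who decides on termination.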

Item 22—Plans for collecting, assessing, reporting, and managing solicited and spontaneously reported adverse events and other unintended effects of trial interventions or trial conduct.

Many protocol authors will give a definition of harms to describe what they might consider to be adverse events (AEs) or serious adverse events (SAEs) and include a note about reporting harms to Institutional Review Boards or Data Monitoring Committees (DMCs). However, the description of harm assessment in protocols is often incomplete. If there are any potentially expected harms given prior experiences or knowledge of the intervention(s) being assessed, these should be listed. The authors should also note if unexpected harms will be collected and define how all harms will be collected: systematically (i.e. solicited from all participants in a standardised manner) or non-systematically (e.g. unsolicited collection using participant’s spontaneous report). It is also good for investigators to note whether harms will be classified or codified according to any standard language (e.g. Medical Dictionary for Regulatory Activities (MedDRA) or Common Terminology Criteria for Adverse Events (CTCAE)), as well as the plans for reporting harms in trial publications (e.g. whether all collected harms will be reported or only a subset that meets specific criteria). All of these details about harms are often missing from trial protocols, but they are important for readers who want to understand how a trial assessed harms. Special consideration of these details should be given to trials that claim to assess the “benefit and safety” of an intervention.

Item 25—Plans for communicating important protocol modifications (e.g., changes to eligibility criteria, outcomes, analyses) to relevant parties (e.g., investigators, REC/IRBs, trial participants, trial registries, journals, regulators).

Many protocols initially mark this item as “not applicable” under the assumption that no modifications are planned. However, the item is always applicable, as it concerns the plan for any changes that may become necessary over the course of the trial. When they do address the item, many authors note that important protocol modifications will be notified to the ethics committee or trial registries; however, it is also necessary to communicate changes to all investigators (especially in multi-centre studies or trials with large numbers of investigators), and to participants when a change affects the treatment recommendations they should be following, alters their appreciation of trial risks, or otherwise requires the investigators to obtain updated informed consent.

Item 26b—Additional consent provisions for collection and use of participant data and biological specimens in ancillary studies, if applicable.

In 2001, the Redfern report into the Alder Hey organ retention scandal was published, revealing the unauthorised removal and retention of human tissue and organs, including children’s hearts. It is important for ethical reasons that investigators are open about their plans for the retention and future use of biological specimens and obtain consent for any plans for these tissues and organs. Also, many authors read item 26b as applying only to the additional use of collected biological specimens, and if none are collected in the trial, they will leave this SPIRIT item as “not applicable”. There are two aspects to consider for this item, however. The first is that it also applies to participant data in general, that is, any data collected over the trial that might be used in later studies. The second is that if the item is not applicable, a statement should clearly say why (i.e. that no additional studies are planned and consent will not be obtained for that potentiality).

Item 30—Provisions, if any, for ancillary and post-trial care, and for compensation to those who suffer harm from trial participation.

Randomised trials carry some risk to participants. As it is unknown whether one treatment is superior to another, some participants may receive an inferior treatment, or they may experience unexpected harms. It is important that clinical trials minimise participant harm where possible by providing ancillary and post-trial care, and if there is a significant risk to the participant, compensation may be appropriate.

Item 31c—Plans, if any, for granting public access to the full protocol, participant-level dataset, and statistical code.

This item is often left as not applicable because protocol authors may believe that the item is specific to the protocol itself, and because no data are associated with the protocol, the item is therefore not applicable. However, this item is always relevant and applicable for a trial protocol, as it is the declaration of whether the trial data, once the trial is completed, will be shared or made available to the public. This item should always have a statement at least describing whether trial data will be shared and how they can be accessed. In 2018, the International Committee of Medical Journal Editors (ICMJE) stated that submitted manuscripts must contain a data sharing statement and promoted the sharing of de-identified data [17]. Note that it is acceptable for investigators not to share the data, although sharing is greatly encouraged and may be required depending on the source of funding. Even if the data will not be shared, this SPIRIT item is still applicable and should be addressed with a statement that no trial data will be made available.

These SPIRIT items are the most commonly misunderstood by protocol authors according to our subjective assessment; however, there are many other common comments that are raised to address issues with SPIRIT reporting in submitted protocols. Table 2 contains a list of common comments for protocols that can be used by editors and peer reviewers at Trials if a protocol fails to adequately address the SPIRIT guidelines. Authors of protocols wishing to submit to Trials should take careful note to address these items.

Table 2 Common comments that are made on many initial submissions

In addition to this subjective assessment of commonly misunderstood SPIRIT items, two authors (RQ and KL) collected data from a set of protocols assessed during the piloting of the SPIRIT Reviewer/Protocol Editor program at Trials. This assessment, conducted 2 years ago on protocols submitted in 2019, reveals the same patterns: many SPIRIT items remain problematic in that they are forgotten and left unspecified in initial submissions. Table 3 presents the items that were left unspecified, or marked as “not applicable” with no explanation, in more than 10% of a sample of 90 protocol submissions.

Table 3 SPIRIT Checklist items left unspecified in at least 10% of original protocol submissions (n = 90)

In addition to providing objective evidence that many SPIRIT items are inappropriately completed on the SPIRIT Checklist, our examination of 90 protocols also shed light on the reasons why items may be left unspecified as follows:

  (i) Because they are truly not applicable to the trial (e.g. unblinding of participants and clinicians is not applicable if a trial is open-label)

  (ii) Because they were not done in the trial (e.g. the trial is low risk and investigators choose not to form a data safety and monitoring committee)

  (iii) Because they were done but the authors did not include it in the protocol (e.g. authors will make data from the trial available upon request, but did not include such a statement in the protocol)

Most items that were left unspecified or marked as “not applicable” without rationale were actually applicable to the trials, and revised protocols included the missing information. Of the protocols that left these items incomplete, only a few items were affirmed as genuinely not applicable in at least 50% of protocols after revision: 17b “unblinding procedures”, 21b “interim analyses and stopping guidelines”, 26b “additional consent for ancillary studies”, and 33 “procedures for handling biological specimens”.

Non-SPIRIT issues

While reviewing submissions for compliance with the SPIRIT Checklist, reviewers often identify problems with the submissions that are not directly SPIRIT related. One very common issue that necessitates significant revisions is the quality of the English language. We recognise the challenge of writing in a second language and respect those authors who submit to English publications when it is not their first language; however, clarity is important when describing specific aspects of trial design—for both readers and reviewers—and we highly recommend that professional translators and editing services be used. A further concern is potential ethical problems in the design of trials, which can relate to several SPIRIT items including the rationale for the design, choice of comparator, monitoring, and ethical approval. Most trials submitting a protocol for publication in Trials are already underway, which restricts our ability to comment on potential design problems; however, if a trial does not appear to have an ethical basis (e.g. an intervention is compared to a placebo without the current standard of care, or instead of an existing proven treatment option, without explicit and clear rationale [21]), this can lead to requests for clarification and may even result in rejection. Lastly, trial status is often not correctly completed in protocols submitted to Trials, which can influence acceptability, as Trials is committed to transparency and accountability in prospective trial design and does not accept protocols from trials which have completed recruitment. Authors should state the protocol version number and date, the date recruitment began, and the approximate date when recruitment will be completed.

How can SPIRIT Checklist compliance be improved?

Recommendations for authors

It is highly recommended that the explanation and elaboration document is read in conjunction with the SPIRIT Checklist [1]. This paper gives both the reason for the inclusion of the SPIRIT items as well as thorough clarification of what is required. The Trials editorial from Li et al. in 2017 also provides useful supplemental information [12].

We also highly recommend that the new SPIRIT template be used for submissions. The use of curly brackets allows the questions of the SPIRIT Checklist to be answered directly and clearly within the protocol manuscript, without the need to complete a separate checklist. All the guidance is “right there” for authors to read, saving them from looking it up. But the template must be strictly adhered to: authors cannot remove SPIRIT items they believe to be not applicable, and they cannot combine items or change their order. If they do, the protocol will need to be edited and corrected for publication in this format.

Recommendations for editors and peer reviewers

Editors and peer reviewers should be familiar with the SPIRIT guidelines and be aware of issues. Most trial protocols submitted to the journal should be sent to one of the 18 SPIRIT reviewers or protocol editors for review, since they often find issues with SPIRIT compliance that other peer reviewers have not commented on. Editors and peer reviewers should also be aware of non-SPIRIT issues that the SPIRIT protocol editors often pick up on, such as language, ethics, and problems with statistical analysis.

We recommend that editors and reviewers keep a copy of our Table 2 handy when reviewing protocols and use comments where appropriate, modifying any aspects as needed. We hope that this editorial serves as a useful reference to editors, peer reviewers, and authors alike. Tables 4, 5, and 6 include some of our personal recommendations that we encourage authors and editors to remember when writing and reviewing protocols.

Table 4 Riaz Pet Peeves
Table 5 Kirsty Pet Peeves
Table 6 Alex Pet Peeves

Given the various existing documentation that has been published to improve the quality and comprehensiveness of SPIRIT reporting in trial protocols, it is surprising that compliance is not higher. Recent research on trial protocol reporting has shown a significant improvement in the overall proportion of protocol items addressed since the SPIRIT guidance was published; however, this increase was only approximately 9 percentage points, and several items were found to be less commonly reported [10, 11]. Studies examining interventions to improve adherence to reporting guidelines in general have found many different types of interventions for different stages of the publication process [22,23,24]. Although the effectiveness of many has not been evaluated, and fewer still with RCTs, among those that have been tested, only a couple have shown promise, including a completeness-of-reporting check by editors, as is currently done by Trials protocol editors [22, 25]. Additionally, qualitative studies into the reasons for author and editor adherence to reporting guidelines have revealed several factors that influence their use, and a similar assessment with specific relevance to trial protocols may provide targets for future interventions to further improve the quality and completeness of SPIRIT reporting [26].


Publishing of trial protocols in advance of publishing results is necessary in order to make methods of investigation transparent, which is vital for the integrity of the scientific process. The SPIRIT Checklist was designed to improve the quality of reporting of protocols of randomised controlled trials, but despite detailed guidance being available, compliance with the requirements of the checklist remains poor. The advice in this article from experienced protocol editors should help authors and editors ensure that their manuscripts are compliant with the recommendations of the SPIRIT statement. This will enhance the transparency and completeness of published protocols, which will benefit not only the authors and editors, but also trial participants, sponsors and funders, ethics committees, peer reviewers, trial registries, journals, and other stakeholders.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.



Abbreviations

AE: Adverse events

CTCAE: Common Terminology Criteria for Adverse Events

CONSORT: Consolidated Standards of Reporting Trials

DMC: Data Monitoring Committee

IRB: Institutional Review Board

MedDRA: Medical Dictionary for Regulatory Activities

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

REC: Research Ethics Committee

RCT: Randomised controlled trial

SAE: Serious adverse events

SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials


  1. Chan A, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346:e7586.

    Article  PubMed  PubMed Central  Google Scholar 

  2. Treweek S. Protocols—more structure, less ‘Wuthering Heights’. Trials. 2019;20:649.

    Article  PubMed  PubMed Central  Google Scholar 

  3. Zhong LD, Cheng CW, Wu TX, Li YP, Shang HC, Zhang BL, et al. SPIRIT 2013 statement: define standard protocol items for clinical trials. Chin J Integr Med. 2014;34:115–22.

  4. Porcino AJ, Shamseer L, Chan A, Kravitz RL, Orkin A, Punja S, et al. SPIRIT extension and elaboration for n-of-1 trials: SPENT 2019 checklist. BMJ. 2020;368:m122.

  5. Calvert M, King M, Mercieca-Bebber R, Aiyegbusi O, Kyte D, Slade A, et al. SPIRIT-PRO Extension explanation and elaboration: guidelines for inclusion of patient-reported outcomes in protocols of clinical trials. BMJ Open. 2021;11:e045105.

  6. Rivera SC, Liu X, Chan A, Denniston AK, Calvert MJ. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26(9):1351–63.

  7. Dai L, Cheng CW, Tian R, Zhong LL, Li YP, Lyu AP, et al. Standard Protocol Items for Clinical Trials with Traditional Chinese Medicine 2018: recommendations, explanation and elaboration (SPIRIT-TCM Extension 2018). Chin J Integr Med. 2019;25:71–9.

  8. Kendall TJ, Robinson M, Brierley DJ, Lim SJ, O’Connor DJ, Shaaban AM, et al. Guidelines for cellular and molecular pathology content in clinical trial protocols: the SPIRIT-Path extension. Lancet Oncol. 2021;22(10):e435–45.

  9. McCarthy M, O’Keefe L, Williamson PR, Sydes MR, Farrin A, Lugg-Widger F, et al. A study protocol for the development of a SPIRIT extension for trials conducted using cohorts and routinely collected data (SPIRIT-ROUTINE). HRB Open Res. 2021;4:82.

  10. Tan ZW, Tan AC, Li T, Harris I, Naylor J, Siebelt M, et al. Has the reporting quality of published randomised controlled trial protocols improved since the SPIRIT statement? A methodological study. BMJ Open. 2020;10:e038283.

  11. Gryaznov D, Odutayo A, Niederhausern B, Speich B, Kasenda B, Ojeda-Ruiz E, et al. Rationale and design of repeated cross-sectional studies to evaluate the reporting quality of trial protocols: the adherence to SPIrit Recommendations (ASPIRE) study and associated projects. Trials. 2020;21:896.

  12. Li T, Boutron I, Salman RA, Cobo E, Flemyng E, Grimshaw JM, et al. Review and publication of protocol submissions to Trials – what have we learned in 10 years? Trials. 2017;18:34.

  13. Chan A, Tetzlaff JM, Altman DG, Dickersin K, Moher D. SPIRIT 2013: new guidance for content of clinical trial protocols. Lancet. 2013;381(9861):91–2.

  14. Stefanos R, Graziella DA, Giovanni T. Methodological aspects of superiority, equivalence, and non-inferiority trials. Intern Emerg Med. 2020;15(6):1085–91.

  15. Vickerstaff V, Ambler G, King M, Nazareth I, Omar RZ. Are multiple primary outcomes analysed appropriately in randomised controlled trials? A review. Contemp Clin Trials. 2015;45(Pt A):8–12.

  16. Kumar A, Chakraborty BS. Interim analysis: a rational approach of decision making in clinical trial. J Adv Pharm Technol Res. 2016;7(4):118–22.

  17. Clinical Trials: Recommendations for Publishing and Editorial Issues. International Committee of Medical Journal Editors. 2018. Accessed 18 Dec 2021.

  18. Dunn DT, Copas AJ, Brocklehurst P. Superiority and non-inferiority: two sides of the same coin? Trials. 2018;19(1):499.

  19. Zarin DA, Tse T, Williams R, Califf RM, Ide NC. The ClinicalTrials.gov results database – update and key issues. N Engl J Med. 2011;364:852–60.

  20. Saldanha IJ, Dickersin K, Wang X, Li T. Outcomes in Cochrane systematic reviews addressing four common eye conditions: an evaluation of completeness and comparability. PLoS One. 2014;9(10):e109400.

  21. Millum J, Grady C. The ethics of placebo-controlled trials: methodological justifications. Contemp Clin Trials. 2013;36(2):510–4.

  22. Blanco D, Altman D, Moher D, Boutron I, Kirkham J, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open. 2019;9:e026589.

  23. Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, et al. Relation of completeness of reporting of health research to journals’ endorsement of reporting guidelines: systematic review. BMJ. 2014;348:g3804.

  24. Blanco D, Schroter S, Aldcroft A, Moher D, Boutron I, Kirkham J, et al. Effect of an editorial intervention to improve the completeness of reporting of randomised trials: a randomised controlled trial. BMJ Open. 2020;10:e036799.

  25. Pandis N, Shamseer L, Kokich V, Fleming P, Moher D. Active implementation strategy of CONSORT adherence by a dental specialty journal improved randomized clinical trial reporting. J Clin Epidemiol. 2014;67(9):1044–8.

  26. Fuller T, Pearson M, Peters J, Anderson R. What affects authors’ and editors’ use of reporting guidelines? Findings from an online survey and qualitative interviews. PLoS One. 2015;10(4):e0121585.





Acknowledgements

None to declare.

Author information

Contributions

All authors conceived the idea for this paper and contributed to the drafting and revision of the manuscript. RQ conducted the descriptive analyses. All authors read and approved the final manuscript for publication.

Corresponding author

Correspondence to Riaz Qureshi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Qureshi, R., Gough, A. & Loudon, K. The SPIRIT Checklist—lessons from the experience of SPIRIT protocol editors. Trials 23, 359 (2022).
