We consider five standard documents that, if developed with a clear and shared understanding, will inform operational staff how to conduct a methodologically robust trial. While a trial will have many other documents, we believe these particular documents require special attention. Relevant co-applicants should be fully engaged in the decisions informing each of these documents because, together with the lead investigator, they are jointly responsible to the funder for the successful delivery of the trial.
Combined with effective TMG, DMC and TSC meetings, where issues are escalated for discussion in a structured manner, these documents provide clear guidance to operational staff and sites, such that trials can be conducted efficiently and effectively.
These documents and their associated processes should be discussed openly in the context of TMG meetings. They should be reviewed extensively and efficiently and formally approved by the lead investigator and relevant co-applicants. The lead investigator and co-applicant statistician, in particular, must be assured that trial conduct, as specified in these documents, is consistent with their trial design. While operational staff employed on the trial may draft many of these documents, it is reasonable for the trial leadership team to be expected to fully engage in the content and make recommendations for change where needed. Many organisations have processes in place for the sponsor to undertake a formal risk assessment of the trial, the output of which should be incorporated where appropriate into these documents.
Study protocol
The study protocol is a trial plan containing the co-applicants’ specifications for delivery of the trial objectives. This document is often finalised before operational staff are assigned or employed. The development of trial protocol content has been made easier by the publication of the SPIRIT [8, 9] and TIDieR [10] guidance.
The protocol is a quality control tool [11]. In multicentre trials, in particular, content ambiguity may lead to differing interpretation between co-applicants, operational staff, recruiting sites and oversight committee members. A common issue identified by the authors is a lack of protocol guidance on whether data collection should continue for participants who discontinue the intervention. In early-phase pharmaceutical trials, it is not unusual for the protocol to instruct that a participant’s data collection cease if they discontinue the intervention. Operational staff and study sites experienced in such trials may assume in good faith that data collection in late-phase academic-led trials should also cease in these circumstances, unless there is clear and unambiguous protocol guidance to the contrary. This mistaken assumption can lead to poor follow-up data or to randomised ‘non-completers’ being omitted from the trial dataset entirely.
The setup phase of a trial is hectic, but a few hours spent verifying protocol interpretation is invaluable. In an early TMG meeting, the protocol should be reviewed, section by section, to surface any assumptions that may be held and to ensure clarity on content. Before the meeting, attendees should thoroughly review the protocol against the SPIRIT [8, 9] and TIDieR [10] guidance and identify points for clarification. An explicit discussion about which protocol non-compliance requires escalation to the TMG is recommended. Ambiguities, omissions or errors should be rectified via a protocol amendment. Verbal clarifications are not recommended since they may not be communicated to recruiting sites or may be forgotten over time, particularly in the event of staff turnover.
Trial protocols will also be discussed and agreed at the first meeting of the TSC, usually a joint meeting with the DMC held before participant recruitment commences.
Case report form (CRF)
A CRF is a protocol-driven document used to standardise trial data collection. It is used by recruiting sites to record data and by database developers for system specification. A CRF must be comprehensive and user-friendly since, upon completion of all trial activities, the ‘product’ of the trial is the final dataset, which forms the basis of the analysis and primary publication.
Validated measures are often sourced from previous studies and are assumed to contain no errors. This is not a safe assumption as validated measures are commonly re-typed from paper sources, introducing errors, or consciously adapted for use in prior studies. Statisticians may incorrectly assume the measure used is the original validated version. Such measures should be sourced from the authors or distributors. The scoring algorithm should be available to the statisticians before the trial begins.
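To illustrate why the original scoring algorithm matters, the following is a minimal sketch with an invented four-item measure (the items, scores and reverse-scoring rule are all hypothetical, not taken from any real instrument). A re-typed copy that silently loses a reverse-scored item produces a different total from the same participant responses:

```python
# Hypothetical 4-item measure, each item scored 1-5.
# Item 3 (index 2) is reverse-scored in the original instrument.
responses = [4, 2, 5, 3]

def score_original(items):
    """Score using the original validated algorithm (item 3 reverse-scored)."""
    adjusted = list(items)
    adjusted[2] = 6 - adjusted[2]  # reverse-score: 1<->5, 2<->4
    return sum(adjusted)

def score_retyped(items):
    """Faulty copy: reverse-scoring silently lost during re-typing."""
    return sum(items)

print(score_original(responses))  # 10
print(score_retyped(responses))   # 14 - same answers, wrong total
```

The discrepancy is invisible at data entry and only detectable if the statisticians hold the authoritative scoring algorithm before analysis begins.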
Given the time, money and effort required to deliver a trial, co-applicants using the data must be intimately involved in developing the CRF and associated database. The risk of incorrect assumptions being made is high and the consequences of misunderstanding significant [12]. Co-applicants must assure themselves the content will permit preparation of DMC, regulatory, ethics and other reports, allow for Consolidated Standards of Reporting Trials (CONSORT) diagram preparation and permit the pre-specified primary and secondary analyses. The CRF should be finalised before any data collection begins.
Operational staff should strive to present draft CRFs to the lead investigator and statisticians in a way that permits rapid and detailed review, discussion and amendment. Increased use of web-based electronic data capture systems requires that decisions are made early and with careful consideration, since changing live datasets adds complexity and is best avoided.
As with protocol review, it need not take more than a few hours to carefully review, as a team, each variable on each CRF page to agree wording, format, coding, missing data codes, range checks and validations. Finally, the CRF pack should be reviewed alongside the protocol to verify that all planned content is needed; the protocol should then be reviewed alongside the CRF pack to verify from the opposite perspective that all requirements are covered in the planned data collection pack.
Monitoring plan
A trial monitoring plan is a protocol-driven document that details activity required, on site or centrally, to assure compliance with the protocol and relevant regulatory requirements. It contains the specification for monitoring activities undertaken to verify the internal and external validity of the trial. The document may contain instructions in relation to site initiation visits including staff training, verification of data in any electronic data capture (EDC), randomisation and intervention management systems, remote activities conducted between site visits and even scheduling of key trial activities such as reports, budget management and meetings.
Not all monitoring tasks are equally relevant to the validity of the trial. Without co-applicant oversight, time may be spent on monitoring tasks with limited impact on trial quality, at the expense of activities essential to study integrity. Operational staff may omit to communicate important information to co-applicants through lack of awareness of what needs to be escalated, unless the monitoring plan provides proper guidance.
It may seem unlikely to those who have not monitored sites, but even specifying in a monitoring plan that ‘20% of secondary outcome data will be source data verified’ can lead to differing interpretation in respect of what is physically done at site, depending on your underlying assumptions. For example, this may mean all the secondary outcomes of 20% of the patients, 20% of the secondary outcomes of each participant within each visit, the secondary outcomes relating to 20% of the visits an individual patient has over the course of a trial, or the secondary outcomes relating to 20% of the visits patients have completed by the time of the monitoring visit. If multiple staff are undertaking site visits, each may interpret the plan differently.
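Two of these readings can be made concrete with a short sketch (participant and visit identifiers are invented, and the sampling code is an illustration of the ambiguity, not a recommended selection method). Both readings verify the same *number* of visits here, yet the coverage pattern differs completely:

```python
import random

# Five hypothetical participants, each with five visits holding secondary outcomes.
visits = {p: [f"V{i}" for i in range(1, 6)]
          for p in ["P01", "P02", "P03", "P04", "P05"]}

rng = random.Random(42)  # fixed seed so the selection is reproducible

# Reading A: all secondary outcomes for 20% of participants.
participants = sorted(visits)
n_sel = max(1, round(0.2 * len(participants)))
reading_a = {p: visits[p] for p in rng.sample(participants, n_sel)}

# Reading B: 20% of each participant's visits.
reading_b = {p: rng.sample(v, max(1, round(0.2 * len(v))))
             for p, v in visits.items()}

checked_a = sum(len(v) for v in reading_a.values())  # 5 visits, 1 participant
checked_b = sum(len(v) for v in reading_b.values())  # 1 visit from each of 5 participants
print(checked_a, checked_b)  # 5 5
```

Reading A concentrates verification on one participant's complete record; Reading B spreads it thinly across everyone. The plan wording is identical, but the records actually checked at site are not.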
Instruction to ‘check consent’ or ‘check eligibility’ means different things to different people. One monitor might check only that the source data say the patient consented or that eligibility criteria were met. Another might spend considerable time reading the full historic medical notes to verify eligibility criteria are met. Unless explicit, unambiguous guidance is given, monitors will use their initiative and judgement. At best, this will lead to variation, but, at worst, checks will not be done as the senior project staff expected them to be done. In one trial, a fairly cursory check of recent notes may be adequate. In another, the risks might be much higher and it may be worth spending considerable time reviewing the clinical history. There is no rigid right or wrong. However, sending monitors to site with only a vague notion of what they are meant to do when they get there is not an efficient use of their time.
We recommend the monitoring plan is developed with the active support of relevant co-applicants. The Adaptiertes Monitoring (ADAMON) project [13, 14] explored whether a risk-based approach to study site monitoring was non-inferior to extensive on-site monitoring and concluded that this is the case. A risk assessment document is available (www.adamon.de/ADAMON_EN/Downloads.aspx) which can be used to identify specific risks in the study that on-site or central monitoring aim to mitigate; it should be completed, risks agreed, and strategies to mitigate each specific risk discussed, agreed and documented in the monitoring plan, alongside escalation instructions for each monitoring activity.
The monitoring plan may also include site initiation and greenlight processes, intervention management and distribution processes, central monitoring of EDC system warnings, centralised data checking, pharmacovigilance processes, TMG, DMC and TSC meeting organisation, annual ethics and regulatory reporting, periodic reviews of trial finances, database lock and study close-out processes. Explicit instruction on which EDC system variables should be source data verified, and against which source documents (e.g. paper CRFs, pharmacy logs, medical notes or laboratory results), is recommended. The frequency or timing of each activity should be defined, with guidance on how to select patients or patient visits for review, and escalation parameters agreed with relevant co-applicants relating to each monitoring activity.
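One way to make such instructions unambiguous is to record them in a structured, table-like form within the plan. The fragment below is a hypothetical sketch of that idea (the variable names, source documents and coverage rules are invented for illustration, not drawn from any real monitoring plan):

```python
# Hypothetical monitoring-plan fragment: which EDC variables are source
# data verified, against which source document, and at what coverage.
SDV_PLAN = {
    "informed_consent_date": {"source": "signed consent form",
                              "coverage": "100% of participants"},
    "primary_outcome":       {"source": "medical notes",
                              "coverage": "100% of participants"},
    "secondary_outcome_qol": {"source": "paper CRF",
                              "coverage": "20% of visits completed by the "
                                          "time of the monitoring visit"},
    "imp_dispensing":        {"source": "pharmacy log",
                              "coverage": "100% of dispensing records"},
}

for variable, rule in SDV_PLAN.items():
    print(f"{variable}: verify against {rule['source']} ({rule['coverage']})")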
The ADAMON approach ensures that the priority focus of monitoring is agreed with relevant co-applicants and that monitors are not taking a ‘one size fits all’ approach. It is an effective way to ensure no incorrect assumptions are made about who is doing what, why, when, where and how, and may include monitoring activity undertaken by multiple individuals. Progress against the plan should be discussed in regular TMG meetings, making it easier for co-applicants to make informed decisions.
A detailed monitoring plan mitigates risk in the event of staff turnover, provides much wanted structure to new monitors and reassures co-applicants that the often-mysterious world of ‘monitoring’ has been thoroughly demystified. Monitoring plan development is time well spent and is as important as protocol and CRF development to the successful conduct of a study.
DMC report templates
DMC reports are protocol-driven documents presented to the committee overseeing data integrity and patient safety [15]. Report content must be clearly presented to enable the DMC to make recommendations to continue or stop the trial.
Open DMC reports are commonly presented subsequently to the TSC, which usually meets two weeks after the DMC, as the information is relevant to both committees. In some cases, a trial may not need a DMC. However, in these circumstances, the open DMC report can be prepared in the usual way and presented only to the TSC.
Co-applicants and operational staff make assumptions, often based on how previous trial teams have worked, as to what activities are undertaken by which staff. When team members come to the trial with prior expectations and assumptions of roles, it may be unclear what data sources are to be used for different aspects of the DMC reports and who should prepare tables or CONSORT diagrams. In practice, this can lead to inaccurate data being presented to the DMC, either due to the use of ‘informal’ data sources which contain estimates rather than raw data (e.g. tracking spreadsheets) or due to errors in data manipulation by non-statisticians (e.g. trial managers or data managers creating CONSORT diagrams).
The DMC charter and CONSORT diagrams may be prepared by different staff in different teams and agreement should be reached, based on relative skills and experience, on who will draft and circulate these documents. In most trials, the operational statistician drafts DMC report templates, the co-applicant statistician and lead investigator review them and the DMC members approve or request changes [16].
We recommend that a content review of the DMC charter and blank DMC template reports is scheduled at an early TMG, as is done with the protocol, CRF and monitoring plan, to ensure the operational staff understand what is being reported and are clear what data inform the reports.
The TMG should agree what data the statistician requires for DMC reporting, the data cut-off points, and the timing of related monitoring activities for each data source.
Consideration should be given to verifying that serious adverse event (SAE) reports are entered in the EDC system before DMC report preparation, not just faxed or emailed to the coordinating centre, or they may be omitted from reports. A mechanism to communicate emergency code breaks to the statisticians should be agreed.
Agreement should be reached on the data upon which top-line CONSORT reporting will be based and how these will be communicated to the statistician. Individual patient-level data, including screen-failure data, can only be entered into the trial EDC system once a participant has consented to screening. However, if the top line of the CONSORT diagram is to include a count of all potentially eligible participants at the site, including those who were not approached or who declined to participate, consideration will need to be given to how these data will be collected, collated and communicated as aggregate data to the trial statistician.
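One workable approach, sketched below under invented site names and counts, is for sites to return periodic aggregate screening figures that the statistician then sums into the CONSORT top line; the field names here are assumptions for illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class ScreeningLog:
    """Aggregate (not individual-level) screening counts returned by a site."""
    site: str
    assessed: int        # potentially eligible patients identified
    not_approached: int
    declined: int
    consented: int

# Hypothetical monthly returns from two sites.
logs = [
    ScreeningLog("Site A", assessed=120, not_approached=30, declined=25, consented=65),
    ScreeningLog("Site B", assessed=80, not_approached=10, declined=20, consented=50),
]

# Collate across sites for the top line of the CONSORT diagram.
top_line = {
    "assessed_for_eligibility": sum(l.assessed for l in logs),
    "not_approached":           sum(l.not_approached for l in logs),
    "declined":                 sum(l.declined for l in logs),
    "consented":                sum(l.consented for l in logs),
}
print(top_line)
```

Because these counts never enter the EDC system as patient records, the mechanism and schedule for returning them must be agreed explicitly, or the top line cannot be reconstructed later.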
Statistical analysis plan (SAP)
The SAP is a key document involved in the transparent reporting of clinical trial data. A SAP contains a more technical and detailed elaboration of the principal features of the analysis described in the protocol and includes detailed procedures for executing the statistical analysis of the primary and secondary variables and other data [17]. A comprehensive template for constructing a minimum set of items for inclusion in a SAP is available [18].
The meaning of the term ‘visit window’ can differ between staff within the trial, leading to data being wrongly omitted from the dataset. Different staff may make assumptions about the purpose of visit windows, the validity of any data collected outside visit windows and the relative importance of visit windows around particular study visits such as the primary outcome visit. Trial databases can, technically, be programmed to reject data outside visit windows and if the operational staff believe data to be ‘invalid’ if collected outside visit windows, this may be programmed into the database system without the knowledge of the trial statisticians.
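A safer convention, sketched below, is for the database to flag rather than reject out-of-window data, leaving the analysis decision to the SAP; the visit name, target date and window width are all invented for this illustration:

```python
from datetime import date

# Hypothetical window definition: visit -> (target date, days before, days after).
WINDOWS = {"week12": (date(2024, 3, 25), 7, 7)}

def window_status(visit: str, actual: date) -> str:
    """Classify a visit date against its window without discarding any data."""
    target, before, after = WINDOWS[visit]
    delta = (actual - target).days
    if -before <= delta <= after:
        return "in_window"
    # Data outside the window are retained and flagged for statistician
    # review; whether they enter the analysis is a decision for the SAP,
    # not for the database programmer.
    return "out_of_window_flagged"

print(window_status("week12", date(2024, 3, 30)))  # in_window
print(window_status("week12", date(2024, 4, 10)))  # out_of_window_flagged
```

The design choice here is the point: a hard rejection rule silently encodes an analytic decision into the data-entry system, whereas a flag keeps the data and makes the decision visible to the statisticians.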
The trial statisticians may assume other operational staff know what is important to communicate to them or that no issues are arising. The trial manager or monitor may assume the statistician does not need to know about a particular issue or already knows by some other mechanism.
We recommend that the co-applicant or operational statistician present the SAP in the context of a TMG, in order that any mistaken assumptions that the statisticians, trial manager or other operational staff may hold about the trial conduct are identified early, when it is still possible to prevent issues.
A review of the SAP in the context of a TMG provides an opportunity to review how issues relevant to DMC report preparation or analysis should be communicated to the statistician. Examples of important issues include situations where patients cross trial arms unintentionally, emergency code breaks or accidental unblinding occur, specific cases where primary outcome integrity might be compromised or serious breaches of Good Clinical Practice (GCP) are identified that may be crucial to the analysis. While these should have been addressed in the protocol, CRF, monitoring plan or DMC report development stages, a SAP review is the final opportunity to identify any areas of concern.