
Monitoring in practice – How are UK academic clinical trials monitored? A survey

Abstract

Background

Despite the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) encouraging the use of risk-based monitoring for trials in 2013, there remains a lack of evidence-based guidelines on how to monitor. We surveyed the academic United Kingdom Clinical Research Collaboration (UKCRC) registered clinical trials units (CTUs) to find out their policy on monitoring of phase III randomised clinical trials of an investigational medicinal product (CTIMPs).

Methods

An online survey of monitoring policy with sections on the CTU, central monitoring and on-site monitoring was sent to all 50 UKCRC registered CTUs in November 2018.

Descriptive data analysis and tabulations are reported using the total number answering each question.

Results

A total of 43/50 (86%) CTUs responded, of which 38 conduct phase III randomised CTIMP trials. Of these 38 CTUs, 34 finished the survey. Most CTUs (36/37, 97%) use a central monitoring process to guide, target or supplement site visits. More than half of CTUs (19/36, 53%) do not use an automated monitoring report when centrally monitoring trials, and all units use trial team knowledge to make a final decision on whether an on-site visit is required.

A total of 31/34 (91%) CTUs used triggers to decide whether or not to conduct an on-site monitoring visit. On-site, a mixture of source data verification and checking of processes was carried out.

The CTUs overwhelmingly (27/34, 79%) selected optimising central monitoring as their most pressing concern.

Conclusion

The survey showed a wide variation in phase III randomised CTIMP trial monitoring practices by academic clinical trials units within a single research-active country. We urgently need to develop evidence-based regulator-agreed guidance for CTUs on best practice for both central and on-site monitoring and to develop tools for all CTUs to use.


Background

Clinical trialists monitor trial data in order to protect the rights and well-being of participants, to ensure that the trial data are accurate, complete, and verifiable, and to confirm that the trial is being run in compliance with the currently approved protocol, with the principles of good clinical practice (GCP), and with the relevant regulatory requirements [1]. In 2013, publications from both the US Food and Drug Administration (FDA) [2] and European Medicines Agency (EMA) [3] promoted trial sponsors moving to a risk-based approach to monitoring. In risk-based monitoring, the monitoring activities are directed at preventing or mitigating important and likely risks to data quality, to processes critical to human subject protection and to trial integrity. Rather than monitoring routinely throughout the trial, the monitoring is directed at any risks to the trial. The new interest in risk-based monitoring was subsequently enshrined in the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) GCP E6(R2) [1] guidance in December 2016. This publication gave the advice:

The sponsor should develop a systematic, prioritized, risk-based approach to monitoring clinical trials. […] The sponsor may choose on-site monitoring, a combination of on-site and centralized monitoring, or, where justified, centralized monitoring.

The literature on the monitoring of interventional trials and studies is surprisingly sparse. Three Studies Within A Trial (SWATs) have been published in this area, showing that, compared to full monitoring, risk-based monitoring is a reasonable approach [4,5,6]. Tudur Smith et al. [7] and Embleton-Thirsk et al. [8] compared analyses of clinical trial data collected centrally by clinical trials units (CTUs) with analyses of data subjected to 100% on-site source data verification. Each found that the extra on-site monitoring made little difference to the primary efficacy results of the trial, and these results were their key dissemination message. The TEMPER study [9] showed that triggers based on centrally stored data are potentially a good way of choosing which sites of a clinical trial to visit, provided there are no consent issues. Most recently, the START monitoring substudy published results [10] quantifying both the benefit and the cost of on-site monitoring; balancing these two, the authors concluded that the value of on-site monitoring to the START trial was limited.

In the last 10 years, three surveys have reported how trialists deal with certain aspects of monitoring. Morrison et al. [11] reported a survey, carried out in late 2009 with a 37% (79/216) response rate, of pharmaceutical, regulatory and academic groups conducting clinical trials in the USA. From a maximum of 65 respondents, they found that 78% (46/59) always or sometimes used risk-based monitoring, 83% (48/58) used central data to evaluate site performance and 70% (42/60) always did site visits. Tudur Smith [12] reported a 2011 monitoring survey, with a 48% (23/48) response rate, of the then 48 United Kingdom Clinical Research Collaboration (UKCRC) registered CTUs. These are UK academic CTUs that have been assessed as reaching a defined standard by an international panel of experts in clinical trials research. Risk-based monitoring was being used by 53% of CTUs (number not given; maximum number of respondents 22) and most CTUs used some level of central monitoring for each trial, sometimes supplemented by on-site monitoring. Beever and Swaby [13] reported a 2017 survey of the then 49 UKCRC registered CTUs on the assessment of risk in risk-based monitoring. All 23 respondents (response rate 47%, 23/49) carried out risk-based monitoring, with 96% (22/23) using a combination of remote and on-site monitoring.

There is clearly an appetite for risk-based monitoring among regulators, which has been supported by the studies assessing this approach. However, there are few evidence-based guidelines on best practice and no recent data on how monitoring post risk assessment is currently performed. To determine current monitoring practice, we sent a survey to the 50 UKCRC registered CTUs to determine their policy on monitoring phase III randomised CTIMP (Clinical Trial of an Investigational Medicinal Product) trials.

Methods

We aimed to find out each UKCRC registered CTU’s policy on monitoring their phase III randomised CTIMP trials. The survey included questions on central and on-site monitoring, considering both data quality and whether the trial was being run appropriately at each site. We defined central monitoring as monitoring using data collected at the CTU and on-site monitoring as monitoring where a visit is made to the site where the source data is collected.

Survey development

There was no pre-existing robust tool for collecting this information. We created our survey by adapting the survey questions of Morrison [11] to the UK registered CTU situation. We asked (i) four questions to determine the typical characteristics of the phase III randomised CTIMP trials that the CTU carried out, (ii) eight questions about central (within CTU) monitoring and (iii) nine questions about on-site monitoring. We gave space for comments in each of the three sections. Screenshots of the survey are provided in Additional file 1 and the questionnaire transcript is given in Additional file 2. The survey’s content validity was checked by all authors. The survey was piloted by eight clinical project managers at the MRC Clinical Trials Unit at UCL (MRC CTU at UCL) to check content and ensure ease of completion. The authors agreed upon the final question selection and wording.

Definitions for ‘central monitoring’ and ‘triggers’ were included in the survey to ensure consistency. Central monitoring was defined as any monitoring of the data or the sites that the CTU covers from the trials office, but it excluded the automatic queries programmed in the database used during data entry. This broad definition was used to cover central statistical monitoring, trial managers looking at reports and fully programmed triggering systems delivering the names of sites requiring a visit. Triggers were defined as the means by which CTUs decide to visit a site based on the information held centrally in the CTU.
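
To make these definitions concrete, the sketch below shows one way a rule-based trigger check over centrally held, site-level data might look. It is purely illustrative and written in Python; the site fields, trigger rules and thresholds are hypothetical and are not drawn from the survey or from any CTU's actual system.

```python
from dataclasses import dataclass

@dataclass
class SiteSummary:
    """Aggregates a CTU might already hold centrally for one site (illustrative fields only)."""
    name: str
    patients_recruited: int
    overdue_crfs: int           # case report forms outstanding beyond their due date
    protocol_deviations: int
    serious_adverse_events: int

def evaluate_triggers(site: SiteSummary) -> list[str]:
    """Return the names of any hypothetical triggers breached by this site."""
    triggered = []
    if site.patients_recruited and site.overdue_crfs / site.patients_recruited > 0.20:
        triggered.append("high proportion of overdue CRFs")
    if site.protocol_deviations >= 3:
        triggered.append("repeated protocol deviations")
    if site.serious_adverse_events == 0 and site.patients_recruited > 50:
        triggered.append("no SAEs reported despite substantial recruitment")
    return triggered

if __name__ == "__main__":
    sites = [
        SiteSummary("Site A", patients_recruited=60, overdue_crfs=2,
                    protocol_deviations=0, serious_adverse_events=4),
        SiteSummary("Site B", patients_recruited=55, overdue_crfs=20,
                    protocol_deviations=4, serious_adverse_events=0),
    ]
    for site in sites:
        breaches = evaluate_triggers(site)
        if breaches:
            print(f"{site.name}: consider an on-site visit ({'; '.join(breaches)})")
        else:
            print(f"{site.name}: no triggers breached")
```

Consistent with the survey responses described later, the output of any such check would feed into, rather than replace, the trial team's judgement on whether a visit is needed.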

The survey was set up in Opinio [14] enabling online completion. Care was taken to make the questions easy to read and complete, with screens of questions ordered into sections and notification given on each screen of how many questions had been answered out of the total.

Eligibility and selection

We sent the survey to the then 50 UKCRC registered CTUs in the UK because they have reached a demonstrable standard (the registration criteria) and therefore should reflect current best practice within a single research-active country. UK CTUs are specialised units within universities, hospitals or institutes that design, conduct, analyse and publish many multicentre trials. They have expertise in trial management, statistics and trial design, and often have medical experts; this core team ensures trials are conducted to meet the appropriate standards and regulations. Chief investigators may be part of the CTU or external to it. Staff and trial funding often comes from a mixture of charity and university sources, with peer-reviewed grants from charity, government and industry funders for each trial [15]. Fully registered CTUs have a proven track record, with robust quality assurance systems, evidence of long-term viability of trial coordination and a trials portfolio; provisionally registered CTUs are those that do not yet meet the criteria for full registration but plan to within 3 years. Monitoring practice varies across trial phases, with more intensive monitoring in phase I trials, so we limited the survey to CTUs that conduct phase III trials. The conduct of CTIMPs is inspected by the Medicines and Healthcare products Regulatory Agency (MHRA) as they are considered to carry higher risks than non-CTIMPs. Randomised trials are considered the gold standard of interventional research; we therefore further limited the survey to CTUs that run randomised CTIMPs.

Survey distribution

The survey link was emailed by the Senior Administrative Assistant of the UKCRC Registered CTU Network on 13 November 2018 to each CTU Director, asking them to pass it to the monitoring lead in the unit. A reminder email was sent by the Senior Administrative Assistant to all CTU Directors on 3 December 2018. If a CTU had still not completed the survey, an alternative contact at the CTU was sent a personal message from an author (SBL) on 11 December 2018, with a final reminder email on 17 December 2018. The final response was received on 20 December 2018.

Ethics and consent

We did not require ethics permission as invitees were non-NHS staff and we inferred implied consent when we received a response. The aims of the survey were described in the email and on the first screen of the survey. We collected each CTU’s name in order to avoid duplication and to facilitate reminders but we agreed not to use this in the analysis or dissemination.

Analysis

The raw data were downloaded from Opinio and stored as per MRC CTU at UCL policy, with only those involved in the survey having access. Descriptive analyses were performed in Stata version 15.1 [16], including all responses to each question.

We attempted to identify differences between the group of CTUs that responded and those that did not through the information on the UKCRC registered CTU website [17]. This information included whether they conduct cancer trials, conduct international trials, carry out 24-hour randomisation, or have full or provisional UKCRC registration.

Results

The majority of UKCRC registered CTUs responded to the survey (43/50, 86%), including five that clarified that they do not run phase III CTIMP trials. CTUs that do not carry out 24-hour randomisation were less likely to respond to the survey (Table 1).

Table 1 A comparison of invited CTUs response status and four key characteristics

Thirty-eight CTUs completed at least one survey question; their characteristics are described in the last column of Table 1. The questionnaire took a median of 19 minutes to complete (interquartile range 8.5 to 64.0 minutes). As some CTUs did not answer every question, the actual number doing so is given for each question. Many CTUs considered their typical phase III randomised CTIMP trial to have 101–1000 patients and 11–49 sites (Table 2).

Table 2 Number of participants and sites for phase III randomised CTIMP trials run by included CTUs

A total of 28/38 (74%) CTUs had some non-UK sites. For all CTUs, an assessment of risk informed the monitoring approach, although for one CTU this was only sometimes the case and for four CTUs the risk assessment was done by the sponsor.

For one CTU, the sponsor had responsibility for all monitoring, so they could not complete the sections on central and on-site monitoring. Thirty-four of the remaining 37 CTUs completed the questionnaire.

Central monitoring

Almost all CTUs use centrally available data to evaluate site performance (34/37, 92%) with two further CTUs (total 36/37, 97%) using a central monitoring process to guide, target or supplement site visits. One sixth (6/36, 17%) reported never using a centralised monitoring process to replace on-site visits, while two reported always doing so (2/36, 6%).

Over half reported running central monitoring processes at least once per month on each trial (20/35, 57%) with only one (3%) running them just annually.

For more than half of CTUs (19/36, 53%), central monitoring is not explicitly programmed, i.e. standard reports may be used, but a monitoring report is not automatically produced. For the remainder, 5 (14%) use the same monitoring programming code for all of their trials, 4 (11%) choose pre-written modules with some bespoke programming and 8 (22%) write bespoke software programming for each trial.

For no CTU is the assessment of triggers indicating that a site should be visited defined solely by software; there is always human input in choosing which sites to visit. Figure 1 shows the factors likely to trigger an on-site monitoring visit.

Fig. 1 Frequency of factors likely to trigger an on-site monitoring visit. CTUs could choose multiple options. CTU clinical trials unit, No number, pt participant

On-site monitoring

All 34 CTUs responding to the on-site questions performed on-site monitoring at least sometimes, with most CTUs finding the on-site visit to take 1 day (27/34, 79%) and needing one person (27/34, 79%). This person was often a trial manager or dedicated monitor (18/34, 53%) but in some cases could be a member of the CTU’s Quality Assurance team, the Chief Investigator or a member of staff from a contract research organisation (CRO) (Additional file 3).

Most CTUs used formal triggers to decide whether or not to conduct an on-site monitoring visit (31/34, 91%). Of these, only one (1/31, 3%) solely used triggers to choose whether to conduct an on-site visit, with the remainder also conducting some on-site visits after fixed time periods, based on the number of patients that had been recruited, because of a trial event (e.g. independent data monitoring committee [IDMC]), or for a mixture of these reasons.

The stated reasons behind the frequency of on-site visits are given in Fig. 2.

Fig. 2 Reasons for frequency of on-site monitoring visits. CTUs could choose multiple options. CTU clinical trials unit

The pre-defined analysis of risks, the study design and the monitoring plan were each listed by more than 20/34 (59%) of CTUs as reasons behind the frequency of on-site visits. We asked how much on-site source data verification (SDV) was done for various classifications of data. Eight CTUs commented that the question was too difficult to answer as a unit-wide policy because of the variability of SDV even within phase III randomised CTIMP trials, so this question was attempted by 24 CTUs (Table 3). One CTU (1/24, 4%) never did any SDV.

Table 3 Percentage of SDV done for differing classifications of data

Many CTUs reported doing 100% SDV of consent, eligibility criteria, primary endpoint reports and serious adverse events at any given visit.

Other activities achieved during on-site visits are detailed in Table 4.

Table 4 Other activities achieved during on-site visits

Many CTUs frequently or always assessed the site staff's understanding of study procedures (26/27, 96%), informed consent updates/modifications (29/32, 91%) and regulatory documents and communications (25/33, 76%), and checked that the site file was complete (31/33, 94%).

Other results

Table 5 shows what aspect of monitoring the CTUs would most like to change. We gave the top three options plus an ‘other’ category. The majority of CTUs would most like to optimise central monitoring (27/34, 79%).

Table 5 Aspects of monitoring the CTU would most like to change

Discussion

Our survey collected valuable information on how the UKCRC registered CTUs monitor phase III randomised CTIMP trials. The main finding is the wide variety of ways in which central and on-site monitoring are conducted. For central monitoring, CTUs varied in whether it was used, the methods employed, how frequently it was run, how triggers were assessed and which items served as triggers. For on-site monitoring, they varied in who attended, how many people attended, how long site visits lasted, how it was decided when to visit a site, the determinants of a site visit and what was done on-site, in terms of both SDV and checking of processes.

Although almost all responding CTUs (36/37, 97%) used central monitoring to guide, target or supplement on-site visits, and more than half of these CTUs run central monitoring at least monthly (20/36, 56%), for many (19/36, 53%) the central monitoring is not programmed (fully automated). This is likely to place a large and repeated burden on the trial management team, adding to the cost of running the trial. It should be possible to fully program this work, with the programming overhead being less than the trial management team's burden for longer trials.
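
As an illustration of how this repeated central-monitoring work might be automated with a modest programming overhead, the hypothetical sketch below (in Python, with invented field names, checks and thresholds, not taken from any CTU's system) uses one generic report function shared across trials, driven by a small per-trial configuration; this echoes the 'pre-written modules with some bespoke programming' approach that some units reported.

```python
from datetime import date
from typing import Callable

# Per-trial configuration: each check is a label plus a rule applied to one site's record.
# Field names and thresholds are invented for illustration only.
TrialConfig = dict[str, Callable[[dict], bool]]

TRIAL_XYZ_CONFIG: TrialConfig = {
    "query rate above 10 per 100 data items":   lambda s: s["queries"] / s["data_items"] > 0.10,
    "more than 2 consent forms awaiting review": lambda s: s["consents_unreviewed"] > 2,
    "no data returned in the last 60 days":      lambda s: s["days_since_last_data"] > 60,
}

def central_monitoring_report(sites: list[dict], config: TrialConfig) -> str:
    """Build a plain-text report flagging which configured checks each site fails."""
    lines = [f"Central monitoring report, {date.today().isoformat()}"]
    for site in sites:
        flags = [label for label, rule in config.items() if rule(site)]
        status = "; ".join(flags) if flags else "no checks flagged"
        lines.append(f"  {site['name']}: {status}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Toy extract of centrally held, site-level data (values invented).
    sites = [
        {"name": "Site 01", "queries": 4,  "data_items": 120, "consents_unreviewed": 0, "days_since_last_data": 12},
        {"name": "Site 02", "queries": 30, "data_items": 150, "consents_unreviewed": 5, "days_since_last_data": 75},
    ]
    print(central_monitoring_report(sites, TRIAL_XYZ_CONFIG))
```

The programming effort lies in writing and validating the shared function once; adapting it to a new trial then becomes largely a matter of editing the configuration, which is the trade-off between programming overhead and trial-team burden described above.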

Our survey showed that a variety of people are involved in on-site monitoring. Although this partly reflects the differing job titles used across CTUs, it would be useful to know whether the CTUs using dedicated monitors (19/34, 56%) found an advantage in this practice.

The variety of items triggering on-site monitoring and their varying frequency of use shows that there is considerable scope for prospective research to better specify where CTUs should expend their energy. Whitham et al. [18] suggest eight ‘performance metrics’ to be used for all trials alongside just one or two trial-specific metrics. TransCelerate has published eight overlapping metrics [19]. These suggested metrics need prospective testing and reporting of experiential data to show whether they work.

Our survey showed that many CTUs claim to do 100% SDV of some data. However, the two publications examining this have shown that 100% SDV does not change the primary efficacy results of the trial [7, 8]. SDV may be necessary if particular data have been assessed as critical to the trial and can only be monitored in this way, but the FDA guidelines [2] encourage trialists to consider the needs of each specific trial rather than assume that all trials need SDV.

The Morrison survey from 2009 [11] acquired data from 65 mainly US groups performing trials, concluded that there was a wide variety of monitoring practices and called for research to develop an evidence base for monitoring practice. Our UK survey, nearly a decade later, reaches the same conclusion. The previous surveys of UKCRC registered CTUs in 2011 [12] and 2017 [13] are consistent with the current survey, also finding that all CTUs at least sometimes informed their monitoring plan through a risk assessment and that most CTUs used some level of central monitoring, with or without on-site monitoring. Our survey gives more information on how the monitoring is actually done.

There are several limitations to our study. Although the response rate of 86% was high (and double that of the other monitoring surveys), complete data were available from only 34 CTUs, which may be considered a small number to represent the monitoring processes of UK clinical trials. The survey was limited to UKCRC registered CTUs, which have demonstrated a sufficiently high standard to achieve registration; these may not be fully representative of UK academic clinical trials units that are not registered, although fewer randomised phase III CTIMP trials are run outside registered CTUs. The monitoring set-up of academic trials units in other countries and of industry-led or contract research organisation-coordinated trials may also differ. However, the information here should be useful to all groups running trials of any phase.

As survey data on policy across a CTU, this information on monitoring lacks detail and does not represent the monitoring that happens in any specific trial. This is most obvious in the SDV questions, where eight CTUs felt SDV was so variable that they could not complete the table. It may be that our results reflect an idealised situation rather than everyday practice.

The findings of this survey have highlighted the need for clarity with regard to terminology in monitoring; multiple definitions for single terms are problematic when communicating with practitioners and researchers across the field of clinical trials. For example, in our survey we used the word ‘triggers’ to describe the items that are reviewed to decide whether a site visit is needed. However, trialists and researchers use differing terms such as triggers, metrics and possibly other terms that our team has not yet come across. On this point, we favour using the term ‘metrics’ and considering visiting a site if the metric threshold is breached. As a community, we need to show how the quality tolerance limits mentioned by ICH [1] relate to metrics. There is also confusion between data cleaning and monitoring, particularly since a monitoring outcome is to prompt for data corrections/clarification. Clarifying the terminology will go a long way to enabling us to move this area forward.

Research on whether monitoring is best carried out by dedicated monitors, both centrally and on-site, would be useful, as would clearer terminology for these roles and a better understanding of how monitoring fits within the job description of whoever is tasked with it.

With the strong steer from the surveyed CTUs, the next priority is optimising the processes of central monitoring. The metrics used to improve data integrity and patient safety need research to confirm that they are helpful, to select which are required, to clarify how they should be used (particularly in terms of frequency and post-metric actions) and to consider where they sit (are they a way of improving data integrity and patient safety, or a way of judging them?).

We think there is research work to be conducted on how on-site visits are best performed. Should we be looking at SDV, processes or a mixture of these? And are site visits necessary or could the effect of a site visit be replicated in other ways by staying at the CTU?

We would like monitoring to reach the same place that protocols and statistical analysis plans for clinical trials have reached (SPIRIT guidelines [20], statistical analysis plan guidelines [21]). As part of its systematic approach to making trials more efficient by improving all trial processes, Trial Forge [22] is aiming to increase the evidence base for efficient trial monitoring. We envisage a practical monitoring guidance document that sets out, for trials of differing risk and for individual processes, the monitoring that is required. This could perhaps take the form of a template data monitoring plan.

Conclusions

This survey provides valuable policy information on how academic CTUs conduct monitoring in phase III randomised CTIMP trials. All of the responding UKCRC registered CTUs carried out a risk assessment to inform their monitoring approach, the majority did central monitoring and all did on-site monitoring, and this monitoring was carried out in a variety of ways. The CTUs resoundingly identified optimising central monitoring as the feature of monitoring they would most like to change. We urgently need to provide evidence-based, regulator-agreed guidance to CTUs on best practice for both central and on-site monitoring and to develop tools for all CTUs to use.

Availability of data and materials

The dataset supporting the conclusions of this article and a copy of the questionnaire text are available in the UCL Research Data Repository, https://doi.org/10.5522/04/7992998.

Abbreviations

CRO:

Contract Research Organisation

CTIMP:

Clinical Trial of an Investigational Medicinal Product

CTU:

Clinical Trials Unit

EMA:

European Medicines Agency

FDA:

US Food and Drug Administration

GCP:

Good Clinical Practice

ICH:

International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use

IDMC:

Independent data monitoring committee

MHRA:

Medicines and Healthcare products Regulatory Agency

MRC CTU at UCL:

MRC Clinical Trials Unit at UCL

SDV:

Source data verification

SWAT:

Study Within A Trial

UKCRC Registered CTU Network:

United Kingdom Clinical Research Collaboration Registered Clinical Trials Unit Network

References

  1. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). Guideline for good clinical practice E6(R2). 2018. Accessed 30 Dec 2019. Available from: https://www.ema.europa.eu/en/ich-e6-r2-good-clinical-practice.

  2. US Food and Drug Administration. Guidance for industry: oversight of clinical investigations, a risk-based approach to monitoring. 2013. Accessed 7 Feb 2019. Available from: https://www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf

  3. European Medicines Agency. Reflection paper on risk based quality management in clinical trials. Accessed 7 Feb 2019. Available from: https://www.ema.europa.eu/documents/scientific-guideline/reflection-paper-risk-based-quality-management-clinical-trials_en.pdf

  4. Brosteanu O, Schwarz G, Houben P, Paulus U, Strenge-Hesse A, Zettelmeyer U, et al. Risk-adapted monitoring is not inferior to extensive on-site monitoring: Results of the ADAMON cluster-randomised study. Clin Trials. 2017;14(6):584–96.


  5. Fougerou-Leurent C, Laviolle B, Bellissant E. Cost-effectiveness of full versus targeted monitoring of randomized controlled trials. Fund Clin Pharmacol. 2018;32(supplement):49.


  6. Journot V. OPTIMON. Accessed 7 Feb 2019. Available from: https://ssl2.isped.u-bordeaux2.fr/OPTIMON/docs/Communications/2015-Montpellier/OPTIMON%20-%20EpiClin%20Montpellier%202015-05-20%20EN.pdf

  7. Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, et al. The value of source data verification in a cancer clinical trial. PLoS One. 2012;7(12):e51623.


  8. Embleton-Thirsk A, Deane E, Townsend S, Farrelly L, Popoola B, Parker J, et al. Impact of retrospective data verification to prepare the ICON6 trial for use in a marketing authorization application. Clin Trials. 2019;16(5):502–11.


  9. Stenning SP, Cragg WJ, Joffe N, Diaz-Montana C, Choudhury R, Sydes MR, et al. Triggered or routine site monitoring visits for randomised controlled trials: results of TEMPER, a prospective, matched-pair study. Clin Trials. 2018;15(6):600–9.


  10. Engen NW, Hullsiek KH, Belloso WH, Finley E, Hudson F, Denning E, et al. A randomized evaluation of on-site monitoring nested in a multinational randomized trial. Clin Trials. 2019. https://doi.org/10.1177/1740774519881616.

  11. Morrison BW, Cochran CJ, White JG, Harley J, Kleppinger CF, Liu A, et al. Monitoring the quality of conduct of clinical trials: a survey of current practices. Clin Trials. 2011;8(3):342–9.


  12. Tudur Smith C. Clinical trial monitoring: towards establishing best practice? 2012. Accessed 7 Feb 2019. Available from: https://www.methodologyhubs.mrc.ac.uk/files/5014/3713/3750/cts_monitoring_htmr_2012.pdf


  13. Beever D, Swaby L. An evaluation of risk-based monitoring in pragmatic trials in UK Clinical Trials Units. Trials. 2019;20:556.


  14. Opinio, 1998–2019 ObjectPlanet, Inc. Øvre Slottsgate 5, 0157 Oslo, Norway.

  15. UKCRC Registered CTUs CTU Role and Services. Accessed 28 Nov 2019. Available from: https://www.ukcrc-ctu.org.uk/page/CTURole

  16. Stata, StataCorp, College Station, TX.

  17. UKCRC Registered CTUs Resource Finder. Accessed 7 Feb 2019. Available from: https://www.ukcrc-ctu.org.uk/search/custom.asp?id=468

  18. Whitham D, Turzanski J, Bradshaw L, Clarke M, Culliford L, Duley L, et al. Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study. Trials. 2018;19:557.


  19. TransCelerate. Risk-based monitoring quantitative metrics toolkit 2016. Accessed 8 Aug 2019. Available from: http://www.transceleratebiopharmainc.com/wp-content/uploads/2017/01/RBM_Quantitative-Metric-Collection-Toolkit-For-External-Posting-FINAL.pdf


  20. Chan AW, Tetzlaff JM, Altman DG, Laupacis A, Gotzsche PC, Krleza-Jeric K, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7.


  21. Gamble C, Krishan A, Stocken D, Lewis S, Juszczak E, Dore C, et al. Guidelines for the content of statistical analysis plans in clinical trials. JAMA. 2017;318(23):2337–43.


  22. Trial Forge. Accessed 7 Feb 2019. Available from: https://www.trialforge.org/pathway/managing-and-monitoring-trial-sites/


Acknowledgements

Thank you to the UKCRC Registered CTU Network and to all those who completed the questionnaire. Thank you to Macey Murray for helping us address the reviewers' comments.

Funding

The MRC grant MC_UU_12023/24 funded the salaries of Sharon Love, Victoria Yorke-Edwards, Sarah Lensen and Matt Sydes.

Author information


Contributions

SBL led on designing and implementing the questionnaire, and prepared the first and subsequent drafts of this manuscript. SL, MS, and VYE helped design the questionnaire and evaluate the results. VYE conducted the data analysis. All authors read and approved the final manuscript.

Authors’ information

SBL, VYE, SL and MS are part of the MRC CTU at UCL Trial Conduct Team. Monitoring is one of the three streams of research from this team.

Corresponding author

Correspondence to Sharon B. Love.

Ethics declarations

Ethics approval and consent to participate

We did not require ethics approval as these were non-NHS staff and we inferred implied consent when we received a response.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Screenshot of the UKCRC registered CTU questionnaire.

Additional file 2.

Transcript of the UKCRC registered CTU questionnaire.

Additional file 3.

Frequency of differing combinations of people attending on-site monitoring visits.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Love, S.B., Yorke-Edwards, V., Lensen, S. et al. Monitoring in practice – How are UK academic clinical trials monitored? A survey. Trials 21, 59 (2020). https://doi.org/10.1186/s13063-019-3976-1
