- Open Access
Digital tools for the recruitment and retention of participants in randomised controlled trials: a systematic map
Trials volume 21, Article number: 478 (2020)
Recruiting and retaining participants in randomised controlled trials (RCTs) is challenging. Digital tools, such as social media, data mining, email or text-messaging, could improve recruitment or retention, but an overview of this research area is lacking. We aimed to systematically map the characteristics of digital recruitment and retention tools for RCTs, and the features of the comparative studies that have evaluated the effectiveness of these tools during the past 10 years.
We searched Medline, Embase, other databases, the Internet, and relevant web sites in July 2018 to identify comparative studies of digital tools for recruiting and/or retaining participants in health RCTs. Two reviewers independently screened references against protocol-specified eligibility criteria. Included studies were coded by one reviewer with 20% checked by a second reviewer, using pre-defined keywords to describe characteristics of the studies, populations and digital tools evaluated.
We identified 9163 potentially relevant references, of which 104 articles reporting 105 comparative studies were included in the systematic map. The number of published studies on digital tools has doubled in the past decade, but most studies evaluated digital tools for recruitment rather than retention. The key health areas investigated were health promotion, cancers, circulatory system diseases and mental health. Few studies focussed on minority or under-served populations, and most studies were observational. The most frequently studied digital tools were social media, Internet sites, email and TV/radio for recruitment; and email and text-messaging for retention. One quarter of the studies measured efficiency (cost per recruited or retained participant) but few studies have evaluated people’s attitudes towards the use of digital tools.
This systematic map highlights a number of evidence gaps and may help stakeholders to identify and prioritise further research needs. In particular, there is a need for rigorous research on the efficiency of the digital tools and their impact on RCT participants and investigators, perhaps as studies-within-a-trial (SWAT) research. There is also a need for research into how digital tools may improve participant retention in RCTs, an area that is currently under-represented relative to recruitment research.
Not registered; based on a pre-specified protocol, peer-reviewed by the project’s Advisory Board.
The problem: inadequate patient recruitment and retention
Recruiting and retaining participants in clinical and health studies is a key determinant of research efficiency, but is highly challenging [1, 2], and poor rates of recruitment to randomised controlled trials (RCTs) are commonly reported. In reviews of clinical trials funded by the UK Medical Research Council and/or the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme [4,5,6,7] the proportion of trials achieving their original recruitment target ranged from only 31% (of 114 trials that recruited between 1994 and 2002) to 60% (of 151 trials published from 2004 to April 2016). Although there appears to have been some improvement over time, the latest data suggest that around 40% of trials may still fail to reach their intended enrolment numbers, despite considerable interest in approaches that might improve study recruitment [2, 8].
Once patients have been recruited into clinical or health studies the rates at which they are retained in the studies can be highly variable, depending on the nature of the population, condition, intervention, comparator and outcomes. For example, estimates from trials in Alzheimer’s disease suggest that there is an average dropout rate of 30% across trials and that 85% of trials fail to retain enough patients, whilst in a systematic review of 87 RCTs of asthma inhalers we found that dropout rates ranged from 0% to over 40%.
Consequences of inadequate trial recruitment include: reduced statistical power of trials due to inadequate sample size, increasing the risk of type II error (failure to detect real treatment effects); wasting patients’ and investigators’ time and money (e.g. 481 trials that closed during 2011 due to inadequate recruitment had already involved over 48,000 patients); extending study duration, which increases costs and delays the adoption of trial findings (clinical studies often double their original timelines in an attempt to meet their intended enrolment target); and failure to identify participants who are eligible (e.g. up to 60% of potentially eligible patients may miss being identified, which disadvantages the potential participants as well as the study investigators, sponsors and end-users of the research). Inadequate retention of patients in clinical trials also reduces the available sample size for analysis and hence the statistical power to detect treatment effects. Failure to achieve adequate recruitment and retention of participants in clinical trials therefore creates research waste and may mean that policy-makers have to delay their decisions or base them on inferior evidence.
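The link between under-recruitment and lost power can be illustrated with a back-of-the-envelope calculation. The sketch below uses a normal-approximation two-sample test; the effect size, target sample size and 60% recruitment shortfall are hypothetical figures chosen for illustration, not values from any trial discussed here.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test of means
    for standardised effect size d with n_per_group participants per arm."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # critical value, ~1.96 for alpha = 0.05
    return nd.cdf(d * sqrt(n_per_group / 2) - z_crit)

planned = 175                    # per arm; gives ~80% power for d = 0.30
achieved = int(planned * 0.60)   # a trial reaching only 60% of its target
print(round(power_two_sample(0.30, planned), 2))   # ~0.8
print(round(power_two_sample(0.30, achieved), 2))  # ~0.58
```

Recruiting only 60% of the planned sample drops power from roughly 80% to under 60%, which is the type II error risk referred to above.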
A survey of chief investigators of NIHR HTA trials and UK Clinical Research Collaboration (UKCRC)-registered clinical trials units (CTUs) identified that a diverse range of participant-retention strategies has been used in clinical trials, but for many of these, no research had been conducted to assess their effectiveness. Another survey found that the top three priorities of UK CTU directors for trials methodological research included ‘Research into methods to boost recruitment in trials’ (considered the highest priority) and ‘Methods to minimise attrition’.
Potential solutions: digital tools
To address inadequate patient recruitment and/or retention in studies, CTUs, funding agencies and study investigators are increasingly exploring the use of digital tools to identify, recruit and retain study participants. The range of digital tools that could assist with the recruitment and/or retention of study participants is wide. For example, a systematic review of the use of computers for patient recruitment into clinical trials found 79 different recruitment systems and a survey of trials and CTUs identified a wide range of retention strategies. Digital recruitment approaches may help potential participants to locate trials for which they are eligible, and/or help trial investigators or health professionals to identify potentially eligible participants. Specific digital tools for recruitment and/or retention of people in trials include (among others): automated telephone messaging [2, 21]; audio messages; videos [2, 22]; radio and television advertisements [23, 24]; online advertisements [23, 25]; Internet websites and tools, including online surveys [21, 24, 26, 27]; social media [21, 23, 25, 27, 28]; ‘smartphone Apps’; computer pop-up reminders; email messages [2, 21, 25]; text-messaging [2, 21, 23, 25] and automated eligibility screening of electronic health records, data warehouses or other patient data sources [15, 30,31,32,33,34,35,36]. The automated screening approaches may be subdivided according to the different algorithms employed for predicting patient eligibility, which include, among others, machine-learning approaches [34, 37] and case-based reasoning models [32, 33]. These digital tools may either be used alone or in various combinations, and may be combined with non-digital approaches.
For example, a strategy to improve patient recruitment, enrolment, engagement and retention in a clinical trial of a weight-loss intervention for students included a smartphone App, television screens, email messaging, text-messaging, online advertisements and social media, as well as printed materials (flyers, coasters, pens, posters and postcards). In another example, an intervention to increase research participation of young female cancer survivors included Internet, email and social media components as well as newspaper advertisements that appeared both online and in print.
The range of digital tools potentially available to assist with the recruitment and/or retention of patients in clinical and health trials is diverse, and a number of systematic reviews have investigated whether digital approaches for recruitment and/or retention are effective [2, 3, 20, 39,40,41,42,43,44,45,46,47,48,49,50,51,52,53] (summarised in Additional Table 1). However, it is difficult to get a comprehensive picture of the characteristics of the overall evidence base, because:
These reviews have focussed on a single specific digital approach (e.g. Facebook [40, 51, 53] or social networking [39, 49]) or population (e.g. young people [40, 47, 49] or people with a specific condition [41, 48, 50]) or
The reviews have been broad, not limited to digital approaches [2, 41, 43, 46,47,48] or
The searches are now several years out of date [2, 3, 20, 39,40,41,42,43,44,45,46,47,48,49,50,51,52, 54]
Given the growth in the availability of digital tools in the last decade and the importance of effective and efficient recruitment and retention, we constructed a systematic map to investigate what evidence exists to support the use of these tools.
Rationale for systematic mapping as an evidence-synthesis approach
The diversity, uniqueness and, in some cases, complexity of strategies for improving patient recruitment and retention suggest that systematic mapping is an appropriate initial approach to the evidence synthesis, to identify and characterise the range of digital tools for improving patient recruitment and/or retention and the studies that have evaluated these tools. Systematic mapping (also referred to in the literature as evidence mapping) begins in the same way as a systematic review, based on extensive searches for evidence and systematic screening to determine eligibility of the identified studies, but provides a descriptive output rather than an estimate of effects. Systematic maps are helpful in summarising and describing a broad and/or heterogeneous evidence base in order to plan and prioritise focussed evaluative syntheses, e.g. using one or more subsequent systematic review(s) [55,56,57,58,59,60,61,62]. Evidence gaps can also be identified where little or no primary research has been conducted. We chose systematic mapping as the method for this project because no comprehensive overview of the evidence base relating to the use of digital tools for trial participant recruitment and/or retention currently exists.
Systematic map research question
The question addressed by the systematic map is ‘what are the types and characteristics of the digital tools that have been evaluated for improving the recruitment and/or retention of people in RCTs?’
Aims and objectives
The aim of this research is to systematically identify and describe the research studies that have evaluated the accuracy or effectiveness of digital tools for improving the recruitment and/or retention of people in RCTs, and to describe the characteristics of those digital tools that have been evaluated.
This research has two objectives:
To develop a systematic map to identify and characterise the research studies that have evaluated digital tools for recruiting and retaining participants in clinical and health RCTs
Based on the results of the map, to summarise any key evidence gaps and areas where a more detailed, focused, evidence synthesis of effectiveness could be worthwhile; and to provide recommendations for further primary research as necessary
Project management and protocol
This research was part of a broader project ‘User-focussed research to identify the benefits of innovative digital recruitment and retention tools for more efficient conduct of randomised trials’. The project had three parts: Phase 1: Scoping searches to identify digital tools for patient recruitment and retention. Phase 2: A survey of the UKCRC-registered CTUs on what digital tools they currently use, and interviews with stakeholders to identify which digital tools for trial recruitment and retention are currently being used in practice and the performance characteristics required for digital tools to be judged useful. This informed the development of logic models to capture the functions of different types of digital tools for recruitment and retention and the theory behind their mechanism of action in a generic way, without focussing on any specific digital tool, product or service. Phase 3: Systematic mapping to identify and describe the characteristics of research studies which have evaluated the effectiveness of digital tools for patient recruitment and/or retention. The research we are reporting here is specifically on the systematic mapping work (Phase 3).
The systematic map methods were based on a protocol (Appendix 1; see Additional file 4) which was peer-reviewed by the full project team and an Advisory Board prior to the commencement of the work. The Advisory Board included representatives of the NIHR research infrastructure (i.e. Research Design Service and Clinical Research Network), clinical trials units, a patient and public involvement representative, and academic members of a National Health Service (NHS) Foundation Trust (see the ‘Acknowledgements’ section). The systematic mapping was conducted by a team experienced in evidence synthesis (the authors of this paper) who met monthly with, and received feedback from, the full project team. The work was conducted in accordance with those steps of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Checklist that are applicable to systematic mapping (see Additional Table 2).
A comprehensive search for relevant studies was undertaken by an experienced health information specialist in the following bibliographical databases:
Ovid MEDLINE (including MEDLINE Epub Ahead of Print, MEDLINE In-Process and Other Non-Indexed Citations and MEDLINE Daily; 1990 to July 2018)
Ovid Embase (1990 to 17 July 2018)
Inspec (Institute of Engineering and Technology) (1990 to 18 July 2018)
Web of Science (a cross-database search including the BIOSIS Citation Index, BIOSIS Previews, Current Contents Connect, Data Citation Index, Derwent Innovations Index, Inspec and the KCI Korean Journal Database; 2000 to 18 July 2018)
The search strategies for MEDLINE, Embase, Web of Science and Inspec searches are shown in Appendix 2 (see Additional file 5). Where possible, the databases were searched from 1990 onwards, as specified in the protocol (Appendix 1; see Additional file 4). As noted below (see the ‘Eligibility criteria’ section), a further date limit was subsequently applied during eligibility screening.
All searches were conducted in the English language.
Eligibility screening process
Titles and abstracts of references identified by the search were screened against the eligibility criteria (stated below) independently by two reviewers. In cases of disagreement a third reviewer was consulted to reach a consensus. Full-text articles were retrieved for those titles and abstracts which met the eligibility criteria or which had unclear relevance, and for references that did not have an abstract or summary (e.g. Internet pages and reports). Full-text articles were screened by one reviewer and checked by a second, and a third reviewer was consulted in cases of disagreement. The screening process is reported according to the PRISMA guideline (see the ‘Results’ section).
References were screened using an eligibility worksheet, which was pilot-tested on 29 studies identified by our scoping searches [16, 18, 28, 35, 61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85] to ensure consistency of the review team’s interpretation. The final eligibility criteria (Appendix 3; see Additional file 6) were then applied to all the identified references. The same eligibility criteria were applied for screening titles, abstracts and full-text articles.
Studies featuring one or more of the following population groups were eligible:
Health professionals (e.g. physicians, nurses, therapists)
Researchers and study administration staff (e.g. study investigators, research managers)
Patients, their carers or the general public
People from disciplines outside health, medical or clinical research
Mixed populations in which not all participants met the inclusion criteria and outcomes were not separately reported for those who met the inclusion criteria
Studies were eligible for inclusion if they evaluated one or more digital approach to recruit and/or encourage retention of participants into clinical and health studies. To be classed as a digital approach, the method of participant recruitment or retention had to include one or more digital tools. A digital tool was defined broadly in this project as: an Internet, software or social media application, computer, electronic tablet or virtual assistant/gadget, to support recruitment and/or retention of participants in RCTs. According to the logic models we developed (unpublished data, available from the authors on request), digital approaches for the recruitment of participants could include methods that: raise awareness of trials; help eligible people to locate trials; identify eligible patients from databases or during clinical consultations; and/or check a person’s eligibility for a trial. Digital approaches to support retention of participants in trials could include, among others, methods that provide trial information and/or reminders for trial participants to provide data or to attend visits or tests.
Studies of multi-component digital approaches (e.g. which included digital in addition to non-digital tools) were eligible for inclusion.
Only studies that included a comparator were eligible. The comparator could include:
Standard practice for participant recruitment or retention (for the given study sponsor or institution conducting the study)
Non-digital approaches (e.g. approaches that comprise paper-based or manual tools)
Recruitment or retention approaches comprising digital tools other than those included in the intervention
Digital approaches comprising ‘bundles’ of tools (i.e. where the comparator included more than one digital and/or non-digital tool)
Studies were excluded if the configuration of the comparator and intervention was such that effects of digital tools could not be separated from the effects of non-digital tools (e.g. if the intervention and comparator both contained digital and non-digital strategies but the digital strategies were identical and, therefore, their effects would ‘cancel out’).
Studies that reported one or more of the following outcome measures were eligible for inclusion:
Recruitment rate (e.g. the proportion of the intended number of participants enrolled in the study)
Quantitative assessment of recruitment accuracy (e.g. the proportion of participants included in a study accurately meeting study inclusion criteria, as assessed by sensitivity, specificity and/or area under the curve estimates [31, 36])
Qualitative assessment of recruitment accuracy (e.g. similarity of the characteristics of the recruited participants against the study eligibility criteria)
Participant retention in a study (e.g. the proportion of recruited participants who remained in the study at the end)
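To make the ‘quantitative assessment of recruitment accuracy’ outcome concrete, sensitivity and specificity compare an automated eligibility screen’s decisions against a manual reference standard. The sketch below uses invented decisions for six hypothetical patients; the function name and data are illustrative only.

```python
def screening_accuracy(predicted, reference):
    """Sensitivity and specificity of an automated eligibility screen
    against a manual reference standard (parallel lists of booleans)."""
    pairs = list(zip(predicted, reference))
    tp = sum(1 for p, r in pairs if p and r)          # correctly flagged eligible
    fn = sum(1 for p, r in pairs if not p and r)      # eligible but missed
    tn = sum(1 for p, r in pairs if not p and not r)  # correctly excluded
    fp = sum(1 for p, r in pairs if p and not r)      # flagged but ineligible
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical screening decisions for six patients
auto   = [True, True, False, False, True, False]
manual = [True, False, False, False, True, True]
sens, spec = screening_accuracy(auto, manual)
print(sens, spec)
```

Studies reporting area-under-the-curve estimates [31, 36] extend this idea by evaluating the trade-off between sensitivity and specificity across decision thresholds.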
Two types of study design are relevant to the systematic map:
The design of the RCT into which participants were recruited and retained. For clarity we refer to this as the host trial
The design of the research that investigated the accuracy or effectiveness of the digital approaches for recruitment and/or retention in the host trial. We refer to this as the primary evaluation study. For clarity, throughout the rest of this paper we refer to ‘trial’ when referring to the host trial and ‘study’ when referring to the primary evaluation study
To be eligible for inclusion, the host trial had to be an RCT. The primary evaluation study could be of any design (e.g. RCT, quasi-experimental study or observational study), provided that relevant outcomes (as specified above) were reported for the intervention (i.e. the digital recruitment or retention approach of interest) and at least one comparator. The primary evaluation studies could conform to the broad definition of a ‘study within a trial’ (SWAT), which is ‘a self-contained research study that has been embedded within a host trial with the aim of evaluating or exploring alternative ways of delivering or organising a particular trial process’. However, in order to fully capture the range of evaluative research studies that have been conducted on RCT recruitment and retention, our inclusion criteria are wider than those that would define a SWAT. For example, we permitted retrospective studies to be included; and we did not require evidence that studies were based on a formal protocol, as is recommended for SWATs.
Studies were limited to those published during the last 10 years (i.e. those published from the start of 2008 to the date of searches in 2018), to ensure that digital tools included in the studies are likely to be reflective of the tools available for use in current practice. A cut-off date for study eligibility was not specified in the original protocol when searches were conducted but was subsequently agreed in consultation with the project Advisory Board, as specified in a protocol amendment (Appendix 1; see Additional file 4).
Coding of eligible studies and development of the systematic map
All studies meeting the criteria for inclusion in the map were classified by systematically applying pre-specified keywords to each study. The purpose of the keyword list was to capture the characteristics of the evidence base in a flexible manner such that the key attributes of studies of interest to end-users could be clearly summarised in the map, and specific combinations of characteristics of interest explored. The map, and hence the keywords, did not seek to critically appraise, synthesise or evaluate the results of the studies, for which systematic review and meta-analysis would be more appropriate evidence-synthesis methods. The draft keyword list and coding process were initially pilot-tested on eight studies that were identified in scoping searches (reported in five papers [21, 31, 67,68,69]), to ensure that the map would consistently capture relevant information. The draft keyword list was refined based on consultation with the full project team and the project Advisory Board (which we consider representative of most stakeholders likely to consult the map), to give a final version (Appendix 4; see Additional file 7).
Data extraction strategy
Each study was coded by one reviewer and a random sample of 20% of the studies was checked by a second reviewer. The coding decisions for each study were recorded in a Microsoft Excel spreadsheet template, which contained the list of keywords (rows) and a list of the included studies (columns) to generate a data matrix that would form the systematic map database. In cases where checking identified that refinements to coding would be appropriate these were agreed by the review team and applied to all the included studies to minimise the risk of introducing bias. Once the coding of all the included studies had been completed, Excel chart and pivot table functions were applied to the data matrix to produce a descriptive map of the key characteristics of the evidence.
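The matrix-then-pivot step described above can also be mimicked outside Excel. The minimal sketch below (study identifiers and keywords are invented) builds the keyword-by-study data matrix and then produces a pivot-table-style count of studies per keyword.

```python
from collections import defaultdict

# hypothetical coding decisions: (study_id, keyword) pairs
codings = [
    ("StudyA", "social media"), ("StudyA", "email"),
    ("StudyB", "email"),        ("StudyB", "text-messaging"),
    ("StudyC", "social media"),
]

# keyword -> set of studies coded with it (one row of the data matrix each)
matrix = defaultdict(set)
for study, keyword in codings:
    matrix[keyword].add(study)

# pivot-table-style summary: number of studies per keyword
summary = {kw: len(studies) for kw, studies in sorted(matrix.items())}
print(summary)
```

Using sets rather than lists means a study accidentally coded twice with the same keyword is still counted once, which mirrors the de-duplication a careful spreadsheet check would perform.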
Searching, study selection and map coding
A flow diagram summarising the eligibility screening process is shown in Fig. 1. After removing duplicates, searches identified 9163 unique references published from January 2008 to mid-July 2018. The majority of these (n = 8806) were excluded because their title and/or abstract did not meet the eligibility criteria, leaving 357 articles for full-text screening. A further 251 articles were excluded at full-text screening (Fig. 1) (a full list of the 251 excluded studies and the reasons for exclusion is provided in Additional Table 3). The remaining 104 articles, reporting 105 unique studies, passed the full-text assessment and were included in the systematic map [28, 32,33,34,35,36, 70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167].
Reviewer eligibility screening agreement
Two reviewers independently conducted all eligibility screening steps and we considered the overall rates of reviewer agreement to be good, given the topic complexity and the heterogeneity of reporting styles in the articles. The overall agreement rate for title and abstract screening was 97%, which primarily reflects very good agreement on the large number of decisions to exclude records. The overall agreement rate at the full-text screening step was 91%, with 9% of the full-text articles (n = 31) requiring discussion before the eligibility decision was finalised.
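Raw percent agreement such as the 97% and 91% figures above can be computed directly from paired screening decisions. The sketch below (the decision labels are invented) also derives Cohen’s kappa, a chance-corrected agreement statistic that this map does not report but that is often quoted alongside raw agreement.

```python
from collections import Counter

def agreement(rev1, rev2):
    """Raw percent agreement and Cohen's kappa for two reviewers'
    screening decisions (parallel lists of labels)."""
    n = len(rev1)
    p_obs = sum(a == b for a, b in zip(rev1, rev2)) / n   # observed agreement
    c1, c2 = Counter(rev1), Counter(rev2)
    # agreement expected by chance from each reviewer's label frequencies
    p_exp = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

r1 = ["include", "exclude", "exclude", "exclude"]
r2 = ["include", "exclude", "include", "exclude"]
p_obs, kappa = agreement(r1, r2)
print(p_obs, kappa)
```

Kappa is informative precisely in the situation described above: when most decisions are exclusions, raw agreement is inflated by chance agreement on the majority label.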
Reviewer map-coding agreement
The 20% check of coded studies by a second reviewer showed good overall agreement between reviewers in how they had applied the keywords. Generally, only minor changes were made to coding studies after discussion between the reviewers. The main issue identified by the check was that the ‘recruitment reach’ outcome keyword had not been applied consistently by all the reviewers. As a result of this, we created a new, clearer definition for this outcome (Appendix 4; see Additional file 7) and updated all the studies in the map to ensure that where studies reported ‘reach’ this was coded consistently.
Systematic map results
The full systematic map database is provided in Appendix 5 (see Additional file 8).
Characteristics of the studies
The 105 identified studies of digital recruitment and retention approaches have nearly all been conducted in single countries (100/105; 95%), with five multi-national studies identified (5%). The 100 single-country studies were conducted most often in the USA (61%), UK (17%), Australia (9%), Germany (4%) and Canada (3%), with a slight increase in the diversity of countries involved over time (Fig. 2).
The health areas studied most often were health promotion and public health (37/105; 35%), cancers (17/105; 16%), circulatory system disorders (13/105; 12%), mental health (10/105; 10%) and endocrine and metabolic disorders (7/105; 7%) (Fig. 3). Note that the total number of studies in Fig. 3 is more than 105, since several studies covered more than one health condition.
Studies on health promotion and public health and on circulatory system disorders have recently increased, but those on cancers, mental health and endocrine and metabolic disorders have remained relatively infrequent (Fig. 4).
The systematic map (Appendix 5; see Additional file 8) shows that, among the 37 studies on health promotion and public health, the most commonly investigated specific topics were smoking cessation or tobacco control (11 studies) and sexual health promotion (seven studies).
Most of the studies (96/105; 91%) have investigated digital approaches for recruitment, with fewer (20/105; 19%) investigating digital approaches for retention, and 11/105 (10%) investigating both recruitment and retention (85/105 studies (81%) were on recruitment only and 9/105 studies (9%) were on retention only). The greater frequency of studies on recruitment than retention has been consistent across the different health topics investigated (Fig. 3). The number of studies published each year has approximately doubled during the past decade; this primarily reflects an increase in studies of recruitment, with only a slight increase evident in the number of studies that assessed retention (Fig. 5). The study designs have been primarily observational and/or retrospective (90/105; 86%), with 11 randomised experiments (10%) and five non-randomised experimental studies (5%).
Among the 11 studies that evaluated digital tools for both recruitment and retention, five (45%) used different digital tools for recruitment and for retention [79, 80, 97, 104, 162]. Four studies employed at least one digital tool that was common to both recruitment and retention, i.e. email [73, 106, 126, 139] and Facebook, but in each case the tool was tailored to function differently for recruitment (it included adverts) and retention (it included reminders or facilitated engagement). The remaining two studies assessed the effect of their digital recruitment approach on retention, effectively assuming that the recruitment tools also acted as retention tools [99, 150].
Characteristics of the digital approaches
The systematic map (Appendix 5; see Additional file 8) shows that the digital interventions for recruitment and/or retention tested in the studies were a single digital approach in 41% of the studies (43/105), digital approaches combined with non-digital approaches in 50% (53/105), and multiple combined digital approaches in 10% (10/105) (totals exceed 100% as one study included different approaches for recruitment and retention).
Nearly half of the studies (46/105; 44%) did not explicitly state that they included a comparator, but they presented outcomes in such a way that a comparator was discernible (e.g. outcomes were reported separately for different tools). Sixty-four studies defined their comparator(s) (the total number of studies exceeds 105 as some studies included more than one comparison). Of these, 13% (8/64) employed a comparator that was a single digital approach, whilst 48% (31/64) employed a comparator that was a single non-digital approach. In 16% of the studies (10/64) the comparator comprised a mixture of digital and non-digital approaches, whilst in a further 16% it was a mixture of different non-digital approaches. Only one study (1%) employed a combination of multiple different digital approaches as the comparator. The remaining 4/64 studies (6%) gave unclear information and were classified as “other” in the map.
Recruitment approaches and tools
An overview of the digital approaches and tools evaluated for recruitment is shown in Fig. 6. Several studies reported more than one approach, so the number of approaches (n = 110) exceeds the number of recruitment studies (n = 96).
The most commonly evaluated digital recruitment approaches aimed to raise awareness of trials among potential trial participants (68/110; 62%) and to help investigators and clinicians to identify eligible participants for their trials (28/110; 25%). A further 7% of the recruitment approaches (8/110) aimed to obtain informed consent, while 2% (2/110) provided search aids for people to identify specific trials that they could join.
The specific digital recruitment tools evaluated in the 96 studies that assessed recruitment are summarised in Table 1. The most frequently investigated tools in the last decade have been Internet sites (51/96; 53% of studies), social media (40/96; 42%), television or radio (30/96; 31%), email (30/96; 31%), and automated approaches for identifying potential participants (22/96; 23%). Note that some studies employed more than one recruitment approach, so the sum of these percentages exceeds 100.
The map shows that there has been a recent increase in the number of studies evaluating Internet sites and social media for recruitment (Fig. 7).
Overall, the specific recruitment approaches employed, and their combinations, have been diverse, meaning that each study has effectively assessed a unique recruitment approach. These have included, for example: online advertisements using banners [79, 80, 106] or Google Adwords [81, 105]; advertisements in cinemas; Internet press releases; electronic newsletters; podcasts and webinars; and online community notice boards. Where the recruitment approach included both digital and non-digital tools, the most frequent non-digital components were flyers (32/96 studies; 33%) or mail outs (27/96 studies; 28%).
Retention approaches and tools
An overview of the digital approaches and tools evaluated for retention is shown in Fig. 8.
Several studies reported more than one approach, so the number of approaches (n = 26) exceeds the number of retention studies (n = 20). Most of the digital retention approaches (17/26; 65%) provided prompts or reminders for trial participants to attend study appointments or to complete outcome assessments. The remaining approaches mainly involved the digital collection of new or existing data to improve data completeness (5/26; 19%) or communication approaches to maintain participants’ engagement with the trial (2/26; 8%).
The specific digital retention tools evaluated are summarised in Table 2.
Among the 20 studies that investigated retention, the most frequently investigated digital retention tools were email (14/20 studies; 70%) and/or instant-messaging or text-messaging (10 studies; 50%). Research on retention is sparser than that on recruitment, with no clear trends over time (Fig. 9). Although social media has been widely used for participant recruitment, only two of the studies identified in the map investigated its potential for improving retention. No studies investigated whether chatbot or video-based approaches could assist the retention of participants in trials.
Types of recruitment and retention outcomes studied
The types of recruitment and retention outcomes assessed in the studies are summarised in Fig. 10.
The most frequently reported recruitment outcome in the map (Appendix 5; see Additional file 8) was the recruitment rate, reported in 79 of the 96 recruitment studies (82%). Seventeen of the recruitment studies (18%) reported the quantitative accuracy of recruitment compared against a reference standard, and the time to complete one or more parts of the recruitment process was also reported in 18%. The costs of the digital approaches were reported in 30 of the 105 studies in the map (29%), but the focus was on recruitment, with only three of these studies mentioning any costs in relation to retention. Twenty-eight of the 105 studies in the map (27%) reported the retention rate, but eight of these had investigated digital tools for recruitment rather than for retention. Few studies assessed satisfaction with the digital tools among study personnel (5/105 studies; 5%) or study participants (2/105 studies; 2%), or people’s attitudes towards using the tools (also 2%).
In addition to the pre-specified keywords, the map coding (Appendix 4; see Additional file 7) allowed us to record ‘other’ outcomes if they appeared relevant. We recorded other outcomes for 21 studies. The textual descriptions for these outcomes in the map spreadsheet (Appendix 5; see Additional file 8) indicate that five studies assessed the descriptive characteristics of recruited people, three studies assessed follow-up questionnaire response times, and two studies reported outcomes relating to study workloads (Fig. 10). The remaining 11 studies reported other outcomes that were unique (i.e. specific to each study).
Application of digital tools in specific populations
Sixteen studies (15% of all those included in the map) evaluated the use of digital tools for trial recruitment or retention with a focus on individuals from minority or under-served populations (as defined by the study authors). Minority or under-served populations included in studies of recruitment were: Black, African-American and Hispanic people [77, 89, 110, 113, 135, 146, 149, 154]; Maori or Pacific Island populations; Turkish migrants with depressive symptoms; men who have sex with men; transgender women; other people at risk of HIV infection; people with low computer and health literacy; Gulf War veterans; infertile couples; and low-income gay, lesbian and bisexual smokers. One study focussed specifically on the retention of a minority/under-served population: low-income, vulnerable smokers with mental illness in psychiatric units.
Overall, the most commonly evaluated digital tools in studies on minority or under-served populations were Internet sites for recruitment (reported in 13/16 studies; 81%), followed by television or radio, social media, and email for recruitment (each reported in 9/16 studies; 56%) (the percentages do not sum to 100 as some studies evaluated more than one tool).
The health-topic area with the largest number of studies that included a focus on minority or under-served populations was health promotion and public health (8/37 studies; 22%) (Table 3). For most health-topic areas the total number of studies was small, but it is notable that only 1 of 17 studies on cancers, and only 2 of 13 studies on circulatory system disorders included a focus on minority or under-served populations.
We did not code participant age in the map keywords. However, inspection of the study publications after the map was completed shows that none of the studies on digital tools for retention included populations of older people or children. In contrast, four of the studies of digital recruitment tools were specifically on populations of older people [125, 143, 152, 155] (although the definition of ‘older’ varied, e.g. range 50 to 75 years, mean 69 years, median 70 years or range 70 to 93.9 years), whilst three studies were on children [34, 130, 141].
The work that we have reported here is the first systematic map to describe the characteristics of studies that have evaluated digital tools for improving participant recruitment and retention in clinical trials. We employed systematic mapping because our aim was to describe the scope, nature and content of the empirical research on digital tools for trial recruitment and retention, rather than to answer a specific policy question. The systematic map database, which is provided as supporting material alongside this article (Appendix 5; see Additional file 8), may be examined in detail by anyone interested in exploring the characteristics of the studies and digital tools further, and may be updated, or expanded in scope, to suit emerging research needs.
Links between recruitment, retention and other digital aspects of trial management
Tools that optimise both recruitment and retention would be desirable, since failing to retain an adequate number of participants in a trial would negate the benefits of good recruitment. In a survey of trial investigators and UK CTUs, strategies to improve retention in trials were identified as a key research priority, whilst the PRioRiTy studies have identified the use of technology in the trial recruitment and retention processes as being within the top 10 research priorities for the future. However, the map shows that most of the studies (81%) evaluated digital tools only for recruitment. Where studies did use digital tools for both recruitment and retention, the tools used for each purpose were different [73, 79, 80, 97, 104, 106, 126, 139, 162]. This is not surprising, since the recruitment approaches primarily aimed to raise awareness, whilst the retention approaches primarily provided reminders or collected data (see Figs. 6 and 8).
We did not specifically seek studies of digital approaches for components of trial management beyond recruitment and retention, so our map keywords do not cover these aspects. However, according to the study publications, several of the studies in the map that evaluated digital tools for both recruitment and retention also included digital methods for other purposes: online patient verification [79, 80]; online randomisation [73, 104]; allocation concealment; automated data collection [73, 104]; information dissemination from investigators to participants; email and website options for participants to contact investigators; data monitoring and tracing; and data-quality checking. A key question is whether the efficiency of clinical trials could be improved by developing a small set of compatible digital tools covering all of these aspects of trial management, rather than requiring trial investigators to select separate digital tools for recruitment, retention and other tasks. Further research would help to establish whether (and, if so, how) the different digital tools interact with one another, so that trial investigators can select the most efficient combinations. However, as many permutations and combinations of tools could be employed across different populations and health conditions, it may be necessary to prioritise for further research a core set of tools that show the most promise in improving recruitment and retention efficiency.
Use of digital tools in relation to the population and health topic
The appropriateness and efficiency of digital tools for recruitment and retention are likely to vary across populations and health conditions. The 17 studies on cancers in the map focussed solely on recruitment, with none having investigated digital tools for retention (Fig. 3). A possible explanation is that retention tools are not required in studies where the primary outcome is survival, or that for other outcomes cancer patients are in general highly motivated, by their need for life-prolonging therapy, to remain in clinical trials as long as possible, reducing the need for retention tools. A proportion of cancer patients inevitably drop out of clinical trials because adverse effects of therapy, or worsening of their condition, render them ineligible to continue the same therapy, and it may not be possible to improve rates of retention in this group. We also noted a relative lack of retention studies for RCTs of circulatory system disorders (Fig. 3). Without an understanding of the underlying mechanisms, it is unclear whether the lack of studies of digital retention tools in these health conditions represents a major evidence gap, or whether retention tools are simply less useful for certain health conditions.
The map shows that the impact of digital tools for recruitment or retention has mainly been investigated for trial participants who were either patients with a specified health condition or people eligible for a health promotion intervention. Only one study specifically evaluated digital recruitment of a health-professional group: general practitioners recruited to an online trial to develop and evaluate theory-based interventions to improve antibiotic prescribing. Among the minority or under-served populations (as defined by the study authors), there has been a particular focus on African-American and Hispanic people (eight of the 16 studies on minority or under-served populations), suggesting that these populations are considered especially challenging to recruit into clinical trials. As with the majority of studies in the map, the research on minority and under-served populations has mainly focussed on recruitment rather than retention. Nevertheless, the authors of one included study commented that they had some difficulty retaining African-American people in their clinical trial of an HIV-prevention intervention in young people.
It is unclear why there is an imbalance in the population age distribution in the studies included in the map, i.e. children and older people were represented in some studies of recruitment but not in any of the studies of retention. The age of recruited participants differed between digital and non-digital recruitment methods in a number of the studies (e.g. [78, 83, 103, 107, 129, 140, 152]), indicating that the age of the target population should be taken into consideration when selecting recruitment tools. National statistics show that the proportion of adults who use the Internet (and also social media [170, 171]) is lowest in the over-65 years’ group [171,172,173], although the proportion of older people using the Internet has risen steadily in recent years [172, 173], suggesting that digital tools currently out of reach of older people may become increasingly accessible to them.
Combinations of digital and non-digital tools
The studies included in the map evaluated single digital tools, combinations of multiple digital tools or combinations of digital and non-digital tools, with comparators that could be a single digital tool, a single non-digital tool or a combination of multiple digital and/or non-digital tools. The relevance of these digital tool comparisons to clinical trial investigators would depend on whether the investigators’ aim is to supplement existing non-digital recruitment or retention strategies with digital methods; or whether the aim is to employ only digital methods (e.g. if digital tools are considered to be more appropriate or more efficient than standard non-digital recruitment or retention tools, and hence should replace them). Although not specifically captured by our map keywords, we observed that whilst some studies in the map deployed digital tools from the outset of the host trial, others added digital tools after the trial had started, as a response to inadequate recruitment using existing methods.
The problem of how to increase the rates and accuracy of participant recruitment and retention in clinical trials appears well suited to prospective experimental research, but our systematic map shows that the majority of the primary evaluation studies were observational in design. A recent paper providing guidance to researchers on the use of social media for recruiting trial participants noted that most of the studies showing benefits of social media were observational; observational studies are, however, at increased risk of bias. Direct proof of the effectiveness of digital tools may, therefore, be difficult to establish conclusively unless further experimental studies are conducted. The lack of experimental studies might reflect the fact that the study within a trial (SWAT) is a relatively new concept, and few organisations yet provide funding for SWATs (although this is improving).
The map shows there were many cases where ‘bundles’ of digital and/or non-digital tools were employed for recruitment and/or retention, but the outcomes reported for the digital and non-digital components were often not separable. For example, 12 of the studies in the map employed monetary incentives (e.g. gift vouchers) in addition to their digital tools to encourage participants to join and/or remain in clinical trials. Some studies provided similar monetary incentives to participants in both their digital and non-digital tool study groups (e.g. [104, 106, 114, 126, 158, 162]), whilst other studies provided different monetary incentives for their digital and non-digital tool groups (e.g. [79, 80, 82]). Employing a ‘bundle’ of recruitment or retention tools could have practical relevance (e.g. monetary payment is an integral part of Amazon Mechanical Turk, which was employed as a recruitment tool in one study), but when considering recruitment and retention outcomes it is important to be aware that in some studies estimates of the effectiveness of digital tools could be confounded by the influence of other tools present, such as incentives. The authors of one of the studies included in the map acknowledged that they could not rule out the possibility that the observed rates of retention in their study were influenced by the ‘generous’ monetary incentives that they provided.
A key challenge when conducting research is to ensure that appropriate outcomes are measured, but the map suggests that a wide range of recruitment and retention outcomes is relevant and it might not be practical for an individual study to assess all of them. The outcomes that appear to have been considered most important by researchers (Fig. 10) are the rate, reach and costs of recruitment and the efficiency and accuracy of recruitment tools. Among these outcomes, accuracy, efficiency and costs of recruitment were reported in only 18%, 24% and 28% of the recruitment studies respectively, whilst hardly any studies assessed the tool users’ attitudes or satisfaction, despite these being important factors in determining whether a digital tool is likely to be used, and to work effectively, in practice. For instance, a tool that achieves a high recruitment reach would be of little value if it inaccurately matches people to the trial eligibility criteria, or is considered too burdensome or expensive to use routinely. Where possible, studies evaluating digital tools should, therefore, measure costs, accuracy and efficiency alongside rates of recruitment or retention, and should also capture key process measures, including the attitudes and satisfaction of the tools’ end-users.
Strengths and limitations of our study
Our study benefitted from comprehensive and systematic methods to identify and characterise the evidence for the effectiveness of digital tools for recruitment and retention in RCTs in health. Our inclusion criteria were necessarily broad, to chart the range of digital approaches that have been evaluated, something that has not been achieved by previous evidence syntheses in this area.
We consulted with relevant stakeholders, via our Advisory Board, to ensure that the scope of the project was as relevant as possible to contemporary practice in trial management. The Board advised us on issues such as the inclusion criteria, and the choice of keywords.
We are publishing our map in its entirety, to help other researchers in this area to understand which areas have been well-researched and where the gaps are in the evidence base. We have identified a number of clear recommendations for researchers that should help to improve the quality and relevance of research in this area and, thus, help to fill these important evidence gaps.
There are some potential limitations to this study that should be acknowledged. Whilst our intention was to map studies that investigated RCTs as their host trial, the host-trial design was not stated explicitly in 15 studies (15%); these were included in the map as the host trial appeared likely to be an RCT in the review team’s judgement. Clearer reporting of study designs would be helpful.
A challenge with developing the keywords was that it is difficult to find keywords that are mutually exclusive. For example, a smartphone App could access an Internet site or social media or advertisements (or none of these). A fine-grained systematic map database that splits out all the possible combinations (e.g. distinguishing between: smartphone App with Internet site; smartphone App with social media; smartphone App with advertisements) would be impractically large and would result in relatively small sample sizes per category. We feel that the granularity of the current map is appropriate since it provides useful information whilst maintaining a manageable breadth of keywords.
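The scale of this combinatorial problem is easy to quantify: if the map used, say, ten non-exclusive tool keywords (an illustrative figure, of the order of the number of digital tools in Table 1), a fully mutually exclusive scheme would need a category for every possible combination of tools, as the short calculation below shows:

```python
# Illustration of why fully mutually exclusive keywords are impractical:
# with k non-exclusive tool keywords, splitting out every possible
# combination of tools yields up to 2**k - 1 non-empty categories,
# most of which would contain very few (or zero) studies.
k_tools = 10  # illustrative number of digital tool keywords
n_exclusive_categories = 2 ** k_tools - 1
print(n_exclusive_categories)  # 1023 categories for only ~100 studies
```

This is why the map keeps a coarser, non-exclusive keyword granularity: each of the 105 studies would otherwise be spread across a category space an order of magnitude larger than the evidence base itself.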
We limited our searches to the English language, meaning that the scope of the systematic map is specific to English-language studies. We believe it unlikely that the exclusion of non-English-language studies would make our map unrepresentative, as we are not aware of any reasons why English-language and non-English-language studies would differ systematically in the areas of digital tools research covered.
Due to our eligibility criteria requiring studies to have a comparator, some innovative or more recently developed digital approaches or tools for trial recruitment and/or retention that have not been evaluated against a comparator may not have been identified and included in our map. For example, point-of-care clinical trials use electronic health record systems to carry out the research when a patient presents to, and is being cared for by, a health professional; such systems can potentially be used to enrol and randomise patients, and to collect outcome data. Studies by D’Avolio et al. and van Staa et al. are examples of point-of-care trials that were excluded from our systematic map because they had no comparator.
Our choice of keywords, agreed with our Advisory Board, aimed to enable us to produce a systematic map with a useful level of detail within a specific timescale. A limitation of our map keywords is that they do not go into detail about the methodology of the host trials and primary evaluation studies. For instance, we did not record systematically whether the studies provided a rationale for why a digital approach was appropriate for their particular setting, although we noticed that the reporting of this appeared to be quite variable. We did not assess whether the trials and studies followed the reporting standards for interventional studies such as the Template for Intervention Description and Replication (TIDieR) Checklist, or the Guidelines for reporting embedded recruitment trials, or whether the studies strictly meet the definition of a SWAT (many of the studies included in the map were conducted before these guidelines were published). However, further keywords could be added to the map database to capture other information on aspects of study rigour, if required. Further keywords could also be added to address specific research questions of interest (for example, to ascertain the extent to which patients and the public were involved in the conception, design and application of digital tools; or to explore how the selection of digital tools is influenced by host-trial endpoints).
A wide range of digital tools has been evaluated to improve the recruitment and retention of participants in clinical trials. However, few experimental studies, especially randomised controlled studies, have been conducted on digital tools for recruitment or retention, which limits the availability of robust evidence on the effectiveness of these tools. Further experimental studies are needed to ensure that estimates of the effectiveness and efficiency of digital tools are reliable and unbiased.
Our systematic map highlights a number of knowledge gaps where further research would be helpful. These include a lack of research on retention, on certain populations such as children and older people, and on process outcomes (i.e. facilitators and barriers), including the attitudes and satisfaction of the digital tool users. Where possible, studies on digital tools should include process indicators (e.g. measures of costs and acceptability to users) alongside effectiveness and efficiency outcomes, to help understand why digital tools may or may not be effective in particular populations and settings.
Given the complexity of the digital tools’ comparisons identified in the systematic map (bundles of digital and/or non-digital tools were often compared against other combinations of one or more digital and/or non-digital tools), further research would be helpful to clarify which tools work best individually and which work best in combination, for particular populations and health conditions. A question arising from our systematic map is whether a core set of digital tools could be identified that: optimises both recruitment and retention; has utility across a range of health conditions; and is compatible with other tools that are used for the general management and conduct of RCTs (such as for online participant verification, online randomisation, communication and information dissemination between investigators and participants, and data monitoring, checking and tracing).
Since we did not carry out a synthesis of the results of the studies, we are unable to recommend any specific tools for immediate application for trial recruitment or retention. However, the map provides a resource that trial investigators may find useful when considering which tools are available, which tools have been tested in certain populations, and some of the potential limitations of the tools and their comparisons that may need to be considered. Stakeholders may also find the map helpful when prioritising which populations, health topics, types of digital tool, and outcomes to focus research on, given that it is unlikely to be possible to conduct studies covering all populations and health conditions in detail. The map is readily updatable and may be extended in scope as necessary to suit the research needs of trial investigators and CTUs. With further development the map could also support guidance (e.g. a checklist) to assist trial investigators and CTUs in selecting digital tools appropriate for their research question.
Availability of data and materials
All data generated and analysed during this study, including the full systematic map database, are included in this published article and its supplementary information files.
Abbreviations
CTU(s): Clinical trials unit(s)
HTA: Health Technology Assessment
NHS: National Health Service
NIHR: National Institute for Health Research
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT(s): Randomised controlled trial(s)
SWAT: Study within a trial
UKCRC: UK Clinical Research Collaboration
McDonald AM, Knight RC, Campbell MK, Entwistle VA, Grant AM, Cook JA, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7(9):1–8.
Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials. Cochrane Database Syst Rev. 2018;2:MR000013.
Fletcher B, Gheorghe A, Moore D, Wilson S, Damery S. Improving the recruitment activity of clinicians in randomized controlled trials: a systematic review. BMJ Open. 2012;2:e000496.
Campbell MK, Snowdon C, Francis D, Elbourne D, McDonald AM, Knight R, et al. Recruitment to randomised trials: strategies for trial enrolment and participation study. The STEPS study. Health Technol Assess. 2007;11(48):1–121.
Raftery J, Young A, Stanton L, Milne R, Cook A, Turner D, et al. Clinical trial metadata: defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the NIHR Health Technology Assessment Programme. Health Technol Assess. 2015;19(11):1–138.
Sully BGO, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials. 2013;14:166.
Walters SJ, Henriques-Cadby IBDA, Bortolami O, Flight L, Hind D, Jacques RM, et al. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme. BMJ Open. 2017;7:e015276.
Watson JM, Torgerson DJ. Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6(34):1–9.
Bairu M, Weiner M. Global clinical trials for Alzheimer’s disease. Cambridge: Academic Press; 2013. p. 432.
Lopienski K. Retention in clinical trials—keeping patients on protocols: forte research systems; 2015. Available from: https://forteresearch.com/news/infographic/infographic-retention-in-clinical-trials-keeping-patients-on-protocols/.
Frampton GK, Shepherd J. Ambiguity of study population analysis and reporting in asthma clinical trials. Z Evid Fortbild Qual Gesundhwes. 2008;102(Suppl 6):76–7.
Stuardi T, Cox H, Torgerson DJ. Database recruitment: a solution to poor recruitment in randomized trials? Fam Pract. 2011;28:329–33.
Carlisle B, Kimmelman K, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83.
Griesel D. Clinical trial recruitment in the digital era: some smart ideas. 2015. Available from: http://www.appliedclinicaltrialsonline.com/clinical-trial-recruitment-digital-era-some-smart-ideas.
Weng C, Batres C, Borda T, Weiskopf NG, Wilcox AB, Bigger JT, et al. A real-time screening alert improves patient recruitment efficiency. AMIA Annu Symp Proc. 2011;2011:1489–98.
Akl EA, Briel M, You JJ, Sun X, Johnston BC, Busse JW, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ. 2012;344:e2809.
Salman RA-S, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J, et al. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383(9912):156–65.
Kearney A, Daykin A, Shaw ARG, Lane AJ, Blazeby JM, Clarke M, et al. Identifying research priorities for effective retention strategies in clinical trials. Trials. 2017;18(406):1–12.
Tudur Smith C, Hickey H, Clarke M, Blazeby J, Williamson P. The trials methodological research agenda: results from a priority setting exercise. Trials. 2014;15(1):32.
Köpcke F, Prokosch H-U. Employing computers for the recruitment into clinical trials: a comprehensive systematic review. J Med Internet Res. 2014;16(7):e161.
Leonard A, Hutchesson M, Patterson A, Chalmers K, Collins C. Recruitment and retention of young women into nutrition research studies: practical considerations. Trials. 2014;15(23):1–7.
Afolabi MO, Bojang K, D’Alessandro U, Imoukhuede EB, Ravinetto RM, Larson HJ, et al. Multimedia informed consent tool for a low literacy African research population: development and pilot-testing. J Clin Res Bioeth. 2014;5(3):1–8.
Nguyen TT, Jayadeva V, Cizza G, Brown RJ, Nandagopal R, Rodriguez LM, et al. Challenging recruitment of youth with type 2 diabetes into clinical trials. J Adolesc Health. 2014;54:247–54.
Scholle SH, Peele PB, Kelleher KJ, Frank E, Jansen-McWilliams L, Kupfer D. Effect of different recruitment sources on the composition of a bipolar disorder case registry. Soc Psychiatry Psychiatr Epidemiol. 2000;35:220–7.
Gupta A, Calfas KJ, Marshall SJ, Robinson TN, Rock CL, Huang JS, et al. Clinical trial management of participant recruitment, enrollment, engagement, and retention in the SMART study using a Marketing and Information Technology (MARKIT) model. Contemp Clin Trials. 2015;42:185–95.
Etter J-F, Perneger TV. A comparison of cigarette smokers recruited through the Internet or by mail. Int J Epidemiol. 2001;30:521–5.
Yuan P, Bare MG, Johnson MO, Saberi P. Using online social media for recruitment of human immunodeficiency virus-positive participants: a cross-sectional survey. J Med Internet Res. 2014;16(5):e117.
Frandsen M, Walters J, Ferguson SG. Exploring the viability of using online social media advertising as a recruitment method for smoking cessation clinical trials. Nicotine Tob Res. 2014;16(2):247–51.
Bower P, Brueton V, Gamble C, Treweek S, Smith CT, Young B, et al. Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities. Trials. 2014;15(399):1–9.
Clinithink Limited. A paradigm shift in patient recruitment for clinical trials. White paper. 2017.
Köpcke F, Kraus S, Scholler A, Nau C, Schüttler J, Prokosch H-U, et al. Secondary use of routinely collected patient data in a clinical trial: an evaluation of the effects on patient recruitment and acquisition. Int J Med Inform. 2013;82:185–92.
Köpcke F, Lubgan D, Fietkau R, Scholler A, Nau C, Stürzl M, et al. Evaluating predictive modeling algorithms to assess patient eligibility for clinical trials from routine data. BMC Med Inform Decis Mak. 2013;13(134):1–9.
Miotto R, Weng C. Case-based reasoning using electronic health records efficiently identifies eligible patients for clinical trials. J Am Med Inform Assoc. 2015;22:e141–50.
Ni Y, Kennebeck S, Dexheimer JW, McAneney CM, Tang H, Lingren T, et al. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department. J Am Med Inform Assoc. 2015;22:166–78.
Penberthy L, Brown R, Puma F, Dahman B. Automated matching software for clinical trials eligibility: measuring efficiency and flexibility. Contemp Clin Trials. 2010;31:207–17.
Schmickl CN, Li M, Li G, Wetzstein MM, Herasevich V, Gajic O, et al. The accuracy and efficiency of electronic screening for recruitment into a clinical trial on COPD. Respir Med. 2011;105(10):1501–6.
Shivade C, Hebert C, Regan K, Fosler-Lussier E, Lai AM. Automatic data source identification for clinical trial eligibility criteria resolution. AMIA Annu Symp Proc. 2016;2016:1149–58.
Gorman JR, Roberts SC, Dominick SA, Malcarne VL, Dietz AC, Su HI. A diversified recruitment approach incorporating social media leads to research participation among young adult-aged female cancer survivors. J Adolesc Young Adult Oncol. 2014;3(2):59–65.
Alshaikh F, Ramzan F, Rawaf S, Majeed A. Social network sites as a mode to collect health data: a systematic review. J Med Internet Res. 2014;16(7):e171.
Amon KL, Campbell AJ, Hawke C, Steinbeck K. Facebook as a recruitment tool for adolescent health research: a systematic review. Acad Pediatr. 2014;14(5):439–447.e434.
Boland J, Currow DC, Wilcock A, Tieman J, Hussain JA, Pitsillides C, et al. A systematic review of strategies used to increase recruitment of people with cancer or organ failure into clinical trials: implications for palliative care research. J Pain Symptom Manag. 2015;49(4):762–72.
Bonevski B, Randell M, Paul C, Chapman K, Twyman L, Bryant J, et al. Reaching the hard-to-reach: a systematic review of strategies for improving health and medical research with socially disadvantaged groups. BMC Med Res Methodol. 2014;14:42.
Brueton VC, Tierney J, Stenning S, Harding S, Meredith S, Nazareth I, et al. Strategies to improve retention in randomised trials. Cochrane Database Syst Rev. 2013;12:MR000032.
Caldwell PHY, Hamilton S, Tan A, Craig JC. Strategies for increasing recruitment to randomised controlled trials: systematic review. PLoS Med. 2010;7(11):e1000368.
Cuggia M, Besana P, Glasspool D. Comparing semi-automatic systems for recruitment of patients to clinical trials. Int J Med Inform. 2011;80:371–88.
Foster CE, Brennan G, Matthews A, McAdam C, Fitzsimons C, Mutrie N. Recruiting participants to walking intervention studies: a systematic review. Int J Behav Nutr Phys Act. 2011;8:137.
Lam E, Partridge SR, Allman-Farinelli M. Strategies for successful recruitment of young adults to healthy lifestyle programmes for the prevention of weight gain: a systematic review. Obes Rev. 2016;17(2):178–200.
Marcano Belisario JS, Bruggeling MN, Gunn LH, Brusamento S, Car J. Interventions for recruiting smokers into cessation programmes. Cochrane Database Syst Rev. 2012;12:CD009187.
Park BK, Calamaro C. A systematic review of social networking sites: innovative platforms for health research targeting adolescents and young adults. J Nurs Scholarsh. 2013;45(3):256–64.
Rosenbaum DL, Piers AD, Schumacher LM, Kase CA, Butryn ML. Racial and ethnic minority enrollment in randomized clinical trials of behavioural weight loss utilizing technology: a systematic review. Obes Rev. 2017;18(7):808–17.
Thornton L, Batterham PJ, Fassnacht DB, Kay-Lambkin F, Calear AL, Hunt S. Recruiting for health, medical or psychosocial research using Facebook: systematic review. Internet Interv. 2016;4(1):72–81.
Topolovec-Vranic J, Natarajan K. The use of social media in recruitment for medical research studies: a scoping review. J Med Internet Res. 2016;18(11):e286.
Whitaker C, Stevelink S, Fear N. The use of Facebook in recruiting participants for health research purposes: a systematic review. J Med Internet Res. 2017;19(8):e290.
Lane TS, Armin J, Gordon JS. Online recruitment methods for web-based and mobile health studies: a review of the literature. J Med Internet Res. 2015;17(7):e183.
James KL, Randall NP, Haddaway NR. A methodology for systematic mapping in environmental sciences. Environ Evid. 2016;5:7.
Althuis MD, Weed DL. Evidence mapping: methodologic foundations and application to intervention and observational research on sugar-sweetened beverages and health outcomes. Am J Clin Nutr. 2013;98(3):755–68.
Frampton GK, Harris P, Cooper K, Cooper T, Cleland J, Jones J, et al. Educational interventions for preventing vascular catheter bloodstream infections in critical care: evidence map, systematic review and economic evaluation. Health Technol Assess. 2014;18(15):1–365.
Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):1–21.
Schucan Bird K, Newman M, Hargreaves K, Sawtell M. Workplace-based learning for undergraduate and pre-registration healthcare professionals: a systematic map of the UK research literature 2003-2013. London; 2015.
Shepherd J, Frampton GK, Pickett K, Wyatt JC. Peer review of health research funding proposals: a systematic map and systematic review of innovations for effectiveness and efficiency. PLoS ONE. 2018;13(5):e0196914.
Shepherd J, Kavanagh J, Picot J, Cooper K, Harden A, Barnett-Page E, et al. The effectiveness and cost-effectiveness of behavioural interventions for the prevention of sexually transmitted infections in young people aged 13-19: a systematic review and economic evaluation. Health Technol Assess. 2010;14(7):1–206.
Wang DD, Shams-White M, Bright OJM, Parrott JS, Chung M. Creating a literature database of low-calorie sweeteners and health studies: evidence mapping. BMC Med Res Methodol. 2016;16(1):1–11.
Blatch-Jones AJ, Nuttall J, Bull A, Worswick L, Mullee M, Peveler R, et al. Using digital tools in the recruitment and retention in randomised controlled trials: Survey of UK Clinical Trial Units and a qualitative study. Trials. 2020;21:1–11.
Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement. BMJ. 2009;339:b2535.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA Statement for reporting reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial forge guidance 1: what is a study within a trial (SWAT)? Trials. 2018;19(139):1–5.
Arch JJ, Carr AL. Using Mechanical Turk for research on cancer survivors. Psychooncology. 2017;26(10):1593–603.
van Oosterhout WP, Weller CM, Stam AH, Bakels F, Stijnen T, Ferrari MD, et al. Validation of the web-based LUMINA questionnaire for recruiting large cohorts of migraineurs. Cephalalgia. 2011;31(13):1359–67.
Wasilewski MB, Stinson JN, Webster F, Cameron JI. Using Twitter to recruit participants for health research: an example from a caregiving study. Health Informatics J. 2018. https://doi.org/10.1177/1460458218775158.
Adam LM, Manca DP, Bell RC. Can Facebook be used for research? Experiences using Facebook to recruit pregnant women for a randomized controlled trial. J Med Internet Res. 2016;18(9):e250.
Aguiar EJ, Morgan PJ, Collins CE, Plotnikoff RC, Young MD, Callister R. Process evaluation of the type 2 diabetes mellitus PULSE program randomized controlled trial: recruitment, engagement, and overall satisfaction. Am J Mens Health. 2017;11(4):1055–68.
Alheresh R, Allaire SH, Lavalley MP, Vaughan M, Emmetts R, Keysor JJ. ‘Work it’ recruitment: lessons learned from an arthritis work disability prevention randomized trial. Arthritis Rheum. 2013;65(Suppl 10):S1220.
Bailey JV, Pavlou M, Copas A, McCarthy O, Carswell K, Rait G, et al. The sexunzipped trial: optimizing the design of online randomized controlled trials. J Med Internet Res. 2013;15(12):e278.
Beauharnais CC, Larkin ME, Zai AH, Boykin EC, Luttrell J, Wexler DJ. Efficacy and cost-effectiveness of an automated screening algorithm in an inpatient clinical trial. Clin Trials. 2012;9(2):198–203.
Berk S, Greco BL, Biglan K, Kopil CM, Holloway RG, Meunier C, et al. Increasing efficiency of recruitment in early Parkinson’s disease trials: a case study examination of the STEADY-PD III Trial. J Parkinsons Dis. 2017;7(4):685–93.
Bickmore TW, Utami D, Matsuyama R, Paasche-Orlow MK. Improving access to online health information with conversational agents: a randomized controlled experiment. J Med Internet Res. 2016;18(1):e1.
Brodar KE, Hall MG, Butler EN, Parada H, Stein-Seroussi A, Hanley S, et al. Recruiting diverse smokers: enrollment yields and cost. Int J Environ Res Public Health. 2016;13(12):16.
Buckingham L, Becher J, Voytek CD, Fiore D, Dunbar D, Davis-Vogel A, et al. Going social: success in online recruitment of men who have sex with men for prevention HIV vaccine research. Vaccine. 2017;35(27):3498–505.
Bull S, Pratte K, Whitesell N, Rietmeijer C, McFarlane M. Effects of an Internet-based intervention for HIV prevention: the Youthnet trials. AIDS Behav. 2009;13(3):474–87.
Bull SS, Vallejos D, Levine D, Ortiz C. Improving recruitment and retention for an online randomized controlled trial: experience from the Youthnet study. AIDS Care. 2008;20(8):887–93.
Buller DB, Meenan R, Severson H, Halperin A, Edwards E, Magnusson B. Comparison of 4 recruiting strategies in a smoking cessation trial. Am J Health Behav. 2012;36(5):577–88.
Bunge E, Cook HM, Bond M, Williamson RE, Cano M, Barrera AZ, et al. Comparing Amazon Mechanical Turk with unpaid Internet resources in online clinical trials. Internet Interv. 2018;12:68–73.
Burrell ER, Pines HA, Robbie E, Coleman L, Murphy RD, Hess KL, et al. Use of the location-based social networking application GRINDR as a recruitment tool in rectal microbicide development research. AIDS Behav. 2012;16(7):1816–20.
Caperchione CM, Duncan MJ, Rosenkranz RR, Vandelanotte C, Van Itallie AK, Savage TN, et al. Recruitment, screening, and baseline participant characteristics in the WALK 2.0 study: a randomized controlled trial using web 2.0 applications to promote physical activity. Contemp Clin Trials Commun. 2016;2:25–33.
Carmi L, Zohar J. A comparison between print vs. Internet methods for a clinical trial recruitment—a pan European OCD study. Eur Neuropsychopharmacol. 2014;24(6):874–8.
Chin Feman SP, Nguyen LT, Quilty MT, Kerr CE, Nam BH, Conboy LA, et al. Effectiveness of recruitment in clinical trials: an analysis of methods used in a trial for irritable bowel syndrome patients. Contemp Clin Trials. 2008;29(2):241–51.
Coday M, Richey P, Thomas F, Tran QT, Terrell SB, Tylavsky F, et al. The recruitment experience of a randomized clinical trial to aid young adult smokers to stop smoking without weight gain with interactive technology. Contemp Clin Trials Commun. 2016;2:61–8.
Cornelius VR, McDermott L, Forster AS, Ashworth M, Wright AJ, Gulliford MC. Automated recruitment and randomisation for an efficient randomised controlled trial in primary care. Trials. 2018;19(1):341.
Coronado GD, Ondelacy S, Schwarz Y, Duggan C, Lampe JW, Neuhouser ML. Recruiting underrepresented groups into the Carbohydrates and Related Biomarkers (CARB) cancer prevention feeding study. Contemp Clin Trials. 2012;33(4):641–6.
Cuggia M, Campillo-Gimenez B, Bouzille G, Besana P, Jouini W, Dufour JC, et al. Automatic selection of clinical trials based on a semantic web approach. Stud Health Technol Inform. 2015;216:564–8.
Du W, Mood D, Gadgeel S, Simon MS. An educational video to increase clinical trials enrollment among breast cancer patients. Breast Cancer Res Treat. 2009;117(2):339–47.
Dugas M, Lange M, Berdel WE, Muller-Tidow C. Workflow to improve patient recruitment for clinical trials within hospital information systems - a case-study. Trials. 2008;9:2.
Dugas M, Lange M, Muller-Tidow C, Kirchhof P, Prokosch HU. Routine data from hospital information systems can support patient recruitment for clinical studies. Clin Trials. 2010;7(2):183–9.
Edwards L, Salisbury C, Horspool K, Foster A, Garner K, Montgomery AA. Increasing follow-up questionnaire response rates in a randomized controlled trial of telehealth for depression: three embedded controlled studies. Trials. 2016;17.
Effoe VS, Katula JA, Kirk JK, Pedley CF, Bollhalter LY, Brown WM, et al. The use of electronic medical records for recruitment in clinical trials: Findings from the Lifestyle Intervention for Treatment of Diabetes trial. Trials. 2016;17:496.
Erickson LC, Ritchie JB, Javors JM, Golomb BA. Recruiting a special sample with sparse resources: lessons from a study of Gulf War veterans. Clin Trials. 2013;10(3):473–82.
Ethier JF, Curcin V, McGilchrist MM, Choi Keung SNL, Zhao L, Andreasson A, et al. eSource for clinical trials: implementation and evaluation of a standards-based approach in a real world trial. Int J Med Inform. 2017;106:17–24.
Fazzino TL, Rose GL, Pollack SM, Helzer JE. Recruiting U.S. and Canadian college students via social media for participation in a web-based brief intervention study. J Stud Alcohol Drugs. 2015;76(1):127–32.
Frandsen M, Thow M, Ferguson SG. The effectiveness of social media (Facebook) compared with more traditional advertising methods for recruiting eligible participants to health research studies: a randomized, controlled clinical trial. JMIR Res Protoc. 2016;5(3):215–24.
Frawley H, Whitburn L, Daly JO, Galea M. E-recruitment: the future for clinical trials in a digital world? Neurourol Urodyn. 2011;30(6):811–2.
Free C, Hoile E, Robertson S, Knight R. Three controlled trials of interventions to increase recruitment to a randomized controlled trial of mobile phone based smoking cessation support. Clin Trials. 2010;7(3):265–73.
Funk KL, Elder CR, Lindberg NM, Gullion CM, DeBar LL, Meltesen G, et al. Comparison of characteristics and outcomes by initial study contact (website versus staff) for participants enrolled in a weight management study. Clin Trials. 2012;9(2):226–31.
Gioia CJ, Sobell LC, Sobell MB, Agrawal S. Craigslist versus print newspaper advertising for recruiting research participants for alcohol studies: cost and participant characteristics. Addict Behav. 2016;54:24–32.
Hamilton FL, Hornby J, Sheringham J, Linke S, Ashton C, Moore K, et al. DIAMOND (DIgital Alcohol Management ON Demand): a feasibility RCT and embedded process evaluation of a digital health intervention to reduce hazardous and harmful alcohol use recruiting in hospital emergency departments and online. Pilot Feasibility Stud. 2018;4:114.
Heffner JL, Wyszynski CM, Comstock B, Mercer LD, Bricker J. Overcoming recruitment challenges of web-based interventions for tobacco use: the case of web-based acceptance and commitment therapy for smoking cessation. Addict Behav. 2013;38(10):2473–6.
Horvath KJ, Nygaard K, Danilenko GP, Goknur S, Oakes JM, Rosser BR. Strategies to retain participants in a long-term HIV prevention randomized controlled trial: lessons from the MINTS-II study. AIDS Behav. 2012;16(2):469–79.
Iribarren SJ, Ghazzawi A, Sheinfil AZ, Frasca T, Brown W 3rd, Lopez-Rios J, et al. Mixed-method evaluation of social media-based tools and traditional strategies to recruit high-risk and hard-to-reach populations into an HIV prevention intervention study. AIDS Behav. 2018;22(1):347–57.
Johnson EJ, Niles BL, Mori DL. Targeted recruitment of adults with type 2 diabetes for a physical activity intervention. Diabetes Spectr. 2015;28(2):99–105.
Jones L, Saksvig BI, Grieser M, Young DR. Recruiting adolescent girls into a follow-up study: benefits of using a social networking website. Contemp Clin Trials. 2012;33(2):268–72.
Jones R, Lacroix LJ, Porcher E. Facebook advertising to recruit young, urban women into an HIV prevention clinical trial. AIDS Behav. 2017;21(11):3141–53.
Jonnalagadda SR, Adupa AK, Garg RP, Corona-Cox J, Shah SJ. Text mining of the electronic health record: an information extraction approach for automated identification and subphenotyping of HFpEF patients for clinical trials. J Cardiovasc Transl Res. 2017;10(3):313–21.
Juraschek SP, Plante TB, Charleston J, Miller ER, Yeh HC, Appel LJ, et al. Use of online recruitment strategies in a randomized trial of cancer survivors. Clin Trials. 2018;15(2):130–8.
Kennedy BM, Kumanyika S, Ard JD, Reams P, Johnson CA, Karanja N, et al. Overall and minority-focused recruitment strategies in the PREMIER multicenter trial of lifestyle interventions for blood pressure control. Contemp Clin Trials. 2010;31(1):49–54.
Kim R, Hickman N, Gali K, Orozco N, Prochaska JJ. Maximizing retention with high risk participants in a clinical trial. Am J Health Promot. 2014;28(4):268–74.
Klein JP, Gamon C, Spath C, Berger T, Meyer B, Hohagen F, et al. Does recruitment source moderate treatment effectiveness? A subgroup analysis from the EVIDENT study, a randomised controlled trial of an Internet intervention for depressive symptoms. BMJ Open. 2017;7(7):e015391.
Korde LA, Micheli A, Smith AW, Venzon D, Prindiville SA, Drinkard B, et al. Recruitment to a physical activity intervention study in women at increased risk of breast cancer. BMC Med Res Methodol. 2009;9:27.
Koziol-McLain J, McLean C, Rohan M, Sisk R, Dobbs T, Nada-Raja S, et al. Participant recruitment and engagement in automated eHealth trial registration: challenges and opportunities for recruiting women who experience violence. J Med Internet Res. 2016;18(10):e281.
Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, et al. Experience with direct-to-patient recruitment for enrollment into a clinical trial in a rare disease: a web-based study. J Med Internet Res. 2017;19(2):e50.
Krusche A, Rudolf von Rohr I, Muse K, Duggan D, Crane C, Williams JM. An evaluation of the effectiveness of recruitment methods: the staying well after depression randomized controlled trial. Clin Trials. 2014;11(2):141–9.
Kye SH, Tashkin DP, Roth MD, Adams B, Nie WX, Mao JT. Recruitment strategies for a lung cancer chemoprevention trial involving ex-smokers. Contemp Clin Trials. 2009;30(5):464–72.
Layi G, Albright CA, Berenberg J, Plant K, Ritter P, Laurent D, et al. UH Cancer Center Hotline: recruiting cancer survivors for an online health-behavior change intervention: are different strategies more beneficial? Hawaii Med J. 2011;70(10):222–3.
Lesher LL, Matyas RA, Sjaarda LA, Newman SL, Silver RM, Galai N, et al. Recruitment for longitudinal, randomised pregnancy trials initiated preconception: lessons from the effects of aspirin in gestation and reproduction trial. Paediatr Perinat Epidemiol. 2015;29(2):162–7.
Li L, Chase HS, Patel CO, Friedman C, Weng C. Comparing ICD9-encoded diagnoses and NLP-processed discharge summaries for clinical trials pre-screening: a case study. AMIA Annu Symp Proc. 2008;2008:404–8.
McGregor J, Brooks C, Chalasani P, Chukwuma J, Hutchings H, Lyons RA, et al. The Health Informatics Trial Enhancement Project (HITE): using routinely collected primary care data to identify potential participants for a depression trial. Trials. 2010;11:39.
Miller EG, Nowson CA, Dunstan DW, Kerr DA, Solah V, Menzies D, et al. Recruitment of older adults with type 2 diabetes into a community-based exercise and nutrition randomised controlled trial. Trials. 2016;17(1):467.
Mohan Y, Cornejo M, Sidell M, Smith J, Young DR. Re-recruiting young adult women into a second follow-up study. Contemp Clin Trials Commun. 2017;5:160–7.
Moreno MA, Waite A, Pumper M, Colburn T, Holm M, Mendoza J. Recruiting adolescent research participants: in-person compared to social media approaches. Cyberpsychol Behav Soc Netw. 2017;20(1):64–7.
Morgan AJ, Jorm AF, Mackinnon AJ. Internet-based recruitment to a depression prevention intervention: lessons from the Mood Memos study. J Med Internet Res. 2013;15(2):e31.
Nash EL, Gilroy D, Srikusalanukul W, Abhayaratna WP, Stanton T, Mitchell G, et al. Facebook advertising for participant recruitment into a blood pressure clinical trial. J Hypertens. 2017;35(12):2527–31.
Ni Y, Wright J, Perentesis J, Lingren T, Deleger L, Kaiser M, et al. Increasing the efficiency of trial-patient matching: automated clinical trial eligibility pre-screening for pediatric oncology patients. BMC Med Inform Decis Mak. 2015;15:28.
Partridge SR, Balestracci K, Wong AT, Hebden L, McGeechan K, Denney-Wilson E, et al. Effective strategies to recruit young adults into the TXT2BFiT mHealth randomized controlled trial for weight gain prevention. JMIR Res Protoc. 2015;4(2):e66.
Polak E, Apfel A, Privitera M, Buse D, Haut S. Daily diaries in epilepsy research: does electronic format improve adherence? Epilepsy Curr. 2014;1:180.
Pressler TR, Yen PY, Ding J, Liu J, Embi PJ, Payne PR. Computational challenges and human factors influencing the design and use of clinical research participant eligibility pre-screening tools. BMC Med Inform Decis Mak. 2012;12:47.
Rabin C, Horowitz S, Marcus B. Recruiting young adult cancer survivors for behavioral research. J Clin Psychol Med Settings. 2013;20(1):33–6.
Ramsey TM, Snyder JK, Lovato LC, Roumie CL, Glasser SP, Cosgrove NM, et al. Recruitment strategies and challenges in a large intervention trial: systolic blood pressure intervention trial. Clin Trials. 2016;13(3):319–30.
Raviotta JM, Nowalk MP, Lin CJ, Huang HH, Zimmerman RK. Using Facebook™ to recruit college-age men for a human papillomavirus vaccine trial. Am J Mens Health. 2016;10(2):110–9.
Rellis L, Haidari G, Ridgers H, Jones CB, Miller A, Shattock R, et al. How the use of social media and online platforms can enhance recruitment to HIV clinical trials. HIV Med. 2015;2:64–5.
Rollman BL, Fischer GS, Zhu F, Belnap BH. Comparison of electronic physician prompts versus waitroom case-finding on clinical trial enrollment. J Gen Intern Med. 2008;23(4):447–50.
Rorie DA, Flynn RWV, Mackenzie IS, MacDonald TM, Rogers A. The treatment in morning versus evening (TIME) study: analysis of recruitment, follow-up and retention rates post-recruitment. Trials. 2017;18:557.
Routledge FS, Davis TD, Dunbar SB. Recruitment strategies and costs associated with enrolling people with insomnia and high blood pressure into an online behavioral sleep intervention: a single-site pilot study. J Cardiovasc Nurs. 2017;32(5):439–47.
Ryan C, Dadabhoy H, Baranowski T. Participant outcomes from methods of recruitment for videogame research. Games Health J. 2018;7(1):16–23.
Sahoo SS, Tao S, Parchman A, Luo Z, Cui L, Mergler P, et al. Trial prospector: matching patients with cancer research studies using an automated and scalable approach. Cancer Informat. 2014;13:157–66.
Sanders KM, Stuart AL, Merriman EN, Read ML, Kotowicz MA, Young D, et al. Trials and tribulations of recruiting 2000 older women onto a clinical trial investigating falls and fractures: Vital D study. BMC Med Res Methodol. 2009;9(1):78.
Schroy PC III, Glick JT, Robinson P, Lydotes MA, Heeren TC, Prout M, et al. A cost-effectiveness analysis of subject recruitment strategies in the HIPAA era: results from a colorectal cancer screening adherence trial. Clin Trials. 2009;6(6):597–609.
Severi E, Free C, Knight R, Robertson S, Edwards P, Hoile E. Two controlled trials to increase participant retention in a randomized controlled trial of mobile phone-based smoking cessation support in the United Kingdom. Clin Trials. 2011;8(5):654–60.
Sharp LK, Fitzgibbon ML, Schiffer L. Recruitment of obese black women into a physical activity and nutrition intervention trial. J Phys Act Health. 2008;5(6):870–81.
Shere M, Zhao XY, Koren G. The role of social media in recruiting for clinical trials in pregnancy. PLoS ONE. 2014;9(3):e92744.
Spokoyny I, Lansberg M, Thiessen R, Kemp SM, Aksoy D, Lee Y, et al. Development of a mobile tool that semiautomatically screens patients for stroke clinical trials. Stroke. 2016;47(10):2652–5.
Staffileno BA, Zschunke J, Weber M, Gross LE, Fogg L, Tangney CC. The feasibility of using Facebook, Craigslist, and other online strategies to recruit young African American women for a web-based healthy lifestyle behavior change Intervention. J Cardiovasc Nurs. 2017;32(4):365–71.
Stanczyk NE, Bolman C, Smit ES, Candel MJ, Muris JW, de Vries H. How to encourage smokers to participate in web-based computer-tailored smoking cessation programs: a comparison of different recruitment strategies. Health Educ Res. 2014;29(1):23–40.
Stanton AL, Morra ME, Diefenbach MA, Miller SM, Perocchia RS, Raich PC, et al. Responding to a significant recruitment challenge within three nationwide psychoeducational trials for cancer patients. J Cancer Surviv. 2013;7:392–403.
Switzer JA, Hall CE, Close B, Nichols FT, Gross H, Bruno A, et al. A telestroke network enhances recruitment into acute stroke clinical trials. Stroke. 2010;41(3):566–9.
Sygna K, Johansen S, Ruland CM. Recruitment challenges in clinical research including cancer patients and their caregivers. A randomized controlled trial study and lessons learned. [Erratum appears in Trials. 2016;17(1):133; PMID: 26965306]. Trials. 2015;16:428.
Tate DF, LaRose JG, Griffin LP, Erickson KE, Robichaud EF, Perdue L, et al. Recruitment of young adults into a randomized controlled trial of weight gain prevention: message development, methods, and cost. Trials. 2014;15:326.
Taylor-Piliae RE, Boros D, Coull BM. Strategies to improve recruitment and retention of older stroke survivors to a randomized clinical exercise trial. J Stroke Cerebrovasc Dis. 2014;23(3):462–8.
Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. J Am Med Inform Assoc. 2009;16(6):869–73.
Treweek S, Barnett K, Maclennan G, Bonetti D, Eccles MP, Francis JJ, et al. E-mail invitations to general practitioners were as effective as postal invitations and were more efficient. J Clin Epidemiol. 2012;65(7):793–7.
Treweek S, Pearson E, Smith N, Neville R, Sargeant P, Boswell B, et al. Desktop software to identify patients eligible for recruitment into a clinical trial: using SARMA to recruit to the ROAD feasibility trial. Inform Prim Care. 2010;18(1):51–8.
Unlu Ince B, Cuijpers P, van’t Hof E, Riper H. Reaching and recruiting Turkish migrants for a clinical trial through Facebook: a process evaluation. Internet Interv. 2014;1(2):74–83.
Usadi RS, Diamond MP, Legro RS, Schlaff WD, Hansen KR, Casson P, et al. Recruitment strategies in two reproductive medicine network infertility trials. Contemp Clin Trials. 2015;45(Pt B):196–200.
Varner C, McLeod S, Nahiddi N, Borgundvaag B. Text messaging research participants as a follow-up strategy to decrease emergency department study attrition. Can J Emerg Med. 2018;20(1):148–53.
Volkova E, Michie J, Corrigan C, Sundborn G, Eyles H, Jiang Y, et al. Effectiveness of recruitment to a smartphone-delivered nutrition intervention in New Zealand: analysis of a randomised controlled trial. BMJ Open. 2017;7(6):e016198.
Weng C, Bigger JT, Busacca L, Wilcox A, Getaneh A. Comparing the effectiveness of a clinical registry and a clinical data warehouse for supporting clinical trial recruitment: a case study. AMIA Annu Symp Proc. 2010;2010:867–71.
Ashby R, Turner G, Cross B, Mitchell N, Torgerson D. A randomized trial of electronic reminders showed a reduction in the time to respond to postal questionnaires. J Clin Epidemiol. 2011;64:208–12.
Clark L, Ronaldson S, Dyson L, Hewitt C, Torgerson D, Adamson J. Electronic prompts significantly increase response rates to postal questionnaires: a randomized trial within a randomized trial and meta-analysis. J Clin Epidemiol. 2015;68:1446–50.
Man MS, Tilbrook HE, Jayakody S, Hewitt CE, Cox H, Cross B, et al. Electronic reminders did not improve postal questionnaire response rates or response times: a randomized controlled trial. J Clin Epidemiol. 2011;64:1001–4.
Starr K, McPherson G, Forrest M, Cotton SC. SMS text pre-notification and delivery of reminder e-mails to increase response rates to postal questionnaires in the SUSPEND trial: a factorial design, randomised controlled trial. Trials. 2015;16(295):1–8.
Kearney A, Harman NL, Bacon N, Daykin A, Heawood AJ, Lane A, et al. Online resource for recruitment research in clinical trials research (ORRCA). Trials. 2017;18(Suppl 1). Abstract presented at the 4th International Clinical Trials Methodology Conference (ICTMC) and the 38th Annual Meeting of the Society for Clinical Trials, United Kingdom.
PRIORITY Study. Prioritising recruitment and retention in randomised trials (PRioRiTy). Galway, Ireland: University of Galway; 2019. Available from: https://priorityresearch.ie/.
Pew Research Center. Social media use by age. 2018.
Sensis. Sensis social media report 2017. How Australian people and businesses are using social media. 2017.
Office for National Statistics. Internet users in the UK: 2016. 2016.
Pew Research Center. Internet/broadband fact sheet. 2017.
Arigo D, Pagoto S, Carter-Harris L, Lillie SE, Nebeker C. Using social media for health research: methodological and ethical considerations for recruitment and intervention delivery. Digit Health. 2018;4:1–15.
Parker A, Arundel C, Beard D, Bower P, Brocklehurst P, Coleman E, et al. PROMoting THE USE OF SWATs (PROMETHEUS): routinely embedding recruitment and retention interventions within randomised trials. Trials. 2019;20(Suppl 1):579.
D’Avolio L, Ferguson R, Goryachev S, Woods P, Sabin T, O’Neil J, et al. Implementation of the Department of Veterans Affairs’ first point-of-care clinical trial. J Am Med Inform Assoc. 2012;19:e170–6.
van Staa P, Goldacre B, Gulliford M, Cassell J, Pirmohamed M, Taweel A, et al. Pragmatic randomised trials using routine electronic health records: putting them to the test. BMJ. 2012;344:e55.
Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: Template for Intervention Description and Replication (TIDieR) Checklist and Guide. BMJ. 2014;348:g1687.
Madurasinghe VW, Eldridge S, Forbes G. Guidelines for reporting embedded recruitment trials. Trials. 2016;17(27):1–25.
Acknowledgements
The following people kindly informed the work reported in this paper:
• Karen Welch, Information Specialist (Southampton Health Technology Assessments Centre), for conducting the literature searches
• Other members of the full project team (Phases 1 and 2): Amanda Blatch-Jones (NETSCC, University of Southampton); Dr Jeremy Hinks (NETSCC, University of Southampton); Dr Athene Lane (Director, Bristol Randomised Trials Collaboration, University of Bristol); Jacqui Nuttall (Southampton Clinical Trials Unit, University of Southampton); Dr Louise Worswick (NETSCC, University of Southampton)
• Members of the project Advisory Board: Dr Andrew Cook (NETSCC and Southampton Clinical Trials Unit); Dr Stephen Falk (NIHR Clinical Research Network West of England); Helen George (Patient and Public Involvement representative – Wessex Public Engagement Network); Professor Mark Mullee (NIHR Research and Design Service South Central); Professor Robert Peveler (Chair, NIHR Wessex Clinical Research Network); Neil Tape (University Hospital Southampton NHS Foundation Trust); Dr Karen Underwood (University Hospital Southampton NHS Foundation Trust)
Funding
This study was funded by an NIHR grant to the Southampton Clinical Trials Unit for supporting efficient and innovative delivery of NIHR research. The funder had no role in the design, conduct or analysis of the research reported here. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
Ethics approval and consent to participate
Consent for publication
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1: Additional Table 1.
Example systematic reviews of digital approaches for recruitment or retention in clinical and health studies.
Additional file 2: Additional Table 2.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 Checklist.
Additional file 3: Additional Table 3.
List of studies excluded at full-text screening (N = 251) (reference list below table).
Additional file 4: Appendix 1.
Systematic map protocol.
Additional file 5: Appendix 2.
Additional file 6: Appendix 3.
Eligibility screening worksheet.
Additional file 7: Appendix 4.
List of map keywords with explanations and comments.
Additional file 8: Appendix 5.
Systematic map database (November 2019).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Frampton, G.K., Shepherd, J., Pickett, K. et al. Digital tools for the recruitment and retention of participants in randomised controlled trials: a systematic map. Trials 21, 478 (2020). https://doi.org/10.1186/s13063-020-04358-3
- Clinical trial management
- Clinical trial efficiency
- Recruitment strategies
- Retention strategies
- Participant identification and recruitment
- Online recruitment
- Participant retention
- Digital tools
- Electronic tools
- Systematic map