Trials need participants but not their feedback? A scoping review of published papers on the measurement of participant experience of taking part in clinical trials

Abstract

Background

Participant recruitment and retention are long-standing problems in clinical trials. Although there are a large number of factors impacting on recruitment and retention, some of the problems may reflect the fact that trial design and delivery is not sufficiently ‘patient-centred’ (i.e., sensitive to patient needs and preferences). Most trials collect process and outcome measures, but it is unclear whether patient experience of trial participation itself is routinely measured. We conducted a structured scoping review of studies reporting standardised assessment of patient experience of participation in a trial.

Methods

A structured search of Medline, PsycINFO, Embase and CINAHL (Cumulative Index to Nursing and Allied Health Literature) and hand searching of included studies were conducted in 2016. Additional sources included policy documents, relevant websites and experts. We extracted data on trial context (type, date and location) and measure type (number of items and mode of administration), patient experience domains measured, and the results reported. We conducted a narrative synthesis.

Results

We identified 22 journal articles reporting on 21 different structured measures of participant experience in trials. None of the studies used a formal definition of patient experience. Overall, patients reported relatively high levels of global satisfaction with the trial process as well as positive outcomes (such as the likelihood of future participation or recommendation of the trial to others).

Conclusions

Current published evidence is sparse. Standardised assessment of patient experience of trial participation may provide opportunities for researchers to enhance trial design and delivery. This could complement other methods of enhancing the patient-centredness of trials and might improve recruitment, retention, and long-term patient engagement with trials.

Background

Randomised controlled trials (RCTs) are often described as the ‘gold standard’ method for assessment of effectiveness because they offer the most rigorous way of determining cause–effect relationships [1]. Yet, despite the efforts expended to deliver trials, recruitment and retention still pose significant challenges. A review of 73 publicly funded multi-centre trials in the UK, delivered through the National Institute for Health Research (NIHR) Health Technology Assessment and Medical Research Council programmes, found that only 55% recruited 100% of their target sample size within their pre-agreed timescale and that nearly 45% received an extension of some kind [2]. There is little evidence of major improvement over time [3].

Success in recruitment and retention is often considered to reflect trial design and management. However, trials are dependent on the willingness of patients to give their time and effort and to agree to randomisation and follow-up. One potential way of increasing the willingness of patients to participate is to design and conduct trials that are aligned with the ‘wants, needs and preferences’ of patients (i.e., applying the concept of patient-centred care to trials) [4].

Patient and public involvement (PPI) in research is the process of involving patients and the public in shaping the design of research and has the aim of making research responsive to the needs of potential research participants [5, 6]. There is growing evidence concerning the optimal ways to achieve effective involvement of patients in research design and emerging evidence that such work is having demonstrable benefits [7,8,9,10]. However, the focus of PPI is on patient and public input to trial design and delivery. There is far less focus on assessing the output of such work in terms of the actual experience of participants in those trials where PPI has been implemented. Comprehensive and routine measurement of patient experience in trials could provide important evidence on the effects of endeavours to make trials ‘patient-centred’.

The concept of participant experience

There is no formal definition of patient experience in the context of trials or research. A recent review identified four aspects common to many definitions of patient experience, which are relevant for trials in the healthcare setting: (1) the sum of all interactions, (2) shaped by an organisation’s culture, (3) which influences patient perceptions, (4) across the continuum of care [11].

Different types of patient experience have been distinguished in the literature [12]. Preferences are ideas about what should occur in interactions with research studies. Reports are objective observations of those interactions (e.g., the amount of time spent completing questionnaires as a measure of ‘research burden’). Evaluations are reactions to the experience of doing research, in terms of whether it was good or bad. For the purposes of this article, we were interested in reports and evaluations as measures of patient experience in trials as opposed to preferences about what should occur.

A number of different aspects of a trial may be important in terms of patient experience. These might include recruitment (information and consent), randomisation (the need for such allocation and the way it is explained and conducted), research treatment delivery, outcome measurement and follow-up, and ‘close out’ (results sharing). Patient experience of these aspects may impact on their overall satisfaction with participation and wider outcomes of participation (whether a patient would participate again or would recommend participation to friends and family).

What are the potential benefits of patient experience measurement?

There is increasing consensus that measurement is a necessary aspect of quality improvement [13]. Traditionally, the assessment of patient experience in routine healthcare settings (outside the context of clinical trials) has been secondary with a far greater focus on outcomes. However, increasing patient involvement in healthcare decision-making has led to renewed interest in measurement of patient experience as a way of understanding the performance of healthcare systems and as a driver of quality improvement [14]. For example, in the UK, large-scale measurement of the experience of millions of primary care patients is conducted routinely and is used as a barometer of system performance and an impetus to quality improvement [15]. Although patient experience may not be accorded the same weight as health outcomes, it may be an important complement to those traditional measures.

Although traditional outcome measures will always remain the focus of trials, we argue in this paper that routine measurement of patient experience in trials could provide benefits similar to those found in routine healthcare settings. Much research has explored participants’ experience within trials, but it has largely involved qualitative work focussed on particular aspects of the trial (e.g., understanding of randomisation or informed consent) and on a subset of patients or professionals [16, 17]. Though critical to improving trial delivery by exploring the perceptions of participants and developing an understanding of the process [18], such work could be usefully complemented by structured assessments of the wider experience of the whole trial sample. Detailed qualitative evaluation is resource-intensive and may not be practical in routine trial contexts.

Most trials already undertake comprehensive measurement of patients and so have a ready platform for assessing the experience of their participants. If assessment of experience were carried out on a routine basis, it might allow measurement of variation over time and between population subgroups within a trial, and could potentially allow identification of problems and challenges which may act as barriers to successful completion of current or future trials. Effective feedback loops might allow deployment of interventions to enhance participant experience and increase engagement with research, a policy goal the UK NIHR has outlined in its report Promoting a Research Active Nation [19].

However, at present, there is no agreed-upon standardised methodology to capture patient experience in trials, and it is unclear whether the routine measurement of patient experience is widespread. Our aim was to review the literature to identify the use of standardised measures of participant experience in trials.

Our objectives were to do the following:

  • Identify studies (involving any type of participant, intervention, comparison or outcomes) using a standardised measure of patient experience of trial participation.

  • Characterise the measures in terms of purpose, format and aspects of participant experience that were assessed.

  • Report existing findings on patient experience within the identified studies.

  • Make recommendations for future development and application of participant experience measurement.

Method

We conducted a ‘scoping review’, which allowed us to ‘map’ this research area and provide an initial overview [20]. We reported the study according to the PRISMA extension for scoping reviews (PRISMA-ScR) [21]. There was no review protocol.

Searches

The search of databases was performed in June 2016 by the lead author CP. Searches were limited to English language articles and non-English articles with English abstracts. The reference lists of included studies, grey literature, policy documents and relevant websites were also searched, and experts in the field of clinical trials were contacted via the UK Clinical Research Collaboration (UKCRC) Registered Clinical Trials Units Network (https://www.ukcrc-ctu.org.uk/) to discuss published work and ongoing studies in this area.

Information sources

We searched Medline, PsycINFO and CINAHL (Cumulative Index to Nursing and Allied Health Literature) from 1999 to 2016. The following search terms (text words and medical subject headings) were used: Trial* OR RCT OR treatment effectiveness evaluation AND experience OR satisfaction OR patient experience OR participant experience OR attitude. We checked the reference lists of included studies for further references but did not conduct citation searches on eligible studies. There were no restrictions placed on type of trial, population or condition. Search terms were reviewed and tested for sensitivity with an information specialist. The search prioritised sensitivity over specificity.

Study inclusion and exclusion criteria

We included studies using standardised measures which could be either patient self-report or interviewer administered. All titles and abstracts were screened by CP, and the decision to include or exclude was recorded. If multiple papers were published (e.g., reporting on different outcomes), the multiple reports were treated as a single study but all publications were referenced. Studies were managed using Reference Manager software. We excluded measures not related to health research and measures which focused on only one aspect of a trial (e.g., recruitment and informed consent).

Data charting and synthesis

Data were extracted from papers according to three main categories:

  • Trial context (type of trial, date run, and location of sites) and population (age, gender and condition)

  • Participant experience measure, which included the measure name, type (report and evaluation), administration (interviewer and self-report), and the aspects of patient experience measured

  • Summary of results reported

Data were extracted and recorded by CP only. We undertook a narrative analysis of the results in line with our research objectives.

Results

The search identified 2041 records, and 67 full-text articles were retrieved. Fifty-seven were excluded for various reasons (the most common being that only a conference abstract was available). Twelve additional articles were identified through searching the reference lists of those studies. We identified a total of 22 journal articles reporting on 21 different structured measures of participant experience in trials (Fig. 1). The key features of the measures are described below and summarised in Tables 1 and 2 [22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]. (Fuller details of the included studies are presented in Additional file 1.)

Fig. 1 Scoping study inclusion flowchart

Table 1 Summary of articles reporting on participant experience measures
Table 2 Summary of domains measured (shaded boxes represent domains measured)

Trial context

Of the measures identified, the majority assessed experience within single trials (n = 13). The majority (n = 12) were published since 2000 and were conducted in the United States (n = 7), the United States and Canada (n = 2), or Europe (n = 7). The measures assessed participant experience in varied clinical contexts, including cancer care, dentistry, arthritis and emergency medicine.

Types of measures within the trials

All of the measures focussed on evaluations (reactions to the experience of doing research, in terms of whether it was good or bad) rather than reports (objective observations of interactions; e.g., the amount of time spent completing questionnaires).

Defining participant experience and measure development

None of the articles offered a formal definition of participant experience, and only one reported a psychometric analysis which included validity and reliability testing.

Format, mode of delivery and time points

Where reported, the measure was administered by self-report (n = 15), by interview (n = 4) or by a combination of the two (n = 1), and measures were sent to participants by post (n = 10), used in person (n = 3) or delivered by a combination of methods (n = 1).

Of the 16 articles that reported the number of items included in the measure, the mean was 21 items (range 6–76). None of the articles reported the completion time for either self-report or interviewer-administered measures. Of those that reported the administration time point, the majority administered the measure at the end of the trial (n = 14), one study administered it 4 weeks after enrolment, and only one study required participants to complete a measure on more than one occasion.

Participants

Participant characteristics were reported in 19 of the articles, and there was considerable variation in the range of characteristics reported. Three articles reported the percentage of people who withdrew from the trial but completed the measure, which ranged from 3.9% to 8.6%.

Domains assessed

As summarised in Table 2, content of the measures varied, but common aspects included global measures of experience (such as satisfaction with overall experience), measures of specific aspects of relevance to all trials (such as informed consent), measures of aspects relevant to trial interventions (such as treatment side effects), and outcomes (such as willingness to participate again or the likelihood of recommending a trial to family and friends).

Some studies included additional measures which were not related to the specific experience of the trial but might be important drivers of that experience (such as motivation for taking part and expectations around experience). None of the studies asked participants for their feedback on the participant experience measure itself.

Results of the patient experience measures

Given the variability in the trials and study populations, comparison of the results must be made with caution. Response rates to the patient experience measure varied but were broadly in line with what might be expected from the usual rates achieved in trial outcome measures. Response rates ranged from 29% to 100%. Of those studies reporting response rates, 15 (75%) out of 20 reported rates over 80%, which is usually seen as a marker of quality [44].

Overall, participants reported high levels of global satisfaction with the process. When asked, a majority of participants suggested that they would participate in trials in the future, and 11 (85%) out of 13 studies reported that outcome. Nevertheless, some trials did report less positive experience from a significant minority of participants. For example, one in five participants did not rate information and informed consent highly in one study [25], whereas in another study nearly half felt that participation interfered with their family routine [37]. In a third study, only 52% reported willingness to participate in further research [29]. There were insufficient studies to permit any sensible assessment of factors across studies related to participant experience.

Discussion

Summary of results

Our search of the literature found a relatively small number of eligible studies. Measuring participant experience does not appear to be common in the published literature, making it difficult to quantify key aspects of experience, such as levels of satisfaction or dissatisfaction with trial processes, or to explore patient or trial characteristics associated with satisfaction. However, there may be a significant grey literature that our search failed to uncover.

From the limited data presented, it would seem that generally respondents express high levels of satisfaction and are positive about further participation and recommendations to others. Although this is an important finding, there is a significant potential for publication bias (as results critical of a trial may be less likely to be published), and it is possible that patients who are likely to take part in trials may have experienced higher-quality care more generally. Even in the context of the broadly positive results, some of the trials did report less positive experience from a significant minority of participants.

It is not clear how much focus should be placed on the experience of patients participating in trials compared with the preferences and experiences of people who do not participate. Obviously, in many trial contexts, the latter far outnumber the former. Of course, it is possible that the experience of patients in trials will provide insights which can translate into a greater proportion of patients being recruited in the first place. However, the drivers of participation and of good experience may be different.

Limitations of the studies

Studies had a number of limitations, including the lack of a formal definition of patient experience and the use of measures without detailed data about their development or psychometric characteristics. However, these limitations are to be expected in a developing research area. Response rates to the patient experience measures were generally acceptable, but there is clearly potential for bias if patients with particular experiences are less likely to return measures, and participants who withdraw, or who are eligible but decide not to participate, are unlikely to be represented at all. As with trials more generally, research could usefully explore ways to maximise return rates [45, 46].

Strengths and limitations of the review

We have characterised our study as a scoping review, as we did not conduct a formal quality appraisal. The search was designed to provide a reasonable balance between sensitivity and the resources available to the review. Assessment of papers and data extraction were conducted by a single reviewer. We do not feel that these limitations are critical given our focus on scoping the current evidence on the use of standardised questions and the very limited evidence reported. There is no standardised system for quality assessment of the types of studies included in the scoping review, which could have been assessed as surveys (through criteria such as response rates and data completeness) and as measurement studies (in terms of psychometric criteria such as reliability and validity). This would have required a potentially complex assessment which may not have been proportionate given the aims of the scoping review.

Our search for the scoping review was relatively circumscribed in order to keep the yield of the search manageable. The search was conducted in mid-2016, and resource limitations meant that we have been unable to update the search. It is possible that new studies have been published, although we would not expect major changes in the evidence base or the conclusions of our review. As with any search, the focus was on published work and we may have missed unpublished work carried out by trial teams or research units. There may be a wider literature in the commercial sector, where there is much current interest in concepts such as ‘patient-centricity’ as applied to trials. We did not contact industry experts as part of our scoping work. Our discussions with local trial teams do not suggest that measurement of patient experience is widespread, although some research contexts (such as dedicated research facilities) may be better able to conduct this sort of work, and the Clinical Research Network in the UK has begun systematic work on patient experience [47]. If there is ‘hidden literature’ about what happens in trials (in internal reports or in the tacit knowledge of investigators), it would be important to understand how that could be better reported and used.

The author team has tried to ensure a patient perspective on the issues raised in this paper. Workshops exploring the concept of a ‘patient-centred’ trial were run alongside this scoping review (http://research.bmh.manchester.ac.uk/patientcentredtrials/resources/) and patients were involved in those workshops alongside a range of professional stakeholders. Author AD is a patient representative who has had a long-standing involvement in this project, and our future funded work in this area will include extensive PPI. Nevertheless, it is important to be aware of the tension between the patient perspective on trial participation and the interests of professional stakeholders, which are often (though not exclusively) focussed on recruitment and retention. We expect that, in many cases, the goal of improving patient experience will be aligned with the outcomes of improved recruitment and retention, but it is important to be aware that there may be cases in which there is tension between them (e.g., where retention may be enhanced by proactive follow-up, which some patients may find burdensome).

Implications

From the literature identified by our review, it would appear that information about participant experience is not systematically reported for individual studies, trial units, centres or research facilities. As well as providing feedback for research staff on individual trials, standardised assessment could be aggregated to allow assessment of participant experience across multiple trials within a trials unit or across a funder’s portfolio. This might allow identification of broader trends which need higher-level intervention.

Before adoption of standardised measurement of participant experience, there are many issues that need consideration. We outline some recommendations for future research in this area, in terms of both the practical issues about how data are collected and wider issues concerning the meaning and interpretation of the data.

Practical issues concerning the collection of patient experience data

  • What are the core dimensions of a ‘patient-centred’ trial that should be included in a participant experience measure? Table 2 highlights variation in what aspects of patient experience are measured in different studies and suggests that global experience, specific aspects of the trial (such as informed consent) and positive and negative aspects of participation are most likely to be measured. It is not clear which aspects are most important to patients or other stakeholders (such as trial teams and funders) or how evaluations of the different aspects are associated, as they may reflect a global assessment. Effective priority-setting methods such as those used in previous assessments of patient priorities around trials may be useful in this regard [48]. It will be important to identify the generic questions of relevance to all trials and others that may be important for particular trials or in particular contexts. A modular approach to measurement (with generic and trial-specific measures) may be optimal.

  • What are the optimum format and delivery mode of patient experience measures? Further work is required to understand the optimum way in which to collect experience data. All of the studies found in our review measured evaluations rather than reports, although it is unclear why that is the case. Most studies evaluated experience at the end of the study, which allows a more comprehensive assessment of the entire experience of participation but raises issues concerning the ability of participants to recall earlier aspects of the trial.

  • What is the correct balance between quantitative and qualitative approaches to measuring patient experience? Clearly, there is a significant qualitative literature on patient experience in trials [16, 49] and it will be important to explore the optimal methods by which they can complement each other to take advantage of their relative strengths.

  • How can developments in technology facilitate measurement? Developments in technology may improve the collection of patient experience data in the future. For example, digital recording of patient narratives might be analysed by using text mining to allow efficient capture of data that is richer than standardised measures.

  • Should measurement of patient experience be independent of the trial team? Independence may better avoid bias and the perception of pressure, but it may not be feasible in the context of the limited resources available to trial teams. The impact of such independence on assessments could be assessed by using the Study Within A Trial (SWAT) design [50].

Issues in the interpretation of patient experience data

  • It is important to understand what influences patient experience and how much of the variation in experience is due to context and trial type, patient characteristics, or aspects of the trial. There is an ongoing debate as to whether adjusting for such factors is a fairer way of assessing performance or whether such adjustment removes the imperative to improve care [51].

  • In terms of factors related to the trial itself, it will be important to determine how much of the variation in patient experience is due to specific processes (how patients are approached, how consent is gained, and how preferences are considered) [52] compared with the general interpersonal and communication skills of staff [53]. Effective ‘closure’ in trials (thanking patients for participation and providing results) may be as important as their experience in the trial itself [54].

  • Another methodological issue of interest is whether patients can distinguish between their experience of the interventions within a trial and their experience of the other trial procedures. Acceptability of interventions will often be assessed in pragmatic trials as part of a comprehensive assessment of the value of the intervention. Some aspects of patient experience may be beyond the control of the trial team (such as the result of their randomised allocation and the outcomes patients achieve from treatment).

  • It will be important to explore the relative importance placed on the measurement of patient experience in different types of trials. For example, some trials have little active patient participation or even awareness of participation (such as cluster trials without individual consent), where a focus on patient experience may be less relevant.

  • It will also be important to consider the costs and other disadvantages of a focus on patient experience in trials. There may be potential unintended consequences of measures of patient experience (such as causing trial teams to focus on aspects of experience that are easily measurable compared with more complex issues). Work in this area will also have to be aware of the wider literature on the concept of ‘satisfaction’ and its measurement [55,56,57].

Although measurement of participant experience may be necessary for quality improvement, it is unlikely to be sufficient. It will be important to assess what other facilitators and resources need to be in place to ensure that results lead to improvement and that trial teams ensure a ‘virtuous circle’ between measurement, feedback, and the design and delivery of trials. We will be exploring how data can be used for quality improvement in our ongoing funded work, drawing on published examples of the use of feedback in other contexts [58]. The wider literature in audit and feedback would suggest that positive impacts are most likely when baseline performance is poor and when the feedback comes from a colleague (which in this context might be other trialists rather than others sharing a particular professional background). Developing a ‘virtuous circle’ would require regular feedback, using multiple formats, with clear targets and a plan for remedial action [59]. Adoption of appropriate theory may have an important role to play [60].

Recommendations for describing participant experience measurement

We identified some deficiencies in the reporting of the use of patient experience measures in our scoping review. We recommend reporting on the following:

  • Whether the measure was used to assess experience in one trial or across a trial portfolio.

  • Trial context (trial phase, condition under investigation, and core features of intervention)

  • Location where the trial or trials were conducted (country and facility where applicable)

  • Development detail (e.g., how the items were selected, estimated completion times, and whether the measure has been subject to reliability and validity testing)

  • Number of participants invited to complete the measure and response rate

  • Percentage of participants completing the measure who had withdrawn from the trial (i.e., failed to adhere to the protocol or to provide routine follow-up data)

  • Participant characteristics, including demographics (and other characteristics which may be relevant to specific trial/facility)

  • Delivery mode (postal, face-to-face interview, telephone interview, and online) and administration time points

  • Any incentives or tokens of thanks given to participants for completing the measure

  • Number of items and response options (if the measure is not published as part of the article)

  • Details for how the measure can be sourced (and languages available).

Conclusions

The regular and standardised assessment of participant experience of trials could provide useful feedback for trial teams, complement other methods of assessing patient experience, and assist in the development of patient-centred trials. At present, there is little evidence that measurement is conducted routinely. We outline key questions in this area to promote research around this issue.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Abbreviations

NIHR:

National Institute for Health Research

PPI:

Patient and public involvement

RCT:

Randomised controlled trial

References

  1. Sibbald B, Roland M. Why are randomised controlled trials important? BMJ. 1998;316:201.

  2. McDonald A, Knight R, Campbell M, Entwistle V, Grant A, Cook J, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7:9.

  3. Sully BG, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials. 2013;14:166.

  4. Davidoff F. Patient-Centered Medicine-Reply. JAMA. 1996;275:1157.

  5. Dudley L, Gamble C, Allam A, Bell P, Buck D, Goodare H, et al. A little more conversation please? Qualitative study of researchers’ and patients’ interview accounts of training for patient and public involvement in clinical trials. Trials. 2015;16:190.

  6. Gamble C, Dudley L, Newman J. Evidence base for patient and public involvement in clinical trials (EPIC). Trials. 2013;14(Suppl 1):O34.

  7. Ennis L, Wykes T. Impact of patient involvement in mental health research: longitudinal study. Br J Psychiatry. 2013;203:381–6.

  8. de Wit M, Abma T, Koelewijn-van Loon M, Collins S, Kirwan J. Involving patient research partners has a significant impact on outcomes research: a responsive evaluation of the international OMERACT conferences. BMJ Open. 2013;3:e002241.

  9. Dudley L, Gamble C, Preston J, Buck D, The EPIC Patient Advisory Group, Hanley B. What Difference Does Patient and Public Involvement Make and What Are Its Pathways to Impact? Qualitative Study of Patients and Researchers from a Cohort of Randomised Clinical Trials. PLoS One. 2015;10:e0128817.

  10. Crocker JC, Ricci-Cabello I, Parker A, Hirst JA, Chant A, Petit-Zeman S, et al. Impact of patient and public involvement on enrolment and retention in clinical trials: systematic review and meta-analysis. BMJ. 2018;363:k4738.

  11. Wolf JA, Niederhauser V, Marshburn D, SL LV. Defining patient experience. Patient Exp J. 2014;1:7–19.

  12. Wensing M. Improving the quality of health care: Methods for incorporating patients’ views in health care. BMJ. 2003;326:877–9.

  13. Soni Raleigh V, Foot C. Getting the measure of quality: opportunities and challenges. London: King’s Fund; 2010.

  14. Coulter A, Collins A, King’s Fund C. Making shared decision-making a reality: no decision about me, without me. London: King’s Fund; 2011.

  15. Roland M, Elliott M, Lyratzopoulos G, Barbiere J, Parker R, Smith P, et al. Reliability of patient responses in pay for performance schemes: analysis of national General Practitioner Patient Survey data in England. BMJ. 2009;339:b3851.

  16. Donovan J, Brindle L, Mills N. Capturing users’ experiences of participating in cancer trials. Eur J Cancer Care. 2002;11:210–4.

  17. Donovan JL, Lane JA, Peters TJ, Brindle L, Salter E, Gillatt D, et al. Development of a complex intervention improved randomization and informed consent in a randomized controlled trial. J Clin Epidemiol. 2009;62:29–36.

  18. Donovan J. Quality improvement report: Improving design and conduct of randomised trials by embedding them in qualitative research: ProtecT (prostate testing for cancer and treatment) study * Commentary: presenting unbiased information to patients can be difficult. BMJ. 2002;325:766–70.

  19. Denegri S. Promoting a ‘research active’ nation. 2014. https://www.nihr.ac.uk/02-documents/get-involved/Promoting%20A%20Research%20Active%20Nation_NIHR%20Strategic%20Plan_May%202014.pdf. Accessed 16 Sept. 2014.

  20. Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69.

  21. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–73.

  22. Almeida L, Azevedo B, Nunes T, Vaz-da-Silva M, Soares-da-Silva P. Why healthy subjects volunteer for phase I studies and how they perceive their participation? Eur J Clin Pharmacol. 2007;63:1085–94.

  23. Aman MG, Wolford PL. Consumer Satisfaction with Involvement in Drug Research: A Social Validity Study. J Am Acad Child Adolesc Psychiatry. 1995;34:940–5.

  24. Ngongo Bahati P, Kidega W, Ogutu H, Odada J, Bender B, Fast P, et al. Ensuring quality of services in HIV prevention research settings: findings from a multi-center quality improvement pilot in East Africa. AIDS Care. 2010;22:119–25.

  25. Bertoli AM, Strusberg I, Fierro GA, Ramos M, Strusberg AM. Lack of correlation between satisfaction and knowledge in clinical trials participants: A pilot study. Contemp Clin Trials. 2007;28:730–6.

  26. Bevan EG, Chee LC, McGhee SM, McInnes GT. Patients’ attitudes to participation in clinical trials. Br J Clin Pharmacol. 1993;35:204–7.

  27. Cain MA, McGuinness C. Patient recruitment in paediatric clinical trials. Pract Diabetes Int. 2005;22:328–32.

  28. Dias L, Schoenfeld E, Thomas J, Baldwin C, McLeod J, Smith J, et al. Reasons for high retention in pediatric clinical trials: comparison of participant and staff responses in the Correction of Myopia Evaluation Trial. Clin Trials. 2005;2:443–52.

  29. Fearn P, Avenell A, McCann S, Milne AC, Maclennan G, For the Mavis Trial Group. Factors influencing the participation of older people in clinical trials — data analysis from the MAVIS trial. J Nutr Health Aging. 2010;14:51–6.

  30. Friesen LR, Williams KB. Attitudes and motivations regarding willingness to participate in dental clinical trials. Contemp Clin Trials Commun. 2016;2:85–90.

  31. van Gelderen CE, Savelkoul TJ, van Dokkum W, Meulenbelt J. Motives and perception of healthy volunteers who participate in experiments. Eur J Clin Pharmacol. 1993;45:15–21.

  32. Hassar M, Weintraub M. “Uniformed” consent and the wealthy volunteer: an analysis of patient volunteers in a clinical trial of a new anti-inflammatory drug. Clin Pharmacol Ther. 1976;20:379–86.

  33. Henzlova MJ, Blackburn GH, Bradley EJ, Rogers WJ. Patient perception of a long-term clinical trial: Experience using a close-out questionnaire in the studies of left ventricular dysfunction (SOLVD) trial. Control Clin Trials. 1994;15:284–93.

  34. Kost RG, Lee LM, Yessis J, Wesley RA, Henderson DK, Coller BS. Assessing Participant-Centered Outcomes to Improve Clinical Research. N Engl J Med. 2013;369:2179–81.

  35. Yessis JL, Kost RG, Lee LM, Coller BS, Henderson DK. Development of a Research Participants’ Perception Survey to Improve Clinical Research. Clin Transl Sci. 2012;5:452–60.

  36. Luzurier Q, Damm C, Lion F, Daniel C, Pellerin L, Tavolacci M-P. Strategy for recruitment and factors associated with motivation and satisfaction in a randomized trial with 210 healthy volunteers without financial compensation. BMC Med Res Methodol. 2015;15:2.

  37. Martin S, Gillespie A, Wolters PL, Widemann BC. Experiences of families with a child, adolescent, or young adult with neurofibromatosis type 1 and plexiform neurofibroma evaluated for clinical trials participation at the National Cancer Institute. Contemp Clin Trials. 2011;32:10–5.

  38. McAdam DB, Zarcone JR, Hellings J, Napolitano DA, Schroeder SR. Effects of risperidone on aberrant behavior in persons with developmental disabilities: II. Social validity measures. Am J Ment Retard. 2002;107:261–9.

  39. Mattson M, Curb D, McArdle R. Participation in a clinical trial: the patients’ point of view. Control Clin Trials. 1985;6:156–67.

  40. Renfroe EG, Heywood G, Foreman L, Schron E, Powell J, Baessler C, et al. The end-of-study patient survey: methods influencing response rate in the AVID Trial. Control Clin Trials. 2002;23:521–33.

  41. Schron EB, Wassertheil-Smoller S, Pressel S. Clinical Trial Participant Satisfaction: Survey of SHEP Enrollees. SHEP Cooperative Research Group. Systolic Hypertension in the Elderly Program. J Am Geriatr Soc. 1997;45:934–8.

  42. Tangrea JA, Adrianza ME, Helsel WE. Patients’ perceptions on participation in a cancer chemoprevention trial. Cancer Epidemiol Biomark Prev. 1992;1:325–30.

  43. Verheggen F, Nieman F, Jonkers R. Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35:111–25.

  44. Fewtrell MS, Kennedy K, Singhal A, Martin RM, Ness A, Hadders-Algra M, et al. How much loss to follow-up is acceptable in long-term randomised trials and prospective studies? Arch Dis Child. 2008;93:458.

  45. Brueton V, Tierney J, Stenning S, Harding S, Meredith S, Nazareth I, et al. Strategies to improve retention in randomised trials. Cochrane Database Syst Rev. 2013;(12):MR000032. https://doi.org/10.1002/14651858.MR000032.pub2.

  46. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324:1183.

  47. Golsorkhi M, Steel R. Report of the Patient Research Experience Survey 2017/18: Clinical Research Network Coordinating Centre: NIHR; 2018.

  48. Healy P, Galvin S, Williamson PR, Treweek S, Whiting C, Maeso B, et al. Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study. Trials. 2018;19:147.

  49. Elliott D, Husbands S, Hamdy FC, Holmberg L, Donovan JL. Understanding and Improving Recruitment to Randomised Controlled Trials: Qualitative Research Approaches. Eur Urol. 2017;72:789–98.

  50. Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Trials. 2018;19:139.

  51. Paddison C, Elliott M, Parker R, Staetsky L, Lyratzopoulos G, Campbell JL, et al. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey. BMJ Qual Saf. 2012;21:634.

  52. Mills N, Donovan J, Wade J, Hamdy F, Neal D, Lane J. Exploring treatment preferences facilitated recruitment to randomized controlled trials. J Clin Epidemiol. 2011;64:1127–36.

  53. Townsend D, Mills N, Savović J, Donovan JL. A systematic review of training programmes for recruiters to randomised controlled trials. Trials. 2015;16:432.

  54. Tarrant C, Jackson C, Dixon-Woods M, McNicol S, Kenyon S, Armstrong N. Consent revisited: the impact of return of results on participants’ views and expectations about trial participation. Health Expect. 2015;18:2042–53.

  55. Coyle J. Exploring the meaning of ‘dissatisfaction’ with health care: the importance of ‘personal identity threat’. Sociol Health Illn. 1999;21:95–124.

  56. Williams B, Coyle J, Healy D. The meaning of patient satisfaction: an explanation of high reported levels. Soc Sci Med. 1998;47:1351–9.

  57. Williams B. Patient satisfaction: a valid concept? Soc Sci Med. 1994;38:509–16.

  58. Carter M, Roland M, Bower P, Greco M, Jenner D. Improving your practice with patient surveys: University of Manchester; 2004.

  59. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259.

  60. Hysong SJ, Kell HJ, Petersen LA, Campbell BA, Trautner BW. Theory-based and evidence-based design of audit and feedback programmes: examples from two clinical intervention studies. BMJ Qual Saf. 2017;26:323.

Funding

CP is funded by the NIHR School for Primary Care Research (Launching Fellowship). Earlier work informing this review was funded by the Medical Research Council (MRC) Hub for Trials Methodology (MR/L004933/2 - R46).

Author information

Contributions

CP and PB designed the study and analysed the data generated. CP drafted the manuscript. All authors interpreted the data, contributed to the drafting of the manuscript, and read and approved the final manuscript.

Corresponding author

Correspondence to Claire Planner.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Structured measures of participant experience reported in peer-reviewed journal articles. (DOCX 67 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Planner, C., Bower, P., Donnelly, A. et al. Trials need participants but not their feedback? A scoping review of published papers on the measurement of participant experience of taking part in clinical trials. Trials 20, 381 (2019). https://doi.org/10.1186/s13063-019-3444-y
