The methods are described in full in the following sections, but as a brief overview, this PRioRiTy II PSP followed the same overall method as that used during the PRioRiTy I PSP [11]. The initial stage consisted of data collection and analysis to generate a list of unanswered questions. This led to an interim stage that produced an indicative question list for use in the interim survey. The project then culminated in bringing 21 questions from the indicative list to a final consensus meeting to agree on the top priorities for future research into trial retention. These priorities are uncertainties raised by the stakeholders and judged to be unanswered by existing evidence. All three stages of this project were open to anyone over the age of 18 years who had been involved in randomised trials in the UK and Ireland. To improve precision during data collection, seven categories were offered as options to describe stakeholders' roles in the initial survey. For the remainder of the project, we combined stakeholder roles and organised them into the following four groups:
- Patient or member of the public involved in a trial (as a participant or parent/carer of a participant or as a contributor to design/delivery of trial)
- Frontline staff or other staff involved in trial retention (e.g. Research Nurse, Trial Manager, regulatory or oversight role such as Sponsor or Research Director)
- Investigator (e.g. Chief Investigator, Principal Investigator, Co-investigator)
- Trial methodologist
This PSP did not consider uncertainties relating to adherence to trial interventions. The objectives of the PSP were to:
- Bring the public, clinicians, and researchers together to identify unanswered questions around retention in randomised trials
- Agree by consensus on a prioritised list of those unanswered questions, which will be used to inform future research.
Steering Group
We established a Steering Group to oversee the PSP in accordance with JLA guidance and held the first meeting in January 2018. The Steering Group comprised 24 members: 6 patient partners (3 with experience of trials methodology research and 3 without), 6 frontline staff or other staff involved in trial retention, 5 investigators, 5 trial methodologists, and 2 JLA representatives. Contributors were identified through personal contacts and key people working in trial retention and were invited to join the Steering Group. At the first Steering Group meeting, a gap analysis of representation was conducted, and efforts were made to fill those gaps purposefully through active recruitment, e.g. Twitter adverts for patient partners and direct contact with known research staff. Membership reflected the range of stakeholders with whom we wished to engage during the PSP. Drawing on members’ expertise and networks, the Steering Group helped identify and recruit stakeholders during each stage. We also held regular meetings to ensure that the work proceeded to the agreed timetable and to maintain engagement and momentum. The JLA was also represented on the Steering Group to ensure that the process adhered to JLA principles.
Initial online survey
Identification of stakeholders and development of initial survey
Convenience sampling was used to recruit survey respondents. Steering Group members, including the patient partners, identified and engaged a wide range of appropriate potential stakeholders through their networks of contacts. The target population mirrored the groups represented by individuals on the Steering Group: patients or members of the public involved in a trial, frontline staff or other staff involved in trial retention, investigators, and trial methodologists, all of whom needed to be based in the UK or Ireland and be over 18 years of age.

We developed an eight-question online survey in SurveyMonkey (SurveyMonkey, Palo Alto, CA, USA) to gather uncertainties for the initial stage. We also made a paper copy of the survey with pre-paid return envelopes available if required. We set no formal target sample size for the number of responses. The survey included six open-ended questions (Appendix 1) that explored respondents’ views on unanswered questions about trial retention and general comments relating to retention in randomised trials that stakeholders would like to see answered. Based on the experience of the PRioRiTy I project and discussion by the project Steering Group, these six open-ended questions were modelled on broad areas of trial retention: why participants stay involved; planning of data collection; processes of collecting data; information provided about data collection; aspects relating to trial staff involved in data collection; and any other comments. Using six questions rather than one generic question was felt to allow broader coverage of all aspects of trial retention. The remaining two questions collected demographic information about respondents to help monitor the geographic spread and roles of people responding to the survey. A pilot to test question comprehension and website usability was conducted with a small sample (n = 6) of volunteers from within the Health Services Research Unit (HSRU), including non-academic staff members.

We then distributed a weblink to the survey to the four stakeholder groups (described earlier) and also promoted the survey through social media channels and Twitter hashtags. The initial survey was launched in March 2018 and closed in May 2018 (8 weeks of data collection). We also asked respondents if they would consider attending the final consensus meeting. Electronic data were stored on password-protected university computers supported by secure servers. Paper copies of questionnaires were stored in locked tambour filing systems. The electronic and paper data were held in locked offices and accessible only by key personnel.
Coding and analysing responses
The initial survey was hosted by the JLA, who provided the Steering Group with regular updates and the compiled answers once the survey had closed. We used samples of responses as they were returned to identify key themes and questions, which allowed us to generate a representative set of thematic groups efficiently once the survey closed. The JLA collated the survey responses into a single Excel spreadsheet, and we coded each response into a thematic group where appropriate using a process of constant comparison analysis [14]. We repeated this process of comparison until the range and number of thematic groups reflected the whole data set. The determination of thematic groups was iterative and was conducted through discussion amongst the Aberdeen team members, who also carried out the subsequent analysis (DB, HG, KG, and ST). Where an item did not fit into an existing thematic group, we either expanded a thematic group or created a new one. Each stakeholder response could contain numerous items and could touch on multiple themes, so we sub-divided responses into constituent parts during coding to allow their mapping across different thematic groups. We did not assign out-of-scope responses to a thematic code; rather, we categorised these responses separately for potential future use. All responses that did not refer to a specific process (such as recruitment) were assumed to be about retention, in line with the survey questions, and hence were considered within scope. This process also involved regular group discussion and consultation to ensure consistency in approach and accuracy of coding.
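As an illustration of this coding step, the minimal sketch below (Python; the data structures, theme names, keywords, and example response are all hypothetical) shows how a multi-item free-text response might be split into constituent parts and mapped to thematic groups, with uncoded items set aside. The actual analysis was a manual, discussion-based process in Excel rather than a programmatic one.

```python
from dataclasses import dataclass

@dataclass
class CodedItem:
    respondent_id: str
    text: str                  # one constituent part of a free-text response
    theme: str | None = None   # thematic group, or None if not yet coded / out of scope

def split_response(respondent_id: str, response: str) -> list[CodedItem]:
    """Split a free-text response into constituent items (naively, on sentence
    boundaries) so each item can be mapped to its own thematic group."""
    parts = [p.strip() for p in response.split(".") if p.strip()]
    return [CodedItem(respondent_id, part) for part in parts]

def assign_theme(item: CodedItem, themes: dict[str, list[str]]) -> CodedItem:
    """Assign an item to the first theme whose keywords it mentions; items
    matching no theme are left uncoded and handled separately."""
    lowered = item.text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            item.theme = theme
            break
    return item

# Hypothetical themes and a single multi-item response
themes = {
    "communication": ["remind", "newsletter", "contact"],
    "data collection burden": ["questionnaire", "visit", "burden"],
}
items = [assign_theme(item, themes)
         for item in split_response("R001", "Text reminders might help. Long questionnaires are a burden.")]
in_scope = [item for item in items if item.theme is not None]
set_aside = [item for item in items if item.theme is None]  # kept separately for potential future use
```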
Once coded, we analysed the data to determine the initial sub-questions and broader main questions present within each theme, as well as how often they occurred. To guide this process, we used word-for-word responses as a framework for developing the sub-questions, which grew iteratively as the data were analysed. We compiled the broader questions from each theme and checked that they remained representative of their respective sub-questions. To evaluate reliability, once all coded data items had been connected to sub-questions, a 10% sample was selected at random from each team member’s analysis to compare findings. We held group discussions to identify discrepancies and resolve disagreements. We also checked the questions identified in the initial survey against existing sources of evidence reporting trial retention research, to ensure that the questions taken forward to the interim stage were unanswered by existing research. The evidence sources used for checking were:
1. The Cochrane review of interventions to improve retention in trials, with the 2012 search updated by members of the Aberdeen team (October 2017) and screened [4]
2. A qualitative synthesis of barriers and facilitators to participant retention in trials [15]
3. A systematic review of non-randomised evaluations of interventions to improve retention in trials, with members of the PRioRiTy II team actively involved as reviewers ([16], with completed review submitted for publication).
Together with the Steering Group, we grouped and merged the longlist of broad questions where appropriate and removed duplicates to create a shortlist of questions in advance of the interim stage. Through consultation with the Steering Group, we discussed and sometimes revised the terminology to clarify the original meaning of the questions whilst ensuring the items remained true to the voices of respondents.
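Purely as an illustration of the 10% reliability check described earlier in this subsection, the following minimal sketch (Python; the function name, input structure, and seed handling are assumptions, as the sampling was in practice a manual step within the team’s analysis) draws a random sample of one analyst’s coded items for cross-checking by the rest of the team.

```python
import random

def reliability_sample(coded_items: list[tuple[str, str]],
                       fraction: float = 0.10,
                       seed: int | None = None) -> list[tuple[str, str]]:
    """Draw a random ~10% sample of one analyst's (item, sub_question) pairs
    for cross-checking by the rest of the team; discrepancies are then
    resolved by group discussion."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(coded_items) * fraction))
    return rng.sample(coded_items, sample_size)

# e.g. check_set = reliability_sample(analyst_a_coding, fraction=0.10, seed=2018)
```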
Interim priority setting stage
Development of the indicative question list and interim survey
In the interim stage, we conducted a ‘back-categorisation’ of the initial stage shortlist, in which we asked for feedback and comments from stakeholders who had not been involved in the project. These individuals were identified through two routes: (1) email invitation to members of the HSRU (i.e. people who were familiar with trials) and (2) invitation to friends and family of the Aberdeen team (who were not familiar with trials). Back-categorisation involved presenting stakeholders with the shortlist of questions from the previous stage and conducting short individual interviews to assess their understanding of the questions. We also asked these individuals to provide examples of the types of research activities they would expect each question to cover. This process checked that the language used was broad enough to allow the correct coding of sub-questions under the broader indicative questions. For example, questioning around the broad question ‘How could technology be best used in trial follow-up processes?’ included probing stakeholders on what they understood ‘technology’ to mean in relation to trials. Open questions such as ‘From this question, who might you assume would be using technology within trial follow-up processes?’ were used to assess whether the question was unintentionally focussing on one specific group of trial stakeholders (e.g. patients). In this example, individuals explained that technology could be used both by the people involved in running the trial (e.g. research nurses, clinicians) and by the people taking part in it, so the language of the question was not changed.
The results of the back-categorisation were combined with the earlier responses from the Steering Group to create the list of indicative questions for the interim survey.
For the interim survey, we used SurveyMonkey to ask stakeholders to choose up to 10 of the questions that they believed were the most important. This survey was open for 6 weeks, and we made paper copies available if required. Participation was open to anyone and not restricted to respondents from the initial survey. As with the initial survey, no formal target sample size was set; however, the number of respondents within each reported group was checked weekly, which allowed us to target groups with lower representation during the ongoing dissemination of the survey.
We distributed the survey link through email, institution websites, blogs, newsletters, and social media. The HSRU at the University of Aberdeen also issued a press release and coordinated promotion alongside the JLA. The interim survey was launched in August 2018 and closed in September 2018 (6 weeks of data collection).
Voting and ranking interim survey items
The online survey included a drop-down menu showing the indicative question list from which no more than 10 could be selected. This generated a total score for each question to represent the overall number of times the question was selected. We also used ranked weighted scores to decide which of the interim survey research questions would be taken forward to the final consensus meeting, using the following standard JLA approach as described in the JLA Guidebook [17] (www.jla.nihr.ac.uk/jla-guidebook/).
Each time a question was chosen, we assigned it one point. To ensure equal influence, points for each stakeholder group were tallied separately, generating a separate total score per group for each question. Within each of the four stakeholder groups, the scores were arranged from highest to lowest, and each question was given a new rank-based score according to its position, from 27 for the most popular question down to 1 for the least popular. These rank-based scores were then summed across the stakeholder groups, so that the lowest ranked question received the lowest total score and the highest ranked question the highest. The list was then ordered by total score from highest to lowest and presented to the Steering Group. Where questions had the same total, we ranked them in joint place. This gave the overall interim ranking of the research questions and the rankings for each of the stakeholder groups, whilst minimising bias owing to differing numbers of responses from each stakeholder group.
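As an illustration of this rank-based weighting, the sketch below (Python; the function names and example counts are hypothetical, and the real analysis involved 27 questions and four stakeholder groups rather than the toy numbers shown) converts per-group selection counts into within-group rank scores and sums them across groups to produce the overall interim ranking.

```python
from collections import defaultdict

def rank_scores(counts: dict[str, int]) -> dict[str, int]:
    """Within one stakeholder group, convert raw selection counts into
    rank-based scores: with N questions, the most-selected question scores N
    (27 in the PSP) and the least-selected scores 1."""
    n = len(counts)
    ordered = sorted(counts, key=counts.get, reverse=True)
    # Ties are broken arbitrarily here; in the PSP, tied totals were ranked
    # in joint place at the final ordering step.
    return {question: n - position for position, question in enumerate(ordered)}

def overall_ranking(group_counts: dict[str, dict[str, int]]) -> list[tuple[str, int]]:
    """Sum each group's rank-based scores so that every stakeholder group has
    equal influence, then order questions from highest to lowest total."""
    totals: defaultdict[str, int] = defaultdict(int)
    for counts in group_counts.values():
        for question, score in rank_scores(counts).items():
            totals[question] += score
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Toy example with 3 questions and 2 stakeholder groups
example = {
    "patients":       {"Q1": 40, "Q2": 25, "Q3": 10},
    "methodologists": {"Q1": 5,  "Q2": 30, "Q3": 20},
}
print(overall_ranking(example))  # [('Q2', 5), ('Q1', 4), ('Q3', 3)]
```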
Consensus meeting
The final prioritisation consensus meeting was a one-day event held in Birmingham, UK, in October 2018 to identify and agree on a ‘Top 10’ list of research questions. We brought together representatives from the key stakeholder groups (in roughly equal numbers) to determine the Top 10 list of priorities from the top 21 questions from the interim survey. The consensus meeting followed the standard approach described in the JLA Guidebook, namely using small and whole group discussions in a face-to-face meeting with a particular emphasis on the Top 10 [17] (www.jla.nihr.ac.uk/jla-guidebook/). We remunerated patient participants for their time according to INVOLVE UK guidance, and travel expenses for all attendees were reimbursed. Members of the Aberdeen team planned and organised the event alongside members of the JLA.
The consensus meeting was a full day of plenary and small group discussion, chaired by a JLA Senior Adviser. All attendees were provided with the list of 21 questions in advance of the meeting to allow time to familiarise themselves with the questions and consider the importance of each one. A JLA facilitator led each of three small groups, each with even representation of the stakeholder groups. The JLA facilitators acted as neutral guides for the process and ensured equal participation in order to minimise authority effects. After an introductory plenary session with the entire group, the three small groups convened and were asked to discuss and prioritise all the listed questions. To support the discussions, individual question cards were used, with example quotes from related initial survey responses to provide context. Tables segmented into three colours (red, amber, and green) were used to represent increasing importance, with red meaning least important and green most important. The initial small groups were then mixed for a second round of discussion and prioritisation to ensure exposure to a range of ideas and to reduce the potential for groupthink. Finally, the small groups came back together in a plenary session to agree on the final prioritised list.