
Challenges in the design, planning and implementation of trials evaluating group interventions



Group interventions are interventions delivered to groups of people rather than to individuals and are used in healthcare for mental health recovery, behaviour change, peer support, self-management and/or health education. Evaluating group interventions in randomised controlled trials (RCTs) presents trialists with a set of practical problems, which are not present in RCTs of one-to-one interventions and which may not be immediately obvious.


A case-based approach summarising the Sheffield trials unit’s experience of the design and implementation of five group intervention trials. We reviewed participant recruitment and attrition, facilitator training and attrition, attendance at group sessions, group size and fidelity aspects across the five RCTs.


Median recruitment across the five trials was 3.2 (range 1.7–21.0) participants per site per month. Group intervention trials involve a delay in starting the intervention for some participants, until sufficient numbers are available to start a group. There was no evidence that the timing of consent, relative to randomisation, affected post-randomisation attrition, which was a matter of concern for all trial teams. Group facilitator attrition was common in studies where facilitators were employed by the health system rather than by the grant holder and led to the early closure of one trial; research sites responded by training ‘back-up’ and new facilitators. Trials specified that participants had to attend a median of 62.5% (range 16.7%–80%) of sessions in order to receive a ‘therapeutic dose’; a median of 76.7% (range 42.9%–97.8%) received a therapeutic dose. Across the five trials, 75.3% of all sessions went ahead with a group size outside the pre-specified ideal range. A variety of methods were used to assess the fidelity of group interventions at a group and individual level across the five trials.


This is the first paper to provide an empirical basis for planning group intervention trials. Investigators should expect delays/difficulties in recruiting groups of the optimal size, plan for both facilitator and participant attrition, and consider how group attendance and group size affect treatment fidelity.

Trial registration

ISRCTN17993825 registered on 11/10/2016, ISRCTN28645428 registered on 11/04/2012, ISRCTN61215213 registered on 11/05/2011, ISRCTN67209155 registered on 22/03/2012, ISRCTN19447796 registered on 20/03/2014.


Included trials

JtD [1] Journeying through Dementia.

LM [2] Lifestyle Matters.

PLINY [3] Putting Life IN Years.

REPOSE [4] Relative Effectiveness of Pumps Over Structured Education.

STEPWISE [5] STructured lifestyle Education for People WIth SchizophrEnia.


Group interventions in healthcare

Group interventions are used as an alternative, or in addition to, interventions delivered to individuals in healthcare [6, 7] and involve an intervention delivered to small groups of people by one or more group leaders rather than to individuals; this includes activity, support, problem-solving/educational and psychodynamic groups, but does not include task or work groups or large education groups [8]. Originally focused on mental health recovery [6], they now often also address behaviour change, peer support, self-management and/or health education [7].

Group interventions can present opportunities for cost savings by treating more than one person at the same time. In addition, advocates of group interventions have proposed mechanisms of action important for behaviour change that arise from being in a group and are not present in individual therapies, such as inter-personal change processes, universalisation, social comparison, social learning and modelling [6, 7, 9, 10]. The role of group process and dynamics in these mechanisms is contested, with some believing that these mechanisms of action can be triggered by individual–therapist interaction [11] and others proposing that the group aspect is an essential part of the intervention [12].

Mixed evidence exists for the effectiveness of group interventions. Group interventions improve health outcomes compared to individual therapy in smoking cessation [13], breastfeeding [14] and weight management [15, 16]; compared to usual care or no intervention in diabetes [17]; and, are equally effective as individual therapy in obsessive-compulsive disorder [18].

Clinically effective group interventions do not always lead to anticipated cost savings compared to individual treatments, with trade-offs between numbers of patients treated and the duration or quality of the programmes [19, 20]. Compared with an individual modality, cognitive behavioural therapy for insomnia [21] and weight management [15] groups were found to be cost-effective, whereas smoking cessation groups were not [13]. Particularly in mental health, there is some concern that the cost-effectiveness of group interventions compares poorly with one-to-one therapy [22,23,24,25,26]. It is also said that certain populations may not be suited to group therapy, including those with communication problems, disruptive behaviour or co-morbidities that make it hard to relate to other group members [25].

Group interventions in healthcare tend to be small groups which involve interaction between members [8]. Small groups are said to move through five stages: the establishment of ground rules; conflict; cohesion; structure supportive of task performance; and termination [7, 27, 28]. This staged development is sometimes used as an argument for closing group membership after initial sessions, notwithstanding member attrition, which is common [29]. Optimal group size for group interventions is said to depend on the type and duration of therapy, as well as the target population. There is broad consensus that the ideal group size is 7–8 members, with a range of 5–10 members [6, 30,31,32,33]. Groups with five or more members allow the formation of meaningful relationships [34] and cohesive group functioning [6]. Although some maintain that therapeutic benefit can be derived in groups with < 5 members [35, 36], there is evidence that with < 5 members, interaction, group identity, attendance and group image are poor [6, 37]. Upper limits to group size may depend on how many people a therapist can practically manage [38], but it has been found that fewer verbal interrelationships occur [33] in groups with > 8 members, and social fission [39] and conflict [40] are more common in larger groups.

Evaluation of group interventions

In addition to well-documented statistical concerns around therapist effects and clustering [41], a number of approaches to evaluating group interventions have been proposed. Recognising that the design, evaluation and reporting of group interventions require additional information to that which is routinely collated for individual interventions, Hoddinott and colleagues developed a framework [19] to supplement the Medical Research Council (MRC) guidance on complex interventions [42]. For instance, in addition to the intervention content and theory, which would be the same in one-to-one delivery, documentation of group membership and maintenance processes (planning, setting up, organising and sustaining the group), as well as the leader/member attributes, is pivotal to understanding how the intervention works. Borek and colleagues developed a checklist for the reporting of group-based behaviour change interventions and a framework detailing the mechanisms of action for group interventions, which helps researchers describe intervention design and content, participants and facilitators, and determine the mechanisms of action present in group interventions [10, 43].

This paper is intended as a supplement to these developments and outlines practical challenges to the implementation of group-based therapies in randomised controlled trials (RCTs). The data provide a ‘reference-class’ – data from past, similar projects which can be used for forecasting [44]. Researchers can use reference class data to plan and manage trials as well as forecast contingencies related to: participant recruitment, randomisation and attrition; the demand and supply aspects of intervention delivery; therapeutic dose; group size; and process evaluation.

The aim of the present paper is to provide practical guidance to the implementation of group-based intervention randomised trials based on previous experience of five group intervention trials conducted by the Sheffield Clinical Trial Research Unit (CTRU).


The primary objective is to present reference class data specific to group intervention trials on participant recruitment and attrition, facilitator training and attrition, group attendance, therapeutic dose and group size.

The secondary objectives are to provide explanations and potential solutions for problems observed in group intervention trials which are substantively different to those observed in studies of individual-level interventions.


Case studies

A case-based approach was adopted to present the challenges of implementing group interventions in five RCTs [1,2,3,4,5] (Table 1) managed by Sheffield CTRU [45] – a UK Clinical Research Collaboration (UKCRC)-registered clinical trials unit managing phase III RCTs of a range of interventions across varied research areas. The CTRU has managed a number of evaluations of complex interventions, including five completed group intervention trials.

Table 1 Details of case studies

Data were collated from trial reports and journal articles, from the trial data held in Sheffield CTRU and from the study managers; descriptive statistics are presented.

Of the included trials, one was cluster-randomised [4] and all others were individually randomised. Lifestyle Matters [2] (LM) was a two-centre trial assessing a psychosocial group intervention to promote healthy ageing in adults aged ≥ 65 years with reasonable cognition. Putting Life IN Years [3] (PLINY) was a single-centre RCT that aimed to evaluate a group telephone-befriending intervention to prevent loneliness in adults aged ≥ 75 years with reasonable cognition. Relative Effectiveness of Pumps Over Structured Education [4] (REPOSE) was an eight-centre cluster RCT assessing an existing group educational course for use with multiple daily injections compared to the same intervention adapted for use with a pump for adults aged ≥ 18 years with type 1 diabetes. The STructured lifestyle Education for People WIth SchizophrEnia [5] (STEPWISE) RCT ran in 10 mental health organisations and evaluated a group structured weight management lifestyle education intervention in adults aged ≥ 18 years with schizophrenia, schizoaffective disorder or first episode psychosis. Journeying through Dementia [1] (JtD) was a 13-centre RCT assessing a group intervention designed to support people in the early stages of dementia to maintain independence. All trials took place in the UK.

Various methods for recruitment were used in these trials and some studies used more than one method [1,2,3], including: mail-outs via general practitioners (GPs)/NHS care teams [1,2,3,4]; mail-outs to the research cohort [1, 3]; referrals via NHS care teams [1, 4, 5]; and self-referral [1, 2].

Individual randomisation was used in four of the trials [1,2,3, 5] and cluster randomisation [4] was used in one. Randomisation was delayed from the point of consent in two trials [1, 4] to ensure that the groups were filled and could be run in the time frame required. Follow-up data collection was anchored to the time of randomisation in four of the trials [1,2,3, 5] and to the commencement of the first group in one trial [4].

All groups ran for more than one session: one group intervention [4] took place on five consecutive days, all other included studies had weekly sessions over 4–16 weeks, and all of the studies had sessions in addition to the main group intervention. All included interventions were delivered face-to-face, except for one which was a telephone-befriending group [3]. A variety of people facilitated the group sessions in the trials, such as NHS staff [1, 2, 4, 5] and volunteers [3]; all received structured training in the group intervention and collected research data on attendance at group sessions. At least two facilitators delivered all of the face-to-face interventions, and one person delivered the intervention via telephone in PLINY [3].

All included studies used some aspect of treatment fidelity assessment: direct observation [1, 4, 5] or recording [2, 3] of a session using a checklist; self-report by facilitators using a checklist [1] in addition to observation; and assessment of facilitator–participant interaction [5]. In addition, training fidelity was assessed in three trials by two researchers either by direct observation [1, 2] or using audio recordings [3] of training sessions.

Many of the elements discussed above are relevant to RCTs in general and to RCTs of complex interventions but some need particular consideration in relation to group interventions. The type and timing of recruitment and randomisation are particularly important as these will dictate when the group sessions can be arranged and how much time there is to train facilitators. Practical arrangements for group sessions will be affected by the population [46], group size, type and length of training, the mode of group delivery and who the facilitator is.


Participant recruitment and attrition

Table 2 shows the number of individuals approached and recruited for each trial. Four studies recorded data on the numbers invited to screen for eligibility and the associated response rate: 4.1% (LM [2]); 2.9% (PLINY [3]); 69.2% (REPOSE [4]); and 7.1% (JtD [1]). In REPOSE [4], acute care teams targeted people with type 1 diabetes, compared with the other studies in which GPs sent out mass mail-outs. LM [2], PLINY [3] and STEPWISE [5] were also prevention trials rather than treatment trials, which have been shown to be harder to recruit to [47]. The proportion of those screened providing consent is higher for trials using initial GP mass mail-outs than for other trials; it is lowest in STEPWISE [5], which recruited participants with schizophrenia, a population that can be difficult to recruit to trials [48].

Table 2 CONSORT data

Setting group dates

The trials had different approaches to setting the days and times for the group sessions. Because the intervention was also used outside of the trial, REPOSE set the dates in advance of participant recruitment, patients knew when the groups were at the time of consent, and the courses were randomised once the required numbers were met (usually a minimum of five participants per group). LM [2] set provisional dates or windows for the group sessions but finalised the times and dates with the participants once group numbers were met. STEPWISE [5] asked sites to block book consent visits (where practical) and to set course dates in advance, which delayed consent for some participants; sites decided how they would implement this. The purpose was to minimise post-randomisation attrition, ensure follow-up occurred after intervention delivery and optimise group size. JtD [1] commenced without pre-planning the dates for the intervention but, as the trial progressed, the trial team advised sites to set the dates before consent and many did so. Although these dates sometimes changed, the trial team ensured that any moved dates were at the same time and on the same day of the week to increase the possibility of attendance. PLINY [3] did not pre-plan timing for the groups and relied on the service provider to set the date once the group had been recruited. As only one trial explicitly set the dates before randomisation, we cannot explore the impact of these differences in our data.


Attrition of participants between consent and randomisation occurred where randomisation was delayed, as can be seen in the data for REPOSE [4] (n = 4) and JtD [1] (n = 40). Although randomisation was not designed to be delayed in STEPWISE, there was some attrition between consent and randomisation (n = 9). The reasons for this were withdrawal of consent (n = 4), mental health deterioration (n = 4) and surgery (n = 1), which suggests that in practice there was a delay in randomising after consent [5]. The percentage of those attending at least one group session appears unaffected by the timing of randomisation or by when the days and times of the group sessions were set.

We have found that maintaining contact with participants between any of these stages can reduce attrition while they are waiting for randomisation or for group sessions to be arranged [49, 50]. In LM, once randomised, facilitators contacted the participants allocated to the intervention arm to introduce themselves and start discussing possible dates/times for the next group meeting. The participant would then be aware of timings including how long it might be to get a group started; they would also arrange the first one-to-one session with the participant to start relationship building. The facilitators maintained this contact while waiting for the group intervention to start. Another challenge that arose from delayed randomisation related to follow-up: when groups of people were randomised at the same time and follow-up was anchored to randomisation, all of the group members needed to be followed up at the same time point.

Table 3 shows the recruitment rate by site and by month for each trial; this is a crude estimate as we have assumed all sites were open for the whole recruitment period, which is rarely the case. The median (range) recruitment rate for all included studies is 3.2 (1.7–21.0) participants per site per month.

Table 3 Recruitment rates
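
The crude rate calculation described above can be sketched as follows. The figures here are hypothetical, not those of the included trials, and the calculation assumes every site was open for the whole recruitment period, as in the text:

```python
# Crude recruitment rate: participants / (sites x months open), assuming every
# site was open for the whole recruitment period (hypothetical figures).
from statistics import median

trials = {  # trial: (participants randomised, sites, months recruiting)
    "A": (145, 2, 20),
    "B": (248, 8, 14),
    "C": (414, 10, 13),
}

rates = {t: n / (sites * months) for t, (n, sites, months) in trials.items()}
overall = median(rates.values())  # summary rate across trials
```

Recording site opening dates would allow the denominator to use actual site-months rather than this upper bound.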

Participant demand and facilitator supply

With group interventions, the planned (and actual) recruitment rate needs to be linked to the delivery of the intervention so that enough people are randomised to a group without having to wait too long to start the sessions in order to reduce attrition. This should be forecast in the early stages of RCT design to ensure an accurate schedule for the whole trial, taking into account facilitator training, room booking and other practical aspects of delivery. Training varied in intensity (See Table 1 for details), with the training for REPOSE [4] being the most intensive although, unlike in other trials, facilitators were trained before and independently of the research programme.
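
As a back-of-envelope illustration of linking the recruitment rate to group start-up, the numbers below are hypothetical and the sketch assumes a steady recruitment rate at a single site:

```python
# Planning sketch: at a steady rate of r participants per site per month, one
# site takes roughly g / r months to fill a group of size g, so the first
# participant consented waits about that long before sessions can begin.
def months_to_fill_group(group_size: int, rate_per_month: float) -> float:
    """Approximate months for a single site to recruit a full group."""
    return group_size / rate_per_month

wait = months_to_fill_group(group_size=8, rate_per_month=3.2)
```

Even this crude estimate makes clear that slow-recruiting sites impose long pre-group waits on early consenters, which is when attrition-reducing contact (as used in LM) matters most.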

Facilitator training

Attrition and replacement of trained facilitators should be anticipated. Apart from LM [2], studies where facilitators were trained solely for the research had some attrition of facilitators, and both STEPWISE and JtD had to run more training sessions than had initially been planned for the trial. Although LM [2] did not experience facilitator attrition, one of the facilitators had a period of sick leave and their sessions were covered by the chief investigator and another person who required facilitator training. Recruitment of facilitators can also present difficulties. In JtD [1], the facilitators were supposed to be provided by the NHS trusts, which often filled these roles with R&D staff as other staff could not be recruited. PLINY [3] did not manage to recruit the required number of volunteers to deliver the intervention (Table 4).

Table 4 Facilitator training and delivery

PLINY case study: facilitator supply did not meet participant demand

The PLINY [3] trial had to be stopped prematurely as there were not enough facilitators to deliver the intervention. PLINY [3] and the service providers (facilitators) planned to have seven groups of at least six participants, with staggered start dates so that all groups were running concurrently by week 16. The start of recruitment was delayed from May 2012 to June 2012 and an increased mail-out was required in October 2012 in order to achieve the recruitment target. This successful recruitment strategy meant there were randomised participants (demand) that required group sessions to be delivered (supply); in this case, supply did not match the demand.

PLINY [3] was particularly vulnerable to poor supply–demand matching. Funding for the training and hosting of facilitators sat outside of the University research team, as demanded by the excess treatment cost system – a peculiarity of UK NHS R&D funding [51,52,53,54,55]. Notwithstanding contractual obligations to a research project, if a service provider has other priorities, the research team have little leverage. In LM [2] and other trials where facilitators were funded through research grants and employed by the research project, we have observed efficient supply–demand profiles, despite the common problems in participant recruitment.

Figure 1 shows the availability of facilitators against the demand for group sessions. Experienced volunteer coordinators provided induction and supervision, and an experienced external trainer provided formal group facilitation training to facilitators so that the group intervention could be delivered to the target number of participants (n = 124). Funding was secured from a national charity to do this, which meant that only local branches of that charity could deliver the intervention, rather than the range of service providers originally planned. Recruitment, training and supervision of facilitators was therefore the community organisation’s contracted responsibility; they were in close contact with the trial team and were informed of participant recruitment numbers during the trial. Of the 42 volunteers who expressed an interest in delivering the group intervention, 10 completed the training and only three delivered the group sessions; the mean time a volunteer stayed with the project after they had been trained was 62 days (range 12–118).

Fig. 1
figure 1

Participant demand, supply of facilitators and group delivery graph for PLINY

Therapeutic dose

The ‘therapeutic dose’ necessary for a change to occur in complex interventions may be related to certain criteria being delivered rather than the number of sessions attended [56]. However, a ‘therapeutic dose’ relating to attendance is often agreed upon in trials to define the per-protocol population. In our experience, this has been decided through consensus of the trial management groups and the trial steering committees for each trial. Table 5 shows that the ‘therapeutic dose’ in our trials was an attendance rate in the range of 28.6%–80% of the planned sessions.

Table 5 Number of sessions attended and numbers achieving therapeutic dose

Across five group therapy programmes, the median percentage of participants receiving a ‘therapeutic dose’ was 76.7% (range 42.9%–97.8%). REPOSE [4], a treatment trial in which the course ran on five consecutive days, was the most successful at achieving the defined therapeutic dose (97.8%) and at achieving attendance at all sessions (93.6%). Participant motivation to attend group interventions may be related to the motivation to enrol in research and may therefore be higher for treatment trials than for prevention trials [47]. However, JtD, a treatment trial, did not achieve the high ‘therapeutic doses’ of REPOSE and STEPWISE, and only REPOSE had > 50% of participants attending all sessions. In addition, participants usually had to take a week off work to ensure attendance at all group sessions for REPOSE [4]. For groups that ran weekly for several weeks, availability may have been more difficult and the time between sessions may have led to a change in motivation or willingness to attend. This can be seen in STEPWISE, where total attendance at the group sessions reduced each week (144 participants attended their week 1 session, 138 attended weeks 2 and 3, and 131 attended week 4). Booster sessions were held 4, 7 and 10 months after randomisation and had fewer attendees than the foundation group sessions (100, 89 and 90, respectively).
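
A per-protocol attendance threshold of this kind can be computed from an attendance register along these lines. This is a minimal sketch with hypothetical participants and data; each trial's threshold and register structure will differ:

```python
# Sketch of per-protocol 'therapeutic dose' attainment from an attendance
# register; the register, participant IDs and threshold here are hypothetical.
def dose_attainment(attendance: dict[str, list[bool]], threshold: float) -> float:
    """Fraction of participants attending at least `threshold` of planned sessions."""
    achieved = sum(
        1 for sessions in attendance.values()
        if sum(sessions) / len(sessions) >= threshold
    )
    return achieved / len(attendance)

register = {
    "p01": [True, True, True, False],    # 75% of sessions attended
    "p02": [True, False, False, False],  # 25%
    "p03": [True, True, True, True],     # 100%
}
rate = dose_attainment(register, threshold=0.625)
```

Keeping the register per session (rather than a single count per participant) also permits the week-by-week attendance breakdown reported for STEPWISE.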

Group size

Table 6 presents the ideal and actual group sizes for each group intervention.

Table 6 Group sizes

A total of 45 of 840 (5.3%) planned sessions could not go ahead because only one participant, or none, turned up, so a group session could not be delivered. All studies ran groups outside the ideal size range identified for their intervention, with the majority of sessions running with fewer than the ideal number (619/826 sessions, 74.9%); STEPWISE [5] ran some groups with more than the ideal number (3/826 sessions, 0.4%). REPOSE [4] achieved the ideal group size in 78.3% of cases, whereas all other trials achieved the desired group size in < 60% of sessions (median 33.4%). In addition to being a treatment trial that ran daily for one week, REPOSE [4] delayed randomisation until there were sufficient numbers to meet the required group size and, in the early stages, allowed non-participants to join the usual care arm to maintain group size and dynamics. When one group was too small in JtD [1], additional participants were allowed to join the group for the second session so that the ideal group size was met. All included studies involved the monitoring of metrics such as recruitment, attrition and intervention adherence; there was therefore the opportunity to ensure the ideal group size, for example by combining small groups or adding new members, but only one trial team opted for the addition of new members. In our experience, investigators are often reluctant to add new members to group interventions after initiation as it may affect the group dynamics, and, if the intervention is time-limited, new participants would not have the opportunity to receive the whole course.

Process evaluation

Process evaluations are often conducted in trials of complex interventions in order to find out what (if any) elements of the intervention are effective, in what circumstances and for whom [57, 58]. For group interventions, the process evaluation should determine if and why people respond differently to the same group sessions. Process evaluation has a number of components – context; reach; dose delivered; dose received; fidelity; implementation; and recruitment [57] – which can all impact on the effectiveness of the intervention. Four of our trials [1,2,3, 5] included a formal process evaluation based on these fidelity components and also used the MRC framework on the evaluation of complex interventions [42]; three of these trials [2, 3, 5] were designed before the publication of the MRC Process Evaluation Guidance [58]. All trials collected data on the trial population, which provides data relating to reach and recruitment, but only three trials used these data as part of a formal process evaluation. LM found that the intervention was delivered correctly and was tailored to groups, but reach and recruitment were issues that led to the intervention not being effective, as the participants may not have been at a stage where the intervention would have helped them. STEPWISE found reach and recruitment to be acceptable but fidelity to the intervention was incomplete. As previously discussed, PLINY [3] experienced issues with implementation due to facilitator attrition, which relates to reach, dose delivered and dose received, but the fidelity assessments also identified issues with delivery and receipt of treatment.

Table 7 details the fidelity strategies and assessments used in the trials, apart from in relation to design, as all five trials fully described the interventions in the protocol, including the programme theory where applicable. The programme theory determines the important aspects for the process evaluation and, for group interventions, will include group-specific processes. All trials standardised training and intervention materials as a strategy for training fidelity. All trials assessed treatment fidelity at a group rather than an individual level, using checklists to determine what was delivered by the facilitator. These checklists assessed the delivery of the intervention to the whole group and whether the members took part as intended. They often included questions asking whether the group leader was able to facilitate group processes such as peer exchange, mutual support, group cohesion, group engagement and group goals.

Table 7 Fidelity elements included in the trials [59]

STEPWISE [5] used an observation tool during direct observation of sessions to assess a group specific process—the interaction between the facilitator and the participants, as this was considered a key component of the group intervention. The checklists used for assessing treatment delivery fidelity for STEPWISE [5] also included elements relating to the receipt of the intervention and enactment of skills while in the group session.

All included trials conducted some qualitative research that covered acceptability or satisfaction for a subset of participants and facilitators; STEPWISE [5] also explored implementation using Normalisation Process Theory (NPT) [59] and interviewed the intervention developers to inform the process evaluation. In addition, all studies used the qualitative research undertaken with participants to assess fidelity in terms of the receipt of the intervention, with LM [2], REPOSE [4], STEPWISE [5] and JtD [1] also looking at enactment of skills.

Clustering concerns

Couple recruitment

LM [2] recruited 18 couples, which presented the study team with issues that are not well documented in the literature, though statistical concerns regarding the analysis of group interventions, or clusters, are well documented [60,61,62,63,64]. In LM [2], couples were randomised as a pair so that they received the same allocation, which reduces the risk of contamination between arms and is often preferred by paired participants [65]. If couples (or twins) are randomised to the same group, outcomes are likely to be more similar in this group than in others. To account for this, the statistical analysis of the LM outcome data used a multi-level mixed effects model [2]. JtD also allowed the inclusion of couples and stated at the outset that they would be randomised together, as in LM; one couple was recruited. The statistical analysis plan detailed the use of a multi-level mixed effects model if > 10 couples had been recruited, with the intervention as a top-level random effect and couples/singles as a lower-level random effect. There are two other potential solutions: average the couple’s continuous outcomes and treat them as one individual, or only collect outcome data on one member, the index member. Averaging outcomes across a couple produces a hybrid rather than an individual; the resulting data are difficult to fit into the baseline characteristics table, and categorical outcomes cannot be handled in the same way. Indexing is a simple solution, though decisions are required regarding how to choose the index member from the couple, and it is wasteful to discount one participant’s data when they are included in the research, especially when recruitment to trials can be difficult.

More than one facilitator

More than one facilitator may run a group during the intervention period. Two facilitators delivered LM, REPOSE, STEPWISE and JtD intervention sessions as standard. Additionally, if the group intervention runs for more than one session, the facilitator may (and often did) change during the course for a number of reasons. For example, in LM, one facilitator was sick for a number of weeks and two other facilitators covered the group sessions that they missed: four different people (in three combinations of pairs) delivered the intervention to one group of participants. This creates a problem for those wishing to conduct fidelity analyses. In principle, the effect of therapists can be modelled either by using the therapist identifier as a fixed effect in the statistical model or by characterising therapists in terms of experience. However, where there is more than one therapist per group, it is difficult to identify a therapist effect on an individual participant’s outcome – analysts soon require degrees of freedom which are unavailable from trial samples. Instead, it is common to analyse group interventions using a random effect; doing so does not attempt to explain variation in terms of the participants or the facilitators but rather says that outcomes for individuals in the same group are more similar than for individuals across two different groups. This allows each group (rather than each facilitator) to have different outcomes and acknowledges that facilitators are only one part of this [66]. Nevertheless, the theory of a group effect was not borne out in REPOSE and STEPWISE, where the estimated clustering effects were zero.
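
The intuition that outcomes for individuals in the same group are more similar can be quantified as an intraclass correlation (ICC). The following is a minimal sketch using a one-way ANOVA estimator on hypothetical outcome data; in practice a mixed model (e.g. via standard statistical software) would be fitted to the trial data:

```python
# Intraclass correlation via one-way ANOVA, ICC(1), for equal-sized groups;
# values near 0 imply no clustering by group (hypothetical outcome data).
from statistics import mean

def anova_icc(groups: list[list[float]]) -> float:
    """Between-group variance as a share of total variance."""
    k = len(groups)          # number of groups
    n = len(groups[0])       # members per group (assumed equal)
    grand = mean(x for g in groups for x in g)
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

icc = anova_icc([[5.0, 6.0, 5.5], [8.0, 7.5, 8.5], [5.2, 5.8, 6.1]])
```

An estimated ICC near zero, as observed in REPOSE and STEPWISE, means the group-level random effect explains essentially none of the outcome variance.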


Principal findings

Participant recruitment and attrition

We have presented the recruitment and attrition rates for our group intervention trials so that future investigators can use them to forecast recruitment in trials of group interventions for similar populations and settings. Recruitment to our group intervention trials was higher than has been reported for individually randomised trials (which may include group interventions) [67], suggesting that recruitment to group intervention trials may be easier than recruitment to individual intervention trials, though comparing recruitment rates across a range of interventions, disease areas and settings is problematic as there are a multitude of factors involved.
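A minimal forecasting sketch using the kind of per-site monthly rate reported for these trials (a median of 3.2 participants per site per month): the target group size of 8 and the 10% pre-group attrition allowance are hypothetical planning inputs, not trial data.

```python
# Sketch: months needed to consent enough participants to start one
# group per site, over-recruiting to allow for expected pre-group
# attrition. Inputs other than the recruitment rate are assumptions.
import math

def months_to_fill_group(group_size, rate_per_site_month, n_sites=1,
                         attrition=0.0):
    """Months of recruitment until, after expected pre-group attrition,
    enough consented participants remain to start a group at each site."""
    needed = group_size / (1.0 - attrition)  # over-recruit for attrition
    return math.ceil(needed / (rate_per_site_month * n_sites))

# e.g. one site recruiting 3.2/month, 8-person groups, 10% attrition
m = months_to_fill_group(8, 3.2, n_sites=1, attrition=0.10)
```

This makes explicit the delay the abstract notes: some participants wait months between consent and their first session while the group fills.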

A key factor in designing RCTs assessing group interventions is the timing of the various steps required before a participant attends a group session – consent, randomisation and setting dates for the group sessions. There is insufficient evidence from our trials to show that the timing of consent and randomisation affects the rate of attrition before initiation of groups. Attrition before randomisation may be preferred to post-randomisation attrition to maintain statistical power. Delaying randomisation could reduce the time between randomisation and group initiation, therefore reducing the waiting time for participants and the potential for post-randomisation attrition. However, the two trials that delayed randomisation experienced a similar level of post-randomisation attrition to two of the trials that randomised at the point of consent. Attrition also appears unaffected by the point at which the dates for the group sessions are decided, but the timing of setting dates may affect recruitment and attrition in a way not captured by our data. Knowing the dates (or even just the day and time) of the groups before consent could, in theory, reduce recruitment as potential participants may not be able to attend on those dates, but it should in turn reduce attrition after consent as they have already checked their availability.

Delaying randomisation also has implications for the capacity of those collecting data, as many participants may need to be followed up at the same time.

Facilitator training and attrition

Sustaining the delivery of group sessions is affected by facilitator attrition and the ability to train new facilitators. We have provided evidence that facilitator attrition should be expected in group intervention trials and that training sessions should be planned accordingly, throughout the trial. As two facilitators are often required to deliver group interventions, facilitator attrition may have a bigger impact on group intervention trials than on trials assessing individual interventions, which usually have only one person delivering the session. Centres attempted to address facilitator attrition and absence either by having ‘back-up’ facilitators or by training new facilitators. In one case where this was not possible [3, 68], the trial was stopped prematurely.

When designing RCTs of group interventions, consideration should be given to who will deliver the intervention and how this is funded, as both may affect implementation.

Therapeutic dose

Across the five trials, participants had to attend a median of 62.5% (range 16.7%–80%) of sessions in order to have received a ‘therapeutic dose’; a median of 76.7% (range 42.9%–97.8%) of participants received the ‘therapeutic dose’. These figures can be used to help future investigators determine a per-protocol population for group intervention trials, bearing in mind that the threshold will vary according to the intervention and its mechanisms of action. In general, setting the bar low for a therapeutic dose meant that more people received it, though the threshold chosen may influence the apparent effectiveness of the intervention and should be considered in any process evaluation and analysis.
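Deriving a per-protocol population from attendance records can be sketched as below, assuming the threshold is expressed as a fraction of scheduled sessions (the 62.5% default echoes the median reported above); the attendance data are invented. Lowering the threshold admits more participants, illustrating the 'setting the bar low' point.

```python
# Sketch: which participants received the 'therapeutic dose', given a
# threshold fraction of scheduled sessions. Data are illustrative.
def per_protocol(attendance, n_sessions, threshold=0.625):
    """attendance: dict of participant id -> sessions attended.
    Returns the set of ids meeting the therapeutic-dose threshold."""
    minimum = threshold * n_sessions
    return {pid for pid, attended in attendance.items()
            if attended >= minimum}

attendance = {"p1": 8, "p2": 5, "p3": 4}     # out of 8 scheduled sessions
dosed = per_protocol(attendance, n_sessions=8)        # 62.5% bar
dosed_low_bar = per_protocol(attendance, 8, threshold=0.5)  # 50% bar
```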

Group size

All studies ran group sessions that were outside the pre-specified ideal size range: across the five group interventions, 74.9% of all sessions ran with fewer than the ideal number of participants and 0.4% ran with more. The group intervention delivering treatment daily for a week was the most successful at meeting the ideal group size; the trial whose intervention sessions were furthest from the point of randomisation, and furthest apart in time (booster sessions in STEPWISE), was the least successful and had the lowest average group size. This suggests that the duration and spacing of the intervention may be important in maintaining group membership, which in turn affects how many individuals attend all sessions or the number of sessions chosen to define the per-protocol population.

Two trials responded to small group sizes: one added new participants in the second week and one allowed non-participants to join the groups. These approaches, along with merging small groups, are potential solutions to smaller-than-ideal groups, though their use will depend on the intervention and on which elements of group processes are important [7].

Process evaluation

By nature, group interventions are complex interventions and participants can have different outcomes even if they have received the same intervention delivered by the same facilitator. Process evaluations should be conducted alongside group intervention evaluations to provide information on when the intervention might succeed or when it might fail. Aspects of process evaluation can be assessed at a group or individual level, though current guidance assumes interventions work at an individual level. At the group level, quantitative process data, such as data on non-recruited patients and attendance (recruitment, reach and dose delivered), can be collected, and elements of fidelity, such as treatment receipt and enactment, can be built into quantitative checklists. At the individual level, receipt and enactment can be investigated using qualitative methods.

Some group-specific processes may require a particular group size, a minimum number of sessions attended, or certain criteria to be delivered during the sessions. The recently published mechanisms of action in group-based interventions (MAGI) framework [10] may help investigators to identify the group-specific processes essential to the success of a group intervention, which should then be used to inform the process evaluation.

Clustering issues

We have highlighted two potential issues relating to clustering that may arise in the sample size estimation and the analysis of group interventions: the inclusion of couples and the delivery of the intervention by multiple therapists. Both should be accounted for in sample size calculations or in the interpretation of the findings.
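One standard way such clustering enters a sample size calculation is through the design effect, 1 + (m − 1) × ICC, where m is the group size. A minimal sketch, with an illustrative target sample size and an assumed ICC (these numbers are not drawn from the five trials):

```python
# Sketch: inflate an individually powered sample size for clustering
# induced by group delivery, via the design effect 1 + (m - 1) * ICC.
# The inputs below are illustrative assumptions, not trial values.
import math

def inflate_for_clustering(n_individual, group_size, icc):
    """Return the sample size after applying the design effect,
    rounding up to whole participants."""
    deff = 1 + (group_size - 1) * icc
    # round before ceil to avoid floating-point artifacts such as
    # 228.00000000000003 ceiling up to 229
    return math.ceil(round(n_individual * deff, 9))

# e.g. 200 participants powered individually, groups of 8, assumed ICC 0.02
n = inflate_for_clustering(200, 8, 0.02)
```

If the observed clustering is zero, as in REPOSE and STEPWISE, the design effect is 1 and no inflation is needed; the cost of assuming a modest ICC up front is the insurance premium against the cases where it is not.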

Challenges and solutions for group intervention implementation

Table 8 presents the challenges and potential solutions to the implementation of group interventions in RCTs.

Table 8 Challenges and potential solutions to the implementation of group interventions

Strengths and limitations

The data presented here provide a reference class [44, 69] that researchers can use to plan/manage trials and forecast contingencies. This is valuable as CONSORT diagrams tend to under-report activity before randomisation [47]. Using a case-based approach to explore the experiences of implementing group interventions in trials is appropriate and provides useful data from a range of trials. However, the corpus represents one CTRU’s experience and, while it covers a wide range of clinical and geographic contexts, the settings, roles, interactions and relationships [70] associated with each trial inevitably affect outcomes in ways not captured by our dataset. For instance, our sample of group intervention trials is weighted towards prevention [2, 3, 5] rather than therapy [1, 4], which are known to have different recruitment dynamics [47], possibly due to differences in motivation to attend and engage [71,72,73].


Those planning group intervention trials should consider demand forecasting procedures, as are used in clinical settings characterised by surges and slumps [74,75,76]. Anecdotal testimony from site staff and trial managers suggests that maintaining contact with participants during recruitment and follow-up stages helps to reduce attrition from research and intervention protocols [49]. Post-randomisation exclusions should be avoided [77] but if randomisation is delayed to reduce the attrition after randomisation [78], then trialists should be aware of the possibility of attrition between consent and randomisation.

Thought should be given to the selection and justification of the therapeutic dose and how this may be affected by the number of sessions and group size. As it is unlikely that complex interventions are characterised by linear dose-response patterns [79], trialists should reflect on whether the idea of a ‘therapeutic dose’, proposed by some process evaluators [57], is a useful one. Those retaining session delivery/receipt as an index of ‘therapeutic dose’ should consider how the level at which it is set affects the number of people who will achieve it; the same is true for fidelity assessment based on satisfying a threshold number of criteria. Guidance on process evaluation [80] currently assumes interventions work at an individual level, so constructs may require adaptation in group intervention trials: recruitment and ‘dose delivered’ can be assessed at the group level whereas ‘dose received’ can be assessed at the individual level; fidelity can be assessed at the group level (delivery) or the individual level (receipt and enactment of skills). Recently developed checklists and frameworks [10, 19, 43] for group-based behaviour change interventions can be used to aid the reporting and design of these interventions and to identify the relevant mechanisms of action, which should inform the associated process evaluation.

As attrition can affect fidelity, study design should include courses of action (group cessation, combination of two groups, membership replenishment, inclusion of non-research participants) for when, inevitably, group sizes drop below an acceptable threshold. As the group context and process are often said to ‘constitute the treatment intervention’ [12], investigators are often reluctant to replenish groups after member attrition, although this is common in many successful ‘open/rolling’ therapy groups [81], including some that have been the subject of trials [50]. Planning for therapist attrition can involve the properly resourced use of contracts, supervision and the training of back-up therapists [50].

The challenges discussed in this paper will vary depending on the population and disease area being studied and the type of group intervention being evaluated; they may be identified in a pilot or feasibility study implementing the intervention.

Further research

A threat to the implementation of cluster RCTs involving group interventions, not addressed in this paper, is the timing of cluster randomisation. To contain costs, investigators must work to reduce the time between ethical approvals and the set-up of participating centres. Research is needed on how contracting, the allocation of resources, staffing and training (which are not needed at all sites) can be expedited to allow rapid site initiation. Poor group composition due to errors in patient selection can result in disruption of therapy or participant attrition [82, 83]. Further work is required to understand how investigators can employ rational methods of participant allocation to therapy groups [83] in the context of cluster RCTs.


This paper provides a rational basis for planning group intervention trials, especially how to match the demand of research participants to the supply of trained group facilitators. Investigators need to consider how to time consent and randomisation to minimise post-randomisation attrition. They should plan for both facilitator and participant attrition and consider how group attendance and group size affect treatment fidelity. Further research is needed on the expedited set-up of sites in cluster randomised RCTs involving group therapies, as well as on appropriate baseline group composition and participant replenishment following attrition.

Availability of data and materials

Requests for patient-level data should be made to the corresponding author and will be considered by all authors. Although specific consent for data sharing was not obtained, data will be released on a case-by-case basis following the principles for sharing patient-level data described by Smith et al. [84]. The presented data do not contain any direct identifiers; we will minimise indirect identifiers and remove free-text data to reduce the risk of identification.



Abbreviations

CTRU: Clinical Trials Research Unit

IQR: Interquartile range

MRC: Medical Research Council

NHS: National Health Service

NIHR: National Institute for Health Research

NPT: Normalisation Process Theory

R&D: Research & Development

RCTs: Randomised controlled trials

ScHARR: School of Health and Related Research

SD: Standard deviation

UKCRC: UK Clinical Research Collaboration


  1. Wright J, Foster A, Cooper C, Sprange K, Walters S, Berry K, et al. Study protocol for a randomised controlled trial assessing the clinical and cost-effectiveness of the Journeying through Dementia (JtD) intervention compared to usual care. BMJ Open. 2019;9:e029207.


  2. Mountain G, Windle G, Hind D, Walters S, Keertharuth A, Chatters R, et al. A preventative lifestyle intervention for older adults (lifestyle matters): a randomised controlled trial. Age Ageing. 2017;46:627–34.


  3. Hind D, Mountain G, Gossage-Worrall R, Walters SJ, Duncan R, Newbould L, et al. Putting Life in Years (PLINY): a randomised controlled trial and mixed-methods process evaluation of a telephone friendship intervention to improve mental well-being in independently living older people. Public Heal Res. 2014;2:1–222.


  4. Heller S, White D, Lee E, Lawton J, Pollard D, Waugh N, et al. A cluster randomised trial, cost-effectiveness analysis and psychosocial evaluation of insulin pump therapy compared with multiple injections during flexible intensive insulin therapy for type 1 diabetes: The REPOSE Trial. Health Technol Assess (Rockv). 2017;21:1–277.


  5. Holt RIG, Gossage-Worrall R, Hind D, Bradburn MJ, McCrone P, Morris T, et al. Structured lifestyle education for people with schizophrenia, schizoaffective disorder and first-episode psychosis (STEPWISE): randomised controlled trial. Br J Psychiatry. 2019;214:63–73.


  6. Yalom ID, Leszcz M. The theory and practice of group psychotherapy. 5th ed. New York: Basic Books; 2005.


  7. Borek AJ, Abraham C. How do Small Groups Promote Behaviour Change? An Integrative Conceptual Review of Explanatory Mechanisms. Appl Psychol Heal Well Being. 2018;10:30–61.


  8. Montgomery C. Role of dynamic group therapy in psychiatry. Adv Psychiatr Treat. 2002;8:34–41.


  9. Corsini RJ, Rosenberg B. Mechanisms of group psychotherapy: Processes and dynamics. J Abnorm Soc Psychol. 1955;51:406–11.


  10. Borek AJ, Abraham C, Greaves CJ, Gillison F, Tarrant M, Morgan-Trimmer S, et al. Identifying change processes in group-based health behaviour-change interventions: development of the mechanisms of action in group-based interventions (MAGI) framework. Health Psychol Rev. 2019;13:227–47.


  11. Hill CE. Is individual therapy process really different from group therapy process? The jury is still out. Couns Psychol. 1990;18:126–30.


  12. Huebner RA. Group procedures. In: Chan F, Berven NL, Thomas KR, editors. Counseling theories and techniques for rehabilitation health professionals. New York: Springer; 2004. p. 244–63.


  13. Stead LF, Carroll AJ, Lancaster T. Group behaviour therapy programmes for smoking cessation. Cochrane Database Syst Rev. 2017;3:CD001007.


  14. Hoddinott P, Chalmers M, Pill R. One-to-One or Group-Based Peer Support for Breastfeeding? Women’s Perceptions of a Breastfeeding Peer Coaching Intervention. Birth. 2006;33:139–46.


  15. Paul-Ebhohimhen V, Avenell A. A systematic review of the effectiveness of group versus individual treatments for adult obesity. Obes Facts. 2009;2:17–24.


  16. Renjilian DA, Perri MG, Nezu AM, McKelvey WF, Shermer RL, Anton SD. Individual versus group therapy for obesity: Effects of matching participants to their treatment preferences. J Consult Clin Psychol. 2001;69:717–21.


  17. Deakin TA, McShane CE, Cade JE, Williams R. Group based training for self-management strategies in people with type 2 diabetes mellitus. Cochrane Database Syst Rev. 2005;2:CD003417.


  18. Fals-Stewart W, Marks A, Schafer J. A Comparison of Behavioral Group Therapy and Individual Behavior Therapy in Treating Obsessive-Compulsive Disorder. J Nerv Ment Dis. 1993;181:189–93.


  19. Hoddinott P, Allan K, Avenell A, Britten J. Group interventions to improve health outcomes: A framework for their design and delivery. BMC Public Health. 2010;10:800.


  20. Broome KM, Flynn PM, Knight DK, Simpson DD. Program structure, staff perceptions, and client engagement in treatment. J Subst Abus Treat. 2007;33:149–58.


  21. Bastien CH, Morin CM, Ouellet M-C, Blais FC, Bouchard S. Cognitive-Behavioral Therapy for Insomnia: Comparison of Individual Therapy, Group Therapy, and Telephone Consultations. J Consult Clin Psychol. 2004;72:653–9.


  22. Barrowclough C, Haddock G, Lobban F, Jones S, Siddle R, Roberts C, et al. Group cognitive-behavioural therapy for schizophrenia. Br J Psychiatry. 2006;189:527–32.


  23. Wykes T, Hayward P, Thomas N, Green N, Surguladze S, Fannon D, et al. What are the effects of group cognitive behaviour therapy for voices? A randomised control trial. Schizophr Res. 2005;77:201–10.


  24. Morrison N. Group Cognitive Therapy: Treatment of Choice or Sub-optimal Option? Behav Cogn Psychother. 2001;29:311–32.


  25. Whitfield G. Group cognitive–behavioural therapy for anxiety and depression. Adv Psychiatr Treat. 2010;16:219–27.


  26. Tasca GA, Mcquaid N, Balfour L. Complex contexts and relationships affect clinical decisions in group therapy. Psychotherapy. 2016;53:314–9.


  27. Tuckman BW. Developmental sequence in small groups. Psychol Bull. 1965;63:384–99.


  28. Tuckman BW, Jensen MAC. Stages of Small-Group Development Revisited. Gr Organ Stud. 1977;2:419–27.


  29. Harris PM. Attrition Revisited. Am J Eval. 1998;19:293–305.


  30. Erickson RC. Inpatient Small Group Psychotherapy: A Survey. Clin Psychol Rev. 1982;2:137–52.


  31. Weis J. Support groups for cancer patients. Support Care Cancer. 2003;11:763–8.


  32. Thorn BE, Kuhajda MC. Group cognitive therapy for chronic pain. J Clin Psychol. 2006;62:1355–66.


  33. Castore GF. Number of verbal interrelationships as a determinant of group size. J Abnorm Soc Psychol. 1962;64:456–8.


  34. Slavson SR. Are There “Group Dynamics” in Therapy Groups? Int J Group Psychother. 1957;7:131–54.


  35. Cohen SL, Cecil ED, Rice A. Maximising the therapeutic effectiveness of small psychotherapy groups. Group. 1985;9:3–9.


  36. Anderson TI. Small and unfilled psychotherapy groups: Understanding and using them effectively. Group. 1993;17:13–20.


  37. Fulkerson CCF, Hawkins DM, Alden AR. Psychotherapy Groups of Insufficient Size. Int J Group Psychother. 1981;31:73–81.


  38. Hollon SD, Shaw BF. Group Cognitive Therapy for Depressed Patients. In: Beck AT, Rush AJ, Shaw BF, Emery G, editors. Cognitive therapy of depression. New York: Guilford Press; 1979. p. 328–53.


  39. Hare AP. Handbook of small group research. New York: Free Press; London: Collier-Macmillan; 1962.

  40. Bond GR. Positive and negative norm regulation and their relationship to therapy group size. Group. 1984;8:35–44.


  41. Roberts C, Roberts SA. Design and analysis of clinical trials with clustering effects due to treatment. Clin Trials J Soc Clin Trials. 2005;2:152–62.


  42. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, et al. Developing and evaluating complex interventions: new guidance. London: MRC; 2008.


  43. Borek AJ, Abraham C, Smith JR, Greaves CJ, Tarrant M. A checklist to improve reporting of group-based behaviour-change interventions. BMC Public Health. 2015;15:963.


  44. Flyvbjerg B. Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice. Eur Plan Stud. 2008;16:3–21.


  45. CTRU - Design, Trials & Statistics - Sections - ScHARR - The University of Sheffield. Accessed 26 Sept 2019.

  46. Rhee H, Ciurzynski SM, Yoos HL. Pearls and Pitfalls of Community-Based Group Interventions for Adolescents: Lessons Learned from an Adolescent Asthma Camp Study. Issues Compr Pediatr Nurs. 2008;31:122–35.


  47. Cooper CL, Hind D, Duncan R, Walters S, Lartey A, Lee E, et al. A rapid review indicated higher recruitment rates in treatment trials than in prevention trials. J Clin Epidemiol. 2015;68:347–54.


  48. Cohen BJ, Mcgarvey EL, Pinkerton RC, Kryzhanivska L. Willingness and Competence of Depressed and Schizophrenic Inpatients to Consent to Research. 2004.


  49. Siddiqi A-A, Sikorskii A, Given CW, Given B. Early participant attrition from clinical trials: role of trial design and logistics. Clin Trials J Soc Clin Trials. 2008;5:328–35.


  50. Greenfield SF, Crisafulli MA, Kaufman JS, Freid CM, Bailey GL, Connery HS, et al. Implementing substance abuse group therapy clinical trials in real-world settings: Challenges and strategies for participant recruitment and therapist training in the Women’s Recovery Group Study. Am J Addict. 2014;23:197–204.


  51. Palmer R, Harrison M, Cross E, Enderby P. Negotiating excess treatment costs in a clinical research trial: the good, the bad and the innovative. Trials. 2016;17:71.


  52. Simmons T. Attributing the costs of health and social care Research & Development (AcoRD). London: Department of Health; 2012.


  53. Lenney W, Perry S, Price D. Clinical trials and tribulations: the MASCOT study. Thorax. 2011;66:457–8.


  54. Snooks H, Hutchings H, Seagrove A, Stewart-Brown S, Williams J, Russell I. Bureaucracy stifles medical research in Britain: a tale of three trials. BMC Med Res Methodol. 2012;12:122.


  55. Mackway-Jones K. Seeking funding for research. Emerg Med J. 2003;20:359–61.


  56. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ. 2004;328:1561–3.


  57. Linnan L, Steckler A. Process evaluation for public health interventions and research: an overview. In: Linnan L, Steckler A, editors. Process evaluation for public health interventions and research. 1st ed. San Francisco: Jossey-Bass; 2002. p. 1–23.


  58. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.


  59. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8:63.


  60. Flight L, Allison A, Dimairo M, Lee E, Mandefield L, Walters SJ. Recommendations for the analysis of individually randomised controlled trials with clustering in one arm - A case of continuous outcomes. BMC Med Res Methodol. 2016;16:1–13.


  61. Walters SJ. Therapist effects in randomised controlled trials: What to do about them. J Clin Nurs. 2010;19:1102–12.


  62. Firth N, Barkham M, Kellett S, Saxon D. Therapist effects and moderators of effectiveness and efficiency in psychological wellbeing practitioners: A multilevel modelling analysis. Behav Res Ther. 2015;69:54–62.


  63. Candlish J, Teare MD, Dimairo M, Flight L, Mandefield L, Walters SJ. Appropriate statistical methods for analysing partially nested randomised controlled trials with continuous outcomes: a simulation study. BMC Med Res Methodol. 2018;18:105.


  64. Campbell M, Walters S. How to design, analyse and report cluster randomised trials in medicine and health related research. Chichester: Wiley; 2014.


  65. Bernardo J, Nowacki A, Martin R, Fanaroff JM, Hibbs AM. Multiples and parents of multiples prefer same arm randomization of siblings in neonatal trials. J Perinatol. 2014;35:208–13.


  66. Ukoumunne OC, Gulliford MC, Chinn S, Sterne JA, Burney PG. Methods for evaluating area-wide and organisation-based interventions in health and health care: a systematic review. Health Technol Assess. 1999;3:iii–92.


  67. Walters SJ, Bonacho dos Anjos Henriques-Cadby I, Bortolami O, Flight L, Hind D, Jacques RM, et al. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme. BMJ Open. 2017;7:e015276.


  68. Mountain GA, Hind D, Gossage-Worrall R, Walters SJ, Duncan R, Newbould L, et al. “Putting Life in Years” (PLINY) telephone friendship groups research study: pilot randomised controlled trial. Trials. 2014;15:141.


  69. White D, Hind D. Projection of participant recruitment to primary care research: a qualitative study. Trials. 2015;16:473.


  70. Pfadenhauer LM, Mozygemba K, Gerhardus A, Hofmann B, Booth A, Lysdahl KB, et al. Context and implementation: A concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwes. 2015;109:103–14.


  71. Spilker B, Cramer JA. A frame of reference for patient recruitment Issues. In: Patient Recruitment in Clinical Trials. New York: Raven; 1991. p. 3–23.


  72. Stein REK, Bauman LJ, Ireys HT. Who enrolls in prevention trials? Discordance in perception of risk by professionals and participants. Am J Community Psychol. 1991;19:603–17.


  73. Cassileth BR. Attitudes Toward Clinical Trials Among Patients and the Public. JAMA. 1982;248:968.


  74. Nager AL, Khanna K. Emergency Department Surge: Models and Practical Implications. J Trauma Inj Infect Crit Care. 2009;67(Supplement):S96–9.


  75. Vanderby S, Carter MW. An evaluation of the applicability of system dynamics to patient flow modelling. J Oper Res Soc. 2010;61:1572–81.


  76. Watson SK, Rudge JW, Coker R. Health Systems’ “Surge Capacity”: State of the Art and Priorities for Future Research. Milbank Q. 2013;91:78–122.


  77. Schulz KF, Grimes DA. Sample size slippages in randomised trials: Exclusions and the lost and wayward. Lancet. 2002;359:781–5.


  78. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ. 1999;319:670–4.


  79. Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health. 2015;36:307–23.


  80. Bellg AJ, Resnick B, Minicucci DS, Ogedegbe G, Ernst D, Borrelli B, et al. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23:443–51.


  81. Holt RIG, Pendlebury J, Wildgust HJ, Bushe CJ. Intentional weight loss in overweight and obese patients with severe mental illness: 8-year experience of a behavioral treatment program. J Clin Psychiatry. 2010;71:800–5.


  82. Kealy D, Ogrodniczuk JS, Piper WE, Sierra-Hernandez CA. When it is not a good fit: Clinical errors in patient selection and group composition in group psychotherapy. Psychotherapy. 2016;53:308–13.


  83. Gans JS, Counselman EF. Patient selection for psychodynamic group psychotherapy: practical and dynamic considerations. Int J Group Psychother. 2010;60:197–220.


  84. Tudor Smith C, Hopkins C, Sydes MR, Woolfall K, Clarke M, Murray G, et al. How should individual participant data (IPD) from publicly funded clinical trials be shared? BMC Med. 2015;13:298.




We gratefully acknowledge all participants and staff who took part in the five included studies and the Chief Investigators of the projects: Gail Mountain (LM, PLINY, JtD), Simon Heller (REPOSE) and Richard Holt (STEPWISE). We are grateful to the independent Trial Steering Committees and Data Monitoring and Ethics Committees for their expertise and guidance on all five trials. We would like to thank Emily Turton (CTRU) for her assistance with the data for JtD.


REPOSE, STEPWISE and JtD were funded by the National Institute for Health Research (NIHR) Health Technology Assessment Programme (08/107/01, 12/28/05, 14/140/80) and PLINY was funded by NIHR Public Health Research Programme (09–3004-01). LM was funded by the Medical Research Council (MRC, grant number G1001406). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, the Department of Health or the MRC. There was no specific funding for the writing of this paper.

Author information

Authors and Affiliations



KB and DH wrote the first draft of this paper with input from RGW, KS, DW, JW, RC, KBe, DP, MB, SJW and CC. RGW managed STEPWISE, KS and RC managed LM, DW and DP managed REPOSE, and JW managed JtD; all provided detail and information regarding their experiences. All authors reviewed and commented on the manuscript drafts and approved the final version for submission.

Corresponding author

Correspondence to Katie Biggs.

Ethics declarations

Ethics approval and consent to participate

All trials discussed in this paper were ethically approved and participants provided written informed consent to take part in the included trials.

LM [2] - South Yorkshire Research Ethics Committee (reference 12/YH/0101).

PLINY [3] - South Yorkshire Research Ethics Committee (reference.

REPOSE [42] - North West, Liverpool East Research Ethics Committee (reference 11/H1002/10).

STEPWISE [5] - Yorkshire & the Humber - South Yorkshire Research Ethics Committee (reference 14/YH/0019).

JtD [1] - Yorkshire & the Humber - Leeds East Research Ethics Committee (reference number 16/YH/0238).

Consent for publication

All authors and Chief Investigators of the included trials consent to the publication of this paper.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Biggs, K., Hind, D., Gossage-Worrall, R. et al. Challenges in the design, planning and implementation of trials evaluating group interventions. Trials 21, 116 (2020).
