The detailed reporting of both the operation and findings of intervention-based studies is essential to inform effective decision making and thus lead to real improvements in healthcare provision. Decision makers, whether at policy or practice level, need sufficient information in both domains to answer four questions: What level of benefit was achieved in the research? Are the findings valid and reliable? Is the intervention sufficiently well defined to allow it to be replicated? If replicated in other places, would similar benefit still be achieved?
For ‘simple’ interventions such as pharmaceutical products (those which are easy to define, have few components and for which the active ingredients are known), researchers face relatively few problems in providing sufficient information to help answer these questions. For example, benefit can be defined in terms of confidence intervals around primary and secondary outcome measures; reliability and validity can be assessed through adherence to a pre-set list of methodological criteria; the intervention can be defined in terms of dose and frequency; and it is reasonable to assume that a similar benefit will be obtained if replicated in a similar population, irrespective of place.
However, policy makers and practitioners are increasingly asked to make judgements regarding complex rather than simple interventions; in these cases an array of new problems arises. Complex interventions in healthcare are built up from a number of components, which may act independently or interdependently, although the ‘active ingredient’ is generally difficult to specify. The components usually include behaviours, characteristics of behaviours (for example, frequency, timing), and methods of organising and delivering those behaviours (for example, type(s) of practitioners, setting and location).
In this paper, we describe potential problems identified in the literature regarding the operation and reporting of randomised controlled trials (RCTs) of complex interventions. While a number of case studies have shown that the context within which intervention studies take place may sometimes challenge the way in which interventions are delivered (particularly with regard to dose, fidelity and reach) [2–4], the focus on a single trial has necessarily limited their ability to identify the diverse ways in which context may challenge the central assumptions of the RCT, the degree to which these occur consistently across different trial or intervention types, or the degree to which this information is known to researchers or subsequently reported.
The RCT and the challenge of the complex intervention
Randomised controlled trials and meta-analyses remain the Gold Standard for evaluating the effectiveness of interventions and informing guidelines, protocols and policies. The strength and usefulness of the RCT design lies in its power to provide a credible link between cause and effect. However, decision makers need to be able to understand and define the cause. There is potential for the cause to be the intervention itself, elements of the healthcare context within which the intervention is being delivered, elements of the research process that are introduced to that setting (for example, presence of researchers and their operations), or a combination of all three. In other words, it is often difficult to separate the intervention from the context within which it was evaluated.
Common sense suggests, therefore, that RCTs of both complex and simple interventions face these challenges, as both take place in healthcare and experimental contexts which may adapt and evolve, be unpredictable, and involve the interconnected actions of individuals. However, for simple interventions these may pose fewer problems: the intervention is easier to define, easier to separate from context, and those contextual influences that might affect the results are easier to identify and standardise. The more nuanced relationship between complex interventions and the healthcare and experimental contexts in which they are situated poses a greater number of important challenges.
First, the components of a complex intervention may be difficult to define precisely as the distinction between intervention and context is unclear. The rigour that is at the heart of the scientific method embodied in the RCT requires a hypothesis, which includes an a priori definition of the intervention (that is, that A will/will not lead to B). However, given that complex interventions may consist of a mix of people, skills, devices, contexts, processes, actions and decisions, developing a definition of ‘A’ is always likely to be problematic, and this has been recognised by the UK’s Medical Research Council (MRC) in their framework for the development and evaluation of complex interventions. In practice, a single approach to definition may not be possible. Indeed, authors have pointed out that complex interventions require flexibility in their definitions, so that instead of defining and standardising them by ‘form’, they should be defined by ‘function’, with clear indication of whether components are ‘fixed’ or ‘flexible’.
Second, even where interventions can be defined and separated conceptually from their healthcare contexts, those elements of context that might influence trial operation or outcome may not be straightforward to identify and may be almost impossible to control or standardise. Indeed, some settings may themselves be characterised as complex systems: multifaceted, subject to constantly shifting contexts, and more akin to a dynamic ecology. The greater the complexity of the intervention, the greater the degree to which its definition blurs into, or depends on, elements of context for its effectiveness. If the context cannot be fully controlled, then standardisation of a blurred intervention becomes impossible.
Third, since defining the intervention and controlling the clinical and experimental context is problematic, it may be difficult to know after the fact precisely what led to any change detected in the RCT. Consequently, there is likely to be insufficient information to allow practitioners to make meaningful decisions about whether and how to implement the intervention in their own setting to maximise effectiveness [12–15]. Even pragmatic trials do not fully resolve these problems: although they provide more information about real-world settings, their heterogeneity may limit the usefulness of their results for specific clinical situations [16, 17], and debate continues as to the merits and pitfalls of explanatory versus pragmatic trials. Understanding the particular contexts in which interventions are evaluated is important for any clinical decision maker, regardless of where the trial sits on the explanatory-pragmatic continuum, as ‘any attempts to extrapolate from study settings to the real world are hampered by a lack of understanding of the key elements of individuals and the settings in which they were trialed’ (ICEBeRG, p5).
If the relationship between intervention and context cannot be fully controlled, then it should at least be fully acknowledged and its likely impact reported. This would assist in the interpretation of the results of RCTs, the implementation of research, and the synthesis of evidence from RCTs of complex interventions. In practice, many studies lack basic information about the trial and clinical contexts. This is perhaps unsurprising, as guidelines for reporting trials have not, until recently, emphasised the importance of details about intervention components, standardisation and adherence [23, 24]. Indeed, these guidelines omit some aspects of interventions that may be important to understanding links between treatment and outcomes, including cultural sensitivity, adaptability and strategies for treatment implementation. Given that inadequate reporting of these issues can undermine judgements about the quality and generalisability of trials, it is important to explore ways in which reporting can be improved and a common language developed.
Although retrospective data collection about trial implementation may be helpful (particularly in detecting unanticipated issues), a more rigorous approach would be to know a priori which issues are likely to threaten the internal validity of the trial and which may impede the effectiveness of the intervention. Previous research has explored aspects of context in order to design a trial; to pilot or understand an intervention; and to explain processes or interpret findings. However, the issues and problems identified by this research have not been explored across a spectrum of different trial situations, and we therefore do not know whether they are generalisable to other trials of complex interventions.
Consequently, this paper reports a study that moves beyond previous single case studies of complex interventions, and uses a multiple case study approach to explore these diverse issues. Further details of the study, including an extensive description of methods and wider findings, are available elsewhere [29]. In particular, this study seeks to:
explore, from the perspectives of researchers and practitioners, what goes on ‘behind the scenes’ in randomised trials of complex interventions and establish what information on potential threats to trials is available and known to those running them;
set out the particular challenges of achieving control and standardisation in a real-life setting; and
describe key elements of the trial environment and indicate how these might affect the implementation of complex interventions.