
Using qualitative evidence on patients’ views to help understand variation in effectiveness of complex interventions: a qualitative comparative analysis



Complex healthcare interventions consist of multiple components, which may vary in trials conducted in different populations and contexts. Pooling evidence from trials in a systematic review is challenging because it is unclear which components are needed for effectiveness. The potential of using recipients' views to explore why some complex interventions are effective and others are not is recognised, but methods to maximise this potential are poorly developed.


We used a novel approach to explore how patients’ views may explain the disparity in effectiveness of complex interventions. We used qualitative comparative analysis to explore agreement between qualitative syntheses of data on patients’ views and evidence from trialed interventions to increase adherence to treatments. We first populated data matrices to reflect whether the content of each trialed intervention could be matched with suggestions arising from patients’ views. We then used qualitative comparative analysis software to identify, by a process of elimination, the smallest number of configurations (patterns) of components that corresponded with patients’ suggestions and accounted for whether each intervention was effective or ineffective.


We found that suggestions by patients were poorly represented in the interventions. Qualitative comparative analysis identified particular combinations of components that corresponded with patients' suggestions and with whether an intervention was effective or ineffective. Six patterns were identified for an effective intervention and four for an ineffective one. Two types of pattern arose for the effective interventions, one being didactic (providing clear information or instruction) and the other interactive (focusing on personal risk factors).


Our analysis highlights how data on patients’ views has the potential to identify key components across trials of complex interventions or inform the content of new interventions to be trialed.



Systematic reviews of randomised controlled trials (RCTs) are invaluable in combining evidence on effectiveness. Their interpretation is most straightforward for drug therapies. However, interpreting the effectiveness of more complex interventions is challenging, as these interventions may consist of several components that vary across trials but are assumed to contribute to the effect. Based on reasons of practicality or personal choice, policy makers and planners may then select one of these varying interventions. They may also pick certain components contained in one or more of these interventions. New approaches are being sought to increase our understanding of how healthcare interventions exert their effects. These may involve asking trialists to identify the common components of their effective interventions, for example in relation to stroke care units [1]. They may also involve developing, based on expert consultation using consensus methods and literature review, a taxonomy to classify and describe effective interventions [2]. Or they may involve seeking to identify the mechanisms through which the effects of an intervention are achieved; for example, behaviour change theory might be applied to trials identified in a systematic review of 'audit and feedback' interventions [3].

Another way is to use recipients' perspectives, experiences or opinions on the potential suitability and utility of an intervention. Recipients' views are clearly important, particularly for interventions that require their active participation. Innovative approaches to using such information are being explored, for instance drawing on qualitative evidence on recipients' (usually patients') views to inform effectiveness evidence from systematic reviews of trials [4]. The Cochrane Handbook for Systematic Reviews lists various ways qualitative evidence can be used [5]. These include (but are not limited to) helping to define the research question and helping to ensure the review includes important outcomes. Another way is to supplement reviews of trials by synthesising qualitative evidence within a stand-alone, but complementary, review to address questions on aspects other than effectiveness. In this way it is possible to examine whether interventions that address patients' priorities might be more effective than those that do not. If this is shown to be the case, it may be possible to identify components that have more or less influence on outcomes. However, such considerations are constrained by the quality, accuracy and extent of descriptions of the intervention in papers reporting the results of trials [6, 7]. Sometimes there may be no clear description of the intervention beyond a basic statement, for example of what telephone counselling or psychosocial care involved [7]. In addition, it remains unclear exactly how best to bring together within a systematic review the different types of evidence (qualitative and quantitative). Moreover, linking qualitative and quantitative evidence may be hampered by the usual custom of publishing different kinds of evidence in separate journals.

In earlier research we used evidence from trials of a complex intervention in which content of the intervention was reported and whose overall effect was unclear [8]. This we combined with evidence from a review of qualitative studies (often referred to as a qualitative evidence synthesis (QES)). We used a data matrix table to align the evidence from the QES of patients’ views with the evidence from a Cochrane systematic review of trials. This allowed visual exploration of correspondence between (i) suggestions on intervention content derived from the QES, and (ii) components of interventions in effective and ineffective trials. We reported that components of effective interventions corresponded more often than those of ineffective interventions with patients’ priorities derived from the QES. The potential of combining mixed evidence in this way is recognised [9].

Qualitative comparative analysis (QCA) is an analytical approach and a set of techniques [10]. It is recognised in the social sciences as an innovation in mixed methods research [11, 12], and it has been found to be applicable to the evaluation of complex public health interventions [13]. QCA has as yet untested potential for use in integrating reviews of qualitative and quantitative health evidence [14]. The potential of QCA in the analysis of a systematic review of complex interventions lies in: (1) its usefulness in small datasets (where statistical analysis can be limited), and (2) how it seeks to explain a given outcome by identifying multiple pathways of putative causal factors. This may be viewed as analogous to locating different recipes to make a cake. This approach is relevant to the analysis of complex interventions, where the focus should include the differing circumstances, mechanisms and patterns through which an intervention may fit together to exert its effect. In this paper, we report our research using a systematic review with a larger number of quality trials than in our earlier work [8]. This allowed us to test the use of QCA in understanding how to integrate qualitative and quantitative review evidence. In doing so, we aimed to identify more information than would be possible by visual inspection.

In this worked example of combining mixed evidence from systematic reviews on interventions to improve adherence to drug therapy, we used QCA to explore how qualitative evidence might explain the variability in effectiveness of complex interventions.

The objective was to identify matches between patients’ views and components of interventions and see whether these matches were associated with the effectiveness of interventions.



We first identified a systematic review of trials. We searched specifically for a Cochrane systematic review because these reviews are performed to internationally agreed standards. We sought a review on chronic disease self-management because it requires long-term commitment by patients, whose views are therefore pertinent. This review needed to involve a sufficient number of trials to require an analytical tool to aid quantitative analysis (at least five trials) [15]. It also needed to include trials that differed in intervention effect, and in which the content of the intervention was described in sufficient detail to understand how the intervention might be operationalised in practice. We chose a review of interventions to promote adherence to drug therapy for a range of conditions [16], because it fulfilled our criteria and because we were aware from earlier work that a complementary QES was available [8]. This QES, by Vervoort and colleagues, explored evidence about HIV/AIDS patients' views on their disease self-management [17]. We purposively sought additional QES on adherence to therapy in other common chronic diseases reported in the trials included in the Cochrane review (such as diabetes, asthma and depression) [16]. We searched three large citation databases (MEDLINE, PsycINFO and CINAHL) from 1999 to 2009, using terms relating to the diseases of participants in the trials and terms describing a QES. Details of the search strings for identifying QES are listed in Additional file 1. As the focus of this study was on the exploratory use of QCA, our search for QES was not exhaustive. We were not concerned that we might not identify a QES on patients' views for every disease relevant to the Cochrane review, because our earlier research suggested that such evidence was not necessarily disease-specific [8]. We identified three relevant QES [18–20].
They were selected on the basis that (1) they explored patients' views on living with or managing their chronic disease, (2) they were of sufficient methodological quality [21], and (3) their findings were, or could be, translated into suggestions for strategies to promote adherence to therapy for a range of long-term conditions. The QES by Vervoort et al., identified earlier, thematically analysed evidence from 18 studies on patients' views on adherence to HIV therapy [17]. Of the others, Campbell et al. synthesised evidence in a meta-ethnographic review of 10 studies on patients' views on diabetes, including how they managed their disease [18]; the QES by Malpass and colleagues was a meta-ethnography of 16 studies on patients' experience of taking antidepressants [19]; and the QES by Schlomann and colleagues thematically analysed 11 studies on lay beliefs about high blood pressure [20].

From each QES one author (BC) extracted data on findings relating to promoting adherence. They were listed as individual strategies to promote adherence. The extraction sheet used to extract data from the QES is detailed in Additional file 2. Another author (LJ) checked the list for completeness against the QES. The final list was discussed with all authors at project meetings. From the QES by Vervoort et al. we had already derived 22 suggested strategies [17]. The additional three QES provided 17 suggestions; of these, 14 were similar to those generated from the QES by Vervoort et al. [17]. They focused on the same issues, for example in the QES on beliefs about high blood pressure, the authors derived as an implication for practice that: ‘The therapeutic content of that consultation is in part dependent on the GP’s understanding of the patients’ beliefs and views regarding medication use’ [20]. This is similar to two suggestions from the QES by Vervoort et al., specifically the more general suggestion ‘Interventionists should enquire into possible factors influencing each individual patient before starting treatment’ and the more focused suggestion advising that ‘ambivalence towards medications should be discussed’ [17]. In total, we listed 25 different ways of promoting adherence. These are listed in Additional file 2: Table S2.

We did not set methodological quality criteria for selection of the review of trials, as we assumed a published Cochrane review would have already undergone rigorous checking. However, our earlier research suggested the need to restrict trials by quality [8]. Therefore, we used only the 21 trials that were assessed by the Cochrane reviewers as fulfilling their quality criteria [22–42]. This was based on the key recommendation of the Cochrane Collaboration, namely assessment of whether randomisation was concealed [43]. The 21 trialed interventions in our chosen Cochrane review varied in how they aimed to promote adherence, for example by providing more instruction, increasing convenience of care, and providing psychological therapy and/or group meetings.

Eleven of the trials were deemed by the Cochrane reviewers as effective in promoting adherence. This judgment was based on whether the P value was significant at the <0.05 level. On this basis we assigned a binary outcome for each of the trials; ‘1’ if the trial was effective and ‘0’ if ineffective. We were aware of the weakness of using the P value as indicating effect, but additional information was limited. We could not use the preferred option of effect size, because some trials did not report this. Trials also assessed adherence differently, including pill counts, self-report of adherence and pharmacist records. The data extraction form for whether the intervention was found to be effective is provided in Additional file 2: Table S3.

Conjoining extracted data

We scored the contents of each intervention against the 25 suggested strategies derived from the QES. The information on intervention content was key to our analysis. Details of the interventions were sought from the original trial papers, in which they were more extensively reported. However, to ensure the detail was sufficient for our analysis, on reading the papers we applied two criteria: (1) that more than two sentences were devoted to describing the intervention content, and (2) that components were described in detail [44]. By detail we meant more than a brief statement. For instance, if the intervention was described as educational, then there needed to be detail on more than one aspect of that educational approach. All intervention descriptions met these criteria. Intervention descriptions ranged in length from three to twenty-eight sentences (the median for interventions found ineffective was six to eight sentences, and for those found effective nine to ten). The description of each trial intervention is provided in Additional file 3. We used binary scoring: 1 = suggestion corresponded with an intervention component, 0 = no correspondence. We were aware that binary scoring could reduce the potential information available. However, a more graded scoring was not relevant for many suggestions, and if adopted might have increased subjectivity in scoring. To enhance standardisation and clarity, scoring guidelines (available from the authors) were devised and tested by independent researchers. We populated a data matrix table for each intervention with all the conjoined data, together with whether the intervention was effective. We report an analysis of the relationships created in this table between the qualitative and quantitative data.
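The scoring step described above can be sketched in miniature. The snippet below builds a small binary matrix of the kind populated in this study, using hypothetical trial names, strategy labels and scores (not the study's actual data); the helper simply reports what proportion of the suggested strategies each intervention took up.

```python
# Hypothetical binary scoring matrix: rows are trialed interventions,
# columns are suggested strategies from the QES.
# 1 = the intervention contained a matching component, 0 = no correspondence.
# The 'effective' entry records the (binary) trial outcome.

strategies = ["personal_risk_focus", "explain_value", "clear_information"]

matrix = {
    "trial_A": {"personal_risk_focus": 1, "explain_value": 0,
                "clear_information": 1, "effective": 1},
    "trial_B": {"personal_risk_focus": 0, "explain_value": 1,
                "clear_information": 0, "effective": 0},
    "trial_C": {"personal_risk_focus": 1, "explain_value": 1,
                "clear_information": 1, "effective": 1},
}

def coverage(matrix, strategies):
    """Proportion of suggested strategies taken up by each intervention."""
    return {trial: sum(row[s] for s in strategies) / len(strategies)
            for trial, row in matrix.items()}

print(coverage(matrix, strategies))
# trial_B takes up 1 of the 3 suggestions, trial_C all 3
```

A low coverage value across most rows corresponds to the study's finding that patients' suggestions were poorly represented in the trialed interventions.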

Qualitative comparative analysis: the approach

To analyse the relationships created we used QCA [10]. QCA uses Boolean algebra and is grounded in set-theoretic relationships. This allows all logically possible combinations of factors to be examined systematically in relation to the outcome: as in which ingredients work together to make a cake. In our worked example, QCA identifies across interventions all parsimonious patterns of components that match patients' views and result in the intervention being effective. Here, 'parsimonious' means the smallest number of patterns of components (in the dataset) that account for the intervention effect. This is not to imply that an intervention is effective because of the match with patients' views, but rather that this congruence might moderate its effectiveness.

QCA is case-orientated in that it seeks to explain the outcome in terms of ‘pathways of components’ per trialed intervention: as in which ingredients per cake were essential or necessary. This differs from a conventional statistical approach, which would average the effect simultaneously across all interventions. Therefore, in this example QCA presents ‘key’ components per individual intervention. ‘Key’ in these data means those found to be the simplest explanation (in the data provided) for the results.

Qualitative comparative analysis: the technique

To explore patterns in the relationships between the data we used specific QCA analysis software [45]. The software explored these data using Boolean algorithms. This involved pair-wise comparison between interventions to establish commonalities in the way the relationships created (between the qualitative and quantitative evidence) combined together in effective and ineffective interventions. To do this the QCA software reduced the data presented per intervention by only retaining those data found to produce the outcome.
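The pair-wise comparison and reduction described above can be illustrated with a minimal sketch of one merging pass of Quine-McCluskey-style Boolean minimisation. This is a hypothetical illustration of the principle, not the actual algorithm implemented in the QCA software [45]. Configurations are tuples of 1 (factor present) and 0 (factor absent); two configurations that lead to the same outcome and differ in exactly one factor are merged, and that factor is marked '-' (logically redundant for the outcome).

```python
def merge(a, b):
    """Merge two configurations if they differ in exactly one position;
    the differing factor is replaced by '-' (redundant)."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) != 1:
        return None
    i = diffs[0]
    return a[:i] + ("-",) + a[i + 1:]

def reduce_once(configs):
    """One reduction pass: return merged configurations plus any
    configurations that could not be merged with another."""
    merged, used = [], set()
    for i in range(len(configs)):
        for j in range(i + 1, len(configs)):
            m = merge(configs[i], configs[j])
            if m is not None:
                if m not in merged:
                    merged.append(m)
                used.update({i, j})
    survivors = [c for k, c in enumerate(configs) if k not in used]
    return merged + survivors

# Hypothetical 'effective' configurations over three factors:
effective = [(1, 0, 1), (1, 1, 1), (1, 0, 0)]
print(reduce_once(effective))
# [(1, '-', 1), (1, 0, '-')]: the first factor is retained in both
# reduced patterns; the second and third are each redundant in one.
```

In the full procedure, passes like this repeat until no further merges are possible, leaving the smallest set of patterns that still accounts for the outcome.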

The analytical process has several stages; these are charted in Figure 1. QCA involves analysis of cases, factors and outcomes. In these data, the cases are the interventions and the outcome is whether or not the intervention under trial conditions was found to be effective. The factors were the relationships created between the suggested strategies and the intervention content, that is, whether or not the intervention contained content that corresponded with patients' views on how to promote adherence. QCA is limited in the number of factors that can be included in an analysis. This is because cases must be aligned with factors, and this process can quickly become unwieldy owing to the large number of possible combinations of factors that are created. For two factors there are four possible configurations, for nine factors there are 512, and so on. Thus the number of possible configurations of factors can quickly exceed the number of cases. On examining published papers using QCA, we found that none used more than 10 factors. In our worked example, we reduced our number of suggested strategies from 25 by first excluding those that had been taken up by three or fewer of the trialed interventions. We then combined those that had some overlap in aim. This resulted in nine suggested strategies (see Table 1).

Figure 1

Flow chart for QCA analysis.

Table 1 Reducing for the purposes of qualitative comparative analysis (QCA) the number of factors to below 10
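The growth in configurations described above can be made concrete with a short sketch, using hypothetical cases and three binary factors (not the study's data): with k factors there are 2**k logically possible configurations, and a truth table assigns each observed case to one of them, leaving the rest as 'unobserved'.

```python
from itertools import product

# With k binary factors there are 2**k logically possible configurations;
# the counts quoted in the text (4 for two factors, 512 for nine) follow.
assert 2 ** 2 == 4
assert 2 ** 9 == 512

# Hypothetical cases (interventions), each a tuple over three factors:
cases = {
    "trial_A": (1, 0, 1),
    "trial_B": (0, 1, 0),
    "trial_C": (1, 0, 1),  # same configuration as trial_A
}

def truth_table(cases, k):
    """Map each possible configuration to the cases that instantiate it.
    Configurations with no cases are the 'unobserved' remainders."""
    table = {config: [] for config in product([0, 1], repeat=k)}
    for name, config in cases.items():
        table[config].append(name)
    return table

table = truth_table(cases, 3)
print(len(table))        # 8 possible configurations for three factors
print(table[(1, 0, 1)])  # ['trial_A', 'trial_C']
```

With only three cases, six of the eight configurations here are unobserved, which illustrates why the number of factors must be kept well below the point where configurations swamp the cases.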

QCA analysis is unidirectional; therefore we undertook two analyses, one for effective and a second for ineffective interventions.

Analysis models

QCA uses three analysis models to summarise findings [10]. These are the complex, intermediate and the parsimonious. We present the parsimonious model because, unlike the complex model, it fully utilises the mathematical approach by allowing inferences to be made on unobserved cases. Inferences are based on the patterns found between the data in the observed cases; the model makes assumptions on the outcome of the combinations of factors not taken up in the unobserved cases. In the complex model, no inferences are made on unobserved cases. In a sensitivity analysis, we also ran the analysis using this model. In the intermediate model, the researcher makes assumptions on unobserved cases by testing the effect in one selected direction only (as in a one-tailed statistical test). We did not use this model as a suggested strategy could have potentially led to a negative outcome.
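The difference between the complex and parsimonious models can be shown in miniature with hypothetical two-factor data: the parsimonious model feeds unobserved configurations ('remainders') into the minimisation as if they could produce the outcome, allowing further simplification, whereas the complex model minimises over observed cases only.

```python
def reduce_pairs(configs):
    """Single merging pass: combine any two configurations that differ in
    exactly one position, marking that position '-' (redundant). If nothing
    merges, the configurations are returned unchanged."""
    out = []
    for i, a in enumerate(configs):
        for b in configs[i + 1:]:
            diffs = [k for k, (x, y) in enumerate(zip(a, b)) if x != y]
            if len(diffs) == 1:
                k = diffs[0]
                merged = a[:k] + ("-",) + a[k + 1:]
                if merged not in out:
                    out.append(merged)
    return out or list(configs)

# Hypothetical data: one observed effective configuration, one remainder.
observed = [(1, 1)]    # factors A and B both present -> effective
remainders = [(1, 0)]  # unobserved combination

complex_solution = reduce_pairs(observed)              # no inference made
parsimonious_solution = reduce_pairs(observed + remainders)

print(complex_solution)        # [(1, 1)]: both factors retained
print(parsimonious_solution)   # [(1, '-')]: factor B drops out
```

The parsimonious solution is simpler but rests on the assumption that the remainder (1, 0) would also have been effective had it been observed; the complex model makes no such assumption, which is why running both (as in the sensitivity analysis) is a useful check.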

Sensitivity analysis

Our approach involved subjective decisions that could impact on our findings. Therefore, we tested the underlying assumptions in our model by:

  •  reducing data by (1) excluding cases (interventions) if they did not correspond with any of the suggestions (factors); and (2) removing certain suggestions because they did not enter the final analysis model or were found in analysis to explain the outcome by both their presence or their absence;

  •  increasing the data by including all 67 trialed interventions (irrespective of quality) in the Cochrane review;

  •  analysing the data differently by (1) reporting the results using the QCA complex model; and (2) using (where appropriate) a graded scoring system rather than binary to score agreement between intervention components and suggestions.


We found that suggestions by patients were poorly represented in interventions. In Additional file 4 this is illustrated in a cell by a ‘–’. QCA identified six potential pathways (configurations of components) to an effective and four to an ineffective intervention. The configurations are different per outcome. Each configuration is represented in Additional file 4 by a different colour shading, border or cell pattern.

Effective interventions

There were two types of patterns for the effective interventions; one involving the presence of one component (pathway 1), and the others involving more components of which some are shared between the configurations (pathways 2 to 6).

The configuration most commonly associated with effectiveness (found in eight trials) involved one component: ‘a focus on personal risk factors’ (pathway 1: [2224, 26, 27, 30, 33, 34]). Other configurations each involved either the presence of ‘explaining the value of adherence’ or ‘provision of clear/appropriate information on how to take medication’, and the absence of other components (Table 2 and Additional file 4: Table S1).

Table 2 Pathways identified to effective and ineffective interventions

The pathways correspond with two distinct approaches to promoting adherence, one being interactive (‘a focus on personal risk factors’) (pathway 1) and the other more didactic (‘emphasizing clear or appropriate information’) (pathways 2 to 6). Neither the interactive nor the didactic approach was differently associated with the specific diseases in the trial populations.

Ineffective interventions

All four configurations (pathways) for the ineffective interventions included the absence of one component, ‘a focus on personal risk factors’. The configurations also involved various combinations of other components (Table 2 and Additional file 4: Table S2).

Sensitivity analyses

There were sufficient data to undertake all proposed sensitivity analyses. None of the analyses had dramatically different findings, in that most of the dominant patterns per outcome remained. A common component, ‘a focus on personal risk factors’, for an effective intervention remained in most analyses. As before, the configurations for ineffective interventions commonly remained the mirror opposites of those found in the models for an effective outcome. As expected, the sensitivity analyses produced more configurations. Some of these (particularly when all trials irrespective of quality were included) were conflicting, in that they included per outcome both the presence and absence of certain factors.


Main findings

We tested QCA as a way to combine QES evidence on patients' views with evidence from a systematic review of trials of a complex intervention. As a worked case example, we used evidence on interventions to promote adherence to long-term drug therapy. We found that, in general, patients' suggestions on what is important were poorly represented in the interventions. Using QCA we found that certain suggestions seemed to hold together in particular patterns that corresponded with whether the intervention was effective or not. Three influential components were identified in the effective interventions, while other components were influential through their absence. Those influential by their absence tended to be ones in which those providing the intervention focused on negative influences on adherence, such as 'the need to take the drug continuously irrespective of symptoms'. The pattern of components suggests two approaches to promoting adherence. One involved adopting an interactive style focusing on personal risk factors. The other was a more didactic approach, emphasising clear, appropriate information, including the value of adherence, but without discussing certain potentially challenging aspects, such as attitudes to medication or disease, missing doses, enhancing social support, or taking the drug continuously irrespective of symptoms. Individual patients may respond better to either the interactive or the didactic approach.

It is difficult to identify other research that has similar overall findings, in part because of the novelty of this approach. However, what we found had face validity. In particular, the correspondence between effective and ineffective interventions was often mirror opposites. This was best illustrated by the suggestion that the intervention should focus on personal risk factors, which was present in most of the effective interventions and absent in all ineffective interventions. These findings generate hypotheses about what may be more usefully included in future interventions, which then need to be followed by randomised controlled trials to generate evidence about effectiveness.

Limitations and advantages

There are challenges to the use of QCA as applied in this study to the evaluation of interventions. These make interpretation of our findings more difficult. First, prior to the QCA analysis, selection and processing of the evidence from source involved several decisions. As the focus of this study was on the exploratory use of QCA, our search for QES was not exhaustive: we used only controlled vocabulary and a limited number of databases. This presupposes that there are more QES beyond those located, and any further QES might suggest other ways to promote adherence. There were several stages of synthesis, extraction and interpretation, with only the final stages undertaken by us. For the qualitative evidence this involved: (1) patients' views on living with and managing a disease being collected and analysed in a large number of studies; (2) the findings from these studies being pooled in further qualitative analyses, which used different methodologies in critiquing and analysing; and (3) these findings then being translated into suggestions for intervention components. For the quantitative evidence, the method was challenged by its reliance on the completeness of intervention descriptions. Complex interventions are often poorly described in main trial papers, and this is a real limitation to any attempt (using QCA or another method) to understand why variations in effectiveness occur. The brevity of descriptions of the intervention and the lack of agreement over what constitutes a complete description make it problematic to assume that particular aspects of an intervention were absent simply because they were unreported. Another limitation of our work is using a P value to decide whether or not an intervention was effective. However, wherever possible, we used robust approaches to reduce the risk of bias, such as setting quality criteria and making independent checks of steps undertaken during the study.

In our use of QCA we also needed to make subjective decisions that could have introduced error. However, we undertook sensitivity analyses to test the consistency of our findings. In QCA only a limited number of factors can be analysed, and this restricted our exploration of the heterogeneity of interventions to fewer than ten features. Additionally, within any one QCA analysis, the exploration of an outcome can only be conducted in one direction (either the outcome is effective or it is ineffective). A single analysis cannot be used to quantify and compare head to head the differences between the outcomes. The method also does not incorporate a statistical (probability) measure of the precision of the relationships found in the data, or of the likelihood that they are simply chance findings. However, the small dataset limits the use of conventional statistics, and thus alternative approaches are needed. Finally, QCA is only of use in circumstances where the trial evidence remains equivocal. Thus not all interventions have the potential to undergo this sort of analysis.

This case study illustrates the potential for QCA to address a key challenge in trials of complex interventions, namely understanding why some are effective and others are not. It is particularly useful where the pathways to success may differ across trials, as in complex interventions. Using QCA to analyse complex healthcare interventions could identify several types of best practice, that is, effective interventions that share similar key components. Planners and practitioners, informed by patients' views, could then select intervention components for a type of best practice based on local conditions and choice, with an understanding of which components may be more important to include.

QCA is under-explored and its usefulness in complex intervention development is not established. This study does not suggest that this approach will be useful in every circumstance, or that QCA is more informative than other approaches being considered. Moreover, there is value in consulting patients directly in the development of an intervention [46]. While the results of local consultation apply to the specific setting, they also shed light on the implications of the intervention elsewhere and on the need to consider recipients' views and behaviours. For instance, Atkins and colleagues [47] explored patients' experiences of a new intervention aiming to empower them to take more responsibility for the management of their tuberculosis. They found that the intervention had achieved its aims only in the sense that patients internalised the intervention messages; this did not necessarily result in increased adherence.

This paper highlights the value of listening to patients’ views in order to understand disease management. In the case of chronic disease, greater adherence seems to be associated with a focus on personal risk factors, an emphasis on the value of adherence, and the provision of clear and appropriate information on how to adhere. We have demonstrated the potential value of using qualitative research to explain the varying effectiveness of complex interventions. We call for more integrative research in this area.

The usefulness of QCA should be tested by comparing it with alternative approaches, such as using more subjective researcher judgments in exploring the same evidence, or (depending on data limitations) with other analytical tools such as Bayesian statistics or regression methods. The 10 factors (listed in Table 1) that were mentioned in three or fewer interventions and omitted from the analysis may deserve further attention in primary research.


In this case study, we found that the application of QCA enhanced our understanding of the effectiveness of complex interventions.



QCA: Qualitative comparative analysis

QES: Qualitative evidence synthesis

RCT: Randomised controlled trial


References

  1. Langhorne P, Pollock A, in conjunction with the Stroke Unit Trialists' Collaboration: What are the components of effective stroke unit care? Age Ageing. 2002, 31: 365-371. 10.1093/ageing/31.5.365.

  2. Lamb SE, Becker C, Gillespie LD, Smith JL, Finnegan S, Potter R, Pfeiffer K, for The Taxonomy Investigators: Reporting of complex interventions in clinical trials: development of a taxonomy to classify and describe fall-prevention interventions. Trials. 2011, 12: 125. 10.1186/1745-6215-12-125.

  3. Gardner B, Whittington C, McAteer J, Eccles MP, Michie S: Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010, 70: 1618-1625. 10.1016/j.socscimed.2010.01.039.

  4. Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J: Integrating qualitative research with trials in systematic reviews. BMJ. 2004, 328: 1010-1012. 10.1136/bmj.328.7446.1010.

  5. Noyes J, Popay J, Pearson A, Hannes K, Booth A, on behalf of the Cochrane Qualitative Research Methods Group: Qualitative research and Cochrane reviews. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 [updated March 2011]. Edited by: Higgins JPT, Green S. 2011, The Cochrane Collaboration, Chapter 20. Available at []

  6. Glasziou P, Heneghan C, Shepperd S: What is missing from descriptions of treatment in trials and reviews? BMJ. 2008, 336: 1472-1474. 10.1136/bmj.39590.732037.47.

  7. Schroter S, Glasziou P, Heneghan C: Quality of descriptions of treatments: a review of published randomised controlled trials. BMJ Open. 2012, 2: e001978. 10.1136/bmjopen-2012-001978.

  8. Candy B, King M, Jones L, Oliver S: Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Med Res Methodol. 2011, 11: 124.

  9. Totten AM, Wagner J, Tiwari A, O'Haire C, Griffin J, Walker M: Closing the quality gap: revisiting the state of the science. Public Reporting as a Quality Improvement Strategy. (Evidence Reports/Technology Assessments, No. 208.5.) Vol 5. 2012, Rockville (MD): Agency for Healthcare Research and Quality (US). Available at []

  10. Rihoux B, Ragin CC: Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. 2008, Los Angeles: Sage Publications.

  11. Ragin C: Qualitative comparative analysis. Proceedings of the Economic and Social Research Council UK Research Methods 2nd Festival, 18-20 July 2006, Oxford. Available at: []

  12. Cooper B, Glaesser J: Qualitative work and the testing and development of theory: lessons from a study combining cross-case and within-case analysis via Ragin's QCA. Proceedings of the Economic and Social Research Council UK Research Methods 5th Festival, 2-5 July 2012, Oxford. Available at: []

  13. Blackman T, Wistow J, Byrne D: A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med. 2011, 72: 1965-1974. 10.1016/j.socscimed.2011.04.003.

  14. Pope C, Mays N, Popay J: Synthesizing Qualitative and Quantitative Health Evidence. 2007, Maidenhead, UK: Open University Press.

  15. Berg-Schlosser D, De Meur G: Comparative research design: case and variable selection. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Edited by: Rihoux B, Ragin C. 2009, Thousand Oaks: Sage.

  16. Haynes RB, Ackloo E, Sahota N, McDonald HP, Yao X: Interventions for enhancing medication adherence. Cochrane Database Syst Rev. 2008, 2: CD000011.

  17. Vervoort SCJM, Borleffs JCC, Hoepelman AIM, Grypdonck MHF: Adherence in antiretroviral therapy: a review of qualitative studies. AIDS. 2007, 21: 271-281. 10.1097/QAD.0b013e328011cb20.

  18. Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J: Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003, 56: 671-684. 10.1016/S0277-9536(02)00064-3.

  19. Malpass A, Shaw A, Sharp D, Walter F, Feder G, Ridd M, Kessler D: "Medication career" or "moral career"? The two sides of managing antidepressants: a meta-ethnography of patients' experience of antidepressants. Soc Sci Med. 2009, 68: 154-168. 10.1016/j.socscimed.2008.09.068.

  20. Schlomann P, Schmitke J: Lay beliefs about hypertension: an interpretive synthesis of the qualitative research. J Am Acad Nurse Pract. 2007, 19: 358-367. 10.1111/j.1745-7599.2007.00238.x.

  21. Tong A, Flemming K, McInnes E, Oliver S, Craig J: Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012, 12: 181. 10.1186/1471-2288-12-181.

  22. Berrien V, Salazar J, Reynolds E, McKay K: Adherence to antiretroviral therapy in HIV-infected paediatric patients improves with home-based intensive nursing intervention. AIDS Patient Care STDS. 2004, 18: 355-363. 10.1089/1087291041444078.

  23. Farber HJ, Oliveria L: Trial of an asthma education program in an inner-city pediatric emergency department. Pediatr Asthma Allergy Immunol. 2004, 17: 107-115. 10.1089/0883187041269913.

  24. Haynes RB, Sackett DL, Gibson ES, Taylor DW, Hackett BC, Roberts RS, Johnson AL: Improvement of medication compliance in uncontrolled hypertension. Lancet. 1976, 1: 1265-1268.

  25. Hill J, Bird H, Johnson S: Effect of patient education on adherence to drug treatment for rheumatoid arthritis: a randomised controlled trial. Ann Rheum Dis. 2001, 60: 869-875.

  26. Kemp R, Hayward P, Applewhaite G, Everitt B, David A: Compliance therapy in psychotic patients: randomised controlled trial. BMJ. 1996, 312: 345-349. 10.1136/bmj.312.7027.345.

  27. Kemp R, Kirov G, Everitt B, Hayward P, David A: Randomised controlled trial of compliance therapy: 18-month follow-up. Br J Psychiatry. 1998, 172: 413-419. 10.1192/bjp.172.5.413.

  28. Laporte S, Quenet S, Buchmüller-Cordier A, Reynaud J, Tardy-Poncet B, Thirion C, Decousus H, Mismetti P: Compliance and stability of INR of two oral anticoagulants with different half-lives: a randomised trial. Thromb Haemost. 2003, 89: 458-467.

  29. Lee JK, Grace KA, Taylor AJ: Effect of a pharmacy care program on medication adherence and persistence, blood pressure, and low-density lipoprotein cholesterol: a randomized controlled trial. JAMA. 2006, 296: 2563-2571. 10.1001/jama.296.21.joc60162.

  30. Levy LM, Robb M, Allen J, Doherty C, Bland JM, Winter RJ: A randomized controlled evaluation of specialist nurse education following accident and emergency department attendance for acute asthma. Respir Med. 2004, 94: 900-908.

  31. Márquez Contreras E, Casado Martínez JJ, Corchado Albalat Y, Chaves Gonzalez R, Grandio A, Losada Velasco C, Obando J, de Eugenio JM, Barrera JM: [Efficacy of an intervention to improve treatment compliance in hyperlipidemias]. Aten Primaria. 2004, 33: 443-450.

  32. Márquez Contreras E, García OV, Claros NM, Gil-Guillen V, de la Figuera-Von WM, Casado-Martinez JJ, Martin-de Pablos JL, Figueras M, Galera J, Serra A, Compliance Group of the Spanish Society of Hypertension: Efficacy of a home blood pressure monitoring programme on therapeutic compliance in hypertension: the EAPACUM-HTA study. J Hypertens. 2006, 24: 169-175. 10.1097/01.hjh.0000198023.53859.a2.

  33. Peveler R, George C, Kinmonth AL, Campbell M, Thompson C: Effect of antidepressant drug counselling and information leaflets on adherence to drug treatment in primary care: randomised controlled trial. BMJ. 1999, 319: 612-615. 10.1136/bmj.319.7210.612.

  34. Piette JD, Weinberger M, McPhee SJ, Mah CA, Kraemer FB, Crapo LM: Do automated calls with nurse follow-up improve self-care and glycemic control among vulnerable patients with diabetes? Am J Med. 2000, 108: 20-27.

  35. Remien RH, Stirratt MJ, Dolezal C, Lobozzo JS, Wagner GJ, Carballo-Dieguez A, El-Bassel N, Jung TM: Couple-focused support to improve HIV medication adherence: a randomized controlled trial. AIDS. 2005, 19: 807-814. 10.1097/01.aids.0000168975.44219.45.

  36. Schaffer SD, Tian L: Promoting adherence: effects of theory-based asthma education. Clin Nurs Res. 2004, 13: 69-89. 10.1177/1054773803259300.

  37. Schroeder K, Fahey T, Hollinghurst S, Peters TJ: Nurse-led adherence support in hypertension: a randomized controlled trial. Fam Pract. 2005, 22: 144-151. 10.1093/fampra/cmh717.

  38. van Es SM, Nagelkerke AF, Colland VT, Scholten RJ, Bouter LM: An intervention programme using the ASE-model aimed at enhancing adherence in adolescents with asthma. Patient Educ Couns. 2001, 44: 193-203. 10.1016/S0738-3991(00)00195-6.

  39. Vergouwen AC, Bakker A, Burger H, Verheij TJ, Koerselman F: A cluster randomized trial comparing two interventions to improve treatment of major depression in primary care. Psychol Med. 2005, 35: 25-33. 10.1017/S003329170400296X.

  40. Volume CI, Farris KB, Kassam R, Cox CE, Cave A: Pharmaceutical care research and education project: patient outcomes. J Am Pharm Assoc. 2001, 41: 411-420.

  41. Walley JD, Khan AM, Newell JN, Khan MH: Effectiveness of the direct observation component of DOTS for tuberculosis: a randomised controlled trial in Pakistan. Lancet. 2001, 357: 664-669. 10.1016/S0140-6736(00)04129-5.

  42. Weber R, Christen L, Christen S, Tschopp S, Znoj H, Schneider C, Schmitt J, Opravil M, Gunthard HF, Ledergerber B, Swiss HIV Cohort Study: Effect of individual cognitive behaviour intervention on adherence to antiretroviral therapy: prospective randomized trial. Antivir Ther. 2004, 9: 85-95.

  43. Higgins JPT, Green S (editors): Cochrane Handbook for Systematic Reviews of Interventions Version 4.2.5 [updated May 2005]. The Cochrane Library. 2005, Chichester, UK: John Wiley & Sons, Ltd.

  44. Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P, CONSORT Group: Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med. 2008, 148: 295-309. 10.7326/0003-4819-148-4-200802190-00008.

  45. fs/QCA Software: []

  46. Oakley A, Strange V, Bonell C, Allen E, Stephenson J: Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006, 332: 413-416. 10.1136/bmj.332.7538.413.

  47. Atkins S, Biles D, Lewin S, Ringsberg K, Thorson A: Patients' experiences of an intervention to support tuberculosis treatment adherence in South Africa. J Health Serv Res Policy. 2010, 15: 163-170. 10.1258/jhsrp.2010.009111.


Acknowledgements

The project was funded by Marie Curie Cancer Care. We gratefully acknowledge the work of Tatiana Salisbury and Emma Beecham for their support and comments on the development of the guidelines on how an intervention may qualify as corresponding to a suggested strategy, and Jonathon Wistow for commenting on an earlier draft of this paper.

Author information



Corresponding author

Correspondence to Bridget Candy.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

BC contributed to the conception and design of the project, the analysis and interpretation of data and to drafting the manuscript. LJ contributed to the design of the project, the interpretation of data and to critically revising the manuscript for important intellectual content. MK contributed to the conception and design of the project, the interpretation of data and to critically revising the manuscript for important intellectual content. SO contributed to the conception and design of the project, the interpretation of data and to critically revising the manuscript for important intellectual content. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Candy, B., King, M., Jones, L. et al. Using qualitative evidence on patients’ views to help understand variation in effectiveness of complex interventions: a qualitative comparative analysis. Trials 14, 179 (2013).



Keywords

  • Complex interventions
  • Randomised controlled trials
  • Qualitative evidence
  • Qualitative comparative analysis