
Evaluation of cultural competency in a South African cluster randomised controlled trial: lessons learned for trial reporting standards

Abstract

Background

Failure to consider relevant cultural, ethnic and diversity parameters (and the intersections between them) during trial protocol development and trial conduct may negatively impact recruitment, intervention development and delivery, and participant adherence and retention, potentially reducing overall internal validity. This case study aimed to evaluate the utility of, and comparability between, the nine-item Gibbs Framework for measuring cultural competency and the five-item GRIPP-2 Short Form (GRIPP-2(SF)) checklist for assessing patient and public involvement, in the context of a complex clinical trial conducted in an African setting.

Methods

We identified and collated all relevant publications, source and procedural data related to the trial and integrated the documents into a dynamic trial timeline. Two independent investigators applied and scored the Gibbs Framework and the GRIPP-2(SF) checklist against the four publications arising from the trial, noting the functionality and comparability of the tools. Where cultural competency was not met, a third investigator screened all procedural and source data to determine whether cultural competency had been achieved but not reported in the publications, or whether, based on the documentation, the trial had not met appropriate cultural competency.

Results

Application of the Gibbs Framework found that the trial scored ‘2’ on seven of the nine Gibbs items, indicating full cultural competency for those questions. The Framework indicated that the trial research question was not driven by the articulated needs of patients, and that patients, caregivers and clinical providers were not involved in the development of the intervention. Comparison with the GRIPP-2(SF) checklist showed that the Gibbs Framework performed better on evaluation of partnerships with the community, identification of culturally competent data sources and target populations, and appointment of trial staff in an inclusive manner.

Conclusions

Comprehensive evaluation of the trial’s cultural competency required scrutiny of both published manuscripts and source and procedural data, suggesting that there is a gap in current trial reporting standards with respect to cultural competence.

Trial registration

PACTR201610001825403. Registered on 17 October 2016.


Background

Cultural competence is a broad term which recognises the responsiveness of the health system to diversity of language, ethnicity and social groups [1,2,3]. The concept has been integrated into patient care, research and medical education in high-income countries where vulnerable and/or minority populations are relatively under-served by the health system compared to the general population [4].

To our knowledge, the term has not been widely used in the context of low- and middle-income countries (LMIC) where vulnerable, impoverished and indigent populations are likely to represent the majority of the population but may nevertheless be poorly served by the health system.

Cultural competence in the context of trial design and conduct refers to consideration of the cultural and linguistic diversity of the populations targeted for inclusion in a trial [5]. Failure to report these parameters may hamper successful implementation of effective interventions following the trial due to limitations in judging external validity (generalisability). In addition, internal validity may be affected, as cultural, ethnic and diversity parameters (and the intersections between them) can impact participant adherence and retention (attrition bias) and outcome measurement (detection bias). Identification and engagement of the target population are therefore recognised as key concepts in cultural competency for optimising participant recruitment and retention in trials.

A checklist to guide the cultural competency of a trial may be useful to investigators during protocol development and trial reporting. Current CONSORT guidance does not include reporting related to cultural competence [6]. The 2007 Gibbs Framework comprises nine criteria to evaluate the degree to which a trial meets cultural competency, from the community engagement stage through to analysis and dissemination [5]. The GRIPP-2(SF) is an abbreviated five-item checklist targeted at clinical trial reports [7].

This case study aims to evaluate the utility of, and comparability between, the Gibbs and GRIPP-2(SF) tools in the context of a complex clinical trial conducted in an African setting. We retrospectively applied the 2007 Gibbs Framework and the GRIPP-2(SF) checklist to source and procedural data arising from Project MIND, a three-arm cluster randomised controlled trial of 1340 patients conducted in 24 primary care clinics in urban and rural settings in the Western Cape Province of South Africa. Project MIND compared two different systems approaches to integrating mental health and alcohol use counselling into chronic disease care with treatment as usual [8]. The target population was people receiving treatment for HIV or diabetes from public health services, representing the poorest, least educated and most vulnerable populations in South Africa. Differences in education and income are likely to affect conceptualisations of mental health and stigma, as well as views on the acceptability of alcohol use. Project MIND thus presented a context in which a high degree of linguistic and cultural diversity, both within the sampled trial populations and between trial participants and trial investigators, was implicit.

Methods

We established a Trial Cultural Competency (CC) Working Group comprising the Project MIND principal investigator (BM), the Project MIND Quality Assurance Officer (LEC), and the CC project lead (NS), who had served on the Project MIND Trial Steering Committee responsible for trial oversight and good governance. In addition, an experienced trialist and member of the CONSORT advisory group external to Project MIND (SH) was invited to join the Working Group to act as an arbiter and conduct duplicate data extraction. The study was funded by the Trials Methodology Research Partnership at the University of Liverpool and conducted as a study within a trial [9].

To evaluate the utility of, and comparability between, the Gibbs and GRIPP-2(SF) tools as applied to Project MIND, we first identified and collated all relevant source and procedural data related to the trial. Source data included de-identified transcripts of interviews with facility-based staff and with patients; procedural data included:

  • The Trial Protocol (ID: PACTR201610001825403) [10]

  • The original grant application to funders

  • Minutes of all trial-related meetings throughout the planning, preparation and conduct of the trial including (1) health departmental planning meetings, (2) facility-based (clinic) stakeholder meetings and (3) stakeholder advisory group meetings

  • Training Manuals for the formative, pilot, and implementation phases

This documentation did not contain any patient-identifying information or trial outcome data. We also captured all publications arising from the trial, of which there were four at the time [8, 11,12,13]. Documentation was stored in a secure shared Microsoft OneDrive folder, available only to the investigators, and categorised according to the relevant phase of the trial: (1) formative, (2) pilot, (3) conduct and (4) implementation.

An interactive trial process diagram reflecting the timeline of the trial, separated into the four phases (formative, pilot, conduct, implementation), was prepared in Microsoft Visio, with the responsible trial investigator identified for each phase. The source and procedural data folders were then linked electronically to the appropriate trial phase in the diagram, creating an integrated, interactive and data-rich timeline environment to facilitate access to the documentation required for application of the tools. See Fig. 1.
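The phase-indexed organisation of documents described above can be thought of as a simple keyed store. The following Python sketch illustrates one way to file and retrieve documents by trial phase; the phase names follow the paper, but the file names and function names are hypothetical and for illustration only.

```python
# Illustrative sketch of a phase-indexed document store.
# Phase names follow the paper; file names are invented examples.
TRIAL_PHASES = ("formative", "pilot", "conduct", "implementation")

document_store = {phase: [] for phase in TRIAL_PHASES}

def file_document(phase: str, filename: str) -> None:
    """File a source/procedural document under its trial phase."""
    if phase not in document_store:
        raise ValueError(f"Unknown trial phase: {phase!r}")
    document_store[phase].append(filename)

def documents_for(phase: str) -> list:
    """Return the documents filed under a given trial phase."""
    return list(document_store.get(phase, []))

# Hypothetical example documents.
file_document("formative", "stakeholder_meeting_minutes_2016.docx")
file_document("pilot", "training_manual_pilot.pdf")
```

Linking each phase to its own folder in this way mirrors the interactive timeline: the reviewer navigates from a phase to exactly the documents relevant to that phase.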

Fig. 1 Interactive trial timeline including intercepts with Gibbs Framework Domains

Application of the Gibbs Framework to Project MIND

We piloted the Gibbs Framework during the South African Medical Research Council's Alcohol, Tobacco and Other Drug Research Unit journal club in which ten research staff, several of whom had worked on Project MIND, applied the Gibbs Framework to a publication reporting on data collected during the formative (early planning) phase of Project MIND [11]. Following their feedback, we added an additional clarification to the Framework guidance to permit broader interpretation of the terms ‘peer support’ and ‘community stakeholder’ to include patients, community workers and/or healthcare providers. This was considered necessary in the context of a facility-based cluster trial where the model of care to be tested required the support and acceptance of both patients and clinic staff.

We applied the tools in a two-step process, first applying both tools to the published manuscripts and then applying the Gibbs Framework to the source and procedural data. During Step 1, two researchers (NS and SH) independently applied the Gibbs Framework and the GRIPP-2(SF) checklist to the four published articles reporting on (1) the feasibility of conducting the trial [12], (2) a qualitative study of patient preferences for mental health counselling [13], (3) an observational evaluation of facility readiness [12] and (4) focus group and interview data with facility staff regarding optimal selection of models of integration [11]. The articles were selected for initial evaluation because both tools are intended for application to published manuscripts. The GRIPP-2(SF) was selected over the GRIPP-2 Long Form (GRIPP-2(LF)) as it is intended to appraise the degree of patient and public involvement in studies where such involvement is a secondary or tertiary focus (as in a clinical trial), whereas the GRIPP-2(LF) is applicable to studies in which patient and public involvement is the primary focus [7]. A spreadsheet was created in Microsoft Excel and individual judgements were recorded for each Gibbs Framework question and each GRIPP-2(SF) checklist item.

For the Gibbs Framework, each question is awarded one of the following scores (the scores used throughout this study run from ‘0’ to ‘2’):

  • ‘0’, Culturally blind: methodological approaches underpinned by the belief that neither colour nor culture influence behaviour and that all people are the same

  • ‘1’, Culturally pre-competent: approaches recognising that the dominant race or culture of a country is not universally applicable but failing to fully attend to cultural differences

  • ‘2’, Culturally competent: approaches recognising the cultural diversity of the intended population

  • ‘NM’, Not Mentioned: recorded where there is no reporting or no information

For the GRIPP-2(SF), we recorded reporting of each item as Yes, No or Unclear. For both instruments, we extracted supportive text from the articles to underpin the judgement decisions. Agreements and discrepancies between the two coders were identified and discussed before a final consensus judgement was recorded.
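The dual-coder step can be sketched as a small comparison routine: each coder's judgements are recorded per question, and any question where the two judgements differ is flagged for consensus discussion. In this minimal Python sketch the question identifiers and scores are hypothetical; only the 0/1/2/'NM' score set follows the document.

```python
# Illustrative sketch: record two coders' independent Gibbs judgements
# and flag disagreements for consensus discussion.
# Scores follow the document's scale: 0, 1, 2 or "NM".
def find_discrepancies(coder_a: dict, coder_b: dict) -> list:
    """Return question IDs where the two coders' judgements differ."""
    return sorted(q for q in coder_a if coder_a[q] != coder_b.get(q))

# Hypothetical example judgements for three Gibbs questions.
scores_ns = {"Q1": 2, "Q2": 0, "Q7": 1}
scores_sh = {"Q1": 2, "Q2": "NM", "Q7": 1}

disagreements = find_discrepancies(scores_ns, scores_sh)
# disagreements == ["Q2"]: this question is discussed before a final
# consensus judgement is recorded.
```

The same pattern applies to the Yes/No/Unclear judgements for the GRIPP-2(SF) items.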

During Step 2, a decision-making matrix was prepared to identify those questions where (1) it was not possible to make a judgement due to a lack of reporting, (2) the Gibbs Cultural Competency score was rated as ‘1’, or (3) the score was rated as ‘0’ but it was not clearly or fully reported how the researchers had failed to address Cultural Competency. See Additional Tables 1 a–c. For questions meeting any of these criteria, the research assistant (LEC) systematically explored the phase-specific procedural and source data to establish whether Cultural Competency had been met. The search was ordered as follows: (1) protocols, (2) meeting minutes, (3) training manuals and (4) source data, including patient and staff interview transcripts. Where additional data related to Cultural Competency were identified, the supportive extract(s) and the file name of the source document were recorded and the search ceased. The additional data were then scrutinised by the two initial coders and a final Gibbs score was assigned by consensus. If no elements of CC were identified after scrutinising all the available secondary and source data, the question was scored ‘0’ for CC.
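The Step 2 escalation logic, in which successive document sets are searched in a fixed order and the search stops at the first supporting evidence, can be sketched as follows. The lookup function is a hypothetical stand-in for the manual screening described above, and the example corpus contents are invented.

```python
# Sketch of the ordered Step 2 search: protocols first, then meeting
# minutes, training manuals, and finally interview transcripts.
SEARCH_ORDER = ("protocols", "meeting_minutes",
                "training_manuals", "transcripts")

def search_for_evidence(question_id: str, corpus: dict):
    """Return (source_set, extract) for the first supporting extract
    found, or None if no document set contains evidence."""
    for source in SEARCH_ORDER:
        extract = corpus.get(source, {}).get(question_id)
        if extract:  # stop at the first supporting extract
            return source, extract
    return None

# Hypothetical corpus: question Q4 is supported only in the protocol.
corpus = {"protocols": {"Q4": "community health workers best placed"},
          "meeting_minutes": {}, "training_manuals": {},
          "transcripts": {}}
```

A question for which the search returns `None` after all four document sets corresponds to the zero CC score described above.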

Table 1 Results of application of Gibbs Framework to publication, source and procedural data

Finally, the responsible trial investigators and stakeholders identified in the Trial Timeline and Process Diagram were invited to discuss the findings for the questions which were scored at the end of the process as ‘zero’ or ‘1’ and to share their reflections on Cultural Competency within the trial.

We had planned to disseminate our findings to Project MIND clinic staff and gain their views on the utility of the tool and our interpretation thereof. However, COVID-19 regulations enforced in 2021 prevented this from taking place. Due to limited internet connectivity in the health clinics, virtual dissemination was not possible.

Comparability between the Gibbs Framework and GRIPP-2(SF)

During the application process and in monthly Working Group meetings, we identified the overlap between the instrument criteria (Gibbs and GRIPP-2(SF)) and tabulated these, recording variances and producing a graphical representation of tool comparability. We also noted challenges in coding and operationalising the guidance, and considered the utility and relevance to LMIC settings.
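The tabulation of overlap between the two instruments amounts to a cross-walk from Gibbs questions to GRIPP-2(SF) items, with some Gibbs questions left unmatched. The sketch below illustrates this; the specific pairings shown are hypothetical placeholders, except that Gibbs questions 1, 3 and 4 are left unmatched, as reported in the Results.

```python
# Illustrative cross-walk between Gibbs questions and GRIPP-2(SF) items.
# Per the Results, Gibbs Q1, Q3 and Q4 have no GRIPP-2(SF) counterpart;
# the remaining pairings here are hypothetical placeholders only.
gibbs_to_gripp = {"Q1": None, "Q2": "item 1", "Q3": None, "Q4": None,
                  "Q5": "item 2", "Q6": "item 3", "Q7": "item 3",
                  "Q8": "item 4", "Q9": "item 5"}

def unmatched_gibbs_questions(mapping: dict) -> list:
    """Gibbs questions with no corresponding GRIPP-2(SF) item."""
    return sorted(q for q, item in mapping.items() if item is None)
```

Recording the cross-walk in this form makes the variances between the tools explicit and reproducible.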

Results

Application of the Gibbs Framework and GRIPP-2(SF) to Project MIND

Results of the Gibbs and GRIPP-2(SF) evaluation of the four published manuscripts detailing the formative and pilot stages of the Project MIND trial are presented in Additional Table 2.

Table 2 Utility of Gibbs Framework and suggested improvements

Subsequent in-depth source and procedural data analysis to further inform the Gibbs Framework evaluation is shown in Table 1. Overall, when the manuscript data and the source and additional procedural data were considered together, the trial was considered culturally competent, scoring ‘2’ on all questions except questions 2 and 7.

The trial did not meet any CC criteria for question 2, viz. initiation of the research question by the cultural group most likely to be affected by the trialled intervention (the patients attending the clinics).

For question 7, regarding the involvement of peers in the development of the intervention, analysis of the minutes of meetings held with the provincial department of health in advance of the trial confirmed input into the intervention by departmental stakeholders only, not by peers (patients, community health workers or clinical providers). The score therefore remained at ‘1’.

For question 4, which relates to the involvement of peers in the appointment and reimbursement of staff, secondary data analysis revealed an additional level of cultural competency following scrutiny of the protocol submitted to the Ethics Committee (not in the public domain), and the CC score was increased from ‘1’ to ‘2’. The protocol noted that community health workers were ‘already providing adherence and disease management-related counselling (and) were best placed and most receptive to being trained to deliver a brief and structured mental health intervention. These findings, together with the pre-existing body of literature on the feasibility of using community health workers to deliver mental health counselling, informed our decision to use community health workers as intervention agents’. In the South African context, community health workers are drawn from the patient populations that they serve, and we thus considered them ‘peers’.

Comparability and utility of the Gibbs Framework and GRIPP-2(SF)

The differences and similarities between the two tools are articulated in Additional Table 3. Figure 2 illustrates the areas of overlap between the two tools. Questions 1, 3 and 4 of the Gibbs Framework are not mirrored in the GRIPP-2(SF) and cover forming partnerships, identifying data sources and target populations, and appointing staff respectively. The Working Group recorded the utility of the Gibbs Framework noting challenges and possible improvements outlined in Table 2.

Fig. 2 Diagrammatic representation of overlap between Gibbs Framework Domains and GRIPP-2(SF) Items

Discussion

The application of the Gibbs Framework indicated that the Project MIND trial was culturally competent in all but two of the nine Gibbs criteria. The Framework revealed that the trial research question was not driven by the articulated needs of patients, and neither were patients, caregivers and clinical providers involved in the development of the intervention. Comprehensive evaluation of the trial’s cultural competency required scrutiny of both published manuscripts and source and procedural data, suggesting that there is a gap in current trial reporting standards with respect to cultural competence.

For this study, the Gibbs Framework had distinct advantages over the GRIPP-2(SF) as it is tailored specifically to cultural competence. It includes an evaluation of partnerships forged before the start of the trial and an assessment of the investigators’ awareness of their own cultural framework and its influence on their research approach. Self-reflection on one’s own cultural bias is a key component of the qualitative research paradigm [14] but is rarely considered in the conduct or reporting of a clinical trial. It may be lacking in the clinical trialist’s toolbox of skills and may be especially important for trialists working in countries and cultural settings different from their own. Failure to consider the lens of the trialist, and how it may differ from those of the trial participants and indeed of the trial clinical staff, may impact successful recruitment and participation, and ultimately the robustness of findings if attrition is high.

We can foresee the criteria of the Gibbs Framework being used to guide all stages of framing the trial research question, protocol development and final analysis. Our view is supported by a recent systematic review in which modified Gibbs criteria were applied to 80 HIV adherence trials, mainly conducted in the USA [1]. The authors observed a lack of culturally competent trial methods but noted that inadequate reporting hampered their ability to be conclusive. Identifying the key components of the Gibbs Framework to incorporate into the current CONSORT Statement [15], and potentially the SPIRIT Statement [16], which guide reporting standards for trial conduct and protocol development respectively, will clearly require further interrogation, development and collaboration among trialists. Consideration will need to be given to the implications of any such changes for the stability of CONSORT Extension Statements such as CONSORT-SPI, which pertains to social and psychological interventions [17]. We believe the Gibbs Framework is a reasonable starting point for these discussions.

In the USA, cultural competence, as a construct to incorporate appropriate responsiveness to differences in ethnicity, language and other cultural markers, has been criticised for failing to recognise the extent of systemic racism inherent in healthcare [4]. In many of these settings, vulnerable populations represent migrants, immigrants and other minority populations. This is in direct contrast to a country like South Africa where the vast majority of the general population are impoverished and experience vulnerability. In settings where clinical researchers may have low exposure to, and awareness of, linguistic and other diversity indicators specifically in minority populations, a tool like the Gibbs Framework may be an important step in furthering responsiveness in trial conduct.

Following the fall of apartheid in South Africa and the emergence of the HIV epidemic, clinical research developed a strong social justice and advocacy approach [18]. Community preparedness studies are conducted prior to clinical and public health trials, and almost all HIV trials conducted in the public sector include regular input from Community Advisory Boards (CABs), ensuring active participation by those likely to be most affected by the interventions under trial [19, 20]. As a result, the terminology of cultural competency (and its inherent ‘othering’) may not be the most appropriate for our setting, where diversity and the importance of ensuring inclusivity and equity are encouraged in research. Nevertheless, application of the Gibbs Framework to the Project MIND trial did elicit important deficits, the lack of participation by the patient population in driving the research question being the most critical. This deficit was also observed in the HIV trials included in the systematic review referred to earlier [1] and is not addressed by the scope of a CAB or by community preparedness.

In applying the Gibbs Framework, we identified several challenges including the lack of clear definitions for many of the terms used in the tool and recommend that an updated version of the tool consider the inclusion of a user glossary. This should be informed by scrutiny of the terminology used across a wide range of clinical and public health settings and regions to reflect and achieve greater global reach. In addition, real trial examples which illustrate the differentiation between thresholds of cultural blindness through to cultural competence will greatly aid in ensuring consistent scoring between researchers.

Our approach made use of both internal and independent evaluators working in partnership, as described by Conley-Tyler [21], to gain the benefits of both approaches. Given the need to access unpublished data to fully assess cultural competency and to understand the trial context, it was advantageous to include trial investigators in the project team. We aimed to address any real or perceived subjectivity by having all scoring conducted by two independent authors, one of whom was not involved in the Project MIND trial in any capacity and was considered impartial. Final decisions were made collectively to ensure the robustness of the results. The use of shared software and the creation of an interactive trial timeline linked to manuscripts, source data and procedural data greatly enhanced our ability to conceptualise all the stages of the Project MIND trial where cultural competence was important to consider. Generalisability of our results is limited given that we applied the Framework to a single trial. However, as a clinic-based cluster trial incorporating both clinical and behavioural outcomes, we believe Project MIND provided a sufficiently complex platform to thoroughly evaluate the Gibbs tool.

Our study included a global team with research expertise in reviewing and conducting clinical trials in both low- and high-income settings. As a team, we have limited anthropological knowledge and experience and recognise that expanding this field of research will require a more diverse, interdisciplinary approach. During our discussions, we consciously created an environment in which questions regarding diversity within the clinic setting were candidly explored. We acknowledge that our lived experiences of navigating the healthcare research environment are vastly different from those of the participants in Project MIND and note that future teams should include patient and provider representation in equal measure to researchers.

Conclusion

Our approach operationalised secondary data analytical methods for the application of the Gibbs Framework to an LMIC trial, which was judged to be culturally competent in seven of nine domains. It will be important to repeat and replicate this process to inform revision of the Framework and improve its functionality and usability. Further evaluation may take the form of retrospective application to completed trials, as in our study, or prospective application in planned and ongoing trials while monitoring the utility of the tool. Comprehensive evaluation of the trial’s cultural competency required scrutiny of both published manuscripts and source and procedural data, suggesting that there may be a gap in current trial reporting standards with respect to cultural competence.

Availability of data and materials

The dataset supporting the conclusions of this article is included within the article (and its additional files).

References

  1. Wali N, Renzaho A, Wang X, Atkins B, Bhattacharya D. Do interventions to improve adherence to antiretroviral therapy recognise diversity? A systematic review. AIDS Care. 2021;33(11):1379–93. https://doi.org/10.1080/09540121.2020.1811198.

  2. Cross TL, Bazron BJ, Dennis KW, Isaacs MR. Towards a culturally competent system of care: a monograph on effective services for minority children who are severely emotionally disturbed. Washington, DC: Georgetown University Child Development Centre; 1989. https://spu.edu/~/media/academics/school-of-education/Cultural%20Diversity/Towards%20a%20Culturally%20Competent%20System%20of%20Care%20Abridged.ashx. Accessed 11 August 2021.

  3. Riggs E, Gussy M, Gibbs L, van Gemert C, Waters E, Priest N, Watt R, Renzaho AM, Kilpatrick N. Assessing the cultural competence of oral health research conducted with migrant children. Community Dent Oral Epidemiol. 2014;42(1):43–52. https://doi.org/10.1111/cdoe.12058.

  4. Berger JT, Miller DR. Health disparities, systemic racism, and failures of cultural competence. Am J Bioeth. 2021;21(9):4–10. https://doi.org/10.1080/15265161.2021.1915411.

  5. Gibbs L, Waters E, Renzaho AMN, Kulkens M. Moving towards increased cultural competency in public health research. In: Williamson A, DeSouza R, editors. Researching with communities: grounded perspectives on engaging communities in research. London: Muddy Creek Press; 2007. p. 339–55.

  6. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869. https://doi.org/10.1136/bmj.c869.

  7. Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453. https://doi.org/10.1136/bmj.j3453.

  8. Myers B, Lund C, Lombard C, Joska JA, Levitt NS, Butler C, et al. Comparing dedicated and designated models of integrating mental health into chronic disease care: study protocol for a cluster randomized controlled trial. Trials. 2018;19(1):185. https://doi.org/10.1186/s13063-018-2568-9.

  9. SAMRC. Project MIND studies within a trial. 2021. http://projectmind.mrc.ac.za/swat.html. Accessed 25 November 2021.

  10. Myers B. Strengthening South Africa's health system through integrating treatment for mental illness into chronic disease care. 2016. https://pactr.samrc.ac.za/Search.aspx. Accessed 25 November 2021.

  11. Sorsdahl K, Naledi T, Lund C, Levitt NS, Joska JA, Stein DJ, et al. Integration of mental health counselling into chronic disease services at the primary health care level: formative research on dedicated versus designated strategies in the Western Cape, South Africa. J Health Serv Res Policy. 2021;26(3):172–9. https://doi.org/10.1177/1355819620954232.

  12. Myers B, Breuer E, Lund C, Petersen Williams P, van der Westhuizen C, Brooke-Sumner C, et al. Assessing capability for implementing mental health counselling within primary care facilities in a middle-income country: a feasibility study. J Psychiatr Ment Health Nurs. 2019;26(5–6):163–74. https://doi.org/10.1111/jpm.

  13. Myers B, Joska JA, Lund C, Levitt NS, Butler CC, Naledi T, et al. Patient preferences for the integration of mental health counseling and chronic disease care in South Africa. Patient Prefer Adherence. 2018;12:1797–803.

  14. Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483–8. https://doi.org/10.1016/S0140-6736(01)05627-6.

  15. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010;63(8):1–37. https://doi.org/10.1016/j.jclinepi.2010.03.004.

  16. Chan A, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346:e7586. https://doi.org/10.1136/bmj.e7586.

  17. Grant S, Mayo-Wilson E, Montgomery P, Macdonald G, Michie S, Hopewell S, et al. CONSORT-SPI 2018 explanation and elaboration: guidance for reporting social and psychological intervention trials. Trials. 2018;19(1):406. https://doi.org/10.1186/s13063-018-2735-z.

  18. Caswell G, Dubula V, Baptiste S, Etya’ale H, Syarif O, Barr D. The continuing role of communities affected by HIV in sustained engagement in health and rights. J Int AIDS Soc. 2021;24(Suppl 3):e25724. https://doi.org/10.1002/jia2.25724.

  19. Zhao Y, Fitzpatrick T, Wan B, Day S, Mathews A, Tucker JD. Forming and implementing community advisory boards in low- and middle-income countries: a scoping review. BMC Med Ethics. 2019;20(1):73.

  20. Reddy P, Buchanan D, Sifunda S, James S, Naidoo N. The role of community advisory boards in health research: divergent views in the South African experience. SAHARA J. 2010;7(3):2–8. https://doi.org/10.1080/17290376.2010.9724963.

  21. Conley-Tyler M. A fundamental choice: internal or external evaluation? Eval J Australas. 2005;4(1–2):3–11.


Acknowledgements

The author team gratefully acknowledges the support received from the Trials Methodology Research Partnership and Professor Paula Williamson of the University of Liverpool. We also thank Dr Petal Petersen Williams and Prof Katherine Sorsdahl for their helpful feedback on the results.

Funding

The project was funded by UK Medical Research Council (MRC) Trials Methodology Research Partnership (grant reference MR/S014357/1) and ATODRU, South African Medical Research Council.

Author information

Contributions

NS conceptualised the study, obtained the funding and led the design of the project methods. All authors contributed to the design and application of the methodology. NS and SH applied the Gibbs and GRIPP-2(SF) tools to the publications and LEC applied the Gibbs to the secondary data. LEC and BM provided content and context information. All authors contributed to the manuscript and its revisions. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Lesley-Ann Erasmus-Claassen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Table S1a.

Guide to Publication reporting rating. Supplementary Table S1b. Source and procedural data to be searched and scrutinised for further data. Supplementary Table S1c. Matrix of Gibbs score and publication reporting to determine data scrutiny.

Additional file 2: Supplementary Table S2.

Results of application of Gibbs Framework and GRIPP-2(SF) to publications from Project MIND.

Additional file 3: Supplementary Table S3.

Comparison between the Gibbs Framework and the GRIPP-2(SF).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Siegfried, N.L., Hopewell, S., Erasmus-Claassen, LA. et al. Evaluation of cultural competency in a South African cluster randomised controlled trial: lessons learned for trial reporting standards. Trials 23, 918 (2022). https://doi.org/10.1186/s13063-022-06767-y


Keywords

  • Cultural competency
  • Diversity
  • Reporting guidelines
  • Randomised controlled trial
  • Gibbs Framework
  • GRIPP-2(SF) checklist