Measuring implementation fidelity in a cluster-randomized pragmatic trial: development and use of a quantitative multi-component approach

Background: In pragmatic trials, on-site partners, rather than researchers, lead intervention delivery, which may result in implementation variation; there is a need to measure this variation quantitatively. Applying the Framework for Implementation Fidelity (FIF), we develop an approach for measuring variability in site-level implementation fidelity. This approach is then applied to measure site-level fidelity in a cluster-randomized pragmatic trial of Music & Memory℠ (M&M), a personalized music intervention targeting agitated behaviors in residents living with dementia, in US nursing homes (NHs).

Methods: Intervention NHs (N = 27) implemented M&M using a standardized manual, the provided staff trainings, and iPods for participating residents. Quantitative implementation data, including iPod metadata (i.e., song title, duration, number of plays), were collected during baseline, 4-month, and 8-month site visits. Three researchers developed four FIF adherence dimension scores. For Details of Content, we independently reviewed the implementation manual and reached consensus on six core M&M components. Coverage was the total number of residents exposed to the music at each NH. Frequency was the percentage of participating residents in each NH exposed to M&M at least weekly. Duration was the median minutes of music received per resident-day exposed. Data elements were scaled and summed to generate dimension-level NH scores, which were then summed to create a Composite adherence score. NHs were grouped by tercile (low-, medium-, and high-fidelity).

Results: The 27 NHs differed in size, resident composition, and publicly reported quality rating. The Composite score varied significantly across NHs, ranging from 4.0 to 12.0 (mean 8.0, standard deviation (SD) 2.1). Scaled dimension scores were significantly correlated with the Composite score. However, dimension scores were not highly correlated with each other; for example, the correlation of the Details of Content score with Coverage was τb = 0.11 (p = 0.59) and with Duration was τb = −0.05 (p = 0.78). The Composite score correlated with the CMS quality star rating and the presence of an Alzheimer's unit, suggesting face validity.

Conclusions: Guided by the FIF, we developed and used an approach to quantitatively measure overall site-level fidelity in a multi-site pragmatic trial. Future pragmatic trials, particularly in the long-term care environment, may benefit from this approach.

Trial registration: ClinicalTrials.gov NCT03821844. Registered on 30 January 2019, https://clinicaltrials.gov/ct2/show/NCT03821844.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13063-022-06002-8.
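The scoring logic described in the Methods (scale each adherence dimension to a common range, sum the scaled dimensions into a Composite score, then group sites into terciles) can be sketched as below. This is a minimal illustration, not the trial's actual scoring procedure: the data values, the 1–3 scaling range, and all variable names are assumptions made for the example.

```python
def min_max_scale(values, lo=1.0, hi=3.0):
    """Rescale raw dimension values to a common range so that
    dimensions measured in different units can be summed."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

# Hypothetical per-NH raw adherence data for six sites (illustrative only):
# coverage  = residents exposed to the music,
# frequency = % of participating residents exposed at least weekly,
# duration  = median minutes of music per resident-day exposed,
# content   = core M&M components delivered (0-6).
raw = {
    "coverage":  [4, 12, 7, 20, 9, 15],
    "frequency": [30, 80, 55, 95, 40, 70],
    "duration":  [10, 45, 25, 60, 15, 35],
    "content":   [2, 5, 3, 6, 4, 5],
}

# Scale each dimension, then sum scaled dimensions per NH into a Composite.
scaled = {dim: min_max_scale(vals) for dim, vals in raw.items()}
n_sites = len(raw["coverage"])
composite = [round(sum(scaled[d][i] for d in scaled), 2) for i in range(n_sites)]

# Group NHs into terciles (low / medium / high fidelity) by Composite rank.
order = sorted(range(n_sites), key=lambda i: composite[i])
labels = ["low", "medium", "high"]
tercile = [None] * n_sites
for rank, i in enumerate(order):
    tercile[i] = labels[min(rank * 3 // n_sites, 2)]
```

With four dimensions each scaled to 1–3, the Composite is bounded between 4 and 12; a site that is lowest on every dimension scores 4.0 and a site that is highest on every dimension scores 12.0.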

Notes: A key concept of the StaRI standards is the dual strands of describing, on the one hand, the implementation strategy and, on the other, the clinical, healthcare, or public health intervention that is being implemented. These strands are represented as two columns in the checklist.
The primary focus of implementation science is the implementation strategy (column 1) and the expectation is that this will always be completed.
The evidence about the impact of the intervention on the targeted population should always be considered (column 2), and either health outcomes should be reported or robust evidence cited to support a known beneficial effect of the intervention on the health of individuals or populations. The StaRI standards refer to the broad range of study designs employed in implementation science. Authors should refer to other reporting standards for advice on reporting specific methodological features. Conversely, whilst all items are worthy of consideration, not all items will be applicable to, or feasible within, every study.

StaRI checklist (page numbers indicate where each item is reported)

Note: "Implementation strategy" refers to how the intervention was implemented; "intervention" refers to the healthcare or public health intervention that is being implemented.

Title and abstract
Item 1. Title (reported on page 1): Identification as an implementation study, and description of the methodology, in the title and/or keywords.

Introduction
Item 3. Introduction (page 6): Description of the problem, challenge, or deficiency in healthcare or public health that the intervention being implemented aims to address.
Item 4. Rationale (page 7):
  Implementation strategy: The scientific background and rationale for the implementation strategy (including any underpinning theory/framework/model, how it is expected to achieve its effects, and any pilot work).
  Intervention (page 7): The scientific background and rationale for the intervention being implemented (including evidence about its effectiveness and how it is expected to achieve its effects).
Item 5. Aims and objectives (page 8): The aims of the study, differentiating between implementation objectives and any intervention objectives.

Methods: description
Item 6. Design (page 9): The design and key features of the evaluation (cross-referencing any appropriate methodology reporting standards) and any changes to the study protocol, with reasons.
Item 7. Context: The context in which the intervention was implemented. (Consider social, economic, policy, healthcare, and organisational barriers and facilitators that might influence implementation elsewhere.)
Item 8. Targeted 'sites' (page 8):
  Implementation strategy: The characteristics of the targeted 'site(s)' (e.g., locations/personnel/resources) for implementation and any eligibility criteria.
  Intervention: The population targeted by the intervention and any eligibility criteria.
Item 9. Description:
  Implementation strategy (pages 9-10): A description of the implementation strategy.
  Intervention (page 7): A description of the intervention.
Item 10. Sub-groups (N/A): Any sub-groups recruited for additional research tasks, and/or nested studies, are described.

Methods: evaluation
Item 11. Outcomes (pages 11-15):
  Implementation strategy: Defined pre-specified primary and other outcome(s) of the implementation strategy, and how they were assessed. Document any pre-determined targets.
  Intervention (pages 11-15): Defined pre-specified primary and other outcome(s) of the intervention (if assessed), and how they were assessed. Document any pre-determined targets.
Item 12. Process evaluation (page 9): Process evaluation objectives and outcomes related to the mechanism by which the strategy is expected to work.

TIDieR checklist (page numbers indicate where each item is reported in the primary paper; "other" gives an alternative location)

1. Brief name: Provide the name or a phrase that describes the intervention. (Page 7)
2. Why: Describe any rationale, theory, or goal of the elements essential to the intervention. (Pages 7 and 10; other: Study Protocol)
3. What (materials): Describe any physical or informational materials used in the intervention, including those provided to participants or used in intervention delivery or in training of intervention providers. Provide information on where the materials can be accessed (e.g., online appendix, URL).
4. What (procedures): Describe each of the procedures, activities, and/or processes used in the intervention, including any enabling or support activities.
5. Who provided: For each category of intervention provider (e.g., psychologist, nursing assistant), describe their expertise, background, and any specific training given.
6. How: Describe the modes of delivery (e.g., face-to-face or by some other mechanism, such as internet or telephone) of the intervention and whether it was provided individually or in a group.
7. Where: Describe the type(s) of location(s) where the intervention occurred, including any necessary infrastructure or relevant features. (Pages 18-20)

** Authors: use "N/A" if an item is not applicable for the intervention being described. Reviewers: use "?" if information about the element is not reported/not sufficiently reported.
† If the information is not provided in the primary paper, give details of where this information is available. This may include locations such as a published protocol or other published papers (provide citation details) or a website (provide the URL).
ǂ If completing the TIDieR checklist for a protocol, these items are not relevant to the protocol and cannot be described until the study is complete.
* We strongly recommend using this checklist in conjunction with the TIDieR guide (see BMJ 2014;348:g1687) which contains an explanation and elaboration for each item.
* The focus of TIDieR is on reporting details of the intervention elements (and where relevant, comparison elements) of a study. Other elements and methodological features of studies are covered by other reporting statements and checklists and have not been duplicated as part of the TIDieR checklist. When a randomised trial is being reported, the TIDieR checklist should be used in conjunction with the CONSORT statement (see www.consort-statement.org) as an extension of Item 5 of the CONSORT 2010 Statement. When a clinical trial protocol is being reported, the TIDieR checklist should be used in conjunction with the SPIRIT statement as an extension of Item 11 of the SPIRIT 2013 Statement (see www.spirit-statement.org). For alternate study designs, TIDieR can be used in conjunction with the appropriate checklist for that study design (see www.equator-network.org).