
Development of a measure to assess the quality of proxy decisions about research participation on behalf of adults lacking capacity to consent: the Combined Scale for Proxy Informed Consent Decisions (CONCORD scale)

Abstract

Background

Recruitment of adults lacking the capacity to consent to trials requires the involvement of an alternative ‘proxy’ decision-maker, usually a family member. This can be challenging for family members, with some experiencing emotional and decisional burdens. Interventions to support proxy consent decisions in non-emergency settings are being developed. However, the ability to evaluate interventions is limited due to a lack of measures that capture outcomes of known importance, as identified through a core outcome set (COS).

Methods

Using established measure development principles, a four-stage process was used to develop and refine items for a new measure of proxy decision quality: (1) findings from a recent scoping review and consensus study were reviewed to identify items for inclusion in the scale and any existing outcome measures, (2) assessment of content coverage by existing measures and identification of insufficiency, (3) construction of a novel scale, and (4) cognitive testing to explore comprehension of the scale and test its content adequacy through interviews with family members of people with impaired capacity.

Results

A range of outcome measures associated with healthcare decision-making and informed consent decisions, such as the Decisional Conflict Scale, were identified in the scoping review. These measures were mapped against the key constructs identified in the COS to assess content coverage. Insufficient coverage of areas such as proxy-specific satisfaction and knowledge sufficiency by existing instruments indicated that a novel measure was needed. An initial version of a combined measure (the CONCORD scale) was drafted and tested during cognitive interviews with eleven family members. The interviews established comprehension, acceptability, feasibility, and content adequacy of the scale. Participants suggested re-phrasing and re-ordering some questions, leading to the creation of a revised version.

Conclusions

The CONCORD scale provides a brief measure to evaluate the quality of decisions made on behalf of an adult who lacks the capacity to consent in non-emergency settings, enabling the evaluation of interventions to improve proxy decision quality. Initial evaluation indicates it has content adequacy and is feasible to use. Further statistical validation work is being undertaken.


Background

Participants included in clinical trials often do not reflect the entire population that the intervention is intended to be used for, leading to concerns about both the generalisability of the results and the impact on excluded groups who are under-served by research [1]. One such under-served group is adults who have an impaired capacity to consent to research due to, for example, a neurodegenerative condition such as dementia, an acute or critical illness, or a life-long disability [1]. The exclusion of adults who lack the capacity to consent from trials in non-emergency settings is widespread, even in populations with a high prevalence of cognitive impairment where inclusion might be particularly expected [2]. This has been found in trials in populations ranging from people with hip fractures [3, 4] and older people [5] to people with intellectual disabilities [6].

Whilst the barriers to the inclusion of adults lacking capacity are complex and multifactorial, one of the principal factors is that conducting research with people with impaired capacity to consent relies on alternative decision-makers to provide informed consent on their behalf [7]. Where someone lacks the capacity to consent, family members are usually approached to act as a ‘proxy’ or ‘surrogate’ decision-maker [8, 9]. However, it can be difficult for families to make a decision about whether their relative should participate in a research study, and many experience an emotional and decisional burden as a result [10]. Families often express uncertainty about making what for some can be complex and challenging decisions, which can lead to psychological stress when asked to take on this role [11, 12]. Proxy decision-making for research has been demonstrated to be particularly stressful in some settings and contexts [13], with some studies reporting that nearly all proxies experience some degree of burden when making decisions about research [14]. This contributes to a higher proportion of families declining participation than patients themselves [15]. Despite numerous innovations to improve informed consent processes for research, there are currently no effective interventions for proxies who are making decisions on behalf of someone who lacks capacity in either emergency or non-emergency situations, although some are under development.

Evaluating interventions to support proxy consent

One intervention recently developed to support proxies in making decisions about non-emergency research participation is a decision aid (DA) intended to help families to make more informed and supported decisions when acting as a proxy [16]. Using decision science principles, DAs can help when making complex and preference-sensitive decisions, including decisions about participating in clinical trials [17, 18]. DAs differ from traditional information materials in that they do not focus solely on improving the delivery of information [18], but instead are intended to facilitate decision-making and lead to decisions which are more informed and consistent with the person’s values [19]. We developed a DA for proxy decisions about trial participation in response to family members identifying a need for better information and support when making these challenging decisions [16]. The DA has undergone acceptability testing with family members of people with impaired capacity to consent and now requires evaluation to determine whether it provides an effective form of support. Establishing the effectiveness of DAs (compared to standard approaches or alternatives) requires evidence that they improve decision quality—that is, the dual constructs of the quality of the decision-making process and the quality of the choice made [20]. There was therefore a need to develop an appropriate outcome measure to assess the quality of proxy decisions made about research participation in order to evaluate the DA and similar future interventions [16].

Developing an appropriate outcome measure

The first step in the development process of any new scale is to establish a theoretical definition of the focal concept [21]. A concept synthesis approach was used to undertake the first conceptualisation of the construct: what constitutes a high-quality proxy consent decision [22]. Following this, a scoping review and consensus study (COnSiDER Study) was conducted to establish the core outcomes that are important to stakeholders when evaluating interventions to improve proxy decisions about research [23]. The final core outcome set (COS), which was developed with an expert stakeholder Delphi panel (patients/public and those who care for them), consists of 28 items across 11 domains including knowledge sufficiency, values clarity, self-efficacy, preparedness, and satisfaction [23]. Once a COS has been agreed, the next stage in COS development is to determine how the outcomes that are known to be important should be measured [24]. This paper builds on the COS to report the development of the Combined Scale for Proxy Informed Consent Decisions (CONCORD scale).

Methods

The process of determining how the outcomes included in the COnSiDER COS should be measured was conducted in four sequential stages based on established measure development principles [25]. The development process is shown in Fig. 1. When devising items for a new tool, the development stages are as follows: (1) identification of the content to be included, (2) review of previous work to determine whether existing outcome measures are adequate and comprehensively cover the construct domains being measured, (3) development of new items, and (4) assessment of the new measure, including its face validity and comprehension, and the feasibility of using it [25].

Fig. 1

CONCORD scale development process

Stage 1: content generation and identification of existing outcome measurement instruments

During the scoping review which was conducted as part of the COS development (published previously [23], including details of the methods and results), studies reporting the evaluation of decision support interventions to either improve consent in trials or proxy decision-making for care/medical treatment were reviewed. Data related to the outcome domains assessed were extracted to inform ‘what’ should be measured by the COS. Data relating to any outcome measurement instruments (OMI) used were also extracted in order to establish ‘how’ the outcome should be measured [26]. As there were no OMI specific to the assessment of proxy consent decisions, the closest candidates were measures used with decision aids intended for patients making decisions about healthcare (and, more recently, trial participation) and decision aids intended for proxies making non-research decisions, such as decisions about place of care on behalf of someone living with dementia.

Stage 2: assessment of content coverage by existing outcome measurement instruments

The aim of the second stage was to identify candidate outcome measures and assess content coverage. This was done through the development of a matrix which can help to determine how representative items are across content domains for a concept under measure [25]. Within the matrix, items identified in the COS were mapped against the validated scales used in the studies included in the scoping review and in the wider DA literature such as a recent update of a Cochrane review of DAs for people making treatment or screening decisions [27]. These constructs were then tabulated against those included in relevant validated OMI including the Decisional Conflict Scale (DCS) [28], Quality of Informed Consent Scale (QuIC) [29], Satisfaction with Decision Scale (SWDS) [30], and the Decision Regret Scale (DRS) [31]. Any overlapping areas, or where constructs were not sufficiently covered by these existing measures, were identified.
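The mapping exercise described above can be pictured as a simple coverage matrix. The following sketch illustrates the idea only; the COS item names and scale mappings are invented examples, not the actual study data.

```python
# Illustrative content-coverage matrix: COS items mapped to the existing
# validated scales (OMI) that cover them. Item names and mappings are
# hypothetical examples, not the study's real data.
coverage = {
    "feeling informed about the study": ["DCS"],
    "clarity about the person's values": ["DCS"],
    "satisfaction with the decision": ["SWDS"],
    "regret about the decision": ["DRS"],
    "knowledge of the proxy role": [],        # not covered by any existing OMI
    "knowledge of the person's wishes": [],   # not covered by any existing OMI
}

# Items mapped to no existing instrument indicate where new items are needed.
uncovered = [item for item, scales in coverage.items() if not scales]
for item in uncovered:
    print("Needs a new item:", item)
```

In the study itself this tabulation identified ten COS items not covered by any existing measure, motivating the new combined scale.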

Stage 3: construction of a novel combined measure of proxy consent

The lack of coverage by, and relevance of, existing OMI to this novel construct (the quality of proxy consent decisions) meant that a new scale was needed. This could take the form of an additional scale to supplement the existing OMI, or a new combined scale that could be used as a single OMI. Integrating scales can be problematic, for example where they differ in terms of the response format [32]. However, the benefits of having one combined measure in terms of the ability to establish a consistent and reliable measure of specific relevance to this construct, and to reduce the burden of completion when administered to family members acting as proxy during a difficult time, led to the decision to develop a single combined multi-dimensional OMI—the Combined Scale for Proxy Informed Consent Decisions (CONCORD scale).

The construction of the combined measure followed the guiding principles and methods of Streiner et al., who describe the derivation of items from other widely used indices, with modifications where needed, and the addition of items to meet the requirements of the new scale [25]. This enables the use of previously tested and psychometrically sound items, although their terminology may require updating, alongside the inclusion of new items which may come from sources such as research involving the ‘target population’ themselves [25].

A draft version of the CONCORD scale was developed in preparation for piloting in stage 4 of the project. The items were drafted by the first author with reference to the corresponding core outcome set items and then reviewed by the research team and iteratively revised. All items should be worded simply and unambiguously [33]; therefore, the readability and comprehension of the questionnaire were checked using a software tool (QUAID, question-understanding aid) to identify and improve issues with the wording, syntax, and semantics of questions [34]. This included revising any double-barrelled questions. Previous research suggests that Likert-type response scales with 5 to 7 points generally have better reliability and sensitivity than those with 2 to 3 points [35]. CONCORD therefore consists of 28 items, each with a 5-point Likert-type response scale.
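One of the wording problems mentioned above, double-barrelled questions, can be illustrated with a toy check. This sketch is far simpler than QUAID's actual linguistic analysis, and the item texts are ours, not items from the scale.

```python
# Toy heuristic for flagging potentially double-barrelled statements:
# a statement joining two clauses with a conjunction may ask respondents
# to rate two different things at once. QUAID's real analysis is far
# more sophisticated; this only illustrates the concept.
def flag_double_barrelled(items):
    conjunctions = (" and ", " or ")
    return [s for s in items if any(c in s.lower() for c in conjunctions)]

items = [
    "I knew enough about the study to make a decision.",
    "I felt informed and supported when making the decision.",  # two constructs
]
for flagged in flag_double_barrelled(items):
    print("Consider splitting:", flagged)
```

A flagged item would typically be split into two single-construct statements before piloting.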

Stage 4: cognitive testing of the CONCORD scale

Best practice guidance for developing and validating scales suggests that a draft questionnaire should be administered to 5–15 interviewees in 2–3 rounds whilst allowing participants to verbalise the mental process entailed in providing answers [33]. Therefore, the final stage of this project was to cognitively test and refine the CONCORD scale with family members of people living with an impairing condition such as dementia, prior to its use in a future evaluation of a decision support intervention [36]. The aim of the cognitive interviews was to obtain a range of views and experiences of family members of someone with a condition that affects (or may affect) their ability to provide consent to participate in non-emergency research. This included a variety of conditions and relationships (e.g. spouse, child/parent).

Cognitive testing is concerned with how people interpret and comprehend questions, recall information and events, make judgements about how to respond, and provide a response [37]. When developing measurement instruments, it can establish whether participants understand the question concept, whether they do so consistently, and whether they do so in the way the researcher intended [38]. Cognitive interview methods include the use of ‘probing’ to elicit how the participant went about answering a question and to explore how easy or difficult they found it to answer [38]. Participants were asked to imagine themselves in a hypothetical scenario where they were being asked to act as proxy on behalf of their family member with dementia. They were asked to complete the questionnaire, taking as long as they thought they would need, followed by semi-structured questions to explore their responses and understanding of each item.

Participant recruitment

Participants were identified via social media platforms such as Twitter and through Join Dementia Research (JDR). JDR is an online registry that enables volunteers with memory problems or dementia, carers of those with memory problems or dementia, and healthy volunteers to sign up and register their interest in taking part in research [39]. Participants who expressed an interest in participating were provided with a Participant Information Sheet and consent form by post, together with a copy of the CONCORD scale contained in a separate sealed envelope.

Cognitive interviews were conducted via an online video conferencing platform (Zoom), which has been shown to be a reliable tool for collecting qualitative data [40]. The study received a favourable ethical opinion from the School of Medicine Research Ethics Committee at Cardiff University before it commenced, and verbal consent was obtained prior to the start of each interview. The target sample size was informed by the literature reporting the development of health measurement scales [25]. Interviews were conducted over 2 rounds. In round 1, cognitive interviews explored the initial draft scale, with a revised version of the scale being developed following an interim analysis of the interviews. Round 2 explored the revised version with a different cohort of participants.

Data collection

Interviews comprised two parts. First, the participant was presented with a brief scenario: they were asked to imagine that the person they cared for was being considered for a research study (emphasising that it could be any type of study, such as a drug trial or a music therapy intervention) and that they were being approached to help make a decision about whether that person should participate. They were then asked to open the sealed envelope containing the CONCORD scale and to complete it as if they had just acted as a consultee or legal representative. The time taken for completion was recorded by the researcher. Second, immediately after completing the scale, the participant was asked about their understanding of the individual questions and their views about the scale as a whole. A topic guide containing questions about the participant’s views and experience of completing the scale, with standardised ‘probes’ or prompts, was used as the basis for the interviews. Particular attention was paid to any items that appeared to elicit greater uncertainty or misunderstanding. Interviews were conducted by the lead researcher (VS), audio-recorded with consent, and transcribed verbatim prior to analysis. The transcripts were checked for accuracy and completeness against the source data.

Data analysis

Interview transcripts were read and re-read to ensure familiarity and inductively coded. Inductive analysis is a data-driven approach which can enable a rich description of the overall data to be provided [41] and has been used in previous studies using cognitive interviews [42]. NVivo software (v.12) was used to organise and store the data and support data analysis. The coding framework was reviewed by the research team, and consensus on coding was reached through discussion during the coding process. Reflexivity is key to qualitative research [43], and developments in the analytical process were recorded through data analysis memos held in NVivo. Interviews were conducted until it was considered that data saturation had been reached, defined as adequacy of the data generated (in terms of richness and complexity) and as the point at which no new information, codes, or themes are yielded from the data [44], as determined through agreement between the study team. The CONCORD scale was then revised and finalised in preparation for use in a study to evaluate a decision aid for proxies, with concurrent validation of the scale [36].

Results

Stage 1: content generation and identification of existing outcome measurement instruments

Searches of published literature identified 14 studies that met the criteria for inclusion in the scoping review. The full results of the searches can be found in the published scoping review [23]. Characteristics of the studies and the decision-related outcome measurement instruments used in the studies are shown in Table 1.

Table 1 Characteristics of studies included in the scoping review and decision-related outcome measurement instruments used

There was notable heterogeneity in the outcomes and OMI used. All studies used a combination of purpose-designed measures, some of which had been used in previous studies, and established and validated OMI. Of the validated measures of decision quality, including the quality of decision support, the Decisional Conflict Scale (DCS) was most commonly used (12 (86%) studies), followed by the Quality of Informed Consent Scale (QuIC) [29] and the Satisfaction with Decision Scale (SWDS) [30], which were each used in 3 (21%) studies. DCS use included the traditional version with 16 statements and 5 response categories [28], as well as the low literacy version with 10 statements and 3 response categories [45]. The Decision Regret Scale (DRS) [31] was used in 2 studies. There was considerable heterogeneity in measures used to assess constructs such as objective and subjective knowledge, with most studies using purpose-designed scales containing decision-specific knowledge items.

Stage 2: assessment of content coverage by existing outcome measurement instruments

The COS consists of 28 outcome items which relate to the process of decision-making, proxies’ experience of decision-making, and factors that influence decision-making such as understanding [23]. These cover key construct domains such as values clarity (understanding the personal value of options), subjective understanding (feeling informed), objective understanding (being informed), preparedness to make a decision, and regret and satisfaction with the decision—which included outcomes of that decision.

These constructs were tabulated against those included in commonly used validated measures including the DCS, QuIC, SWDS, and DRS. A focused literature review of outcome measures used in the evaluation of interventions to improve decision-making about healthcare and informed consent decisions identified additional candidate OMIs: DelibeRATE for measuring deliberation during the informed consent process for clinical trials [46], Preparation for Decision Making (PrepDM) scale [47], and Decision Self-Efficacy scale (DSE) [48]. As these were validated measures that are widely used in studies to evaluate decision aids, and form part of the well-established evaluation toolkit for the Ottawa Decision Support Framework (ODSF, a framework that aims to conceptualise the support needed for making difficult preference-sensitive decisions), the quality of the OMI was not formally assessed as is usually recommended when selecting outcome measurement instruments for outcomes included in a COS [26]. Any overlapping areas or constructs, and those not currently captured by these existing measures, were identified.

Across the 28 outcome items, measures were identified for 18 of the outcomes when assessed using seven existing OMI. There was sufficient coverage of domains such as self-efficacy, with all three items measured by existing scales. However, ten of the COS items across five domains (knowledge, understanding, deliberation, values congruence, and satisfaction) were not covered by any of the measures. This, unsurprisingly, included proxy decision-making-specific domains such as knowledge sufficiency, both about the proxy's own role as decision-maker and about the wishes and preferences of the person they are representing. Another domain not well covered was satisfaction, including whether the proxy felt that they had sufficient time to make a decision, which is considered an essential component of informed consent for research [49]. There was considerable heterogeneity in the phrasing of many of the items, which primarily reflected the diverse origins of the scales included in this review.

Stage 3: integration of items into a new measure of proxy consent

Based on the content gaps across existing items and the lack of specificity and applicability to proxy consent decisions described in stage 2, new self-report items were generated for the ten outcome items not already covered. Having identified the need for the newly developed items to be combined into a new scale alongside those adapted from existing OMI, it was recognised that a degree of harmonisation in the phrasing of individual items was needed in order to present a single combined questionnaire and improve comprehension. The phrasing of items which were covered by existing OMIs was slightly modified from the original COS wording to more closely align with one another and to improve comprehension when used as a self-completed OMI rather than as a list of COS items. A draft version of the CONCORD scale was developed.

In order to establish initial face validity prior to larger scale testing, this draft version of the scale then underwent initial testing with a small group of lay advisors (n = 3) who support the larger research programme. This resulted in changes being made to the layout of the questionnaire so that it was easier to navigate, including grouping items into three sections with headings to help orientate respondents towards which stage of the decision-making process they were being asked about. Three sections were created: preparation for decision-making, decision-making process, and decision outcome. By the end of this stage, the first test version of the CONCORD scale (version 1.0) was developed.

Stage 4: cognitive testing of the CONCORD scale

Remote cognitive interviews were conducted with eleven family members of someone living with dementia between September and October 2021 (round 1) and between November and December (round 2) with the revised version of the scale (version 2.0). The mean duration of interviews was 43 min (range 29–59 min). Participant characteristics are presented in Table 2.

Table 2 Cognitive interview participant characteristics

An interim analysis was conducted after round 1, and the results were discussed among the research team. Where responses indicated uncertainty or participants had recommended specific wording changes to increase clarity, a consensus was reached about which modifications or additions were required. This included changes to the phrasing of some questions to reduce uncertainty about which decision was being referred to, by adding the phrase ‘decision about my relative taking part in the study’, and to ensure greater consistency in terminology rather than using a combination of ‘choice’, ‘option’, and ‘decision’. Changes were also made to the order of questions in the section concerning preparation for decision-making, where some round 1 participants felt the questions lacked a ‘logical’ order. Revisions were made to the wording and ordering of the CONCORD scale as a result. The revised version was then used in the interviews in round 2. Participants’ general views about the length and format of the questionnaire are reported in the following text, with corresponding illustrative quotes in a supplementary file (Additional file 1).

Instructions for completion of the CONCORD scale

Participants appeared to understand the brief instructions for completing the scale that were provided at the beginning of the questionnaire, with no participants requiring further explanation to complete the questionnaire. However, when specifically asked about the clarity of the instructions, many participants had not read them in detail, appearing to skip over them to complete the questions.

Length and format of the questionnaire

The mean time for completion of the questionnaire was just under 3 ½ min (range 1 ½ to 5 min). All participants considered the questionnaire to be of reasonable length. One participant commented that proxy decision-makers are also likely to be (or have been) carers and so will be experienced in completing forms as part of that carer role, including much longer administrative questionnaires. The format was considered to be acceptable, although one participant commented that the font size may be a little too small, for example for people with visual impairments.

Ordering of items

Following the changes made to the order of the items after the round 1 interviews, participants in round 2 considered the order of the three sections and the items within each part to be acceptable and to have a ‘logical order’ that followed their thought processes. Suggestions were made to further revise the section headings to more clearly describe the content of each section and to consider labelling them as A–C, although, as with the instructions at the start of the questionnaire, not all participants read the section headings closely or were conscious of having done so.

Views about the contents and acceptability of the questionnaire

Participants understood the purpose of the questionnaire, and all viewed the contents of the questions as acceptable or ‘pretty harmless’. Whilst participants generally felt that items were clear and straightforward, their responses in round 1 indicated that some items included terms requiring further explanation, e.g. what form ‘support’ might take when asked whether they had enough support to make a decision.

Other responses indicated that quantifying sufficiency was challenging—particularly against the backdrop of uncertainty that surrounds proxy decision-making. For example, some participants questioned whether they could ever feel confident or satisfied with their decisions. Some participants identified points of redundancy across a small number of the items, for example ‘I feel it is the right decision’ and ‘I feel the decision was a wise one’ where participants felt that ‘right’ and ‘wise’ were similar concepts. However, other participants viewed them as disparate items and considered their responses to whether their decision was right or wise to be distinct from one another.

Participants often distinguished between questions that related to their own feelings and knowledge, and those that required thinking about the wishes of the person they were representing—which they described as requiring more time and consideration. Some participants described the impact that completing the questionnaire had on their (hypothetical) decision-making and how it prompted them to think about some of the issues. When asked about areas they thought were important but missing from the questionnaire, no participants identified any areas of content inadequacy.

Scoring

All participants attempted to complete the CONCORD scale, although the hypothetical nature of the exercise made some questions more difficult for some participants to answer. One participant [ID 06] did not complete a small number of items (n = 3) where the meaning was not clear to them, and another [ID 01] did not complete a larger number of items (n = 14) as they focused on commenting on the phrasing and order of items, which they felt meant that they could not score those items. Some participants said that they had scored an item as ‘neither agree nor disagree’ where they were not sure of its meaning, although more typically they used this response to indicate a neutral view.

Creation of a final version

Following the completion of round 2, the results were discussed among the research team. Additional minor revisions were made to the wording of a small number of items where participants appeared to be uncertain about the meaning or where they suggested it might be unclear for others. The refined version of the CONCORD scale is shown in Table 3, with a 5-point response scale where 1 = strongly agree to 5 = strongly disagree. Responses are then reverse scored, with higher scores indicating higher decision quality.

Table 3 Combined Scale for Proxy Informed Consent Decisions (CONCORD scale)
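The reverse-scoring rule stated above (responses recorded as 1 = strongly agree to 5 = strongly disagree, then reversed so that higher scores indicate higher decision quality) can be sketched as follows; the function name and example responses are ours, not part of the published scale.

```python
def reverse_score(responses, item_min=1, item_max=5):
    """Reverse-score 5-point Likert responses (1 = strongly agree ...
    5 = strongly disagree) so higher values indicate higher decision quality."""
    return [item_max + item_min - r for r in responses]

# 'Strongly agree' (1) becomes the maximum quality score (5), and vice versa.
print(reverse_score([1, 5, 3]))  # [5, 1, 3]
```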

Discussion

Building on previous conceptual and empirical work, this paper reports on the development and first phase of validation of the CONCORD scale, a self-reported combined measure of the quality of proxy consent decisions made in non-emergency settings on behalf of other adults. Higher scores indicate a higher quality decision, defined as one in which the proxy is prepared and supported to make a decision and feels satisfied they have made a decision based on the preferences of the person they are representing, and where they accept the uncertainty they may experience [22]. This novel questionnaire is intended to measure a complex and—when compared to decisions about healthcare or participation in a trial made for oneself—relatively under-developed construct. This makes comparisons with existing scales difficult. However, the length of the questionnaire and time for completion are broadly similar to those of other scales measuring aspects of informed consent for clinical trials. This includes the Quality of Informed Consent (QuIC) questionnaire, which measures participants’ understanding of cancer clinical trials using 34 questions with three response levels and requires an average of 7.2 min to complete [29]. The DelibeRATE questionnaire measures deliberation during the informed consent process for clinical trials and consists of ten items to be rated on a three-level Likert scale, taking approximately 5 min to complete [46]. It is also broadly comparable with OMI used in studies evaluating proxy decisions that were included in the scoping review (see Table 1). These typically used a combination of scales such as the Decisional Conflict Scale (DCS) [28], which consists of 16 items plus an option preference question, alongside a knowledge assessment using a study-specific questionnaire with approximately 19 items.

The scale underwent cognitive testing, which demonstrated that participants had strong comprehension of the items. Although the current version of the scale consists of 28 items, the time required for completion was low, and the high levels of acceptability reported suggest that its use is feasible. No areas of content inadequacy or redundancy were identified during testing; however, the scale has yet to undergo formal statistical evaluation, and, in line with recommended practice for developing and validating scales, some content that is currently included may ultimately be shown to be redundant or unrelated to the core construct and removed [33]. Further work is also needed to develop the scoring instructions for CONCORD, including any weighting of the items and transformation of the raw score to a 0–100 scale.

A future evaluation study is planned as part of the larger programme of research of which this project forms part. This will include establishing the feasibility of using the CONCORD scale in 'real' (rather than hypothetical) proxy consent decision-making contexts, with concurrent validation of the scale as part of a 'Study Within a Trial' (SWAT) evaluating the effectiveness of a decision aid for families making proxy consent decisions across a number of host trials [36]. Family members will be randomised to receive either the decision aid alongside the standard information about the trial that they would receive as consultee or legal representative, or the standard information alone [50]. This research programme has also included developing an empirical and conceptual account of proxy consent decisions made by families [51], developing the COS [23], and developing the decision aid intervention [16]; this further evaluation work will therefore also contribute to the content validation of the scale and to developing the theory that underpins this work.

Further research is also needed to explore CONCORD as a measure of the quality of other types of proxy decisions relating to research, such as participation in emergency research, the decision to withdraw a participant who lacks capacity, or decisions made on behalf of children.

Strengths and limitations

A strength of our study is that the scale development process was based on established measure development principles. Other strengths include the involvement of an expert stakeholder panel in developing the COS, which generated the content items and contributed to establishing the content validity of the scale. Although a relatively modest number of interviews were conducted, this is in line with best practice guidance [33], and data saturation was reached [44]. Conducting an interim analysis during the cognitive interview study meant that the revised version could be tested before a final version was produced ready for further validation work.

There are also some limitations to note. First, whilst the development of the scale is underpinned by empirical research involving families who have acted as a consultee or legal representative, the feasibility and acceptability of the scale have only been established in the context of hypothetical decision-making. Framing effects and any potential differences between real and hypothetical decisions remain the subject of much debate [52]. Second, as the scale is intended for proxy decisions regarding non-emergency research, it is not considered applicable to research decisions made during life-threatening and time-critical situations, which are likely to differ. Lastly, the scale draws on work conducted in England and Wales with predominantly white participants; whilst there can therefore be confidence in its validity in the UK and broadly similar contexts, the legislative and cultural differences surrounding the ethics and practice of proxy consent must be taken into account when considering the use of CONCORD in other jurisdictions.

Conclusion

CONCORD is a 28-item scale for assessing the quality of decisions about non-emergency trial participation made by family members on behalf of someone who lacks the capacity to consent for themselves. This new brief measure has the potential to help evaluate interventions that aim to improve the quality of decisions made by proxies and to reduce the emotional and decisional burdens they experience. Following further validation work, the use of this scale in future studies will enable meta-analyses to synthesise the growing literature on how best to support family members making decisions about research. A standardised approach to developing and evaluating interventions to improve proxy consent decisions will contribute to the developing evidence base in this novel area of research and so help to address some of the barriers to the inclusion of this under-served group.

Availability of data and materials

The dataset generated and used in this study is available through the submission of a data request to the Centre for Trials Research at https://www.cardiff.ac.uk/centre-for-trials-research/about-us/data-requests.

Abbreviations

COS: Core outcome set

CONCORD: Combined Scale for Proxy Informed Consent Decisions

DA: Decision aid

DCS: Decisional Conflict Scale

DRS: Decision Regret Scale

DSE: Decision Self-Efficacy scale

JDR: Join Dementia Research

ODSF: Ottawa Decision Support Framework

OMI: Outcome measurement instruments

PrepDM: Preparation for Decision Making

QuIC: Quality of Informed Consent Scale

SWAT: Study Within A Trial

SWDS: Satisfaction with Decision Scale

RCT: Randomised controlled trial

References

  1. Witham MD, Anderson E, Carroll C, Dark PM, Down K, Hall AS, et al. Developing a roadmap to improve trial delivery for under-served groups: results from a UK multi-stakeholder process. Trials. 2020;21:694.

  2. Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Protection by exclusion? The (lack of) inclusion of adults who lack capacity to consent to research in clinical trials in the UK. Trials. 2019. https://doi.org/10.1186/s13063-019-3603-1.

  3. Mundi S, Chaudhry H, Bhandari M. Systematic review on the inclusion of patients with cognitive impairment in hip fracture trials: a missed opportunity? Can J Surg. 2014;57:E141–5.

  4. Sheehan KJ, Fitzgerald L, Hatherley S, Potter C, Ayis S, Martin FC, et al. Inequity in rehabilitation interventions after hip fracture: a systematic review. Age Ageing. 2019;48:489–97.

  5. Taylor JS, DeMers SM, Vig EK, Borson S. The disappearing subject: exclusion of people with cognitive impairment and dementia from geriatrics research. J Am Geriatr Soc. 2012;60:413–9.

  6. Feldman MA, Bosett J, Collet C, Burnham-Riosa P. Where are persons with intellectual disabilities in medical research? A survey of published clinical trials. J Intellect Disabil Res. 2014;58:800–9.

  7. Shepherd V. An under-represented and underserved population in trials: methodological, structural, and systemic barriers to the inclusion of adults lacking capacity to consent. Trials. 2020;21:445.

  8. Mental Capacity Act 2005. London: HMSO; 2005.

  9. The Medicines for Human Use (Clinical Trials) Regulations 2004 SI No. 1031. 2004. https://www.legislation.gov.uk/uksi/2004/1031/.

  10. Shepherd V, Hood K, Sheehan M, Griffith R, Wood F. ‘It’s a tough decision’: a qualitative study of proxy decision-making for research involving adults who lack capacity to consent in UK. Age and Ageing. 2019;48(6):903–9. https://doi.org/10.1093/ageing/afz115.

  11. Ciccone A, Sterzi R, Crespi V, Defanti CA, Pasetti C. Thrombolysis for acute ischemic stroke: the patient’s point of view. CED. 2001;12:335–40.

  12. Demarquay G, Derex L, Nighoghossian N, Adeleine P, Philippeau F, Honnorat J, et al. Ethical issues of informed consent in acute stroke. CED. 2005;19:65–8.

  13. Iverson E, Celious A, Kennedy CR, Shehane E, Eastman A, Warren V, et al. Factors affecting stress experienced by surrogate decision makers for critically ill patients: implications for nursing practice. Intensive Crit Care Nurs. 2014;30:77–85.

  14. Sugarman J, Cain C, Wallace R, Welsh-Bohmer KA. How proxies make decisions about research for patients with Alzheimer’s disease. J Am Geriatr Soc. 2001;49:1110–9.

  15. Mason S, Barrow H, Phillips A, Eddison G, Nelson A, Cullum N, et al. Brief report on the experience of using proxy consent for incapacitated adults. J Med Ethics. 2006;32:61–2.

  16. Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Development of a decision support intervention for family members of adults who lack capacity to consent to trials. BMC Med Inform Decis Mak. 2021;21:30.

  17. Gillies K, Cotton SC, Brehaut JC, Politi MC, Skea Z. Decision aids for people considering taking part in clinical trials. Cochrane Database Syst Rev. 2015. https://doi.org/10.1002/14651858.CD009736.pub2.

  18. Gillies K, Campbell MK. Development and evaluation of decision aids for people considering taking part in a clinical trial: a conceptual framework. Trials. 2019;20:401.

  19. Feldman-Stewart D, Brennenstuhl S, McIssac K, Austoker J, Charvet A, Hewitson P, et al. A systematic review of information in decision aids. Health Expect. 2007;10:46–61.

  20. Sepucha KR, Borkhoff CM, Lally J, Levin CA, Matlock DD, Ng CJ, et al. Establishing the effectiveness of patient decision aids: key constructs and measurement instruments. BMC Med Inform Decis Mak. 2013;13:S12.

  21. Netemeyer R, Bearden W, Sharma S. Scaling procedures: issues and applications. Thousand Oaks: SAGE Publications; 2003.

  22. Shepherd V. (Re)Conceptualising ‘good’ proxy decision-making for research: the implications for proxy consent decision quality. BMC Med Ethics. 2022;23:75.

  23. Shepherd V, Wood F, Robling M, Randell E, Hood K. Development of a core outcome set for the evaluation of interventions to enhance trial participation decisions on behalf of adults who lack capacity to consent: a mixed methods study (COnSiDER Study); 2021.

  24. Williamson PR, Altman DG, Bagley H, Barnes KL, Blazeby JM, Brookes ST, et al. The COMET handbook: version 1.0. Trials. 2017;18:280.

  25. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.

  26. Prinsen CA, Vohra S, Rose MR, Boers M, Tugwell P, Williamson PR, et al. Guideline for selecting outcome measurement instruments for outcomes included in a Core Outcome Set. 2016.

  27. Stacey D, Légaré F, Col NF, Bennett CL, Barry MJ, Eden KB, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014. https://doi.org/10.1002/14651858.CD001431.pub4.

  28. O’Connor AM. Validation of a decisional conflict scale. Med Decis Making. 1995. https://doi.org/10.1177/0272989X9501500105.

  29. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of informed consent: a new measure of understanding among research subjects. J Natl Cancer Inst. 2001;93:139–47.

  30. Holmes-Rovner M, Kroll J, Schmitt N, Rovner DR, Breer ML, Rothert ML, et al. Patient satisfaction with health care decisions: the satisfaction with decision scale. Med Decis Making. 1996. https://doi.org/10.1177/0272989X9601600114.

  31. Brehaut JC, O’Connor AM, Wood TJ, Hack TF, Siminoff L, Gordon E, et al. Validation of a decision regret scale. Med Decis Making. 2003;23:281–92.

  32. Chakrabartty SN. Integration of various scales for measurement of insomnia. Res Methods Med Health Sci. 2021;2:102–11.

  33. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6:149.

  34. Graesser AC, Wiemer-Hastings K, Kreuz R, Wiemer-Hastings P, Marquis K. QUAID: a questionnaire evaluation aid for survey methodologists. Behav Res Methods Instrum Comput. 2000;32:254–62.

  35. Krosnick JA. Questionnaire design. In: Vannette DL, Krosnick JA, editors. The Palgrave handbook of survey research. Cham: Springer International Publishing; 2018. p. 439–55.

  36. CONSULT. Cardiff University. https://www.cardiff.ac.uk/centre-for-trials-research/research/studies-and-trials/view/consult. Accessed 12 Oct 2021.

  37. Harris-Kojetin LD, Fowler FJ, Brown JA, Schnaier JA, Sweeny SF. The use of cognitive testing to develop and evaluate CAHPSTM 1.0 core survey items. Med Care. 1999;37:MS10–21.

  38. Collins D. Pretesting survey instruments: an overview of cognitive methods. Qual Life Res. 2003;12:229–38.

  39. Join Dementia Research. https://www.joindementiaresearch.nihr.ac.uk/. Accessed 23 Feb 2022.

  40. Archibald MM, Ambagtsheer RC, Casey MG, Lawless M. Using Zoom videoconferencing for qualitative data collection: perceptions and experiences of researchers and participants. Int J Qual Methods. 2019;18:1609406919874596.

  41. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

  42. Yu CH, Ke C, Jovicic A, Hall S, Straus SE, Cantarutti P, et al. Beyond pros and cons – developing a patient decision aid to cultivate dialog to build relationships: insights from a qualitative study and decision aid development. BMC Med Inform Decis Mak. 2019;19:186.

  43. Phillippi J, Lauderdale J. A guide to field notes for qualitative research: context and conversation. Qual Health Res. 2018;28:381–8.

  44. Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2021;13:201–16.

  45. Linder SK, Swank PR, Vernon SW, Mullen PD, Morgan RO, Volk RJ. Validity of a low literacy version of the Decisional Conflict Scale. Patient Educ Couns. 2011;85:521–4.

  46. Gillies K, Elwyn G, Cook J. Making a decision about trial participation: the feasibility of measuring deliberation during the informed consent process for clinical trials. Trials. 2014;15:307.

  47. Bennett C, Graham ID, Kristjansson E, Kearing SA, Clay KF, O’Connor AM. Validation of a preparation for decision making scale. Patient Educ Couns. 2010;78:130–3.

  48. O’Connor AM. User manual - Decision Self-Efficacy Scale; 2002.

  49. Manti S, Licari A. How to obtain informed consent for research. Breathe. 2018;14:145–52.

  50. SWAT 159: Feasibility and effectiveness of a decision aid for family members considering trial participation on behalf of an adult who lacks capacity to consent. SWAT-SWAR repository. https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/FileStore/Filetoupload,1313262,en.pdf. Accessed 10 Mar 2022.

  51. Shepherd V, Sheehan M, Hood K, Griffith R, Wood F. Constructing authentic decisions: proxy decision-making for research involving adults who lack capacity to consent. J Med Ethics. 2020. https://doi.org/10.1136/medethics-2019-106042.

  52. Kühberger A, Schulte-Mecklenbeck M, Perner J. Framing decisions: hypothetical and real. Organ Behav Hum Decis Process. 2002;89:1162–75.

Acknowledgements

We would like to thank the participants who generously volunteered their time to participate in the interviews and the lay advisory group who provided invaluable insight and support for this research programme.

Funding

This study was conducted as part of a National Institute for Health Research (NIHR) Advanced Fellowship (CONSULT) held by VS and funded by the Welsh Government through Health and Care Research Wales (NIHR-FS(A)-2021). The funding body played no role in the study design, data collection, analysis, or interpretation, or in the writing of this manuscript. The Primary and Emergency Care (PRIME) Research Centre Wales is funded by the Welsh Government through Health and Care Research Wales, and the Centre for Trials Research is funded by Health and Care Research Wales and Cancer Research UK.

Author information

Authors and Affiliations

Authors

Contributions

VS conceived the study and VS, KH, and FW designed the study. VS recruited study participants and conducted all cognitive interviews. VS completed all data coding and led the analysis. VS, KH, and FW interpreted the data. VS drafted the manuscript. All authors critically revised the manuscript and approved the final version.

Corresponding author

Correspondence to Victoria Shepherd.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained from Cardiff University School of Medicine Research Ethics Committee (ref. 21/63).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Illustrative quotes from participants in Phase 4 cognitive interviews.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Shepherd, V., Hood, K., Gillies, K. et al. Development of a measure to assess the quality of proxy decisions about research participation on behalf of adults lacking capacity to consent: the Combined Scale for Proxy Informed Consent Decisions (CONCORD scale). Trials 23, 843 (2022). https://doi.org/10.1186/s13063-022-06787-8
