
Using a theory-based, customized video game as an educational tool to improve physicians’ trauma triage decisions: study protocol for a randomized cluster trial

Abstract

Background

Transfer of severely injured patients to trauma centers, either directly from the field or after evaluation at non-trauma centers, reduces preventable morbidity and mortality. Failure to transfer these patients appropriately (i.e., under-triage) remains common and occurs in part because physicians at non-trauma centers make diagnostic errors when evaluating the severity of patients’ injuries. We developed Night Shift, a theory-based adventure video game, to recalibrate physician heuristics (intuitive judgments) in trauma triage and established its efficacy in the laboratory. We plan a type 1 hybrid effectiveness-implementation trial to determine whether the game changes physician triage decisions in real life, and we hypothesize that it will reduce the proportion of patients who are under-triaged.

Methods

We will recruit 800 physicians who work in the emergency departments (EDs) of non-trauma centers in the US and will randomize them to the game (intervention) or to usual education and training (control). We will ask those in the intervention group to play Night Shift for 2 h within 2 weeks of enrollment and again for 20 min at quarterly intervals. Those in the control group will receive only usual education (i.e., nothing supplemental). We will then assess physicians’ triage practices for older, severely injured adults during the year following enrollment, using Medicare claims, and will compare under-triage (primary outcome), 30-day mortality and re-admissions, functional independence, and over-triage between the two groups. We will evaluate contextual factors influencing reach, adoption, implementation, and maintenance with interviews of a subset of trial participants (n = 20) and of other key decision makers (e.g., patients, first responders, administrators [n = 100]).

Discussion

The results of the trial will inform future efforts to improve the implementation of clinical practice guidelines in trauma triage and will provide deeper understanding of effective strategies to reduce diagnostic errors during time-sensitive decision making.

Trial registration

ClinicalTrials.gov; NCT06063434. Registered 26 September 2023.


Introduction

Background and rationale

Injury is the leading cause of loss of independence among those over the age of 65, resulting in ≥ 3 million emergency department (ED) visits, ≥ 800,000 hospitalizations, and ≥ $50 billion in costs each year [1]. The appropriate triage of trauma patients, defined as the rapid identification of those with severe injuries and transfer to trauma centers either directly from the field or after evaluation at a non-trauma center, decreases mortality by 10–25%, reduces loss of independence, and diminishes pain at 1 year [2,3,4,5,6]. Consequently, stakeholders have implemented clinical practice guidelines for trauma triage using best-practice methods such as text-based education, outreach by opinion leaders, and legislative mandates [3]. Despite these efforts, under-triage at non-trauma centers remains common (~ 50–80%), particularly among older adults [7,8,9].

Physicians are the largest source of non-compliance with clinical practice guidelines [10]. Experimental evidence from the basic behavioral sciences literature and our prior research suggests that people typically rely on intuitive judgments (heuristics) to make complex decisions under pressure, as in the case of trauma triage [11,12,13]. When calibrated well, heuristics allow rapid, accurate decisions [14]. When calibrated poorly, they produce errors in diagnoses [15]. We previously developed a customized, theory-based video game (Night Shift) to recalibrate physician heuristics in trauma triage, using the platform of an adventure game to train physicians to use clinical practice guidelines by making them relevant and memorable. In pilot trials, physicians who played the game made 10–18% more guideline-concordant decisions on a validated virtual simulation compared with those who completed a gold-standard, text-based educational program, an effect that persisted through the 6-month follow-up [16, 17].

Objectives and trial design

The objective of this type 1 effectiveness-implementation trial is to evaluate the effect of Night Shift on real-world triage decision making and on patient outcomes. We include a trial schematic in eFigure 1 and the SPIRIT checklist in the supplemental materials. We will randomize a national sample of physicians who work at non-trauma centers in the US (N = 800) to play a video game (Night Shift) or to usual education (control) and will use Medicare claims data to evaluate the groups’ triage practices for severely injured patients who present initially at non-trauma centers in the 1 year after enrollment in the trial. We will subsequently conduct a series of semi-structured interviews with trial participants and with key decision makers (e.g., ED directors, paramedics [N = 100]) to identify contextual factors that would influence implementation of the intervention in the future. We hypothesize that physicians randomized to play the video game will under-triage a smaller proportion of severely injured patients compared to those randomized to the control (usual education) group. We secondarily hypothesize that intervention physicians will have fewer adverse patient outcomes (e.g., 30-day mortality and 30-day readmissions) compared to physicians in the control group.

Methods

Study setting, eligibility criteria, recruitment, and consent procedures

We have partnered with three United States (US) physician staffing groups. Cumulatively, these groups employ approximately 4500 physicians, cover ≥ 30 states, staff ≥ 600 EDs, and provide care to ≥ 16 million patients each year. We plan to recruit board-certified physicians who work exclusively in the EDs of non-trauma centers in the US, who triage adult trauma patients as part of their practice, and who have a National Provider Identifier (NPI). We plan to exclude non-physician healthcare professionals (e.g., nurse practitioners, physician assistants) because of variation in billing practices (e.g., some bill under their own identifiers while others do not) that would confound outcome assessment. We will also exclude physicians who work at both trauma and non-trauma centers (because this limits the number of eligible patients they encounter) and those who work outside the continental US (because of differences in referral patterns). We will ask physician leaders of the partner organizations to distribute an email to their staff that describes the trial and includes a link to the consent form. Physicians who provide consent will then receive a survey collecting the demographic data needed to confirm eligibility.

Additional consent provisions for collection and use of participant data and biological specimens

We are not planning any ancillary studies and therefore have not outlined any additional consent provisions.

Interventions

Explanation for the choice of comparator

Stakeholders in trauma, including the American College of Surgeons and the American Board of Emergency Medicine (ABEM), have already executed best-practice educational efforts to increase the implementation of trauma triage guidelines, including widespread dissemination of the guidelines through Advanced Trauma Life Support (ATLS), a 2-day textbook-based course completed quadrennially by > 80% of physicians who work in non-trauma centers, and a 4-h trauma resuscitation module required quinquennially as part of the ABEM’s recertification program [18, 19]. We therefore consider usual education to be the best comparator for our intervention.

Intervention description

Night Shift is an adventure video game, originally developed in 2016, designed to recalibrate the heuristics (pattern recognition) that physicians use to identify severely injured trauma patients. Players take on the persona of Andy Jordan, a young emergency medicine physician who moves home to search for his missing grandfather and takes a job at a small community hospital. The player must not only solve the mystery but also manage a series of trauma and non-trauma cases, experiencing the consequences of their decision making. Based on feedback from participants in laboratory-based trials, we partnered with Schell Games (Pittsburgh, PA) to modify the user interface, simplifying the movement controls and refining the clinical content. We also expanded the game to allow for its longitudinal use. Notably, we embedded a puzzle mini-game (Graveyard Shift) with levels that unlock at pre-specified intervals (e.g., levels 1–3 become available in March 2024; levels 4–6 become available in June 2024; levels 7–10 become available in September 2024), with the objective of encouraging participants to return to the game for booster sessions. The revised application, named Night Shift 2024, will be available for download from the Apple App Store (iOS). We summarize the theoretical framework, game content, and game mechanics of Night Shift 2024 in Table 1 and share a schematic of the process that we followed to ensure theory-based development in eFigure 2.

Table 1 Description of Night Shift 2024

We will ask participants to spend a minimum of 2 h playing Night Shift 2024 within 14 days of receiving their device and then return to the game for 20 min at 90, 180, and 270 days after enrollment. Participants have the option of not completing the assigned study task, but we do not have pre-specified criteria for discontinuing or modifying the allocated intervention.

Strategies to improve adherence to the intervention and retention and to complete follow-up

We will pre-load new iPads with Night Shift 2024 and will mail the devices to those allocated to the intervention group. Participants will keep their iPad as a fixed honorarium (approximate value: $350) and will also receive a conditional monetary honorarium for each booster dose that they complete ($25/session). They can also apply for 3 h of continuing medical education credit after completing all three booster sessions. We will issue three email reminders and a phone call during the first 2 weeks after enrollment and then quarterly reminders during the trial period. Participants in the usual education group will complete outcome assessment tools and will receive a conditional, wage-based honorarium ($100/hour spent) upon completion of study tasks. They too will receive email reminders and a phone call to encourage retention.

Relevant concomitant care permitted or prohibited during the trial and provisions for post-trial care

Participants may receive routine continuing medical education during the trial, although we have no mechanism in place to track that information. We have not made any provisions for post-trial care because we consider any harm from participation extremely unlikely.

Outcomes

We will use the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework to evaluate this type 1 hybrid effectiveness-implementation trial and summarize the outcomes in Table 2 [20, 21].

Table 2 Application of RE-AIM framework to trial outcomes. Our primary outcome measure is italicized

Primary outcome (effectiveness)

Our primary outcome is the aggregated performance of physicians in the intervention and control groups when triaging trauma patients, assessed using Medicare claims data. Specifically, we will calculate the mean proportion of severely injured patients, evaluated by trial participants at a non-trauma center, who are not transferred to a level 1 or level 2 trauma center within 24 h (i.e., under-triage), as recommended by the clinical practice guidelines. We use under-triage as our primary outcome because it is an important measure of physician behavior, and because of its association with patient-centered outcomes, including mortality and return to work [22, 23]. We will define “severe injuries” using injury severity scores (ISS), with a cutoff > 15, consistent with the literature.

Secondary outcomes (effectiveness)

Secondary outcomes will include 30-day mortality and readmission (composite outcome), new-onset functional dependence (proportion of patients with 90-day pre-admission location at home with discharge to a skilled nursing or rehabilitation facility), and over-triage (patients with ISS < 15 who were transferred to a higher level of care). We use the 30-day composite outcome to increase statistical efficiency, balancing concerns of validity with feasibility (given the low base rate of individual outcomes) [24,25,26]. To assess harm, we will capture over-triage, which in theory could worsen outcomes for patients with other conditions at trauma centers by increasing treatment delays and reducing the availability of resources. Additionally, over-triage results in removal of patients from their community, without personal benefit.

Other RE-AIM outcomes 

We will also estimate the intervention’s reach (i.e., the number, proportion, and representativeness of the individuals willing to enroll in the trial), adoption (i.e., the absolute number, proportion, and representativeness of the settings), and implementation of the intervention (i.e., participants’ use and the costs required [e.g., honoraria]) [21].

Participant timeline

We summarize participant activities in Table 3. We will ask participants to use their intervention within 2 weeks of enrollment in the trial and quarterly during the subsequent 9 months (total time required: 3 h). They will complete web-based questionnaires assessing the intervention’s usability and reporting the fidelity of intervention delivery twice: immediately after completing the intervention the first time and then after their third booster dose at 9 months. These instruments will take less than 5 min to complete. We will also ask them to complete a web-based tool to measure physician behavior in trauma (SONAR) after the first use of the game. Participants in the usual education control will be asked to complete SONAR within 2 weeks of enrollment. Completing SONAR will take approximately 1 h.

Table 3 Schedule of enrollment, allocation, and assessment activities

Sample size

Based on prior studies, we assume that ≥ 75% of enrolled physicians will encounter at least one eligible patient (i.e., enrolled in Medicare Fee-for-Service, severely injured, age ≥ 65 years), with a median of 1–2 patients per physician. In this cluster-randomized trial, physicians serve as the unit of randomization. Based on data from a prior study, we assume an intraclass correlation coefficient (ICC) of 0.45, reflecting the correlation in outcomes among patients treated by the same physician [27]. Under these assumptions, with 400 physicians per group (N = 800), we can detect a difference of 7.4–10% in under-triage between the intervention and control groups with 80% power and a significance level of 0.05, using a one-sided hypothesis test. We have chosen a one-sided hypothesis test to boost statistical efficiency and because we prioritize identification of a positive effect of the intervention. A negative effect would produce the same outcome as a null result: the decision to pause further efforts to disseminate the intervention [28].
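
As a rough check on this calculation, the detectable difference for a cluster-randomized binary outcome can be approximated with a design-effect-adjusted two-proportion formula. The sketch below is illustrative only: the baseline under-triage rate (p0), mean number of eligible patients per physician (m), and search grid are assumptions rather than protocol values.

```python
from scipy.stats import norm

def detectable_difference(n_physicians=400, m=1.5, icc=0.45,
                          p0=0.60, alpha=0.05, power=0.80):
    """Smallest absolute between-group difference in under-triage detectable."""
    z_alpha = norm.ppf(1 - alpha)          # one-sided test
    z_beta = norm.ppf(power)
    deff = 1 + (m - 1) * icc               # design effect for clustering by physician
    n_eff = n_physicians * m / deff        # effective number of patients per arm
    for delta in (d / 1000 for d in range(1, 500)):
        p1 = p0 - delta
        variance = p0 * (1 - p0) + p1 * (1 - p1)
        if (z_alpha + z_beta) ** 2 * variance / n_eff <= delta ** 2:
            return delta                   # first (smallest) detectable difference
    return None

print(detectable_difference())             # ~0.08 under these assumed inputs
```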

Assignment of interventions: allocation and blinding

Sequence generation, concealment mechanism, and implementation

We will ask physicians to describe their personal characteristics at the time of enrollment and will include all minority (women, non-white) physicians who agree to participate, up to 50% of the targeted sample. Physicians will be randomized with an allocation ratio of 1:1, based on a randomization schema generated by our statistician (CCC) in Stata 17.0 (StataCorp, College Station, TX). Trial coordinators will link the randomization schema to the enrollment data and will assign participants to groups.
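
The schema itself will be generated in Stata by the study statistician; the sketch below simply illustrates the general logic of a seeded 1:1 allocation using permuted blocks. The choice of Python, the block size, and the seed are illustrative assumptions, not protocol specifications.

```python
import random

def make_schema(n_participants=800, block_size=4, seed=20231004):
    """Seeded 1:1 allocation schema built from permuted blocks."""
    rng = random.Random(seed)
    schema = []
    while len(schema) < n_participants:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)                 # shuffle within each block to conceal order
        schema.extend(block)
    return schema[:n_participants]

schema = make_schema()
print(schema[:8], schema.count("intervention"), schema.count("control"))
```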

Who will be blinded

Although we cannot maintain blinding of participants after allocation, our data analysts will not have access to allocation information.

Procedure for unblinding if needed

Since we cannot maintain blinding after allocation, we have not established a procedure for revealing a participant’s allocated intervention.

Data collection and management

Plans for assessment and collection of outcomes

Physicians

Each physician who enrolls in the trial will complete a baseline questionnaire with items that capture age, gender, race/ethnicity, state in which they work, type of residency training (emergency medicine, internal medicine, family practice, other), type of fellowship training, trauma education (date of last ATLS certification, completion of the trauma resuscitation module published by ABEM), professional characteristics (number of ED shifts completed each month, trauma center status of the hospital at which they work, assignment of an NPI), attitudes to game-based learning, and the amount of time and money they have spent on continuing medical education activities in the prior year. Participants will also complete questionnaires to assess the fidelity of intervention delivery and receipt. We summarize the methods that we plan to use to assess treatment fidelity in Table 4, using a checklist developed by Borrelli et al. as part of the National Institutes of Health’s (NIH’s) Behavior Change Consortium effort to improve the replicability of health-related behavior change interventions [29,30,31].

Table 4 Treatment fidelity execution and assessment plan

Fidelity of intervention delivery

We define fidelity of intervention delivery as treatment adherence. The Night Shift 2024 application collects data on the time that each player spends using the application, including the number of minutes, number of visits, and progress through the application. Each time the device connects to a wireless network, these data will be uploaded to a secure server hosted by the University of Pittsburgh. We will additionally ask participants in the intervention group to report game usage (e.g., time spent, most memorable case completed) and to complete the User Engagement Scale – Short Form (a validated 12-item instrument that measures esthetic appeal, attentional focus, perceived usability, and reward), after using Night Shift in the first month of the trial and again after the ninth month of the trial [32]. We provide the questionnaires in Additional file 1: Appendix.

Fidelity of intervention receipt

We define intervention receipt as evidence that participants understand and can use the skills or knowledge learned during the intervention and will evaluate it in three ways. First, we will ask participants (intervention and control) to describe their understanding of the central principle of trauma triage on the first post-enrollment survey. Second, we will ask them to complete an online tool that assesses physician behavior in trauma triage (SONAR). SONAR is a 2D web-based serious game, during which physicians must triage 30 trauma patients (15 with severe injuries; 15 with minor injuries). The game evaluates decisions using the guidelines published by the American College of Surgeons – Committee on Trauma and produces two sets of metrics: compliance with guidelines (e.g., under-triage) and signal detection parameters. Signal detection theory came to prominence during World War II and allows inferences about the sources of non-compliance with clinical practice guidelines by parsing the influence of perceptual sensitivity (the ability to distinguish between minimally and severely injured patients) and decisional thresholds (preferences to err on the side of under- or over-triage) on decisions [33]. Third, we will interview a subset of physicians in the game arm (n = 20) at 1 and 6 months after enrollment to learn about their experiences with the game and contextual modifiers to adoption, implementation (anticipated/actual), and maintenance (anticipated/actual) of guideline-concordant trauma triage. We will supplement these data with interviews of a national sample of other key decision-makers (i.e., patients, ED directors at non-trauma centers, trauma directors at level I/II trauma centers, first responders [n = 80]). We will focus on barriers and facilitators of implementation of clinical practice guidelines in trauma triage (anticipated [month 1] and actual [month 6]) using an interview guide developed using the Consolidated Framework for Implementation Research (CFIR) [21]. We include the interview guides in Additional file 1: Appendix.
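
To illustrate the signal detection metrics that SONAR produces, the sketch below computes perceptual sensitivity (d′) and the decisional threshold (criterion c) from the 15 severe and 15 minor cases. The log-linear correction for extreme rates and the example counts are assumptions for illustration, not protocol specifications.

```python
from scipy.stats import norm

def sdt_parameters(hits, false_alarms, n_severe=15, n_minor=15):
    """Perceptual sensitivity (d') and decisional threshold (criterion c)."""
    # hits = severe patients transferred; false alarms = minor patients transferred.
    # The +0.5 / +1 (log-linear) correction avoids infinite z-scores at 0% or 100%.
    hit_rate = (hits + 0.5) / (n_severe + 1)
    fa_rate = (false_alarms + 0.5) / (n_minor + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical physician who transfers 12/15 severe and 4/15 minor patients
print(sdt_parameters(hits=12, false_alarms=4))
```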

Hospitals

We will obtain information about the organizational characteristics of each hospital at which physicians work using the 2022 Centers for Medicare and Medicaid Services (CMS) Healthcare Cost Report Information System (HCRIS). HCRIS contains facility-level characteristics of all non-federal hospitals, including geographic location (state, region), participation in a hospital network, total bed count, intensive care unit (ICU) bed count, ownership, and teaching status. Since HCRIS does not contain data on the trauma center status of hospitals, we will link HCRIS to the Trauma Information Exchange Program (TIEP) to identify the trauma center designation for each hospital in 2023.

Patients

To construct the dataset that we will use to analyze the performance of physicians, we will obtain Inpatient, Outpatient, and Professional claims filed with Medicare Fee-for-Service (FFS) and Medicare Advantage for beneficiaries aged ≥ 65 years with International Classification of Diseases, Tenth Revision (ICD10) codes associated with injuries in 2024. We will also obtain claims for the quarters (Qs) flanking 2024 (i.e., Q4 2023, Q1 2025) to identify care preceding and following the injury. Data elements abstracted directly from the claims will include patient demographics, the hospital identifier, date of admission/discharge, ICD10 diagnosis/procedure codes, disposition status (e.g., home, nursing home), and vital status (date of death). We will map ICD10 diagnosis codes to abbreviated injury scale (AIS) scores using a well-validated program (ICDPIC) and will calculate injury severity scores (ISS). We will also estimate the presence or absence of serious illness and organ failure using validated algorithms [34, 35]. Finally, we will estimate pre-injury functional status by conducting a 90-day lookback from the date of admission to identify claims filed at skilled nursing or rehabilitation facilities.
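
For reference, the standard ISS calculation into which the ICDPIC-derived AIS scores feed can be sketched as follows; the body regions and example patient are hypothetical and for illustration only.

```python
def injury_severity_score(ais_by_region):
    """ISS from the highest AIS score (1-6) in each body region."""
    scores = [s for s in ais_by_region.values() if s > 0]
    if any(s == 6 for s in scores):            # an AIS of 6 sets ISS to 75 by convention
        return 75
    top_three = sorted(scores, reverse=True)[:3]
    return sum(s ** 2 for s in top_three)      # sum of squares of the three highest regions

patient = {"head": 4, "chest": 3, "abdomen": 2, "extremity": 1}
print(injury_severity_score(patient))          # 16 + 9 + 4 = 29, i.e., a severe injury
```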

We will identify patients treated by trial physicians by linking the names of trial participants to NPIs and searching for claims filed by those physicians in the Inpatient, Outpatient, and Professional Claims files. We will then construct episodes of care for each patient by linking Outpatient and Inpatient Standard Analytic files to identify visits to acute care, non-federal hospitals. We will order claims by day and classify visits that occur within 1 day of each other as part of a single episode of care. For episodes with multiple claims from the same day, we will order the claims under the assumption that patients will move from non-trauma centers to trauma centers, and from low-volume hospitals to high-volume hospitals.
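
A minimal sketch of the episode-construction rule described above (visits within 1 day of each other joined into a single episode; same-day claims ordered from non-trauma to trauma centers and from low- to high-volume hospitals) might look like the following. The column names (bene_id, admit_date, trauma_center, hospital_volume) are assumptions about the analytic file, not the actual claims layout.

```python
import pandas as pd

def build_episodes(claims: pd.DataFrame) -> pd.DataFrame:
    """Group a beneficiary's claims into episodes of care."""
    # Same-day claims: assume movement from non-trauma to trauma centers and from
    # low- to high-volume hospitals (both handled by the ascending sort).
    claims = claims.sort_values(
        ["bene_id", "admit_date", "trauma_center", "hospital_volume"]
    ).copy()
    gap = claims.groupby("bene_id")["admit_date"].diff().dt.days
    new_episode = (gap > 1) | gap.isna()       # first claim, or a gap of more than 1 day
    claims["episode_id"] = new_episode.cumsum()
    return claims

demo = pd.DataFrame({
    "bene_id": [1, 1, 1],
    "admit_date": pd.to_datetime(["2024-03-01", "2024-03-01", "2024-06-10"]),
    "trauma_center": [False, True, False],
    "hospital_volume": [120, 900, 120],
})
print(build_episodes(demo)[["bene_id", "admit_date", "episode_id"]])  # episodes 1, 1, 2
```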

Data management

Data sources include consent forms collected electronically, survey data collected electronically, audio files and transcripts of interviews, and claims files purchased from CMS. All data will be stored on a secure server at the University of Pittsburgh. Data integrity checks will be conducted every 6 to 12 months by the principal investigator and the security team of the University of Pittsburgh School of Medicine Information Technology department. Additional processes to promote data quality will include range checks for data values and analysis by two different statisticians.

Confidentiality

We will create a linkage file that connects personal data with anonymized identifiers and then will use de-identified data for all analyses. This file will be encrypted and stored on our secure server, and only the study team will have access to it.

Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use

Given the nature of the trial, we have not developed any plans for biological specimens.

Statistical methods

We will use summary statistics to describe physician-, hospital-, and patient-level variables and will describe patterns of missingness to identify variables with non-random and high (≥ 10%) missingness. We will also calculate measures of reach (the proportion of those who received [and opened] an email invitation who completed a screening form, and the demographics of respondents compared with those of all physicians who meet eligibility criteria), adoption (the number, proportion, and characteristics of hospitals staffed by trial physicians compared with other non-trauma centers), and implementation (the proportion of enrolled physicians who completed study tasks, and the unit costs per physician).

Fidelity of intervention delivery and receipt

We will summarize the time that physicians spend using their assigned intervention and the proportion who provide a complete answer to the attention check question. We will also summarize responses to the question about the principle that guides trauma triage. Finally, we will estimate measures of compliance with guidelines and signal detection measures of the determinants of those decisions from SONAR and will compare them between groups using analysis of variance (ANOVA). Differences between groups in parameters of compliance or signal detection would suggest differential receipt of the learning principles embedded in the different applications.
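
The between-group comparison of SONAR metrics could be run as a one-way ANOVA along these lines; the per-physician d′ values below are hypothetical and purely illustrative.

```python
from scipy.stats import f_oneway

d_prime_game = [1.4, 1.1, 1.8, 1.6]            # intervention physicians (hypothetical)
d_prime_usual = [0.9, 1.2, 1.0, 1.3]           # control physicians (hypothetical)
print(f_oneway(d_prime_game, d_prime_usual))   # F statistic and p-value
```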

Statistical methods for primary and secondary outcomes

Our hypotheses are related to the effectiveness of the intervention and are listed in Table 5.

Table 5 List of hypotheses to be tested

Primary and sensitivity analyses

To evaluate physician performance, we will create a cohort of patients as described above in the “Data collection and management” section and will restrict analysis to patients with a severe injury (ISS ≥ 15), to the first episode of care for each patient (since we cannot determine if subsequent episodes reflect follow-up care or new injuries), and to patients treated for their first episode of care in 2024 (i.e., after rollout of the intervention). We will exclude patients who died on the day of admission (as this could reflect either an error in triage decision making or an assessment of clinical instability that precluded transfer) and patients who were discharged from the ED (as this could reflect either an error in triage decision making or an error in the coding of hospital records). We will also exclude patients who presented initially to a level I–IV trauma center, since the guidelines for triage focus on under-triage at non-trauma centers. Finally, we will classify patients with ISS ≥ 15 as triaged appropriately (if they were transferred to a higher level of care within 24 h of presenting to the hospital) or under-triaged (if they were admitted to the non-trauma center).
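
A sketch of the cohort filters and the under-triage label described above appears below; the column names (iss, episode_number, died_on_admission_day, discharged_from_ed, initial_trauma_center, transferred_within_24h) are assumptions about the constructed episode file rather than actual variable names.

```python
import pandas as pd

def label_under_triage(episodes: pd.DataFrame) -> pd.DataFrame:
    """Apply the analytic-cohort filters and label under-triaged patients."""
    cohort = episodes[
        (episodes["iss"] >= 15)                        # severe injury
        & (episodes["episode_number"] == 1)            # first episode per patient
        & (~episodes["died_on_admission_day"])         # exclusion
        & (~episodes["discharged_from_ed"])            # exclusion
        & (~episodes["initial_trauma_center"])         # presented first to a non-trauma center
    ].copy()
    cohort["under_triaged"] = ~cohort["transferred_within_24h"]
    return cohort
```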

To estimate the effect of the intervention on physician behavior, we will calculate under-triage (1 minus the proportion of patients with severe injuries successfully transferred to trauma centers) for physicians in each arm of the trial. We plan to use an intention-to-treat approach, including all randomized physicians (regardless of their degree of participation in the study) provided they were allocated to a study arm and filed Medicare claims. We will compare differences in post-intervention behavior between groups using a generalized linear mixed model (GLMM) predicting under-triage at the level of the patient, with a binomial error distribution and a log or logit link function (depending on whether the outcome is rare), adjusting for baseline covariates at the hospital level (bed size, teaching status, participation in a healthcare system, resource availability), physician level (demographics, type of board certification, ATLS certification, patient load), and patient level (demographics, injury severity scores, organ failure). Fixed effects in the model will include intervention group, intervention period, and their interaction. Random effects will include hospital- and physician-level random intercepts. If the model does not converge (due to the scarcity of cases within hospitals), we will preferentially remove the hospital-level random effect.
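
As one possible implementation of the patient-level model, the sketch below fits a mixed-effects logistic regression with a physician-level random intercept using statsmodels’ Bayesian mixed GLM as a stand-in for the GLMM described above. The variable names and the reduced covariate set are assumptions; the full model would add the hospital-level intercept and the covariates listed in the protocol.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

def fit_under_triage_model(cohort: pd.DataFrame):
    """Patient-level logistic model of under-triage with a physician random intercept."""
    cohort = cohort.assign(under_triaged=cohort["under_triaged"].astype(int))
    model = BinomialBayesMixedGLM.from_formula(
        # Fixed effects: group, period, their interaction, and a reduced set of
        # patient-level covariates (assumed column names).
        "under_triaged ~ intervention_group * C(period) + iss + age",
        # Random intercept per physician; the full model would add a hospital-level term.
        vc_formulas={"physician": "0 + C(physician_id)"},
        data=cohort,
    )
    return model.fit_vb()   # variational Bayes fit; fit_map() is an alternative
```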

In sensitivity analyses, we will test whether the effect is modified by candidate moderators (e.g., physician experience) by testing the interaction between the moderator and group assignment. Finally, we will test alternative definitions of under-triage (including the more restrictive categorization of any patient not transferred to a level I/II trauma center as under-triaged).

Interim analyses

We are not planning any interim analyses.

Methods for additional analyses

To test the effect of the interventions on patient-centered outcomes, we will repeat the GLMM analyses using different dependent variables: over-triage, composite 30-day patient outcome, and functional dependence. We will build regression models in which we will calculate the direct, indirect, and total effects of the interventions on patient-centered outcomes, testing the mediation exerted by under-triage. Finally, we will explore the heterogeneity of the treatment effect by evaluating the effect of the intervention on different cohorts of participants: women v. men; white v. non-white; those with positive v. negative attitudes to game-based learning. Additional heterogeneity of treatment effect analyses include a test of the dose effect of the intervention, the durability of the treatment effect, and the impact of intervention receipt.
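
The mediation component could follow a regression-based (product-of-coefficients) approach such as the one sketched below with statsmodels’ Mediation class. The variable names are assumptions, the exposure is assumed to be coded 0/1, and the sketch ignores the clustering that the primary GLMM accounts for.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

def mediation_by_under_triage(cohort: pd.DataFrame):
    """Direct, indirect, and total effects of the intervention on a 30-day outcome."""
    outcome_model = sm.GLM.from_formula(
        "death_or_readmission_30d ~ intervention_group + under_triaged + iss + age",
        data=cohort, family=sm.families.Binomial())
    mediator_model = sm.GLM.from_formula(
        "under_triaged ~ intervention_group + iss + age",
        data=cohort, family=sm.families.Binomial())
    med = Mediation(outcome_model, mediator_model,
                    exposure="intervention_group", mediator="under_triaged")
    return med.fit(n_rep=500)   # .summary() tabulates direct, indirect, and total effects
```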

Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data

We plan to use an intention-to-treat approach to evaluate the effect of the video game on physician triage decision making. The GLMM assumes that data are missing at random (MAR). If that assumption holds and some patient covariates exhibit a high percentage of missingness, we will carry out multiple imputation before fitting the GLMM, to ensure that statistical power remains at least 80%. In secondary analyses (as described above), we will also test a per-protocol approach, categorizing physicians based on their completion of assigned study tasks. We also plan sensitivity analyses to handle non-random missingness using a joint modeling approach.
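
One way to implement the planned multiple imputation is chained equations (MICE), sketched below with statsmodels; each completed dataset would then be analyzed with the GLMM and the estimates pooled (e.g., using Rubin’s rules). Restricting the imputation model to numeric columns is a simplifying assumption for illustration.

```python
import pandas as pd
from statsmodels.imputation.mice import MICEData

def imputed_datasets(cohort: pd.DataFrame, n_imputations=10, n_cycles=5):
    """Generate completed datasets by chained-equations imputation."""
    imp = MICEData(cohort.select_dtypes("number"))   # numeric covariates only (simplification)
    datasets = []
    for _ in range(n_imputations):
        imp.update_all(n_cycles)                     # cycle the imputation models
        datasets.append(imp.data.copy())             # one completed dataset per draw
    return datasets
```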

Plans to give access to the full protocol, participant-level data, and statistical code

The protocol, primary data (i.e., physician-level data), summary data, and meta-data (e.g., documentation, protocols used to clean and to manage the data) will be uploaded to the open access Inter-university Consortium for Political and Social Research (open ICPSR) repository at the conclusion of the trial. Data use agreements for Medicare claims typically preclude sharing of data, so patient-level files cannot be distributed. However, we will make available the processes that we use to create administrative linkages between trial data and the claims files.

Oversight and monitoring

Composition of the trial team and stakeholder advisory committee

The trial team will comprise the investigators, coordinators, and staff members. The team will meet monthly at first to establish and adjust the study protocol as necessary, and quarterly thereafter to discuss study progress and interim results and to respond to any issues that arise. It will receive input from a stakeholder advisory committee composed of a diverse group of local and national leaders in trauma care (n = 9). These stakeholders vary in their demographics, training, experience, and work environment. We will convene the panel every 6 months via video-conference to obtain feedback on all phases of the study, from startup to close-out.

Composition of the data monitoring committee, its role and reporting structure

The University of Pittsburgh Human Research Protection Office (HRPO) has reviewed our protocol and provided approval (STUDY23070156). The funding agency (the National Institute on Aging [NIA]) will convene an independent Data and Safety Monitoring Board (DSMB), which will also review and approve the protocol and the data monitoring plan. The DSMB will meet before the start of the trial and then every 9–12 months until analysis is completed. We do not plan any interim analyses and therefore have not included any stopping guidelines. We have registered the trial on ClinicalTrials.gov.

Adverse event reporting and harms

The principal investigator (PI) will ask participants to communicate any adverse events or unintended effects of participation via email, and she will relay these reports to the review boards. Physicians may opt to withdraw from the trial at any point, at which time we will exclude all of their self-reported data from analysis.

Frequency and plans for auditing trial conduct

There is no set frequency for audits of trial processes and protocols.

Plans for communicating important protocol amendments to relevant parties

All protocol amendments will be communicated to the DSMB, to the trial sponsor, and to the HRPO at the University of Pittsburgh.

Dissemination plans

Results from the study will be reported to the public through manuscripts and oral presentations at national meetings. All investigators and stakeholders will have the opportunity to serve as authors on manuscripts generated from this research as long as they have made a substantial contribution, have reviewed the manuscript for content, approve the final version, and agree to be accountable for the accuracy and integrity of the work.

Discussion

This protocol paper outlines a clinical trial to test the effectiveness of a video game at increasing the implementation of clinical practice guidelines in trauma triage. Strengths of the trial include its focus on the national priority of maintaining the health and independent living of older adults, its testing of an intervention explicitly grounded in theory and proven efficacious in the laboratory, and its pragmatic mixed-methods process evaluation, which will allow interpretation of negative as well as positive results.

We confronted several design challenges during the development of the study. First, we debated the optimal comparator for our intervention. ATLS, a 2-day, textbook-based educational program sponsored by the American College of Surgeons, represents the gold standard for continuing medical education in trauma triage [18]. Participants attend lectures covering core topics, practice unfamiliar skills (e.g., chest tube insertion), and demonstrate knowledge acquisition by completing a pre- and post-test. The American College of Surgeons recommends that physicians take the course quadrennially and provides certification of competence to credentialing organizations (e.g., hospitals). The logistics and cost required to enroll trial participants in the course made this option infeasible. Additionally, we did not design Night Shift 2024 to replace ATLS. Instead, it ideally serves as an adjunct, facilitating the type of distributed training that encourages the retention and use of best-practice decision-making principles. It offers an alternative to the continuing medical education that physicians ordinarily complete to satisfy annual requirements of state medical boards. We therefore selected usual education as our comparator. We considered but rejected the idea of using two comparators (usual education and enhanced usual education) in the interests of statistical efficiency for testing the intervention.

A second design challenge involved the decision of how best to assess intervention receipt (i.e., whether trial participants understand the information provided in the intervention), a core component of treatment fidelity. The NIH’s Behavior Change Consortium recommends using pre- and post-test measures of process, skills, and knowledge for this purpose [29]. However, measurement of physician judgment before and after exposure to the intervention (a within-subject analysis) requires that participants complete SONAR twice, increasing respondent burden and the likelihood of attrition. We therefore decided to use a between-subject analysis, comparing the judgment of physicians exposed to game-based and text-based learning.

A third design challenge was determining the optimal dose of the triage video game that participants would receive. In prior laboratory studies, we found that physicians exposed to 2 h of game-based learning experienced a greater effect than those who completed 1 h [16, 17]. Pedagogical research shows the value of distributed exposure to educational interventions, to allow the transference of information from the working to the long-term memory [36]. However, the longer the intervention delivery period, the greater the risk of attrition. We therefore compromised by designating the dose as 2 h of game play immediately after enrollment, followed by 20-min booster sessions quarterly. As mentioned, we propose secondary analyses relating the dose (total number of minutes played and number of sessions) to effectiveness.

The study has several limitations. A primary concern is the reliance on claims data to assess the effectiveness of the intervention. For example, the identification of the patient cohort requires the use of ISS derived from ICD10 codes, which have lower sensitivity and specificity than the gold standard of scores calculated by trauma registrars after chart review. However, the method of using ICD10 codes to measure injury severity is well validated (kappa 0.76–0.92), offers a reasonable proxy for the full clinical record, and makes the current project feasible [37,38,39]. The use of Medicare claims also allows the recruitment of a national sample of physicians, increasing the generalizability of observations. Another potential limitation of the study arises from the use of incentives to recruit and retain physician participants, which introduces the potential for selection bias. However, we believe that this approach is consistent with the NIA’s recommendation to prioritize the fidelity of intervention delivery (internal validity) during initial effectiveness testing over the representativeness of the sample (external validity) [28]. Finally, the study intervenes on only one determinant of non-compliance with clinical practice guidelines (physician heuristics), even though multiple variables contribute to the problem (e.g., structural constraints, capacity issues). However, physicians represent the largest source of variation in triage practices; interventions that effectively modify their behavior have the potential to offer novel solutions to the refractory problem of poorly calibrated heuristics in medicine.

Advances in technology hold the potential to transform the delivery of behavioral and social science interventions. They improve treatment fidelity and can increase the acceptability of distributed delivery, thus improving behavioral maintenance. We have developed one such behavioral intervention to recalibrate physician heuristics in trauma triage and now plan to test its effectiveness. We intend that the results of this trial will contribute to the literature on physician quality improvement and on the efficacy of video games as behavioral interventions.

Trial status: Not yet recruiting.

Anticipated start date for recruitment: November 27, 2023.

Anticipated completion date for recruitment: February 15, 2024.

Protocol version: 2

Date: 4 October 2023.

Availability of data and material

Night Shift 2024 is available for download from the Apple App Store (iOS) at https://apps.apple.com/us/app/night-shift-2024/id6448066837. SONAR will be available for use at https://howdodoctorsthink.study.ccm.pitt.edu/.

Abbreviations

ABEM:

American Board of Emergency Medicine

ANOVA:

Analysis of variance

ATLS:

Advanced Trauma Life Support

CFIR:

Consolidated Framework for Implementation Research

CMS:

Centers for Medicare and Medicaid Services

DSMB:

Data Safety Monitoring Board

ED:

Emergency department

FFS:

Fee-For-Service

GLMM:

Generalized linear mixed model

HCRIS:

Healthcare Cost Report Information System

HRPO:

Human Research Protection Office

ICC:

Intraclass Correlation Coefficient

ICD10:

International Classification of Diseases, Tenth Revision

ICPSR:

Inter-university Consortium for Political and Social Research

ICU:

Intensive care unit

ISS:

Injury severity scores

MAR:

Missing at random

NIA:

National Institute on Aging

NPI:

National Provider Identifier

Q:

Quarter

PI:

Principal investigator

RE-AIM:

Reach, Effectiveness, Adoption, Implementation, Maintenance

TIEP:

Trauma Information Exchange Program

US:

United States

References

  1. National Center for Injury Prevention and Control. Older adult fall prevention. Centers for Disease Control and Prevention; 2023. Available from: https://www.cdc.gov/falls/index.html.

  2. Jarman MP, Jin G, Weissman JS, et al. Association of trauma center designation with postdischarge survival among older adults with injuries. JAMA Netw Open. 2022;5(3):e222448. https://doi.org/10.1001/jamanetworkopen.2022.2448.

  3. US Department of Health and Human Services. Model trauma system planning and evaluation. 2006. Available from: https://www.hsdl.org/?view&did=463554.

  4. Macias CA, Rosengart MR, Puyana JC, et al. The effects of trauma center care, admission volume, and surgical volume on paralysis after traumatic spinal cord injury. Ann Surg. 2009;249(1):10–7. https://doi.org/10.1097/SLA.0b013e31818a1505.

  5. Mackenzie EJ, Rivara FP, Jurkovich GJ, et al. The national study on costs and outcomes of trauma. J Trauma. 2007;63(6 Suppl):S54–67 (discussion S81–6). https://doi.org/10.1097/TA.0b013e31815acb09.

  6. Mackenzie EJ, Rivara FP, Jurkovich GJ, et al. The impact of trauma-center care on functional outcomes following major lower-limb trauma. J Bone Joint Surg Am. 2008;90(1):101–9. https://doi.org/10.2106/jbjs.F.01225.

  7. Zhou Q, Rosengart MR, Billiar TR, et al. Factors associated with nontransfer in trauma patients meeting American College of Surgeons’ criteria for transfer at nontertiary centers. JAMA Surg. 2017;152(4):369–76. https://doi.org/10.1001/jamasurg.2016.4976.

  8. Delgado MK, Yokell MA, Staudenmayer KL, et al. Factors associated with the disposition of severely injured patients initially seen at non-trauma center emergency departments: disparities by insurance status. JAMA Surg. 2014;149(5):422–30. https://doi.org/10.1001/jamasurg.2013.4398.

  9. Chang DC, Bass RR, Cornwell EE, et al. Undertriage of elderly trauma patients to state-designated trauma centers. Arch Surg. 2008;143(8):776–81 (discussion 82). https://doi.org/10.1001/archsurg.143.8.776.

  10. Mohan D, Wallace DJ, Kerti SJ, et al. Association of practitioner interfacility triage performance with outcomes for severely injured patients with fee-for-service Medicare insurance. JAMA Surg. 2019;154(12):e193944. https://doi.org/10.1001/jamasurg.2019.3944.

  11. Mohan D, Rosengart MR, Farris C, et al. Sources of non-compliance with clinical practice guidelines in trauma triage: a decision science study. Implement Sci. 2012;7:103. https://doi.org/10.1186/1748-5908-7-103.

  12. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015.

  13. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31. https://doi.org/10.1126/science.185.4157.1124.

  14. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol. 2009;64(6):515–26. https://doi.org/10.1037/a0016755.

  15. Kahneman D, Frederick S. Representativeness revisited: attribute substitution in intuitive judgment. In: Gilovich T, Griffin D, Kahneman D, editors. Heuristics of intuitive judgment: extensions and application. Cambridge University Press; 2002. p. 49–81.

  16. Mohan D, Fischhoff B, Angus DC, et al. Serious games may improve physician heuristics in trauma triage. Proc Natl Acad Sci U S A. 2018;115(37):9204–9. https://doi.org/10.1073/pnas.1805450115.

  17. Mohan D, Farris C, Fischhoff B, et al. Efficacy of educational video game versus traditional educational apps at improving physician decision making in trauma triage: randomized controlled trial. BMJ. 2017;359:j5416. https://doi.org/10.1136/bmj.j5416.

  18. American College of Surgeons – Committee on Trauma. About Advanced Trauma Life Support. Available from: https://www.facs.org/quality-programs/trauma/atls/about. Accessed 2020.

  19. American Board of Emergency Medicine. Stay certified – module content. Available from: https://www.abem.org/public/stay-certified/myemcert/module-content. Accessed 2023.

  20. Curran GM, Bauer M, Mittman B, et al. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812.

  21. King DK, Shoup JA, Raebel MA, et al. Planning for implementation success using RE-AIM and CFIR frameworks: a qualitative study. Front Public Health. 2020;8:59. https://doi.org/10.3389/fpubh.2020.00059.

  22. Committee on Trauma – American College of Surgeons. Resources for optimal care of the injured patient 2006. Chicago: American College of Surgeons; 2006.

  23. Mohan D, Barnato AE, Rosengart MR, et al. Triage patterns for Medicare patients presenting to nontrauma hospitals with moderate or severe injuries. Ann Surg. 2015;261(2):383–9. https://doi.org/10.1097/sla.0000000000000603.

  24. Irony TZ. The “utility” in composite outcome measures: measuring what is important to patients. JAMA. 2017;318(18):1820–1. https://doi.org/10.1001/jama.2017.14001.

  25. McCoy CE. Understanding the use of composite endpoints in clinical trials. West J Emerg Med. 2018;19(4):631–4. https://doi.org/10.5811/westjem.2018.4.38383.

  26. Prieto-Merino D, Smeeth L, Staa TP, et al. Dangers of non-specific composite outcome measures in clinical trials. BMJ. 2013;347:f6782. https://doi.org/10.1136/bmj.f6782.

  27. Mohan D, Chang CC, Fischhoff B, et al. Outcomes after a digital behavior change intervention to improve trauma triage: an analysis of Medicare claims. J Surg Res. 2021;268:532–9. https://doi.org/10.1016/j.jss.2021.07.029.

  28. Nielsen L, Riddle M, King JW, et al. The NIH Science of Behavior Change Program: transforming the science through a focus on mechanisms of change. Behav Res Ther. 2018;101:3–11. https://doi.org/10.1016/j.brat.2017.07.002.

  29. Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–51. https://doi.org/10.1037/0278-6133.23.5.443.

  30. Borrelli B, Sepinwall D, Ernst D, et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol. 2005;73(5):852–60. https://doi.org/10.1037/0022-006x.73.5.852.

  31. Borrelli B. The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent. 2011;71(s1):S52–63. https://doi.org/10.1111/j.1752-7325.2011.00233.x.

  32. O’Brien HL, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inform Sci Technol. 2008;59(6):938–55. https://doi.org/10.1002/asi.20801.

  33. Swets JA, Dawes RM, Monahan J. Psychological science can improve diagnostic decisions. Psychol Sci Public Interest. 2000;1(1):1–26. https://doi.org/10.1111/1529-1006.001.

  34. Kelley AS, Hanson LC, Ast K, et al. The serious illness population: ascertainment via electronic health record or claims data. J Pain Symptom Manage. 2021;62(3):e148–55. https://doi.org/10.1016/j.jpainsymman.2021.04.012.

  35. Iwashyna TJ, Odden A, Rohde J, et al. Identifying patients with severe sepsis using administrative claims: patient-level validation of the Angus implementation of the international consensus conference definition of severe sepsis. Med Care. 2014;52(6):e39–43. https://doi.org/10.1097/MLR.0b013e318268ac86.

  36. Cepeda NJ, Coburn N, Rohrer D, et al. Optimizing distributed practice: theoretical analysis and practical implications. Exp Psychol. 2009;56(4):236–46. https://doi.org/10.1027/1618-3169.56.4.236.

  37. Greene NH, Kernic MA, Vavilala MS, et al. Validation of ICDPIC software injury severity scores using a large regional trauma registry. Inj Prev. 2015;21(5):325–30. https://doi.org/10.1136/injuryprev-2014-041524.

  38. MacKenzie EJ, Steinwachs DM, Shankar B. Classifying trauma severity based on hospital discharge diagnoses: validation of an ICD-9CM to AIS-85 conversion table. Med Care. 1989;27(4):412–22. https://doi.org/10.1097/00005650-198904000-00008.

  39. Fleischman RJ, Mann NC, Dai M, et al. Validating the use of ICD-9 code mapping to generate injury severity scores. J Trauma Nurs. 2017;24(1):4–14. https://doi.org/10.1097/jtn.0000000000000255.

Funding

DP2 LM012339 (Mohan).

R01 AG 076499 (Mohan).

K23 NS097629 (Elmer).

K24 HL148314 (White).

The funding organizations and sponsor had no role in the design and conduct of the study, in the collection, management, analysis and interpretation of the data, in the preparation, review or approval of the manuscript, or in the decision to submit the manuscript for publication.

Author information


Contributions

Study concept and design: DM, DCA, CCC, JE, BF, KJR, JLB, ABP, DBW. Drafting of manuscript: DM. Critical revision of the manuscript for important intellectual content: DCA, CCC, JE, BF, KJR, JLB, ABP, DBW. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Deepika Mohan.

Ethics declarations

Ethics approval and consent to participate

The study protocol was approved by the University of Pittsburgh Human Research Protection Office (STUDY23070156). All participants will provide consent prior to enrollment in the trial. A sample consent form is included in Additional file 1: Appendix.

Consent for publication

We will not ask participants to provide consent for publication. However, all data will be de-identified and aggregated before publication.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: 

Appendix. eFigure 1. Trial schematic. eFigure 2. Conceptual model of intervention.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mohan, D., Angus, D.C., Chang, CC.H. et al. Using a theory-based, customized video game as an educational tool to improve physicians’ trauma triage decisions: study protocol for a randomized cluster trial. Trials 25, 127 (2024). https://doi.org/10.1186/s13063-024-07961-w
