
Strategies for Implementing GlobalConsent to Prevent Sexual Violence in University Men (SCALE): study protocol for a national implementation trial

Abstract

Background

Globally, women aged 15–24 years are at heightened risk of sexual violence victimization, a risk factor for adverse mental, physical, and behavioral health outcomes. Sexual violence is common at universities and most often perpetrated by men, yet few evidence-based prevention strategies targeting men have been tested in low- and middle-income countries. GlobalConsent is a six-module, web-based educational program adapted from an efficacious U.S.-based program. Nine months post-treatment in a randomized trial in Vietnam, GlobalConsent reduced men’s sexually violent behavior (odds ratio [OR] = 0.71, 95% CI 0.50–1.00) and increased prosocial intervening behavior (OR = 1.51, 95% CI 1.00–2.28) relative to an attention control. Evidence regarding optimal implementation strategies for scale-up is needed.

Methods

We will randomize six medical universities in North, Central, and South Vietnam to deliver GlobalConsent using two different packages of implementation strategies that vary in intensity. Higher-intensity strategies will include (1) greater pre- and post-implementation engagement with university leaders and faculty and (2) greater pre-implementation outreach, follow-up, and incentives for students to promote engagement and completion of GlobalConsent. Higher-intensity universities will receive additional training and support for their added activities. We will compare implementation drivers and outcomes, intervention effectiveness, and cost-effectiveness across the two implementation bundles. Our mixed-methods comparative interrupted time series design includes (1) qualitative interviews and quantitative surveys with university leaders and implementation teams to assess implementation barriers and facilitators; (2) repeated surveys with leaders and faculty, implementation teams, and male students to assess multilevel implementation drivers and outcomes; (3) repeated surveys with male students to assess behavioral outcomes (sexual violence and intervening behavior) and mediating variables (knowledge, attitudes, affect, and capacities); and (4) time diaries and cost tracking to assess the cost-effectiveness of the two implementation-strategies bundles.

Discussion

This project is the first to assess packages of implementation strategies to deliver an efficacious web-based sexual violence prevention program for undergraduate men across all regions of Vietnam and synergizes with a violence-prevention training initiative (D43TW012188). This approach will produce rigorous evidence about how to disseminate GlobalConsent nationally, which holds promise to reduce gender-based health inequities linked to sexual violence as GlobalConsent is brought to scale.

Trial registration

NCT06443541. Retrospectively registered with ClinicalTrials.gov. Registered on June 05, 2024.


Administrative information

Note: The numbers in curly brackets in this protocol refer to Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist item numbers. The order of the items is modified to group similar items (http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/).

Title {1}

SCALE: Strategies for Implementing GlobalConsent to Prevent Sexual Violence in University Men

Trial registration {2a and 2b}.

ClinicalTrials.gov

NCT06443541

Protocol version {3}

Version 1: May 31, 2024

Funding {4}

R01MH133259; PI Kathryn Yount

Author details {5a}

Kathryn M. Yount1, Daniel J. Whitaker2, Xiangming Fang2, Quach Thu Trang3, Meghan Macaulay1, Tran Hung Minh3

1Emory University

2Georgia State University

3Center for Creative Initiatives in Health and Population

Name and contact information for the trial sponsor {5b}

National Institute of Mental Health; Office of the Director, National Institutes of Health

Role of sponsor {5c}

The study sponsor requested that mental health common data elements be collected and that a data safety and monitoring board be established to provide independent oversight of the implementation trial.

Introduction

Background and rationale {6a}

Sexual violence is prevalent in adolescence and heightens the risk of harmful long-term health effects. Sexual violence includes any sexual act committed against a person without freely given consent [1]. All genders may experience sexual violence, but sexual violence more often burdens women than men globally [2, 3], and men most often perpetrate such violence [4, 5]. Adolescence is a period of vulnerability to sexual violence [6], with about one in five college women in the USA experiencing a campus sexual assault [7], and 91% of victims being women [8]. Less is known about rates of sexual violence on college campuses in low- and middle-income countries (LMICs), but estimates from large, multi-country surveys confirm that young men’s reported sexually violent behavior [9, 10] and young women’s reported sexual violence victimization [6] are high, including in Asia/Pacific. In Vietnam [11], from 2010 to 2019, women’s reports of lifetime sexual violence by a partner increased (from 10 to 13%), especially among women 18–24 years (from 5 to 14%). Such trends may reflect changing exposure and greater openness to discussing sex and sexual violence. Also, nearly one in ten women (9%) report non-partner sexual violence since age 15, mostly perpetrated by non-family male acquaintances, co-workers, or strangers. Young women who are victims of sexual violence are at heightened risk of acute and chronic mental and physical health conditions [12].

Evidence-based sexual violence prevention programs tailored to men are limited. Several reviews since 2016 confirm that interventions to prevent sexual violence in young men are rare, especially in LMICs, do not follow best practices for behavioral change, and yield mixed results. A 2017 review of reviews identified few interventions with adolescents focused on boys or young men in LMICs [13]. A subsequent review of 44 bystander intervention studies in North America found that, most often, programs are conducted in college populations (75%) and mixed-gender groups (56%) and involve a single (75%) in-person (68%) presentation and discussion (54%) of less than 2 h [14]. Relatively few of these interventions were tailored to men (27%) or involved technology-based delivery using the web (11%), media (36%), or social media (7%) [14]. Study designs also tended to be weak, often involving modest sample sizes (mean 536) of majority-White participants (45%), high attrition (mean 36%), non-randomized controlled designs (62%), infrequent follow-up beyond 6 months (11%), and infrequent measurement of behavioral outcomes (34%) [14]. A third systematic review [15] focused on interventions to prevent intimate-partner, dating, and sexual violence in men and boys did find that most of the nine included studies used cluster-randomized designs and evaluated multisession programs delivered in groups to undergraduates; yet, most studies were US-based and only one program reduced men’s self-reported sexually violent behavior [15]. A fourth review, focused on intervention studies to change hegemonic masculinities, found that eight of the 10 included studies were conducted in the USA or Africa, only one (in the USA) was web-based (but not delivered to a mobile device), and impacts on sexually violent behavior were mixed [16]. Finally, one review of 31 mHealth interventions to address partner violence found that mobile-phone platforms were acceptable, but that victim response of women was the focus over behavioral prevention with men, and evidence of efficacy was limited [17]. Thus, especially in low- and middle-income countries (LMICs), sexual violence prevention programs tailored to men are rare, and theoretically grounded, web-based sexual violence prevention with men has not been implemented at scale.

Effective sexual violence prevention programming for men requires a cross-cultural theory of change. Given the limitations of prior prevention interventions tailored to men and expanding needs in LMICs, our team’s long-term research agenda has been to develop, test, and scale an efficacious sexual violence prevention program for young men in LMICs. To do so, our team has been guided by an integrated theory of change, drawing on social cognitive theory [18], social norms theory [19], and the bystander education model [20] (Fig. 1). Social cognitive theory posits that behavior is influenced by and influences socio-contextual factors and personal factors, in a dynamic known as reciprocal determinism. Social norms are important socio-contextual factors that may promote sexually violent behavior through men’s perceptions or misperceptions of socially expected behavior. These expectations may be communicated through the media or socialization processes in families, peer networks, or institutional settings. Personal factors, including cognitions, attitudes, affect, and biological events, interact with perceived or misperceived social norms, by countering or reinforcing them. For example, a young man with more knowledge about sexual violence may be able to counter perceived messaging of sexual violence as normal, but a young man with less knowledge about sexual violence may be unable to counter such messaging. Finally, sexually violent behavior is a manifestation and perpetuation of perceived or misperceived social norms about sexual violence, whereas prosocial intervening behavior conveys the act of intervening as normal and sexually violent behavior as abnormal. Thus, one man’s behaviors can reinforce or modify social norms about sexual violence and, in turn, the behaviors of peer witnesses. The nature of and interactions between socio-contextual factors, personal factors, and sexually violent behavior may vary across societies and cultures, but the interplay of these factors is thought to be widespread [21].

Fig. 1

Cross-cultural theory of change: effects of GlobalConsent on sexually violent behavior and prosocial intervening behavior

Thus, the program we adapted and tested in Vietnam, GlobalConsent, was designed to disrupt the reinforcing interplay between (1) pro-violence socio-contextual factors including perceived or misperceived social norms that sexually violent behavior is normal, (2) pro-violence personal cognitions, attitudes, and affect, and (3) weak social norms of intervening behavior. By disrupting this interplay, GlobalConsent was able to reduce sexually violent behavior and to increase prosocial intervening behavior among men attending two universities in Vietnam. In sum, the scale of sexual violence in adolescence, its long-term health effects, the strong theoretical premise of GlobalConsent, its efficacy, and the need for national programming in Vietnam motivate our team’s next step—to test two strategies to implement GlobalConsent at scale. This step is unprecedented in the global violence-prevention field and paves a path for national uptake.

Implementation strategies may need to be bundled for public health impact. A key question now is how best to implement GlobalConsent to achieve broad public health impact. Research has confirmed that strong implementation is needed to continue to see outcomes obtained in clinical trials [22, 23] and that very simple implementation methods often do not yield implementation, implementation with fidelity, or sustained implementation [23]. Though the field of implementation science still is young, several theoretical models specify (1) the stages and processes of implementation and (2) multilevel influences that act as barriers or facilitators of implementation. For example, the Reach-Effectiveness-Adoption-Implementation-Maintenance (RE-AIM) model [24] specifies reach, effectiveness, adoption, implementation, and maintenance as multilevel outcomes that drive public health impact. An expanded version of RE-AIM identifies contextual factors in the outer and inner (organizational) setting related to these outcomes [25, 26]. The Evidence-based Practice Implementation in public Service sectors (EPIS) model [27] specifies stages of implementation for the adoption of an innovation, including exploration, pre-implementation, implementation, and sustainment. At a more micro level, others have identified key processes within an implementation that lead to success, including skills-based training, follow-up coaching, administrative supports, and examination of implementation and outcome data to ensure results [28]. Salient implementation strategies across organizational levels can be clustered to focus on developing interrelationships between stakeholders and engaging leadership, training and supporting those delivering the intervention, engaging potential consumers, and addressing institutional norms and infrastructure to facilitate intervention delivery [29, 30]. A key question is how to employ these strategies efficiently; strong bundled implementation strategies can be resource intensive, and understanding their incremental cost-effectiveness is crucial, especially in LMICs. We aim to compare (1) the implementation of GlobalConsent, (2) implementation drivers and outcomes, (3) effectiveness outcomes, and (4) cost-effectiveness across two bundles of implementation strategies.

Objectives {7}

The primary objective of this study is to conduct an implementation trial of GlobalConsent in six universities across Vietnam. We will use the RE-AIM [24, 31] and Proctor et al. [32] frameworks and a mixed-methods, comparative interrupted time series (CITS) design to compare implementation, implementation drivers and outcomes, implementation effectiveness, and cost-effectiveness of lower-intensity implementation strategies (LIS) versus higher-intensity implementation strategies (HIS) to deliver GlobalConsent. Pair-matched study universities will be assigned randomly to LIS or HIS groups, and a local implementation team at each university will support the intervention (which is delivered digitally).

Implementation strategies will target multiple stakeholder groups (university leaders, faculty, and students) to address barriers and facilitators to implementation at multiple institutional levels. The LIS universities will deliver basic implementation strategies often used to deliver online programs at US universities [15]. The HIS universities will deliver additional strategies, at a higher intensity [30], that were identified in the GlobalConsent trial [33] and from learning-collaborative research [34] and that align with interviews [35] with influential university leaders across Vietnam. Leaders in both groups will receive pre-implementation educational outreach to address knowledge and normative institutional barriers to implementing GlobalConsent. University leaders in the HIS group will receive additional outreach during and after implementation. University faculty in the HIS group, but not the LIS group, will be engaged in supporting the implementation. University students in both groups will receive remote invitations to take part in and reminders to complete GlobalConsent. Students in the HIS group also will receive pre-implementation in-person outreach from internal facilitators, incentives to increase demand, and more frequent reminders to progress through and complete the program.

Regarding training and support, internal implementation teams in both groups will receive manualized pre-implementation training on how to support and deliver GlobalConsent. Implementation teams at HIS universities will receive more intensive training and support than those at LIS universities to implement the additional strategies, described above, including pre-implementation leadership training to champion GlobalConsent with internal and external stakeholders.

Our four specific aims are to:

Aim 1. Compare implementation (barriers, facilitators, modifications) of delivering GlobalConsent in LIS versus HIS groups [36]. Separate focus group discussions with university implementation teams and the purveyor/training entity (CCIHP) will identify implementation barriers, facilitators, and modifications for HIS and LIS universities. Key informant interviews with university leaders will be used to identify organizational, internal, and external policy conditions that may affect implementation. A checklist administered to male students will identify modifications to intended program delivery at the user level.

Aim 2. Compare implementation drivers (e.g., institutional norms) and outcomes (e.g., penetration) in LIS and HIS universities. Assessments will be based on repeated surveys with university leaders, implementation teams, faculty, and male students and administrative data on program adoption and penetration among male students. We expect implementation drivers and outcomes to be more favorable in the HIS versus the LIS group over time because of the implementation efforts.

Aim 3. Compare effectiveness outcomes (knowledge, attitudes, affect, capacities, and behavior related to sexual violence) in the LIS and HIS groups using 6-monthly surveys with male university students. We expect that men in the HIS group will report more favorable, more sustained outcomes in all domains than men in the LIS group.

Aim 4. Evaluate the cost-effectiveness of implementing GlobalConsent in the HIS group versus the LIS group. We expect that HIS will be cost-effective relative to LIS, i.e., HIS’s additional costs will be justified by its greater impact on reducing sexually violent behavior and increasing prosocial intervening behavior compared to LIS.

This study is the first to assess two multifaceted implementation strategies to deliver a theoretically grounded, efficacious web-based sexual violence prevention program to male students attending six universities across Vietnam. If successful, our multidisciplinary, cross-cultural team will be the first to bring rigorous evidence to university and national leaders of the contextual effectiveness of these strategies for delivering web-based sexual violence prevention programming to large populations of men in adolescence, a period of heightened risk for sexually violent behavior. Our choice to develop, test, and scale GlobalConsent with universities in Vietnam is strategic, given the scale of sexual violence among young people, expanding rates of university attendance [37,38,39], and the openness of several university leaders to efficacious programming about sexual violence. Our choice to engage universities across all regions of Vietnam provides a novel test of these implementation strategies in different structural and sociopolitical environments, with promise to advance sexual violence prevention policies in university systems at regional and national levels. Evidence for the effectiveness and incremental cost-effectiveness of these implementation strategies across regions will pave the way for GlobalConsent to address an important, gendered risk factor for chronic mental, physical, and behavioral health conditions over the life course. Thus, by providing novel evidence about how best to bring GlobalConsent to scale nationally, our team has the potential to reduce gender-related health inequities and to improve quality of life by averting acts of sexual violence that may lead to chronic health conditions over the life course among victims. By partnering with universities engaged in CONVERGE, an ongoing violence-prevention training program in Vietnam (D43TW012188), these innovations will be achieved through synergistic investments to strengthen local capacity for implementation research, data harmonization, and stakeholder engagement to manage and to prevent sexually violent behavior in young people.

Methods: participants, interventions and outcomes

Trial design {8}

The present study is a mixed-methods, comparative interrupted-time series (CITS) study [40, 41] to compare the implementation metrics, drivers and outcomes, effectiveness, and cost-effectiveness of two bundled implementation strategies to deliver GlobalConsent to men attending six pair-matched universities in North, Central, and South Vietnam. The Framework for Reporting Adaptations and Modifications-Expanded (FRAME) [36] guides the aim 1 implementation assessments. The RE-AIM [24, 25, 42] and Proctor et al. [32] frameworks guide the aim 2 implementation drivers and outcomes and aim 3 implementation effectiveness assessments. A micro-costing approach guides the aim 4 cost-effectiveness assessment [43, 44].
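For illustration, a comparative interrupted time series analysis of a repeatedly measured outcome can be written as a segmented regression with group-by-segment interactions. The specification below is a generic sketch only, not the study's prespecified analytic model; the variable definitions and random-effects structure are assumptions for exposition.

$$
Y_{ijt} = \beta_0 + \beta_1 T_t + \beta_2 P_t + \beta_3 (T_t \times P_t) + \beta_4 G_j + \beta_5 (G_j \times P_t) + \beta_6 (G_j \times T_t \times P_t) + u_j + \varepsilon_{ijt}
$$

Here, $Y_{ijt}$ is the outcome for respondent $i$ at university $j$ and assessment $t$; $T_t$ indexes time; $P_t$ indicates the post-implementation period; $G_j$ indicates assignment to the HIS (versus LIS) group; $u_j$ is a university-level random effect; and $\beta_5$ and $\beta_6$ capture between-group differences in post-implementation changes in level and slope.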

Study setting {9}

Table 1 provides details on the six universities taking part in this proposed study, including location, year founded, programs of study, and faculty/student population sizes. Partnering universities have experience collaborating on studies supported by major funders, including the National Institutes of Health (NIH), World Health Organization (WHO), United States Agency for International Development (USAID), and the President’s Emergency Plan for AIDS Relief (PEPFAR)/Substance Abuse and Mental Health Services Administration (SAMHSA). Several universities are partners in an ongoing science leadership program in violence prevention [45]. Several also have partnered with the primary institutions (Emory University and the Center for Creative Initiatives in Health and Population [CCIHP]) on prior projects and, thus, have experience with the kind of collaboration proposed here. Lastly, across universities, there is substantial interest in sexual violence and intersecting health-related topics, including HIV prevention, sexual and reproductive health, general violence prevention, and preventive medicine. These interests and strong letters of support bode well for ensuring a strong commitment to the proposed project.

Table 1 Characteristics of participating universities in Vietnam

Eligibility criteria {10}

University leaders (aim 1)

We will sample university leaders purposively to identify those most knowledgeable of and critical to the implementation of GlobalConsent [46, 47]. To the extent possible, leaders will be matched on position and rank across LIS and HIS groups (e.g., Dean/Vice Dean and Department Head/Deputy Head). If necessary to achieve saturation, we will use snowball sampling based on recommendations from interviewed stakeholders to identify additional leaders to serve as key informants [48].

Implementation team members (aims 1, 2, and 4)

Implementation team members will be sampled purposively to identify those with the most relevant expertise for the implementation of GlobalConsent. To the extent possible, implementation team members will be matched on position and rank (e.g., lecturer and/or staff by rank) across LIS and HIS groups.

University faculty (aim 2)

All full-time, permanent lecturers who are currently working (not on extended leave) and not in the leader or implementation team samples at the time of the baseline faculty survey will be eligible to participate. The list of eligible lecturers will be refreshed before each survey wave to ensure that all eligible lecturers are included at each wave.

First-year male students (aims 1 and 3)

Eligible student participants will be male (sex assigned at birth), 18–24 years old at first contact, self-identified as heterosexual or bisexual (i.e., attracted to women), and matriculating into the study universities in project year 2.

Who will take informed consent? {26a}

Key informant interviews (KIIs) and focus group discussions (FGDs) and reflections (aim 1)

Researchers at CCIHP who will conduct the key informant interviews and who will facilitate the focus group discussions will obtain consent from each participant. Because written informed consent is not considered suitable for this setting, local interviewers and group facilitators will digitally record verbal informed consent with a witness before starting the qualitative interviews and group discussions. Informed consent for the qualitative research will require a clear understanding of the study’s purpose; voluntariness, nature, extent and duration of participation; procedures to ensure confidentiality; and right to not answer questions or to withdraw from the study at any time. Participants will be informed that, with permission, interviews will be digitally recorded, and interviewers will keep field diaries of their observations and experiences with participants. All qualitative data collection will be conducted in private settings at the study sites, where the interviewers and facilitators will provide more detail about the exact nature of the study, procedures, and any expected risks and benefits to eligible participants. If necessary, interviews and focus group discussions may be conducted via HIPAA-compliant video-conferencing software. Participants will be informed that (1) digital recordings and fully de-identified Vietnamese and English transcripts will be uploaded to separate folders on a HIPAA-compliant, secure network drive maintained by Emory University, with access limited to the study team for a specified duration before being destroyed; (2) that analysis of the transcripts will take place on a secure, password-protected computer in a private space that can be locked; and (3) that all participants will be compensated for their time and will be offered refreshments (for in-person interviews and group discussions). Participants also will be informed that the study team will recontact them at a later point in the study for follow-up interviews and/or focus group discussions (Appendix).

Online quantitative surveys (aims 2–3)

Eligible participants in the quantitative portions of the study will read an informed consent form provided in an online REDCap survey [49]. Eligible participants will indicate with check boxes that they have read each paragraph in the consent form and will be provided with a phone number for a non-study team member who can answer any questions. After all paragraphs are checked, the eligible participant will be invited to provide a response to confirm their consent or non-consent to participate in the study. Informed consent will be obtained before participants are allowed to view and to participate in the online REDCap survey for which they are eligible (Appendix).

Costing surveys (aim 4)

CCIHP staff (including external facilitators and administrative staff involved in GlobalConsent) and university GlobalConsent implementation team members will complete costing forms regularly, either monthly or weekly, depending on the implementation phase. These individuals will receive a consent form to sign, indicating their agreement to participate. The consent form will include an information sheet, a statement confirming that they have read and understand the information sheet, a statement agreeing to participate in the survey, and contact information for a person who can answer any questions or concerns. The information sheet will cover the purpose of the costing surveys, how the data will be collected, used, and stored, the voluntary nature of participation, an assurance of confidentiality, risks and benefits of participation, and procedures to manage any risks (Appendix).

Additional consent provisions for collection and use of participant data and biological specimens {26b}

There are no additional consent provisions for the collection and use of participant data in this study. No biological specimens are being collected.

Interventions

Explanation for the choice of comparators {6b}

A local implementation team at each participating university will deliver the GlobalConsent intervention. This study will vary the implementation strategies that are delivered, with some universities using lower-intensity implementation strategies (LIS) and other universities using higher-intensity implementation strategies (HIS). Implementation strategies in the LIS group model standard approaches to deliver online sexual violence primary prevention programs in universities in the USA. Implementation strategies in the HIS group were selected from those used in the GlobalConsent efficacy trial [33, 50, 51], from the literature on learning collaboratives [34], and to address barriers and facilitators to program implementation that are common to university settings in Vietnam [35] (Table 2). Nomenclature for specific implementation strategies follows the Expert Recommendations for Implementing Change (ERIC) project [30]. The LIS and HIS implementation strategies are discussed by cluster and organizational level, where key stakeholders were identified to understand the implementation process (aim 1), drivers and outcomes (aim 2), and effectiveness (aim 3).

Table 2 “Lower” and “higher” intensity strategies to implement GlobalConsent with undergraduate men, six universities in Vietnam

Develop interrelationships between stakeholders and engage university leaders

Given the demonstrated efficacy of GlobalConsent and the need for high-level institutional commitment for successful implementation, some strategies with university leaders [52] (Deans, Vice Deans, Department heads, Deputy heads) are common to both groups, and the HIS group will receive additional strategies (Table 2). Implementation strategies in both IS groups include prework to obtain formal commitments [30, 34], passive external web-support with educational materials [34], and pre-implementation educational outreach [34]. Prework, led by CCIHP, involves invitations to each university to take part, a written summary of the proposed project, site-specific dialogue, and written commitments to participate. External web-based support typically is available to US universities delivering online sexual violence prevention programs [53, 54]. Our Vietnam-based website will include links to the open-access GlobalConsent study protocol and impact assessments [33, 50, 51]; short videos providing an overview of sexual violence and GlobalConsent in Vietnam and explaining findings in lay terms; one-page briefs providing overviews of GlobalConsent and of sexual violence among young people in Vietnam; one-page briefs summarizing the trial findings; and answers to frequently asked questions. The 2-h pre-implementation educational outreach led by CCIHP will address misinformation that sexual violence is uncommon, discuss the normative climate for evidence-based prevention, and describe the GlobalConsent program.

Engage potential consumers (university student users of GlobalConsent)

Internal facilitators in both IS groups will prepare students to become active consumers of GlobalConsent with an email introduction to the program, process and schedule for delivery, procedures for data collection, and consent to take part.

Train and support internal implementation facilitators and teams

Internal implementation facilitators and teams in both IS groups will receive passive external web-support with educational materials and 3 days of in-person pre-implementation manualized educational outreach on how to deliver GlobalConsent to eligible undergraduate men at their university (Table 3). The 3 days of in-person training will cover procedures about how to: maintain records on program adoption and penetration among students; identify and invite eligible students to complete an online informed consent and, if completed, a series of short surveys; and send text and email reminders to complete each module within two weeks.

Table 3 External training and support activities provided by CCIHP for implementation teams delivering GlobalConsent

Intervention description {11a}

Implementation strategies in the HIS group include additional strategies (1) to develop inter-relationships between stakeholders and to engage university leadership and (2) to engage potential consumers (student users) of GlobalConsent (Table 2). CCIHP will provide additional training and support to the implementation team facilitators and members who are delivering GlobalConsent, in service of carrying out the intervention (Table 3).

Develop interrelationships between stakeholders and engage university leaders

Additional implementation strategies to develop interrelationships and to engage university leaders only in the HIS group will include internal-facilitator efforts to inform local opinion leaders [34] about implementation progress via regular emails to university leaders. Additional efforts to inform local opinion leaders will involve meetings with the university faculty (organized and led by university implementation teams) about sexual violence as a problem and prevention with GlobalConsent. Post-implementation educational outreach will entail a 1-h webinar jointly organized by CCIHP and internal implementation teams sharing anonymized findings by IS group and discussing plans to sustain GlobalConsent with future cohorts of university men.

Engage potential consumers (university student users of GlobalConsent)

Students only in the HIS group additionally will receive the following: (1) educational outreach, comprising a pre-implementation in-person orientation to GlobalConsent covering similar topics and three monthly 1-h learning sessions during implementation in which technical questions about program access or progression can be addressed; (2) more intensive intervention to enhance adherence, with more frequent email/SMS completion reminders (weekly versus every 2 weeks for 12 weeks); and (3) demand generation, with an option to enter a lottery to win prizes upon program completion.

Train and support internal implementation facilitators and teams

Internal implementation teams and facilitators only in the HIS group additionally will receive leadership training before implementation and external support and technical assistance (TA) during implementation. The 2-day leadership training will cover skills needed to champion GlobalConsent with diverse internal stakeholder groups. Topics will cover leadership styles, managing teams, influence without authority, managing conflict, emotional intelligence, negotiation, and leading institutional change. The leadership training also will cover effective ways to facilitate a student orientation to GlobalConsent, facilitate faculty/staff town halls (outreach sessions) about sexual violence, and send effective communications to leaders on implementation progress (Table 3). Ongoing external support and TA will involve six 1-h quality-improvement team consultations with CCIHP to provide refresher training, discuss implementation progress and modifications, build peer networks, and discuss anonymized data on implementation progress for shared problem-solving. Consultation sessions will be recorded and later coded as part of data collection activities.

Criteria for discontinuing or modifying allocated interventions {11b}

CCIHP will encourage adherence to the implementation plan of the LIS and HIS groups in response to any questions regarding deviations from the implementation plan that are posted to the GlobalConsent website. CCIHP also will encourage adherence to the implementation plan of the HIS group in its regular consultative meetings, when it provides technical support. The study team will conduct regular focus group discussions with implementation teams to monitor modifications to implementation strategies that LIS and HIS implementation teams may apply (aim 1). The decision to discontinue a student’s participation in GlobalConsent will be made based on an adverse event protocol that is described elsewhere in this study protocol.

Strategies to improve adherence to interventions {11c}

To improve implementation-team adherence to the LIS and HIS implementation strategies protocols, Emory and CCIHP will establish deliverable-based contracts with each university clarifying the terms of reference (TOR) and payment schedule for mutually agreed implementation activities. At the time of implementation team training, each team member at each participating university will assume specific roles and responsibilities, and a team supervisor will be responsible for overseeing the activities of all implementation team members. The completion of all activities will be assessed at the time that each university invoices CCIHP for its work, and payment of invoices will be based on the demonstrated completion of implementation activities in the TOR.

To support students’ adherence to the GlobalConsent program in the HIS group, the contracted IT company (with support from internal implementation team members) will send students weekly email and/or text reminders to complete each program module. In the LIS group, the contracted IT company, with support from internal implementation team members, will send students email and/or text reminders once every 2 weeks to complete each program module. Students will be offered the opportunity to enter their unique ID into a lottery to win a small prize for completing each module. Student adherence to program participation will be monitored with a short survey after each program module and with passive monitoring by the IT company delivering the program (number of times each module was opened, time spent with each module open).
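As a hedged illustration of how the passive monitoring data might be summarized, the sketch below aggregates open counts and total open time per student and module. The field names and the 20-minute threshold are hypothetical placeholders, not the IT company's actual schema or a study parameter.

```python
# Illustrative sketch only: one way to summarize the passive monitoring data
# described above (times a module was opened, time spent with it open).
# Field names (student_id, module, seconds_open) are hypothetical.
from collections import defaultdict

def summarize_engagement(open_events, min_seconds=20 * 60):
    """Aggregate open counts and total open time per student-module pair.

    open_events: iterable of dicts with keys 'student_id', 'module', 'seconds_open'.
    min_seconds: assumed threshold for flagging plausible completion
        (20 minutes is an arbitrary placeholder, not a study parameter).
    """
    totals = defaultdict(lambda: {"opens": 0, "seconds": 0})
    for e in open_events:
        key = (e["student_id"], e["module"])
        totals[key]["opens"] += 1
        totals[key]["seconds"] += e["seconds_open"]

    return {
        key: {**t, "meets_time_threshold": t["seconds"] >= min_seconds}
        for key, t in totals.items()
    }

# Example usage with fabricated log rows
events = [
    {"student_id": "S001", "module": 1, "seconds_open": 900},
    {"student_id": "S001", "module": 1, "seconds_open": 600},
]
print(summarize_engagement(events))
```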

Relevant concomitant care permitted or prohibited during the trial {11d}

There is no concomitant care that is specifically permitted or prohibited during the trial.

Provisions for post-trial care {30}

A case management protocol for post-trial care is applied to all students who participate in the GlobalConsent program and each survey wave. First, near the end of each survey, all student participants are provided a comprehensive resource list of local fee-based and non-fee-based services. Second, after providing the resource list, all participants are asked to report their level of distress (1 = not at all distressed to 10 = extremely distressed) and the manageability of their reported distress (0 = manageable, 1 = manageable with resources, or 2 = not at all manageable). Any participant who reports extreme distress (= 10) or “not at all manageable” distress (= 2), regardless of the distress level reported, will receive an emergency contact number in REDCap and will be offered the opportunity to follow up with a professional counselor. The message will read: “Your wellness is important to us, and someone outside of the study can follow up with you, if you wish. They will have no information about your answers. Please indicate how you are most comfortable seeking help.” If the participant reports that they want someone to follow up with them, their ID will be shared with a non-study staff member at CCIHP. Within 3 days of the participant’s responses being submitted in REDCap, this staff member will introduce the case and their contact information to an expert (clinical psychologist) who is responsible for supporting participants in the relevant geographic region (North, Central, South). Within 1–3 days after the expert receives the participant’s contact information, the expert will contact the case via phone call to introduce themself and their professional background and to set an appointment for an online or in-person assessment. The expert will attempt to contact the participant over three days. If unsuccessful, the expert will re-confirm or correct the contact details and attempt contact again. If successful, the expert will schedule an appointment and complete an assessment, including whether the unmanageable distress was study-related. The expert will make recommendations regarding strategies for intervention, including fee-based and non-fee-based services. If the case refuses support at the time of the expert’s call, the expert will inform the case about available fee- and non-fee-based services. In all cases, the expert will report the general outcomes of their follow-up attempts and whether any unmanageable distress was attributable to the study or the intervention. The study team will report the findings of all adverse events (extreme distress and/or unmanageable distress, regardless of the distress level reported) to the responsible IRBs, independent data safety and monitoring board (DSMB), and study sponsor, in accordance with the timetable required by the National Institute of Mental Health Reportable Events Policy [55]. In each case, determination about continuation or discontinuation of the program will be made by these parties independently.
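The distress-screening rule described above can be expressed compactly. The sketch below is for illustration only; the function name and types are ours, not the study's REDCap branching logic.

```python
# Minimal sketch of the stated triage rule: a participant is flagged for
# follow-up if they report extreme distress (10 on the 1-10 scale) or
# "not at all manageable" distress (code 2), regardless of distress level.

def needs_followup(distress_level: int, manageability_code: int) -> bool:
    """Return True if the case-management follow-up offer should be shown.

    distress_level: 1 (not at all distressed) to 10 (extremely distressed).
    manageability_code: 0 = manageable, 1 = manageable with resources,
        2 = not at all manageable.
    """
    return distress_level == 10 or manageability_code == 2

# Examples
assert needs_followup(10, 0) is True   # extreme distress alone triggers follow-up
assert needs_followup(4, 2) is True    # unmanageable distress alone triggers follow-up
assert needs_followup(6, 1) is False   # neither threshold met
```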

Outcomes {12}

Implementation process outcomes (aim 1)

We will conduct multimethod qualitative research to document all implementation strategies done, to assess modifications to the implementation of GlobalConsent and to implementation plans, and to understand barriers/facilitators to implementation across HIS and LIS groups (Table 4). Twelve group reflections between CCIHP and the study team will be conducted to understand the GlobalConsent implementation process and modifications to the implementation plan in the LIS and HIS groups. Two key informant interviews (KIIs [46]) with each of 30 university leaders (five [56] leaders per university; 15 leaders per IS group; 60 KIIs total) will collect in-depth data from individuals who are knowledgeable about external factors (policies, regulations, funding) and organizational factors (resources, time constraints, institutional climate, leadership support) that may explain modifications to GlobalConsent and implementation plans [46]. A semi-structured interview guide, with open-ended questions and prompts, will guide the KIIs (Appendix). Focal topics before implementation will include (1) perceptions about sexual violence among university students and (2) perceptions about the feasibility, acceptability, and suitability of sexual violence prevention programs at universities. Focal topics after implementation will include these topics as well as (1) attitudes of university leaders, faculty, and students about continued implementation of GlobalConsent, (2) barriers and facilitators of future implementations with detailed probes, and (3) external contextual factors that may affect future implementation.

Table 4 Data collection methods for implementation assessment (aim 1)

Longitudinal focus group discussions (FGDs) will entail four discussions with each of six groups of 5–8 [57] implementation team members (one group per university; 24 FGDs total). These FGDs will provide detailed, near-real-time data on the project’s dynamic implementation context, including features of the implementation setting; modifications to GlobalConsent or implementation plans; changes in the university, local, regional, or national context that may affect implementation; and team sense-making and learning [58]. The FGD guide [46] includes open-ended questions aligned to FRAME; the rationale, timing, and guidance for each question; and probes about the use of core implementation strategies (Appendix). Some questions in the FGD guide are like those in the KII guide to facilitate triangulation of the data during analysis.

A brief checklist will be added at the end of every program module of GlobalConsent (Appendix). This checklist will be based on the Modifications and Adaptations Checklist [59, 60], a coding scheme for recording modifications to evidence-based interventions. The checklist will ask participants to self-report any modifications they made to planned use of GlobalConsent. As students cannot skip segments or modules in GlobalConsent, the checklist will focus on the following major modifications: (1) device used to view each module of GlobalConsent, (2) percentage of the module watched, (3) number of sessions required to watch the module, (4) whether they watched part or all of each module more than once, (5) “drift” by multi-tasking or doing other things while a module was open, (6) “drift” by stepping away from the computer or mobile device while a module was open, (7) extent of satisfaction with the content of the module, and (8) any comments about the module.

Implementation drivers and outcomes (aim 2)

Table 5 summarizes the constructs to be measured, data sources, study samples, number of assessment points by focal population, and for variables measured in surveys (indicated with a superscript), the number of items per construct. The main constructs to be measured are drawn from implementation science and based on the hypothesized processes and outcomes in the present study. Scales that were created to assess general implementation of evidence-based practices in a medical or social-service setting are adapted to fit the current context of implementation by modifying item wording to focus generally on sexual violence prevention programming and specifically on GlobalConsent in a university context (Appendix).

Table 5 Implementation outcomes and potential drivers aligned with RE-AIM [24, 25] and Proctor et al. [32] for comparison across implementation strategies groups

Demographic implementation drivers that are measured in all focal populations include age in years, sex assigned at birth, gender identity, sexual orientation, and ethnicity. Questions on sex, gender, and sexual orientation are based on recommendations from the National Academies of Sciences, Engineering, and Medicine (NASEM) [61]. Normative implementation drivers include pre-implementation perceptions in all focal populations about sexual violence as a problem to address, perceptions of campus climate, legal knowledge of sexual violence, knowledge about active consent, and rape myth acceptance. Other implementation drivers include perceptions of implementation (1) leadership, (2) collaboration, and (3) climate.

Implementation outcomes include the (1) perceived acceptability, appropriateness, and feasibility of general sexual violence prevention programming in all focal populations, (2) perceived acceptability, appropriateness, and feasibility specifically of GlobalConsent in all focal populations, (3) implementation adoption among eligible students consenting to take part in GlobalConsent, and (4) implementation penetration among eligible students consenting to take part in GlobalConsent. Intervention adoption and penetration will be measured continuously using records from the collaborative IT company on the implementation of GlobalConsent by tracking the number of eligible male student participants who consent to take part in GlobalConsent (adoption) and who complete the intervention (penetration). Notably, the normative climate among leaders, faculty, and implementation teams at baseline also may change as a result of implementation, so these measures are listed as implementation outcomes.
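For illustration, adoption and penetration can also be expressed as proportions. Treating all eligible male students as the denominator is an assumption for exposition, since the protocol tracks these outcomes as counts from the IT company's records.

$$
\text{Adoption} = \frac{n_{\text{consented}}}{n_{\text{eligible}}}, \qquad \text{Penetration} = \frac{n_{\text{completed}}}{n_{\text{eligible}}}
$$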

Implementation effectiveness outcomes (aim 3)

Student-level primary outcomes (sexually violent behavior; prosocial intervening behavior) and student-level secondary outcomes related to cognition/knowledge, attitudes/beliefs, affect, and capacity are summarized in Table 6, and all question sets are provided in English and Vietnamese in the Appendix. Table 6 also includes the National Institutes of Health (NIH) common data elements to be collected.

Table 6 Primary and secondary implementation effectiveness outcomes among students

Incremental cost-effectiveness outcomes (aim 4)

The primary effectiveness outcomes for the cost-effectiveness analysis will be the frequency of sexually violent behavior and prosocial intervening behavior. The costing items are organized sequentially based on the implementation phases: pre-implementation, implementation, and post-implementation. Each phase includes a detailed list of activities (Tables 2 and 3), with costs related to personnel, travel, space, and supplies/equipment collected for each activity. We will estimate the aggregate costs for each university and then divide this aggregate by the total number of participants at that university to determine the cost per participant.
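As a sketch of the planned per-participant cost calculation and of a standard incremental comparison between the two groups (the exact estimator is not specified here and is an assumption for illustration):

$$
\text{Cost per participant}_{u} = \frac{C_u}{N_u}, \qquad \text{ICER} = \frac{\bar{C}_{\text{HIS}} - \bar{C}_{\text{LIS}}}{\bar{E}_{\text{HIS}} - \bar{E}_{\text{LIS}}}
$$

where $C_u$ is the aggregate implementation cost at university $u$, $N_u$ is the number of participants at that university, and $\bar{C}$ and $\bar{E}$ are group-level mean costs and mean effectiveness outcomes (e.g., reductions in sexually violent behavior or gains in prosocial intervening behavior).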

Participant timeline {13}

Figure 2 summarizes the start dates and end dates for enrolment (beginning March 2024), interventions (implementation activities and delivery of GlobalConsent), and assessments (quantitative surveys, key informant interviews, focus group discussions, modification checklist, and costing forms) with each focal population in the “higher intensity” implementation strategies group.

Fig. 2

Schedule of enrolment, interventions and assessments

Sample size {14}

Focal populations for each of these assessments include university leaders who support the GlobalConsent implementation (n = 5 per university, total of 30), members of implementation teams (n ≈ 5 per university, total ≈ 30 members), full-time permanent faculty at each university (estimated mean 589 per university, range 186–1038, total 3532; based on official figures for 2021), and male student participants in GlobalConsent (estimated mean n = 796 per university, range 449–1202, total = 4776; 18–24-year-old, first-year undergraduate, heterosexual or bisexual men who are attracted to women; based on official enrolment figures for 2021). We will assume 80% participation and 90% retention in each focal population.
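For orientation only, applying the stated participation and retention assumptions to the estimated totals implies approximate analytic samples of

$$
4776 \times 0.80 \times 0.90 \approx 3439 \text{ students} \qquad \text{and} \qquad 3532 \times 0.80 \times 0.90 \approx 2543 \text{ faculty};
$$

these products are illustrative arithmetic, not protocol-specified targets.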

Recruitment {15}

University leaders (aim 1)

University leaders will be identified purposively via discussions between CCIHP key personnel and implementation team focal persons. Eligible participants will be informed during the online informed consent process that they will be compensated $10 for completing each of two online surveys and $20 for completing each key informant interview (KII), for a total compensation of $60 for completing all assessments.

Implementation team members (aims 1, 2, and 4)

Implementation team supervisors and team members will be identified purposively via discussions between CCIHP key personnel and the identified focal implementation team member in each university and finally approved by the leaders of that university. Eligible implementation team members will be informed during the online informed consent process that they will be compensated $10 for completing each of four online surveys and $16 for participating in each focus group discussion (FGD), for a total compensation of $104 for completing all assessments.

University faculty (aim 2)

Each university will submit to non-study staff the list of all permanent, full-time faculty (lecturers) at their university, with professional contact details. All eligible lecturers will receive an email invitation from staff within the university introducing the study and inviting their participation. Two days later, each faculty member will receive a secure REDCap link to the informed consent form. While completing the online consent form, faculty members will have an opportunity to call a non-study staff member who can answer any questions about the study, as needed, before consenting and participating. Faculty will be informed during the consent process that they will receive $10 for completing each online survey. If consent is confirmed, the faculty member will be directed immediately to the online REDCap survey. Any faculty member who has not completed the online survey will receive up to five automated reminders from REDCap at 2.5-day intervals over 2 weeks. In addition, on days 8 and 15 of the survey period, all faculty will receive a reminder invitation from within their university to complete the survey. On days 22 and 29 of the survey period, any faculty member who has not completed the survey will receive additional reminder invitations. During the 2 weeks following day 15, any faculty member who still has not completed the survey will receive up to two standard SMS text reminders with their unique survey link encouraging them to complete the survey. If, after 4 weeks, a faculty member has not yet completed the survey, they will receive a standard follow-up reminder and unique survey link via Zalo and a call via Zalo, an encrypted Vietnamese communication application similar to WhatsApp. Once this protocol is completed, implementation teams will send a general reminder through informal faculty networks at their universities for faculty to complete the survey.
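The faculty contact protocol above can be summarized as a schedule of contact events. The sketch below is illustrative only; the day offsets for the automated REDCap and SMS reminders approximate the stated intervals and are not protocol-specified calendar days.

```python
# Condensed sketch of the faculty survey contact schedule described above,
# expressed as (approximate day, channel, audience) tuples. Offsets for the
# automated REDCap reminders and SMS messages are approximations.
FACULTY_CONTACT_SCHEDULE = [
    (0, "university email", "all eligible faculty: study introduction"),
    (2, "REDCap link", "all eligible faculty: consent form and survey"),
    # Up to five automated REDCap reminders at ~2.5-day intervals over 2 weeks
    *[(round(2 + 2.5 * i), "automated REDCap reminder", "non-completers")
      for i in range(1, 6)],
    (8, "within-university reminder", "all faculty"),
    (15, "within-university reminder", "all faculty"),
    (22, "within-university reminder", "non-completers"),
    (29, "within-university reminder", "non-completers"),
    # Up to two SMS reminders with unique links during the 2 weeks after day 15
    (20, "SMS with unique survey link", "non-completers"),
    (27, "SMS with unique survey link", "non-completers"),
    (28, "Zalo message and call with unique link", "non-completers"),
    ("end", "informal faculty networks", "general reminder to complete survey"),
]

for day, channel, audience in FACULTY_CONTACT_SCHEDULE:
    print(f"day {day}: {channel} -> {audience}")
```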

First-year undergraduate male students (aims 1 and 3)

All eligible students will be invited to participate in six 6-monthly surveys and the GlobalConsent program. For the LIS and HIS groups, implementation team members will send a standard email invitation to all eligible men with a description of the GlobalConsent program, a description of the study, and an invitation to participate. Eligible men will separately receive a secure REDCap link to the online informed consent form and, if completed, the online survey (Table 6). For the HIS group, internal facilitators will organize an orientation session for all eligible male students to describe these elements in person and to answer any questions about participation. All eligible participants will be informed during the online informed consent process that they will be compensated $5.50 on average for completing each survey assessment, for a total compensation of $33 for completing all six assessments. All eligible participants also will be informed during the online informed consent process that they will be eligible to enroll in a lottery to receive a prize (e.g., smartphone) after completing three survey assessments and that they will be eligible to enroll in a second lottery to receive a prize (e.g., smartphone) after completing six survey assessments. All eligible participants in the HIS group will be informed about this compensation schedule during the recruitment orientation meeting.

Assignment of interventions: allocation

Sequence generation {16a}

All universities in the sample are broadly matched on programs of study (health sciences) and are pair-matched on region (North, Central, South). Within matched pairs, universities will be randomized using a computer random number generator to receive “lower intensity” (LIS) or “higher intensity” (HIS) implementation strategies to deliver GlobalConsent to eligible undergraduate men.
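For illustration, a minimal sketch of pair-matched randomization is shown below; university names are placeholders and the seed handling is an assumption, not the study's actual allocation code.

```python
# Illustrative sketch: within each region-matched pair, one university is
# assigned at random to the higher-intensity (HIS) arm and the other to the
# lower-intensity (LIS) arm. Names are placeholders, not the six study sites.
import random

REGION_PAIRS = {
    "North": ("University A", "University B"),
    "Central": ("University C", "University D"),
    "South": ("University E", "University F"),
}

def randomize_pairs(pairs, seed=None):
    """Return {university: arm} with exactly one HIS and one LIS per pair."""
    rng = random.Random(seed)  # a documented seed would support reproducibility
    allocation = {}
    for region, (u1, u2) in pairs.items():
        his_university = rng.choice((u1, u2))
        allocation[u1] = "HIS" if u1 == his_university else "LIS"
        allocation[u2] = "HIS" if u2 == his_university else "LIS"
    return allocation

print(randomize_pairs(REGION_PAIRS, seed=2024))
```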

Concealment mechanism {16b}

The team will separate the act of randomization from the recruitment of participants as follows. First, CCIHP will be responsible for completing the randomization of universities to IS groups. Second, a staff member at each participating university will be responsible for recruitment strategies within their respective university. Third, Emory study staff will pre-program the REDCap data system to send automated reminders to participate in each study wave, according to the schedules described previously.

Implementation {16c}

Non-study staff at CCIHP will generate the allocation sequence, randomly assign universities to study arms, and upload contact lists for eligible participants in each focal population into REDCap. Formal invitations to participate will be sent automatically via the customized REDCap data system, or for university leaders, by the staff member at CCIHP conducting KIIs.

Assignment of interventions: blinding

Who will be blinded {17a}

Implementation teams at each university will be blinded to the bundle of implementation strategies to which they are being compared. Students will be blinded to the implementation strategies being implemented at their and other universities, except for the strategies to which they are directly exposed. Study team members at Emory University and Georgia State University will be blinded to the assignments of universities until analyses are completed, through the use of non-ordered numerical codes for university and study arm in the REDCap database. Members of the data safety and monitoring board will be blinded to implementation strategy assignments until they request to be unblinded.

Procedure for unblinding if needed {17b}

Unblinding of implementation strategies assignments may occur before study completion if the Institutional Review Boards, data safety and monitoring board, or study sponsor deem that unblinding is necessary. Otherwise, unblinding of the study team members will occur upon completion of aims-specific analyses.

Data collection and management

Plans for assessment and collection of outcomes {18a}

Group reflections with CCIHP

Each of the 12 group reflections with CCIHP (Table 4) will be recorded, transcribed, and translated. Session transcriptions and team reflection notes will be analyzed on an ongoing basis to understand modifications to planned external training and support provided by CCIHP.

Key informant interviews (KIIs) with university leaders

The KII guide will be drafted in English, translated into Vietnamese, piloted, revised, and back-translated into English to confirm consistency of the translation with the original intended meaning. Masters- or PhD-level social scientists at CCIHP with expertise in qualitative research and stakeholder interviews on sexual violence will conduct the interviews. Interviews will occur in the pre-implementation phase, before any educational outreach by CCIHP, and after implementation to assess views and practices related to sustainment. All interviews will occur by audio-conference call. Interviews will be recorded, professionally transcribed, and translated into English. All interviewers will keep a diary in which they will take field notes after each KII [77, 78].

Focus group discussions with implementation teams

The FGD guide will be finalized in English, translated into Vietnamese, piloted, revised, and back-translated into English to confirm consistency of the translation with the original intended meaning. CCIHP staff with similar qualifications will facilitate the FGDs with implementation teams. FGDs will be structured as 45–60-min in-person discussions [79] held before the start of implementation training and then immediately before, during, and after implementation of GlobalConsent, and will occur on-site at universities. Discussions will be recorded, professionally transcribed, and translated into English. All facilitators will take field notes after each FGD [77, 78].

Survey-based data collection (aim 1 checklist; aims 2–3)

Survey-based data from all target populations will be collected via REDCap [49], a HIPAA-compliant, web-based data system that allows for the secure collection, transfer, storage, and analysis of study data. Customized surveys will be piloted with individuals similar to the focal populations but at non-study universities to gauge acceptability, inform final revisions, and assess timings to minimize respondent burden (e.g., 10–15 min for each leader, faculty, and implementation team survey; 16 min for each student survey). The survey implementation protocol described above under the “Recruitment {15}” section will be followed at each survey wave for faculty, implementation teams, and students to maximize their participation at baseline (~80%) and retention at endline (~90%), based on prior experience surveying students in Vietnam [51].
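
For illustration, survey records can be exported from REDCap programmatically through its API for cleaning and analysis; the sketch below uses a placeholder URL and token rather than any study credentials.

```python
# Minimal sketch: export de-identified records from a REDCap project via its API.
# The URL and token below are placeholders, not study credentials.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # hypothetical host
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"             # stored securely, never hard-coded in practice

payload = {
    "token": API_TOKEN,
    "content": "record",          # export survey records
    "format": "json",
    "type": "flat",               # one row per record per event
    "rawOrLabel": "raw",
    "exportSurveyFields": "true",
}

response = requests.post(REDCAP_API_URL, data=payload, timeout=60)
response.raise_for_status()
records = response.json()         # list of dicts keyed by REDCap field name
print(f"Exported {len(records)} records")
```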

Measurement time points will vary for each target sample, according to the project timeline (Fig. 2). Leaders will be assessed twice, once before and once after implementation. Implementation teams will be assessed quantitatively at four time points: once before the start of any implementation activities and once immediately before, during, and after implementation of GlobalConsent. Other data on the implementation will come from administrative records collected by the IT company that is responsible for delivering GlobalConsent. Faculty will complete one assessment before implementation (year 1) and two after the planned 12-week delivery period of GlobalConsent (years 3 and 5). Students will complete three surveys at 6-month intervals before the planned 12-week delivery period of GlobalConsent and three more at the same interval after implementation, for a total of six assessments. On each occasion, we will send to students’ smartphones an encrypted link to a REDCap survey with ~189 questions measuring two primary behavioral outcomes and seven cognitive/knowledge, attitudinal/belief, affective, and capacity-related secondary outcomes validated in the efficacy trial [33, 50, 51] (Table 6). The secondary outcomes align with our theory of change (Fig. 1) and showed evidence of mediation in the trial [50]. A short modification checklist of 5–10 questions will be administered at the end of each GlobalConsent program module to understand participants’ engagement and satisfaction with the module (Table 4).

Costing surveys (aim 4)

Cost elements for activities involving CCIHP staff and university implementation team members are detailed in the Appendix. Costing items for CCIHP should be completed monthly for each CCIHP staff member involved in GlobalConsent intervention activities. For university implementation teams, costing items should be completed for each team member involved: monthly outside the 12-week program implementation period and weekly during it. CCIHP will collect all costing forms on the same schedule, de-identify the data, and then send the forms to the health economist for cleaning and analysis.

Plans to promote participant retention and complete follow-up {18b}

To maximize retention across occasions, we will conduct brief cognitive interviews and pilot all REDCap surveys in similar populations at other universities in Vietnam. These steps will ensure comprehension of the questions as intended, acceptability of the questions, and appropriate length to minimize participant burden. Implementation teams and the REDCap application will administer a standard protocol of internal and automated reminders to complete each survey. To clarify institutional support and to encourage participation, leaders and/or implementation team members at each participating university will send invitation letters via email prior to survey launch. After the initial survey launch, a standard protocol of follow-up communications will be implemented for all survey waves with faculty, students, and implementation team members to maximize enrollment and retention. University leaders will receive a survey link from the CCIHP study member who conducts their KII, who will confirm individually that they have completed the survey. In addition to implementing a systematic protocol for outreach, participants who complete at least part of each survey will be compensated for their time. Students also will be offered the option to enter a lottery to win a prize (a smartphone) after completing all pre-implementation surveys and again after completing all post-implementation surveys.

Data management {19}

Qualitative data (reflections, KIIs, FGDs, field notes for aim 1)

KIIs and FGDs will be digitally recorded, and all recordings will be numerically labeled. Digital files will be transcribed verbatim, and all Vietnamese transcripts will be fully de-identified, with participants identified numerically or by pseudonym. De-identified Vietnamese transcripts will be translated into English. One CCIHP non-study staff member will check one random segment of each digital file against the transcription and one random segment of each transcription file against the translation to ensure accuracy and/or to make any corrections. Digital files and de-identified transcribed and translated files will be uploaded to separate folders maintained on a HIPAA-compliant, secure network drive maintained by Emory University. Each folder will be accessible only to selected study staff for the purposes of data management and/or analysis. Upon completion of the analyses or within a suitable timeframe, digital files will be destroyed to protect the confidentiality of study participants.

Quantitative data (aim 1 checklist, aims 2–3)

The REDCap data system will be used to facilitate secure data collection, transfer, processing, and storage of quantitative data. Data-collection modules for each wave will be built in a REDCap project and self-administered on personal devices to ensure flexibility and privacy. All required fields (e.g., documentation of consent) are specified as such, and participants cannot continue with the survey until required responses are completed. Allowable response options for each question are programmed into the data system, and participants are asked to correct or to confirm any missing or out-of-range responses. Each page of the application includes a check at the bottom to ensure that participants correct or confirm any missing responses to questions on the page. The web application then transmits data entered by each participant through an encrypted network connection to a centralized database running on a secure network and infrastructure maintained by Emory University, ensuring secure electronic data movement.

Each participant is assigned a unique random number identifying him/her across study waves and REDCap projects. All identifiable data (names, contact info, etc.) and randomized “lower-intensity” or “higher-intensity” study arm assignment will be stored with the unique ID and accessible only to non-study staff. The Emory and Georgia State study teams will receive randomization data only after completing the main study analyses but will not have access to identifiers in the REDCap project. CCIHP and Emory staff will create and run applications for more refined range and consistency checks in the centralized database. Systematic data collection errors for each wave of data collection will be identified, resolved, and documented using the logging application in REDCap. For analyses done with external statistical software, de-identified data will be extracted and held on HIPAA-compliant, secure networks and computing workstations.
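
As a minimal illustration of the linkage scheme, unique random identifiers might be generated as sketched below; this is not the study’s actual ID-assignment procedure, and the linkage file would remain with non-study staff.

```python
# Minimal sketch: assign unique random IDs to link participants across waves without
# exposing identifiers to the analysis team. Not the study's actual procedure.
import secrets

def assign_study_ids(participants, id_length=8):
    """Map each participant record to a unique random hexadecimal ID."""
    id_map, used = {}, set()
    for person in participants:
        new_id = secrets.token_hex(id_length)     # e.g., 16 hexadecimal characters
        while new_id in used:                     # regenerate on the (rare) collision
            new_id = secrets.token_hex(id_length)
        used.add(new_id)
        id_map[person["email"]] = new_id          # key kept only in the restricted linkage file
    return id_map

# Analysts would see only the random IDs; the linkage file stays with non-study staff.
ids = assign_study_ids([{"email": "student1@example.edu"}, {"email": "student2@example.edu"}])
```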

Cost data (aim 4)

The completed costing forms will be checked for consistency and missing information upon receipt. If any inconsistencies or missing information are found, the forms will be returned to CCIHP or the specific university via CCIHP for correction. Once the forms are verified as complete, they will be stored with a unique university ID and personnel ID in a folder accessible only to the health economist.

Confidentiality {27}

Qualitative data (reflections, KIIs, FGDs, field notes for aim 1)

No identifying information will be recorded on the KII guide or FGD guide. All digital files will be labeled numerically. All Vietnamese transcripts will be fully de-identified, with participants identified numerically or by pseudonym. All translation files, likewise, will be fully de-identified. Digital files and fully de-identified transcription and translation files will be uploaded to separate folders on a HIPAA-compliant, secure network drive maintained by Emory University. Digital files will be accessible only to selected CCIHP staff for quality management. De-identified transcription and translation files will be accessible only to selected study staff for the purposes of analysis. Upon completion of the analyses or within a suitable timeframe, digital audio files will be destroyed to protect the confidentiality of study participants.

Online quantitative survey data (aim 1 checklist, aims 2–3)

Procedures to minimize loss of confidentiality include the encrypted entry, transfer, and storage of data collected via REDCap. Collected data also will be identified only by unique, random identification numbers with no personal identifiers. Personal identifiers, for the purposes of recontacting participants for follow-up data-collection waves, will be accessible only to non-study-team members. Confidential data will be maintained on a HIPAA-compliant, secure network drive with user-defined access to selected study team members. Data that are extracted for analysis using other statistical software will be analyzed on password protected computing systems in locked offices. Quantitative study data will be presented in aggregate form only, and participating universities will not be named in the dissemination of study findings.

Cost data (aim 4)

The costing data will be presented at an aggregate level, comparing the HIS group to the LIS group. Participating universities will not be identified in the dissemination of study findings.

Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33}

This section is not applicable, as the study team is not collecting biological specimens.

Statistical methods for primary and secondary outcomes {20a}

Qualitative data analysis (aim 1)

Our analysis of qualitative data will be based on FRAME [36] to enable us to characterize and explain contextual facilitators and barriers to the implementation of GlobalConsent as well as planned and unplanned modifications to implementation strategies in the HIS and LIS groups. We will use rapid content analysis [80, 81] and a hybrid inductive-deductive approach [80, 81] to analyze the data. We will transfer all reflection notes, field diaries, and transcripts of KIIs and FGDs into matrices and use matrix analysis methods to examine core FRAME constructs of (1) engagement with the implementation training materials; (2) GlobalConsent delivery practices; (3) program modifications; (4) implementation strategies over time, including the nature of, timing, reasons for, and decision-makers involved in planned or unplanned implementation modifications; and (5) factors in the outer context or internal organizational context that may explain these modifications. Prior constructs and assumptions will be evaluated against the data, and new themes will be incorporated into the matrix coding scheme [82]. Matrices will systematically note the similarities, differences, and patterns in responses across participating universities and implementation-strategies groups, for a comparative synthesis of the findings [83]. Identifying these barriers, facilitators, and modifications, and their potential influence on implementation drivers and outcomes (aim 2) and effectiveness (aim 3) by implementation strategies group will provide insights about recommended adaptations to proposed implementation strategy bundles here as well as the time and skills needed to facilitate the implementation of GlobalConsent by operational partners at other universities.

Analysis of quantitative data from leaders and implementation teams (aim 2)

One set of analyses to explore primary hypotheses for aim 2 will examine changes over time across LIS and HIS groups in implementation drivers and outcomes among university leaders and implementation teams (Table 5). Quantitative analyses will focus on the same measures across implementation groups for comparisons within and across periods. Cronbach’s α will be estimated with the total sample (N ≈ 78) using pre-implementation baseline survey data to confirm the internal consistency of each scale. Quantitative responses within item sets will be summed, and scores will be standardized by averaging the items in each scale. We will examine correlations of scale constructs at each time point. Comparisons between HIS and LIS groups will be conducted within each time period by target (i.e., faculty, leaders, implementation teams) to examine whether mean differences are statistically significant, at p < .05. We also will examine HIS and LIS group differences in means for implementation drivers (e.g., perceptions of leadership, collaboration, climate) and implementation outcomes (e.g., intervention acceptability, appropriateness, feasibility) at each time point. We will test for differential change over time in mixed models by testing Group × Time interactions with target responses nested within participating university.
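
As one illustration, the Group × Time test could be run with a linear mixed model in which responses are nested within university; the sketch below uses hypothetical column names and is not the study’s analysis code.

```python
# Minimal sketch of the Group x Time test described above, using a linear mixed model
# with responses nested within university. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("implementation_team_surveys.csv")   # long format: university, group (HIS/LIS),
                                                      # time (0-3), scale_score

model = smf.mixedlm(
    "scale_score ~ C(group) * time",   # the group-by-time coefficient tests differential change
    data=df,
    groups=df["university"],           # random intercept for university
)
result = model.fit(reml=True)
print(result.summary())
```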

Statistical power for analyses of leader and implementation team data

We estimated power to test differences in implementation drivers and outcomes of leaders and implementation teams in the HIS and LIS groups. A power analysis was conducted using G*Power [84] and entering a sample size of 78 (39 per group) with four measurement points. Based on repeated measures ANOVA for continuous variables, with alpha set at .05 and assuming a correlation between repeated measurements of .50, the sample of 78 provides excellent power (.95) to detect a small-to-medium difference (d = .33) in change over time between LIS and HIS groups. We expect little missing data from this collection, but assuming attrition of 20% and a sample size of 62 (N = 31 per group), power still is sufficient (.95) to detect a small-to-medium effect size of d = .39.

Analysis of quantitative data from the general faculty (aim 2)

Our analytical approach for general faculty targets will leverage their larger population sizes across universities (Table 1). We will use difference-in-difference (DD) models to assess the effects of being in the HIS group versus the LIS group on changes among faculty in norms about sexual violence and awareness of sexual violence as a problem, operationalized using identified scales (Table 5). We will use pre-post implementation data in project years 1 and 3 to assess short-term effects of exposure to HIS versus LIS on faculty knowledge and attitudes about sexual violence and pre-post implementation data in project years 1 and 5 to assess longer-term effects of exposure to HIS versus LIS on these outcomes. A basic DD model for our study would take the following form: Yijt = αj + βPt + γHISij + δHISij*Pt + εijt, where Yijt is the value of the outcome observed for person i in university j at time t, HISij is an indicator of person i in university j being in the HIS (treatment) group (HISij = 1) versus the LIS (comparison) group (HISij = 0), and Pt reflects the time period (pre = 0 vs post = 1 implementation of GlobalConsent). The parameter δ is the DD estimator; the point estimate of δ from this model is equivalent to a non-parametric approach that takes the difference in the changes over time between the two implementation-strategies groups.
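
As an illustration only, the sketch below fits the basic DD specification with ordinary least squares and cluster-robust standard errors at the university level; the file and column names are hypothetical.

```python
# Minimal sketch of the basic DD specification, with delta as the coefficient on the
# HIS-by-post interaction. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("faculty_surveys.csv")   # long file: university, his (0/1), post (0/1), norms_score

# his*post expands to his + post + his:post; the his:post coefficient is the DD estimator (delta).
# University intercepts (alpha_j) could be added via C(university), noting that they absorb the
# HIS main effect because assignment occurs at the university level.
dd_fit = smf.ols("norms_score ~ his * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["university"]}
)
print(dd_fit.params["his:post"], dd_fit.pvalues["his:post"])
```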

Statistical power for analyses of faculty data (aim 2)

We estimated the power to test the DD models of differences in knowledge of sexual violence legality and harm between LIS and HIS groups. Using estimates for the mean score measuring knowledge of sexual violence legality and harm in the GlobalConsent Trial [50, 51], we computed the statistical power with a linear regression model allowing a separate intercept for each university via a simulation. We used the Stata command ipdpower (model 1), a simulation-based command that calculates power for simple linear regression modeling, to perform the simulation. With a retained sample size of ~2543 faculty (3532 × 0.80 participation × 0.90 retention at wave three; ~1272 in each implementation group), a point estimate of δ of 0.12, an alpha level of .05, and 5000 simulations, the estimated power was .85.
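
For readers without access to Stata, the same simulation logic can be written in Python; the sketch below is a template that simulates the DD design with university intercepts and an effect of δ = 0.12, fits the regression, and tallies rejections. The residual standard deviation, between-university spread, and number of replications are placeholder assumptions, so the template is not a re-derivation of the reported power of .85.

```python
# Simulation-based power template for the faculty DD model (placeholder assumptions; see above).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)

def simulate_once(n_per_group=1272, delta=0.12, resid_sd=1.0):
    """One simulated trial: 3 HIS and 3 LIS universities, pre/post faculty surveys."""
    rows = []
    for uni in range(6):
        his = 1 if uni < 3 else 0
        uni_intercept = rng.normal(0, 0.2)          # placeholder between-university spread
        n_uni = n_per_group // 3
        for post in (0, 1):
            y = (uni_intercept + 0.05 * post + delta * his * post
                 + rng.normal(0, resid_sd, n_uni))
            rows.append(pd.DataFrame({"university": uni, "his": his, "post": post, "y": y}))
    return pd.concat(rows, ignore_index=True)

n_sims, rejections = 200, 0                          # 5000 in the protocol; fewer here for speed
for _ in range(n_sims):
    sim = simulate_once()
    fit = smf.ols("y ~ C(university) + post + his:post", data=sim).fit()
    rejections += fit.pvalues["his:post"] < 0.05     # the his:post term is the DD estimator

print(f"Estimated power: {rejections / n_sims:.2f}")
```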

Analysis of quantitative data from students (aim 3)

To explore the public-health impact of GlobalConsent in the two implementation-strategies groups, we will estimate comparative interrupted time series (CITS) models for student-level primary outcomes (sexually violent behavior; prosocial intervening behavior) and student-level knowledge, attitudinal, affective, and capacity-related secondary outcomes (Table 6).

As a first step in the analysis, we will inspect pretreatment data closely to select the modeling approach that best fits the data. Given the multi-module nature of GlobalConsent and evidence from the efficacy trial [51], we expect to see immediate level changes (improvements) in all outcomes at the first post-test survey and slope changes for all outcomes, as cognitive, attitudinal, affective, and behavioral change attenuate partially but not entirely [51]. We expect to see greater immediate improvement and less attenuation in the HIS group than the LIS group. A basic CITS model for the frequency of men’s sexually violent behavior may take the following form: log(E(Yijt)) = B0 + B1Tt + B2Zj + B3Pt + B4Tt*HISij + B5Pt*HISij + vij, where Yijt is the outcome for the ith participant at the jth university at time t, B0 is a constant term showing the average frequency of sexually violent behavior in the reference university before implementation; Zj is a vector of university dummies to allow a separate intercept for each university; Tt is the time elapsed since study start, where t = 1,…,10 quarters; B1 is the pre-implementation trend in the comparison (LIS) group, and B1 + B4 is the pre-implementation trend in the HIS group; Pt is a vector of indicators for each post-implementation time period; B3 is the level change in the reference university in the post-implementation period; and vij is the random effect for the ith participant at university j, which allows random subject-to-subject variation in the intercepts at each university. The difference in the actual post-implementation performance from the projected post-implementation performance in the HIS group, less this same difference in the LIS group, is the estimate of HIS effects (B5). This formulation assumes that all universities in the HIS group share the same trend, and all universities in the LIS group share the same trend (though possibly different from the HIS trend). This assumption could be relaxed by modeling the trends as random effects. To assess whether the slope of performance changes after implementation, we may code Pt as a dichotomous variable that equals 0 in the pre-implementation period and 1 in the post-implementation period. The change in slope can then be estimated by adding a three-way interaction between the higher-intensity implementation indicator (HISij), the post-implementation indicator (Pt), and the linear time trend (Tt). In this formulation, Pt would be coded to be centered on the introduction of implementation, so B5 is interpreted as the immediate shift in outcomes following implementation. Inclusion of this interaction would allow implementation effects to grow or decline over time. A similar CITS model will be applied to investigate the impact of GlobalConsent on the frequency of men’s prosocial intervening behavior in the two implementation-strategies groups.
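
As an illustration of this specification, the sketch below fits the CITS model for counts with generalized estimating equations (GEE) clustered on participant, which stands in for the random-intercept Poisson formulation written out above; variable names are hypothetical, and the post-period term is coded dichotomously as described.

```python
# Minimal sketch of the CITS specification for counts of sexually violent behavior.
# Variable names are hypothetical; GEE with clustering on participant stands in for
# the random-intercept Poisson model written out above.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("student_waves.csv")   # long file: participant_id, university, his (0/1),
                                        # time (1-10 quarters), post (0/1), sv_count

cits = smf.gee(
    "sv_count ~ time + C(university) + post + time:his + post:his",
    groups="participant_id",                    # repeated measures within student
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
cits_fit = cits.fit()
# time:his compares pre-implementation trends (B4); post:his is the HIS-vs-LIS difference
# in the post-implementation level change (B5).
print(cits_fit.summary())
```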

Statistical power for analyses of student data (aim 3)

We estimated the power to test models of differences in the frequency of sexually violent behavior and prosocial intervening behavior between LIS and HIS groups. Using estimates for the mean frequency of sexually violent acts in the GlobalConsent Trial [51], we computed the statistical power with a generalized linear mixed model with a random intercept for repeated count measures via a simulation. We used the Stata command ipdpower (model 2), a simulation-based command that calculates power for mixed-effects modeling with a random intercept, to perform the simulation. With a retained sample size of ~3439 men (4776 × 0.80 participation × 0.90 retention at the final wave; ~1719 in each implementation group), six counts for the frequency of sexually violent acts, an incidence rate ratio of 0.825 (0.65 + (1 − 0.65)/2) for the rate of change after implementation of GlobalConsent for the HIS group relative to the LIS group, an alpha level of .05, and 5000 simulations, the estimated power was .91.

Analytical challenges and solutions

Leader and implementation team analysis

A primary challenge for this analysis is the non-random selection of the sample and relatively small sample sizes across the LIS and HIS groups. These limitations require the analyses to be largely descriptive and inferences to be restricted to the samples rather than to the university populations from which the samples are drawn. Still, our mixed-methods study design will allow us to triangulate findings from the qualitative and quantitative data collected from the university leaders and implementation teams and both sets of findings with those from university faculty. This multi-method, multi-sample approach will permit a more nuanced understanding of the institutional changes that may be underway and that may facilitate or inhibit the implementation of GlobalConsent and/or planned implementation strategies. Thus, all findings from our approaches to aims 1 and 2 will inform interpretation of the results for the implementation effectiveness assessment (aim 3).

University faculty analysis

DD methods provide unbiased effect estimates if the trend over time would have been the same between the treatment (HIS) and comparison (LIS) groups in the absence of the more intense elements of HIS implementation. However, selection bias may arise if the composition of these groups changes over time, such that the faculty population at participating universities changes systematically, for example, through substantial turnover or consolidation. One approach to address this issue is to restrict the sample of faculty to those who are available across all three survey waves and study years (1, 3, 5). If high turnover risks a substantial loss of power using this approach, we will consider the use of propensity score weighting to handle this type of confounding across four groups (HIS pre, HIS post, LIS pre, LIS post) [85]. Another challenge in this analysis is the cluster-randomized design, where six universities are assigned to HIS or LIS groups. We will assess the robustness of our DD findings across a range of small-sample corrections, and we will report the methods used and findings to ensure transparency and reproducibility [86, 87].

Male student analysis

(1) We will include the age-standardized male student population in person period as an offset variable to convert the outcome into a rate and adjust for potential changes in the population over time. (2) We will consider methods to adjust for seasonality [88] or other time-varying confounders [89], such as concurrent training programs or changes in COVID-related conditions that alter opportunities for in-person versus online interactions at participating universities. (3) We will diagnose and address potential issues of over-dispersion [88] and residual auto-correlation [90, 91], and (4) we will conduct sensitivity analyses to test the impact of varying model specifications, such as whether a negative binomial regression model fits the data better than a Poisson regression model or different lags in slope changes for behavior and different impact models (e.g., non-linear trend model, school-year fixed effects model) [88, 89, 92, 93]. (5) Another potential concern in CITS analyses of self-reported sexually violent behavior is systematic biases in reporting, including over time due to the repeated measurement of this sensitive behavior. Increased under-reporting over time could lead to falsely attributing declines in sexually violent behavior to the implementation of GlobalConsent. However, increased under-reporting due simply to the repeated measurement of sensitive behavior should not differ across LIS and HIS groups, such that estimates of differences in implementation effectiveness across groups should not be biased. Also, a review of survey experiments suggests little difference in reports of frequency of sensitive behaviors using standard interview methods versus month-by-month reporting [94]. Still, we will use various strategies shown to improve the reporting of potentially sensitive behaviors, including computer-assisted self-interview, assurances of anonymity, the choice for men to complete surveys in a private location of their choosing, and the use of multiple response options to capture the frequency of reported behaviors [94].
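
The sketch below illustrates how points (1), (3), and (4) might be operationalized: a person-time exposure term, a Poisson versus negative binomial comparison to gauge over-dispersion, and a crude check of residual autocorrelation. Column names are hypothetical, and the full set of adjustments (seasonality, alternative lags, and alternative impact models) is not shown.

```python
# Minimal sketch of selected diagnostics: an exposure term, a Poisson vs negative binomial
# comparison for over-dispersion, and a crude residual autocorrelation check. Hypothetical names.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("student_waves.csv")
formula = "sv_count ~ time + C(university) + post + time:his + post:his"

poisson_fit = smf.glm(formula, data=df, family=sm.families.Poisson(),
                      exposure=df["person_time"]).fit()
nb_fit = smf.glm(formula, data=df, family=sm.families.NegativeBinomial(alpha=1.0),
                 exposure=df["person_time"]).fit()

# Over-dispersion: a Pearson chi-square / df ratio well above 1 favors the negative binomial.
print("Dispersion:", poisson_fit.pearson_chi2 / poisson_fit.df_resid)
print("AIC (Poisson vs NB):", poisson_fit.aic, nb_fit.aic)

# Crude autocorrelation check on wave-level mean deviance residuals (assumes complete cases).
wave_resid = pd.Series(poisson_fit.resid_deviance, index=df.index).groupby(df["time"]).mean()
print("Durbin-Watson:", durbin_watson(wave_resid))
```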

Incremental cost-effectiveness analysis (aim 4)

The cost-effectiveness analysis will be conducted from the payer’s perspective. Program costs will be calculated using a micro-costing approach, multiplying resource use by unit costs. Costs will be divided into two components based on the process necessary to set up and deliver the GlobalConsent program: (1) set-up costs (e.g., initial training and set-up before the start of the program) and (2) program delivery costs (e.g., session time, preparation time, administrative costs, and materials/supplies). Cost data will be cleaned and used to estimate the total costs, set-up costs, and program delivery costs per participant for each of the two groups (HIS vs LIS).

Net costs (the net increase in costs from the HIS vs LIS) and net effectiveness (the difference in the actual post-implementation performance from the projected post-implementation performance in the HIS group, less this same difference in the LIS group) will be used to calculate an incremental cost-effectiveness ratio (ICER) (costs per additional incident of sexually violent behavior averted or costs per additional incident of prosocial intervening behavior increased). Bootstrapping techniques will be used to conduct uncertainty analyses to assess variability in our findings from potential sampling bias.
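
As a simple illustration of the ICER and bootstrap logic, the sketch below assumes a hypothetical participant-level summary file; resampling within arms is a simplification that ignores clustering by university.

```python
# Minimal sketch of the ICER and a bootstrap uncertainty interval (hypothetical column names).
import numpy as np
import pandas as pd

df = pd.read_csv("cost_effectiveness_summary.csv")   # one row per participant:
                                                     # arm ('HIS'/'LIS'), cost, sv_incidents

def icer(data):
    his, lis = data[data.arm == "HIS"], data[data.arm == "LIS"]
    net_cost = his.cost.mean() - lis.cost.mean()                    # net increase in cost, HIS vs LIS
    net_effect = lis.sv_incidents.mean() - his.sv_incidents.mean()  # additional incidents averted
    return net_cost / net_effect                                    # cost per additional incident averted

rng = np.random.default_rng(7)
boot = []
for _ in range(2000):
    resampled = pd.concat(
        [g.sample(frac=1.0, replace=True, random_state=rng) for _, g in df.groupby("arm")]
    )
    boot.append(icer(resampled))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"ICER = {icer(df):.2f}; 95% bootstrap interval ({low:.2f}, {high:.2f})")
```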

Interim analyses {21b}

The team will undertake basic descriptive analyses of baseline data from each target sample. The team also will undertake psychometric analysis (exploratory factor analysis, confirmatory factor analysis, and multiple-group confirmatory factor analysis) of each scale-related item set to assess the measurement invariance of item sets across universities, IS groups, and study waves. The study team will present interim results to the study’s data safety and monitoring board (DSMB) biannually during data collection, and as needed, to the DSMB, Emory IRB, and sponsor if a participant reports unmanageable distress (see the “Composition of the data monitoring committee, its role, and reporting structure {21a}” section). The DSMB and Emory IRB will make independent determinations about the need to terminate the trial, which will be shared with the sponsor for review and a final decision.
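
As one illustration of the scale-level psychometric checks, the sketch below computes Cronbach’s α and a one-factor exploratory solution for a hypothetical item set using the pingouin and factor_analyzer packages; the confirmatory and multiple-group invariance models would typically be fit in dedicated SEM software and are not shown.

```python
# Minimal sketch of baseline psychometric checks for one item set (hypothetical columns).
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("student_baseline.csv")[["rm_1", "rm_2", "rm_3", "rm_4", "rm_5"]]

alpha, alpha_ci = pg.cronbach_alpha(data=items)          # internal consistency
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {alpha_ci[0]:.2f}-{alpha_ci[1]:.2f})")

fa = FactorAnalyzer(n_factors=1, rotation=None)          # exploratory one-factor solution
fa.fit(items.dropna())
print(pd.Series(fa.loadings_.ravel(), index=items.columns))   # item loadings
```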

Methods for additional analyses (e.g., subgroup analyses) {20b}

Each participating university will be invited to propose subgroup analyses of the data collected at their respective university. Proposals for these analyses will be submitted to the SCALE steering committee and reviewed on a rolling basis (Appendix).

Methods for analyses to handle protocol non-adherence and missing data {20c}

To address protocol non-adherence, we will evaluate the balance between study arms (HIS and LIS groups) based on student responses regarding any modifications to program delivery. If necessary, we will adjust for these modifications in our analysis to reduce potential bias. Moreover, we will conduct a per-protocol analysis, including only those participants who adhered to the protocol. Comparing the results of these analyses with analyses of all participants will provide insights into the impact of non-adherence.

Inadvertent missingness will be minimized at the data collection stage by pre-programming the REDCap data system to inform the participant of any missing responses and to invite them to complete their responses before proceeding to the next survey page. Sensitivity analyses will also be conducted, with missing data imputed under the assumption of missingness at random.
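
One way to implement the MAR-based sensitivity analysis is multiple imputation by chained equations with Rubin’s rules for pooling, as sketched below with statsmodels; the variable names are hypothetical, and the linear analysis model is a simplified stand-in for the outcome models described above.

```python
# Minimal sketch of a sensitivity analysis with multiple imputation under MAR,
# using chained equations and Rubin's rules for pooling (hypothetical variable names).
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

df = pd.read_csv("student_analysis_file.csv")[["sv_count", "time", "post", "his"]]

imp = mice.MICEData(df)                                   # chained-equations imputation model
analysis = mice.MICE("sv_count ~ time + post + his + his:post", sm.OLS, imp)   # OLS for simplicity
pooled = analysis.fit(n_burnin=10, n_imputations=20)      # pooled estimates across imputations
print(pooled.summary())
```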

Plans to give access to the full protocol, participant-level data, and statistical code {31c}

The full protocol here, including study forms, will be made available through peer-reviewed publication. Participant-level data will be made available through the National Institute of Mental Health Data Archive (NDA) and Emory dataverse—Emory’s open-data repository—to support the preservation, discoverability, and accessibility of data that team members in this project produce. At the project’s completion, upon publishing the main findings, we will make study documentation, data dictionaries, and the final, cleaned, recoded, and de-identified data available through the NDA and Emory’s dataverse. We will develop a formal data-sharing agreement between key personnel at Emory University, the Center for Creative Initiatives in Health and Population (CCIHP), Georgia State University (GSU), and participating universities. This data sharing plan will describe the subsets of data to be made available to participating universities and the timeline for their release. The data sharing agreement with participating universities will provide standard procedures for applications to use the data and project guidelines for publication. Statistical code for analyses will be made available upon reasonable request to the corresponding author of project-related publications.

Oversight and monitoring

Composition of the coordinating center and trial steering committee {5d}

The overall structure of the study team will include a central steering committee comprised of key personnel, each of whom is responsible for one or more specific aims and/or local implementation of the project (Fig. 3). The principal investigator (PI) at Emory University will chair the steering committee and will provide leadership for specific aims 1 and 3, related to implementation fidelity and effectiveness. The site PIs at CCIHP and at Georgia State University will provide leadership for specific aim 2 on implementation drivers and outcomes. The health economist co-investigator at Georgia State University will provide leadership for specific aim 4 on cost-effectiveness and will provide overall statistical guidance to the study team. The site-PI and co-investigator at CCIHP will provide input on the research design and will lead external implementation support to participating universities in Vietnam. The IT company contracted in Vietnam will deliver the GlobalConsent program to participating students, will send reminders to complete each module, and will track adherence metrics at the student participant level. The steering committee will meet weekly to design, implement, and evaluate this initiative. A project coordinator/data analyst at Emory University will support the steering committee and will receive opportunities for professional development throughout the initiative. Pre-doctoral students at Emory University and Georgia State University will support all aspects of the research, from design to implementation and analysis, and will receive capacity strengthening in research and opportunities for professional development.

Fig. 3

Overall structure of the study team

Composition of the data monitoring committee, its role, and reporting structure {21a}

Roles and membership

An independent data and safety monitoring board (DSMB) will include three experts in (1) implementation science, (2) sexual violence prevention, and (3) biostatistics. This DSMB is charged with reviewing study data for quality and integrity, adherence to the protocol, participant safety, study conduct and progress, and making determinations regarding study continuations, modifications, and suspensions/terminations. The monitoring responsibilities of the DSMB will enhance, but will not replace, the monitoring responsibilities of the principal investigator (PI) and the IRBs for this project. The PI and study team retain responsibility for real-time management of the study.

Independence of DSMB members

DSMB members will be independent from any professional or financial conflict of interest (COI) with the research project and/or study investigators. Independence ensures that competing interests do not unduly influence the DSMB and supports objectivity that enhances the safety of participants and the integrity of the trial data. Potential DSMB members will provide the NIMH with qualifications and a COI statement indicating that members have no direct involvement with the study or COI with the investigators conducting the study. DSMB members may be affiliated with the investigator’s institution or other participating sites but cannot be a scientific collaborator or co-author, supervisor, mentor/mentee, subordinate of the investigators, or a member of the investigator’s institutional department within the last 3 years. DSMB fees will be provided, per NIH and Institutional Policies.

Responsibilities and review

The DSMB will review the data and safety monitoring plan (DSMP) and study protocol before the first participant’s enrollment to establish a charter that clarifies what data points will be monitored, how they will be monitored, and the monitoring schedule. The DSMB review will include, at a minimum: enrollment data, safety data, and data integrity. As this study is blinded, the DSMB may be blinded or unblinded to the intervention assignment but will be able to be unblinded if needed.

Metrics for review

The study team will provide descriptive statistics for the DSMB’s review at each of its meetings. Descriptives will cover questions from the following survey modules for male students and will include information about missingness/non-response.

  1. Demographics (once at baseline)

  2. Knowledge about sexual violence (from each survey wave)

  3. Knowledge about sexual consent (from each survey wave)

  4. Rape Myths (from each survey wave)

  5. Skills to engage in healthy communication (from each survey wave)

  6. Rape empathy (from each survey wave)

  7. Readiness to intervene (from each survey wave)

  8. Bystander intervention strategies (from each survey wave)

  9. Sexual Experiences (from each survey wave)

  10. NIMH CDE GAD-7 (to be administered once)

  11. NIMH CDE PHQ-9 (to be administered once)

  12. NIMH CDE DSM-5 (to be administered once)

  13. WHODAS 2.0 (to be administered once)

  14. Single question on distress (SRQ-20 item 6)

In addition to the above enrollment and safety data, the study team will provide data to the DSMB on the following study-related variables from male students:

  1. Participation rates in the survey, by university

  2. Retention rates in the survey at each wave, by university

  3. Participation rates in the program, by university

  4. Retention rates in the program, by university

Review schedule and monitoring reports

The DSMB meeting/review schedule will be commensurate with the level of risk involved in the study but will occur no less than once per year. Additional reports may be requested, and additional meetings may be called as needed to address issues regarding participant safety. Members of the investigative team may be present for the open portion of a DSMB meeting, but not for the closed deliberations or the vote to recommend continuation, suspension, or termination of the study. The DSMB will issue a monitoring report to the PI after each review/meeting. This report will include any significant actions taken and the final recommendation(s) regarding the study’s continuation. These reports will be submitted to National Institute of Mental Health (NIMH) program staff in the annual progress report. Meetings are planned annually in 2023, 2026, and 2027 (before and after the period of data collection and program implementation) and twice annually in 2024 and 2025 (during the period of data collection and program implementation).

Adverse event reporting and harms {22}

For this study, standard adverse event definitions are used. An adverse event (AE) refers to any unfavorable and unintended sign (including distress), symptom, or disease temporally associated with the use of a medical treatment or procedure, regardless of whether it is considered related to participation in the GlobalConsent program. A serious adverse event (SAE) is any AE that is life-threatening or results in death, an event requiring inpatient hospitalization or prolongation of existing hospitalization, or persistent or significant disability/incapacity.

Adverse events are graded as mild, moderate, or severe. A mild AE is an experience that is transient and requires no special treatment or intervention. The experience does not generally interfere with usual daily activities. This experience includes transient laboratory test alterations. A moderate AE is an experience that is alleviated with simple therapeutic treatments. The experience impacts usual daily activities. This experience includes laboratory test alterations indicating injury, but without long-term risk. A severe AE is an experience that requires therapeutic intervention. The experience interrupts usual daily activities. If hospitalization (or prolongation of hospitalization) is required for treatment, it becomes a SAE.

The study uses a standard adverse-event attribution scale. Not related means that the AE clearly is not related to the study procedures (i.e., another cause of the event is most plausible and/or a clinically plausible temporal sequence is inconsistent with the onset of the event). Possibly related means that the AE follows a reasonable temporal sequence from the initiation of study procedures but could readily have been produced by several other factors. Related means that the AE is clearly related to the study procedures.

A comprehensive list of resources will be made available to every participant toward the end of every survey to ensure that all participants have access to confidential care. In addition, adverse events among students who are receiving the GlobalConsent program will be identified with a series of self-report questions at the end of every survey. These questions will identify the level of distress the participant is experiencing in the moment and its self-reported manageability. Any participant who reports extreme distress (level 10 on a scale of 1–10) or any level of distress that they identify as “not at all manageable” will be given a contact number for an experienced professional for immediate support. Such participants also will be invited to have a non-study staff member connect them with an experienced professional for a confidential assessment and then appropriate referrals.
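
The referral rule described above is a simple threshold check; the sketch below shows one way it might be encoded during survey post-processing, with hypothetical field names and response labels.

```python
# Minimal sketch of the distress-referral rule described above (field names hypothetical):
# flag extreme distress (10 on a 1-10 scale) or distress reported as "not at all manageable".
def needs_referral(distress_level: int, manageability: str) -> bool:
    return distress_level >= 10 or manageability.strip().lower() == "not at all manageable"

# Flagged participants receive the support contact number and an offer to be connected
# with an experienced professional by a non-study staff member.
print(needs_referral(10, "somewhat manageable"))    # True
print(needs_referral(6, "not at all manageable"))   # True
print(needs_referral(4, "mostly manageable"))       # False
```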

Management of risks to participants

Expected adverse events

Expected adverse events associated with the use of the web-based sexual violence prevention program include (1) mild distress from participation and potential recall of incidents of sexual violence and (2) loss of confidentiality related to sexually violent behavior reported by a participant.

Adverse event management

All adverse events will be reported to the Institutional Review Boards of Emory University and the Hanoi University of Public Health, our DSMB, and our sponsor, following the National Institute of Mental Health (NIMH) Reportable Events Policy (https://www.nimh.nih.gov/funding/clinical-research/nimh-reportable-events-policy). The trial IRBs and DSMB will make independent determinations about the AE and will make recommendations regarding next steps in the study and data collection. The study team will refer any participant who experiences a stress-related adverse event to confidential counseling in Vietnam, following the procedures described above.

Frequency and plans for auditing trial conduct {23}

The study does not have plans for a formal, independent audit; however, several independent bodies will conduct regular reviews of the trial’s conduct and quality. First, the Emory IRB will provide independent, ongoing oversight of implementation of the IRB-approved study protocol through its annual review. A logging system ensures real-time documentation of all communications between study staff and the Emory IRB, including initial protocol submission; IRB review and approval; and submission, review, and approval of all amendments to the IRB-approved study protocol. Second, an independent data safety and monitoring board (DSMB) will review data quality and study progress, will make assessments regarding any reported adverse events, will ensure that all participants receive appropriate support and care, and will submit separate determinations regarding study continuation or discontinuation during the annual report to the NIMH. The DSMB may request documentation of consent and documentation of data management. The REDCap data system is HIPAA-compliant, securely stores all study data, and includes an automated logging function that documents any modification that participants and user-defined study staff make to study data in REDCap. The project officer at NIMH will review all DSMB reports to ensure compliance. Emory requires that all investigators complete an annual financial interest disclosure.

Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25}

Important protocol amendments will be submitted to the study IRBs, the DSMB, and the study sponsor as they arise, so all entities have an opportunity to offer feedback. Trial modifications also will be recorded on the ClinicalTrials.gov trial registration site.

Dissemination plans {31a}

The final data from this study will include the following from six universities in Vietnam: two rounds of key informant interviews and surveys with 30 university leaders (60 interviews), four rounds of focus group discussions and surveys with six implementation teams (with approximately 8 members each), three rounds of climate surveys with university faculty, and six 6-monthly surveys with male university students. We will work with the NIMH Data Archive (NDA) and Emory dataverse to support the preservation, discoverability, and accessibility of data that team members in this project produce. At the project’s completion, we will publish the main findings in peer-reviewed journals. We then will make study documentation, data dictionaries, and the final, cleaned, recoded, and de-identified data available through the NDA and Emory’s dataverse. We will develop a formal data-sharing agreement between collaborators at Emory University, the Center for Creative Initiatives in Health and Population (CCIHP), Georgia State University (GSU), and all participating universities. This plan will describe what subsets of data will be shared with participating universities for analysis and publication, the timeline for release, and the process of proposing original analyses and for requesting project data from the key personnel team. The plan will prioritize the protection of participant confidentiality, integrity of the study design, equity among participating universities, and appropriate oversight of data use by key personnel.

Aggregate project findings also may be disseminated widely as working papers on institutional websites, presentations at international and regional scientific meetings, dissemination workshops in Vietnam and the USA, and articles in peer-reviewed journals in the social and behavioral sciences and public health. We will host dissemination seminars at participating universities, at which the study findings will be shared and discussed with university leadership, to guide university policy, campus climate surveys, and campus programming to reduce the incidence of campus sexual violence, with robust attention to the experiences of students. As appropriate, we also will engage regional and national officials in dialogue about the findings, to support evidence-based policies that improve the environment for implementing sexual violence-prevention programs on university campuses in Vietnam.

Discussion

Our proposed project will be the first to assess two multifaceted implementation strategies to deliver a theoretically grounded, efficacious web-based sexual violence prevention program to undergraduate men attending six universities across Vietnam. If successful, our multidisciplinary, cross-cultural team will be the first to bring rigorous evidence to university and national leaders of the contextual effectiveness of these strategies for delivering web-based sexual violence prevention programming to large populations of men in adolescence, a period of heightened risk for sexually violent behavior. Our choice to develop, test, and scale GlobalConsent with universities in Vietnam is strategic, given the scale of sexual violence among young people, rapidly expanding rates of university attendance, and the openness of several university leaders to efficacious programming about sexual violence. Our choice to engage universities across all regions of Vietnam provides a novel test of these implementation strategies in different structural and sociopolitical environments, with promise to advance sexual violence prevention policies in university systems at regional and national levels. Evidence for the effectiveness and incremental cost-effectiveness of these implementation strategies across regions will pave the way for GlobalConsent to address an important, gendered risk factor for chronic mental, physical, and behavioral health conditions over the life course. Thus, by providing novel evidence about how best to bring GlobalConsent to scale nationally, our team has the potential to reduce gender-related health inequities and to improve quality of life by averting acts of sexual violence that may lead to chronic health conditions over the life course among victims. By partnering with universities engaged in CONVERGE, an ongoing violence-prevention training program in Vietnam (D43TW012188), these innovations will be achieved through synergistic investments to strengthen local capacity for implementation research, data harmonization, and stakeholder engagement to manage and to prevent sexually violent behavior in young people.

Trial status

Protocol version number 1

Start date of recruitment: March 26, 2024, 8 pm Eastern Time (March 27, 2024, 8 am Vietnam time).

Approximate end date of recruitment: June 2028.

Availability of data and materials {29}

The study forms are available in the Appendix of this protocol, and the study data will be made publicly available through the National Institute of Mental Health Data Archive (NDA) and the Emory Dataverse.

Abbreviations

AE:

Adverse event

AIDS:

Acquired immune deficiency syndrome

C:

Continuously measured

CCIHP:

Center for Creative Initiatives in Health and Population

CITS:

Comparative interrupted time series

DD:

Difference-in-difference model

DSMB:

Data safety and monitoring board

EPIS:

Evidence-based Practice Implementation in public Service sectors

ERIC:

Expert Recommendations for Implementing Change

FGD:

Focus group discussion

FRAME:

Framework for Reporting Adaptations and Modifications-Expanded

GSU:

Georgia State University

HIS:

High-intensity implementation strategies

HIV:

Human immunodeficiency virus

ICER:

Incremental cost-effectiveness ratio

IRB:

Institutional Review Board

IS:

Implementation strategies

IT:

Information technology

KII:

Key informant interview

LIS:

Low-intensity implementation strategies

LMIC:

Low- and middle-income countries

NASEM:

National Academies of Science, Engineering, and Medicine

NDA:

NIMH Data Archive

NIH:

National Institutes of Health

NIMH:

National Institute of Mental Health

OR:

Odds ratio

PEPFAR:

President’s Emergency Plan for AIDS Relief

RE-AIM:

Reach-Effectiveness-Adoption-Implementation-Maintenance

SAMSHA:

Substance Abuse and Mental Health Services Administration

SCALE:

Strategies for Implementing GlobalConsent to Prevent Sexual Violence in University Men

SPIRIT:

Standard Protocol Items: Recommendations for Interventional Trials

TA:

Technical assistance

TOR:

Terms of reference

US:

United States

USAID:

United States Agency for International Development

WHO:

World Health Organization

References

  1. Basile KC, et al. Sexual violence surveillance: uniform definitions and recommended data elements. National Center for Injury Prevention and Control, Centers for Disease Control and Prevention: Atlanta; 2014.

  2. Coulter RWS, Rankin SR. College sexual assault and campus climate for sexual- and gender-minority undergraduate students. Journal of interpersonal violence. 2020;35(5–6):1351–66.

  3. Borumandnia N, et al. The prevalence rate of sexual violence worldwide: a trend analysis. BMC public health. 2020;20(1):1–7.

  4. Wright LA, Zounlome NO, Whiston SC. The effectiveness of male-targeted sexual assault prevention programs: A meta-analysis. Trauma, Violence, & Abuse. 2020;21(5):859–69.

  5. Krahé B, et al. Prevalence and correlates of young people’s sexual aggression perpetration and victimisation in 10 European countries: a multi-level analysis. Culture, Health & Sexuality. 2015;17(6):682–99.

  6. Decker M, et al. Gender-based violence against adolescent and young adult women in low- and middle-income countries. Journal of Adolescent Health. 2015;56(2):188–96.

  7. Muehlenhard CL, et al. Evaluating the one-in-five statistic: women’s risk of sexual assault while in college. The Journal of Sex Research. 2017;54(4–5):549–76.

  8. Rennison C. Rape and sexual assault: reporting to police and medical attention, 1992–2000. Washington, DC: US Department of Justice, Office of Justice Programs; 2002.

  9. Fulu E, et al. Prevalence of and factors associated with male perpetration of intimate partner violence: findings from the UN Multi-country Cross-sectional Study on Men and Violence in Asia and the Pacific. The Lancet Global Health. 2013;1(4):e187–207.

  10. Jewkes R, et al. Prevalence of and factors associated with non-partner rape perpetration: findings from the UN Multi-country Cross-sectional Study on Men and Violence in Asia and the Pacific. The Lancet Global Health. 2013;1(4):e208–18.

  11. MOLISA, GSO, and UNFPA, National study on violence against women in Vietnam 2019 - journey for change. 2020, UNFPA: Hanoi, Vietnam.

  12. Decker MR, et al. Prevalence and health impact of intimate partner violence and non-partner sexual violence among female adolescents aged 15–19 years in vulnerable urban environments: a multi-country study. Journal of Adolescent Health. 2014;55(6):S58–67.

  13. Yount KM, Krause KH, Miedema SS. Preventing gender-based violence victimization in adolescent girls in lower-income countries: Systematic review of reviews. Social Science & Medicine. 2017;192:1–13.

  14. Mujal GN, et al. A systematic review of bystander interventions for the prevention of sexual violence. Trauma, Violence, & Abuse. 2021;22(2):381–96.

  15. Graham LM, et al. Evaluations of prevention programs for sexual, dating, and intimate partner violence for boys and men: a systematic review. Trauma, Violence, & Abuse. 2021;22(3):439–65.

  16. Pérez-Martínez, V., et al., Positive masculinities and gender-based violence educational interventions among young people: a systematic review. Trauma, Violence, & Abuse, 2021: p. 15248380211030242.

  17. Anderson EJ, et al. Web-based and mHealth interventions for intimate partner violence victimization prevention: a systematic review. Trauma, Violence, & Abuse. 2021;22(4):870–84.

  18. Bandura A. Health promotion by social cognitive means. Health Education and Behavior. 2004;2:143–64.

  19. Fabiano PM, et al. Engaging men as social justice allies in ending violence against women: evidence for a social norms approach. Journal of American College Health. 2003;52(3):105–12.

  20. Banyard VL, Moynihan MM, Plante EG. Sexual violence prevention through bystander education: an experimental evaluation. Journal of Community Psychology. 2007;35(4):463–81.

  21. Bandura A. Social cognitive theory: an agentic perspective. Annual Review of Psychology. 2001;52(1):1–26.

  22. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41(3):327–50.

  23. Fixsen, D.L., et al., Implementation research: a synthesis of the literature. 2005, University of South Florida, Louis de la Parte Florida Mental Health Institute, the National Implementation Research Network.

  24. Glasgow R, Vogt T, Boles S. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health. 1999;89:1322–7.

  25. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. The Joint Commission Journal on Quality and Patient Safety. 2008;34(4):228–43.

  26. McCreight MS, et al. Using the Practical, Robust Implementation and Sustainability Model (PRISM) to qualitatively assess multilevel contextual factors to help plan, implement, evaluate, and disseminate health services programs. Translational Behavioral Medicine. 2019;9(6):1002–11.

  27. Aarons GA, Hurlburt M, Horwitz SMC. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:4–23.

  28. Fixsen DL, et al. Core implementation components. Research on Social Work Practice. 2009;19(5):531–40.

  29. Kirchner, J.E., et al., Getting a clinical innovation into practice: an introduction to implementation strategies. Psychiatry Research, 2020. 283.

  30. Powell BJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science. 2015;10(1):1–14.

    Article  Google Scholar 

  31. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. American journal of public health. 2013;103(6):e38–46.

    Article  PubMed  PubMed Central  Google Scholar 

  32. Proctor E, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(2):65–76.

    Article  PubMed  Google Scholar 

  33. Yount KM, et al. Preventing sexual violence in college men: a randomized-controlled trial of GlobalConsent. BMC Public Health. 2020;20(1):1–19.

    Article  Google Scholar 

  34. Nadeem E, et al. A literature review of learning collaboratives in mental health care: used but untested. Psychiatric Services. 2014;65(9):1088–99.

    Article  PubMed  Google Scholar 

  35. Yount KM, et al. Preventing sexual violence in Vietnam: qualitative findings from high school, university, and civil society key informants across regions. BMC Public Health. 2023;23(1):1114.

    Article  PubMed  PubMed Central  Google Scholar 

  36. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implementation Science. 2019;14(1):58.

    Article  PubMed  PubMed Central  Google Scholar 

  37. Stoner L, et al. Global citizenship is key to securing global health: the role of higher education. Preventive medicine. 2014;64:126–8.

    Article  PubMed  Google Scholar 

  38. Schofer E, Meyer JW. The worldwide expansion of higher education in the twentieth century. American sociological review. 2005;70(6):898–920.

    Article  Google Scholar 

  39. Barakat B, Shields R. Just another level? Comparing quantitative patterns of global expansion of school and higher education attainment. Demography. 2019;56(3):917–34.

    Article  PubMed  Google Scholar 

  40. Curran G, et al. Effectiveness implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

    Article  PubMed  PubMed Central  Google Scholar 

  41. Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. International Journal of Epidemiology. 2017;46(1):348–55.

    PubMed  Google Scholar 

  42. Glasgow RE, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Frontiers in Public Health. 2019;7:64.

    Article  PubMed  PubMed Central  Google Scholar 

  43. Charles JM, et al. Micro-costing in public health economics: steps towards a standardized framework, using the incredible years toddler parenting program as a worked example. Prevention Science. 2013;14(4):377–89.

    Article  CAS  PubMed  Google Scholar 

  44. Foster EM, et al. The costs of a public health infrastructure for delivering parenting and family support. Children and Youth Services Review. 2008;30:493–501.

    Article  PubMed  PubMed Central  Google Scholar 

  45. Yount, K.M., et al., Consortium for Violence Prevention Research, Implementation, and Leadership Training for Excellence (CONVERGE): a protocol to train science leaders in gender-based-violence and violence-against-children research for impact. Implementation, and Leadership Training for Excellence (CONVERGE): A Protocol to Train Science Leaders in Gender-Based-Violence and Violence-Against-Children Research for Impact (August 29, 2022), 2022.

  46. Hamilton AB, Finley EP. Reprint of: Qualitative methods in implementation research: an introduction. Psychiatry Research. 2020;283:112629.

    Article  PubMed  Google Scholar 

  47. Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qualitative Health Research. 2015;26(13):1753–60.

    Article  Google Scholar 

  48. Palinkas LA, et al. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2015;42(5):533–44.

    Article  PubMed  Google Scholar 

  49. Harris PA, et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42(2):377–81.

    Article  PubMed  Google Scholar 

  50. Yount, K.M., et al., Theoretical mediators of GlobalConsent: an adapted web-based sexual violence prevention program for university men in Vietnam. Social Science & Medicine, 2022: p. 115402.

  51. Yount KM, et al. Impacts of GlobalConsent, a web-based social norms edutainment program, on sexually violent behavior and bystander behavior among university men in Vietnam: randomized controlled trial. JMIR Public Health Surveill. 2023;9:e35116.

    Article  PubMed  PubMed Central  Google Scholar 

  52. Peterson AE, et al. Predicting the long-term sustainability of evidence-based practices in mental health care: an 8-year longitudinal analysis. The Journal of Behavioral Health Services & Research. 2014;41(3):337–46.

    Article  Google Scholar 

  53. Us, I.O. 2022; Available from: https://www.itsonus.org/.

  54. Technologies, B.S. Be Real: RealConsent(R) sexual assault prevention and alcohol education all-in-one online college program. 2022.

  55. Policy, N.I.o.M.H.R.E., 2015.

  56. Guest G, Bunce A, Johnson L. How many interviews are enough?: An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

    Article  Google Scholar 

  57. Krueger, R.A., Focus groups: a practical guide for applied research. 2014: Sage publications.

  58. Huntink E, et al. Stakeholders’ contributions to tailored implementation programs: an observational study of group interview methods. Implementation Science. 2014;9(1):185.

    Article  PubMed  PubMed Central  Google Scholar 

  59. Stirman SW, et al. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science : IS. 2013;8:65–65.

    Article  PubMed  PubMed Central  Google Scholar 

  60. Marques L, et al. Provider fidelity and modifications to cognitive processing therapy in a diverse community health clinic: associations with clinical change. Journal of Consulting and Clinical Psychology. 2019;87(4):357–69.

    Article  PubMed  PubMed Central  Google Scholar 

  61. National Academies of Sciences, E., and Medicine,, Measuring sex, gender identity, and sexual orientation. 2022, Washington, DC: The National Academies Press.

  62. Mennicke A, et al. Evaluation of a social norms sexual violence prevention marketing campaign targeted toward college men: attitudes, beliefs, and behaviors over 5 years. Journal of Interpersonal Violence. 2021;36(7–8):NP3999–4021.

    PubMed  Google Scholar 

  63. Maxwell CD, Robinson AL, Post LA. The nature and predictors of sexual victimization and offending among adolescents. Journal of Youth and Adolescence. 2003;32(6):465–77.

    Article  Google Scholar 

  64. McMahon S. Rape myth beliefs and bystander attitudes among incoming college students. Journal of American College Health. 2010;59(1):3–11.

    Article  PubMed  Google Scholar 

  65. Lanier, C.A. and M.N. Elliot, A new instrument for the evaluation of a date rape prevention program. Journal of College Student Development, 1997.

  66. Aarons GA, Ehrhart MG, Farahnak LR. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership. Implementation Science. 2014;9(1):45.

    Article  PubMed  PubMed Central  Google Scholar 

  67. Palinkas LA, et al. Measuring collaboration and communication to increase implementation of evidence-based practices: the cultural exchange inventory. Evidence & Policy. 2018;14:35–61.

    Article  Google Scholar 

  68. Jacobs SR, Weiner BJ, Bunger AC. Context matters: measuring implementation climate among individuals and groups. Implementation Science. 2014;9:1–14.

    Article  Google Scholar 

  69. Weiner BJ, et al. Psychometric assessment of three newly developed implementation outcome measures. Implementation Science. 2017;12:1–12.

    Article  Google Scholar 

  70. Burn SM. A situational model of sexual assault prevention through bystander intervention. Sex Roles. 2009;60(11):779–92.

    Article  Google Scholar 

  71. Koss MP, et al. Revising the SES: a collaborative process to improve assessment of sexual aggression and victimization. Psychology of Women Quarterly. 2007;31(4):357–70.

    Article  Google Scholar 

  72. Humphreys TP, Brousseau MM. The sexual consent scale–revised: development, reliability, and preliminary validity. Journal of Sex Research. 2010;47(5):420–8.

    Article  PubMed  Google Scholar 

  73. Bergenfeld, I., T.H. Minh, and K.M. Yount, Measuring rape empathy among university men in Vietnam. Psychological Test Adaptation and Development, 2023.

  74. Bergenfeld I, et al. Measuring sexual communication in adolescent dating relationships in Vietnam: development and validation of the sexual communications scales for attitudes, self-efficacy, and behavior. Communication Studies. 2022;73(4):380–96.

    Article  Google Scholar 

  75. Banyard VL, et al. How do we know if it works? Measuring outcomes in bystander-focused abuse prevention on campuses. Psychology of Violence. 2014;4(1):101.

    Article  Google Scholar 

  76. Deitz, S.R., et al., Measurement of empathy toward rape victims and rapists. 1982;43(2):372.

  77. Finley EP, et al. Periodic reflections: a method of guided discussions for documenting implementation phenomena. BMC Medical Research Methodology. 2018;18(1):153.

    Article  PubMed  PubMed Central  Google Scholar 

  78. Mulhall A. In the field: notes on observation in qualitative research. Journal of Advanced Nursing. 2003;41(3):306–13.

    Article  PubMed  Google Scholar 

  79. McDonald WJ. Focus group research dynamics and reporting: an examination of research objectives and moderator influences. Journal of the Academy of Marketing Science. 1993;21(2):161–8.

    Article  Google Scholar 

  80. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods. 2006;5(1):80–92.

    Article  Google Scholar 

  81. Hamilton, A., Qualitative methods in rapid turn-around health services research. Health services research & development cyberseminar, 2013.

  82. Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Analyzing qualitative data. Routledge; 2002. p. 187–208.

    Google Scholar 

  83. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qualitative Health Research. 2002;12(6):855–66.

    Article  PubMed  Google Scholar 

  84. Faul F, et al. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods. 2007;39:175–91.

    Article  PubMed  Google Scholar 

  85. Stuart EA, et al. Using propensity scores in difference-in-differences models to estimate the effects of a policy change. Health Serv Outcomes Res Methodol. 2014;14(4):166–82.

    Article  PubMed  PubMed Central  Google Scholar 

  86. Leyrat C, et al. Cluster randomized trials with a small number of clusters: which analyses should be used? International Journal of Epidemiology. 2017;47(1):321–31.

    Article  Google Scholar 

  87. Brewer, M., T.F. Crossley, and R. Joyce, Inference with difference-in-differences revisited. Journal of Econometric Methods, 2018. 7(1).

  88. Bhaskaran K, et al. Time series regression studies in environmental epidemiology. International Journal of Epidemiology. 2013;42(4):1187–95.

    Article  PubMed  PubMed Central  Google Scholar 

  89. Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol. 2017;46(1):348–55.

    PubMed  Google Scholar 

  90. Nelson BK. Time series analysis using autoregressive integrated moving average (ARIMA) models. Academic Emergency Medicine. 1998;5(7):739–44.

    Article  CAS  PubMed  Google Scholar 

  91. Prais, S.J. and C.B. Winsten, Trend estimators and serial correlation. 1954, Cowles Commission discussion paper Chicago.

  92. Lopez Bernal JA, et al. The effect of the late 2000s financial crisis on suicides in Spain: an interrupted time-series analysis. The European Journal of Public Health. 2013;23(5):732–6.

    Article  PubMed  Google Scholar 

  93. Turner SL, et al. Comparison of six statistical methods for interrupted time series studies: empirical evaluation of 190 published series. BMC Medical Research Methodology. 2021;21(1):1–19.

    Google Scholar 

  94. Gomes HS, et al. Measurement bias in self-reports of offending: a systematic review of experiments. Journal of Experimental Criminology. 2019;15(3):313–39.

    Article  Google Scholar 

Download references

Acknowledgements

The authors thank Charlotte Harmon, Steve Koogler, and Callen Maxwell for research administrative support at Emory University. We also thank Katherine Anderson for early support during study preparation.

Funding

Funding for this project is provided by the National Institute of Mental Health (NIMH) under project number R01MH133259 (PI: Kathryn Yount).

Author information

Contributions

KY (principal investigator): conceptualization; methodology; software; resources; writing—original draft; writing—review and editing; visualization; supervision; project administration; funding acquisition. DW (site PI): conceptualization; methodology; writing—review and editing; funding acquisition. XF (co-investigator): methodology; software; formal analysis; writing—review and editing. QTT (senior research scientist): investigation; resources; project administration. MM (research manager): methodology; software; project administration; writing—review and editing. THM (site PI): investigation; supervision; project administration. All authors read and approved the final manuscript.

Authors’ information

Kathryn M. Yount, PhD, is the Asa Griggs Candler Professor of Global Health (2012) and Professor of Global Health and Sociology (2015) at Emory University. Her research centers on the social determinants of women’s health, including mixed-methods evaluations of social-norms and empowerment-based programs to reduce gender-based violence and health disparities in underserved populations. She has been funded continuously since 2002 by US federal agencies, private foundations, and foreign agencies to work in parts of Asia, Latin America, the Middle East, Sub-Saharan Africa, and underserved communities in Atlanta. These collaborations have culminated in more than 300 publications in the social sciences and global health. Yount is the recipient of several university-wide honors and awards, including Emory’s Women of Excellence Award for Mentorship (2016), the Marion V. Creekmore Award for Internationalization (2021), the Eleanor Main Graduate Mentor Award of the James T. Laney School of Graduate Studies (2022–2023), and the Albert E. Levy Senior Faculty Award for Excellence in Science (2024). She has been a member of the National Academies of Sciences, Engineering, and Medicine’s Panel Study of Women’s Empowerment, Population Dynamics, and Socioeconomic Development (2022–2024) and was elected in 2024 as a Fellow of the American Association for the Advancement of Science for “distinguished contributions to women’s global health research, especially advances in etiologic and experimental studies of women’s empowerment and gender-based violence prevention.”

Daniel J. Whitaker, PhD, is a Distinguished University Professor at Georgia State University and serves as the Associate Dean for Research in the School of Public Health. His research focuses on interventions to address family violence, specifically child maltreatment and intimate partner violence, and on their implementation. Most recently, his work has focused on understanding the implementation of interventions to prevent child maltreatment, with funding from NIH, CDC, PCORI, and AHRQ. Whitaker also directs the National SafeCare Training and Research Center, which has disseminated the SafeCare parenting model to prevention/intervention systems in 33 US states and in several non-US countries.

Xiangming Fang, PhD, is a Research Associate Professor of Health Economics at Georgia State University’s School of Public Health. He holds dual appointments at China Agricultural University and the University of Edinburgh. Dr. Fang is an expert on the economic impact of violence, focusing on the economic burden and quality-of-life impacts of violence against children. He has published over 130 works and serves on the editorial boards of six journals, including Child Abuse & Neglect, Child Protection and Practice, and the International Journal on Child Maltreatment: Research, Policy and Practice. He is also an Advisory Board Member for Words Matter Charity.

Quach Thu Trang, MA, has expertise in qualitative research and in designing and implementing intervention programs in the fields of gender-based violence (GBV) and sexuality education. Over 25 years, she has contributed to the development of GBV prevention programs in Vietnam, including GlobalConsent, a web-based edutainment program on sexual violence for male students (Emory University, 2017–2021); GBV action research supporting women, persons with disabilities, male perpetrators, LGBT people, and health-staff capacity building (USAID, 2019–2025; Ford Foundation, 2006–2012); SEA regional curriculum development on masculinities and GBV (UNDP/P4P, 2010–2012); and membership in the Development Group for the WHO Guideline on SRH and Rights of Women Living with HIV (2016).

Meghan Macaulay, MPH, is a research projects manager at the Rollins School of Public Health at Emory University. She currently supports two NIH-funded research and training projects focusing on sexual violence and gender-based violence. Prior to her current position, she was an epidemiologist with the Oklahoma State Department of Health, supporting state-level surveillance efforts of unintentional injuries and drug overdose.

Minh Tran Hung is a medical doctor, researcher, and public health professional with more than 30 years of professional experience in research and evaluations of various public health programs. He has expertise in the design, management, analysis, and dissemination of a wide range of quantitative and qualitative research projects. He is the author of numerous international peer-reviewed journal articles, working papers, and research reports focusing on reproductive health, gender, and violence.

Corresponding author

Correspondence to Kathryn M. Yount.

Ethics declarations

Ethics approval and consent to participate {24}

The Institutional Review Boards (IRBs) of Emory University (STUDY00006481: SCALE) and the Hanoi University of Public Health (412/2023/YTCC-HD3) provided ethical approval of the study protocol, and ethical approval will be renewed annually. Following the approved protocol, written informed consent to participate will be obtained from all survey participants, and verbal informed consent will be obtained from all participants in the qualitative research.

Consent for publication {32}

The study team is willing to provide a model consent form upon request.

Competing interests {28}

KMY has submitted a “discovery statement” to the Office of Technology Transfer at Emory University for the development of GlobalConsent. Otherwise, the authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Yount, K.M., Whitaker, D.J., Fang, X. et al. Strategies for Implementing GlobalConsent to Prevent Sexual Violence in University Men (SCALE): study protocol for a national implementation trial. Trials 25, 571 (2024). https://doi.org/10.1186/s13063-024-08401-5


Keywords