Digital endpoints in clinical trials: emerging themes from a multi-stakeholder Knowledge Exchange event

Abstract

Background

Digital technologies, such as wearable devices and smartphone applications (apps), can enable the decentralisation of clinical trials by measuring endpoints in people’s chosen locations rather than in traditional clinical settings. Digital endpoints can allow more frequent and more sensitive measurement of health outcomes than visit-based endpoints, which provide an episodic snapshot of a person’s health. However, there are underexplored challenges in this emerging space that require interdisciplinary and cross-sector collaboration. A multi-stakeholder Knowledge Exchange event was organised to facilitate conversations across silos within this research ecosystem.

Methods

A survey was sent to an initial list of stakeholders to identify potential discussion topics. Additional stakeholders were identified through iterative discussions on perspectives that needed representation. Co-design meetings with attendees were held to discuss the scope, format and ethos of the event. The event itself featured a cross-disciplinary selection of talks, a panel discussion, small-group discussions facilitated via a rolling seating plan and audience participation via Slido. A transcript was generated from the day, which, together with the output from Slido, provided a record of the day’s discussions. Finally, meetings were held after the event to identify the key challenges for digital endpoints that emerged, along with reflections and recommendations for dissemination.

Results

Several challenges for digital endpoints were identified in the following areas: patient adherence and acceptability; algorithms and software for devices; design, analysis and conduct of clinical trials with digital endpoints; the environmental impact of digital endpoints; and the need for ongoing ethical support. Learnings for next-generation events include the need to involve additional stakeholder perspectives, such as those of funders and regulators, and the need for additional resources and facilitation to allow patient and public contributors to engage meaningfully during the event.

Conclusions

The event emphasised the importance of consortium building and highlighted the critical role that collaborative, multi-disciplinary, and cross-sector efforts play in driving innovation in research design and strategic partnership building moving forward. This necessitates enhanced recognition by funders to support multi-stakeholder projects with patient involvement, standardised terminology, and the utilisation of open-source software.

Background

Engagement and exchange in the context of digital transformation

The digital transformation of health and social care services is now a key priority across healthcare systems and wider government infrastructure [1, 2]. Enabling digital innovation in the conduct of clinical trials to streamline the discovery and delivery of new interventions requires collaboration among several stakeholders, including healthcare providers, technology experts, researchers, regulators, policymakers and the public. Knowledge exchange activities and processes are crucial for fostering the generation and sharing of knowledge among diverse stakeholders from various sectors and disciplines and propelling progress [3]. While there is an emerging body of evidence in the implementation and evaluation of these activities in health care [4,5,6] and digital health [7,8,9], there is a need to highlight case examples of knowledge exchange activities so that learnings for effective practice can be shared. This article describes a Knowledge Exchange event on Digital Endpoints, outlining how the event was organised and facilitated, key themes that were discussed, and reflections and recommendations for future multi-stakeholder events.

Digital endpoints as focus for Knowledge Exchange

New interventions are tested in clinical trials to evaluate whether they have a specific effect on patients’ health outcomes. These health outcomes are called endpoints. Digital endpoints are novel endpoints that are measured using digital technologies such as wearable devices or smartphone applications (apps) and do not require assessment in a clinical setting [10]. Examples include digital walk tests to measure exercise capacity [11], physical activity measures captured via wrist-worn accelerometers [12], electronic patient-reported outcomes completed via apps [13] and digital assessments of motor symptom severity [14]. Digital endpoints offer opportunities to change the quantity and quality of data collection and can improve patient experiences in trials. Compared to episodic in-clinic assessments such as the 6-Minute Walk Test or polysomnography, endpoints captured by digital technologies typically allow for substantially more frequent measurement of health outcomes, in an individual’s chosen location(s) (instead of at the clinic) [15]. This is an example of a decentralised component of a trial, where the trial activity (in this case, data collection) occurs at locations other than traditional clinical trial sites [16]. Digital endpoints can reduce burden on patients and their carers, and the captured data may more realistically reflect individuals’ experiences [17] and may reduce financial costs [18] and environmental impacts of trials [19]. There have been key developments from regulators for digital endpoints, such as the approval of Stride Velocity 95th Centile by the European Medicines Agency (EMA) as a primary endpoint for Duchenne Muscular Dystrophy in pivotal or exploratory drug therapeutic studies [20], and development of guidelines on the use of Digital Health Technologies in clinical investigations by the Food and Drug Administration (FDA) [21].

There are several challenges to widespread adoption of digital endpoints. Known challenges include questions around privacy and potential to exacerbate unequal access to digital technology [22, 23], the limited number of regulatory-approved devices and digital endpoints (for pivotal and licensing trials), and lack of unified terminology around digital endpoints [24]. Since addressing such challenges requires diverse expertise and a range of stakeholder perspectives, interdisciplinary collaborations have been called for [10, 25, 26]. Key perspectives in the ecosystem for the development, validation, and use of digital endpoints include clinicians, patients, statisticians, computer scientists, ethicists, regulators and health economists, among others. We refer to these perspectives as stakeholder perspectives and refer to communities which consist largely of one stakeholder perspective as a silo or stakeholder silo.

In November 2023, a Knowledge Exchange event on digital endpoints was organised in Cambridge, UK. This event, featuring a co-design process with session speakers, aimed to facilitate collaboration within and between stakeholder silos for a more comprehensive understanding of emerging challenges for digital endpoints.

Methods

Curation of attendee list

The curation of the attendee list was an iterative process which took several months. Organisers sent an initial survey to key stakeholders within their network, asking for ideas for potential talks and discussion topics. From this core group, organisers identified stakeholder perspectives that needed representation and extended invitations to individuals who could represent these perspectives. Discussions often led to the discovery of additional stakeholder perspectives that the organisers had not originally considered. These included perspectives on Core Outcome Sets, which are an agreed standard set of outcomes that should be measured and reported, as a minimum, in trials for a specific area of health or health care [27], and greener research practices for digital endpoints.

To include the patient and public perspective at the event, details of the event were circulated in a newsletter for the Cambridge University Hospitals Patient and Public Involvement (PPI) Panel (two attendees participated). A clinician attending the event invited participants from his ongoing Pulmonary Hypertension studies. One participant attended the event and two participants provided feedback, which was integrated into the clinician’s presentation in the form of a video and comments. An additional patient representative was invited through connections with the investigator of a hypoglycaemia study which involved continuous glucose monitoring.

Organisers endeavoured to keep the size of the group relatively small and aimed for approximately 30 attendees so that discussions could be meaningful.

Co-design meetings

Prior to the event, organisers held co-design meetings with speakers in each session to discuss speakers’ perspectives on key challenges and the format of the session. These co-design sessions emphasised the importance of sharing opinions and experiential knowledge during the event and encouraged speakers to focus their talks on questions such as: What keeps you up at night when it comes to digital endpoints? What are your concerns around digital endpoints that you would like other stakeholders to realise? The co-design sessions allowed for networking to take place before the event and helped to set intentions for an open and collaborative atmosphere.

Event structure

The event was structured into four sessions, as detailed in Table A1 in the Supplementary Information. The first three sessions featured three short talks followed by small-group sessions. The last session included a keynote talk and a panel-led discussion.

Table 1 Details of Knowledge Exchange event

Organisers facilitated interdisciplinary discussions through rolling seating arrangements and the use of technology. In the first session, attendees were grouped into discussion tables corresponding to stakeholder silos. This allowed them to reflect on the talks from their specific stakeholder perspective. In the second and third sessions, attendees were grouped into cross-sector and cross-disciplinary tables to facilitate networking and exchange across silos. The interactive platform, Slido, was used throughout the day. It collated information from the whole group in the form of word clouds (see Figs. A1 and A2 in the Supplementary Information) and facilitated collection of questions and reflections from discussion tables. Questions and reflections were projected on the screen, which allowed attendees to see topics discussed in tables other than their own. A selection of questions was posed to the panel as discussion questions at the end of the day.

Table 1 provides key details of the Knowledge Exchange event.

Feedback processes

After the event, a feedback form was sent to attendees, asking whether the event led to an exchange of stakeholder perspectives and inviting comments on the programme, the topics and perspectives represented, and the format of the event. Patient and public contributors were invited to provide feedback in an online meeting. Two contributors attended the meeting and summarised their feedback in a short text.

Theme extraction

A transcript was generated from an audio recording of the event, which captured the talks and plenaries. The transcript and the summarised reflections and questions from small group discussions on Slido were used to identify key themes. This was achieved by an iterative process where one author grouped emerging themes in a summary table. The organisers and a selected cross-sector group of attendees discussed the summary table and also identified the scope, aims and target audience of the summary article. All attendees were invited to be co-authors of the paper and the selected themes were shared with all co-authors and revised based on feedback.

Results

Attendee representation

In-person attendees at the event included 32 individuals from diverse expertise backgrounds representing key silos across the ecosystem with interest and/or experience in digital endpoints, including clinicians, statisticians, computer scientists, implementation scientists, ethicists, health economists and patient and public representatives. Their experiences with digital endpoints spanned several disease areas, including pulmonary hypertension, dementia, Parkinson’s, women’s health and diabetes. Figure 1 illustrates attendee perspectives in terms of their background and sector/institution, and also indicates some perspectives that were not represented on the day and would be important to include at future events.

Fig. 1

Breakdown of the number of in-person attendees at the event by background (top left) and by institution (top right). Perspectives that were not represented in the event, and would be important to include at future events, are indicated in the grey pie chart (bottom left). PPI, Patient and Public Involvement; RDS, National Institute for Health and Care Research (NIHR) Research Support Service; CRO, Clinical Research Organization

Themes

Patient adherence and acceptability

A key theme discussed was barriers to patient adherence and acceptability of digital endpoints. Digital endpoints typically require patients to participate in data collection for a long period of time in their everyday conditions and without the presence of clinical or research professionals. Obtaining data of high quality relies on acceptability of the digital approach to data collection. A PPI contributor who had participated in digital trials noted the convenience of doing assessments from home “without the upheaval of having to attend an appointment” but also highlighted technical challenges such as connectivity issues. Invited clinicians emphasised the need to consider specific patient needs in the continual process of designing and customising digital technologies and apps, for example providing a magnification tool to allow patients with poor eyesight to use the device.

Patient adherence in longer-term studies typically declines over time and can also depend on disease severity [28, 29]. Invited clinicians shared experiences of observing that patient adherence is lower at either extreme of disease severity. Individuals may engage less when the disease does not have much impact on their lives, and when the disease is severe, individuals may not be well enough to prioritise engaging with studies. An invited clinician also noted the impact of investigators’ interest in digital endpoint adoption and said: “investigator engagement I think sometimes [is] overlooked… I need to be able to demonstrate to a clinical colleague that what we are doing is validated enough, for example, that they are willing to accept and adopt it.” It was also highlighted that investigators' enthusiasm for adopting digital technology can lead to different levels of patient adherence across sites within a study. Strategies to help sustain engagement were discussed, including providing ongoing technical support to patients throughout the study, using nudges, notification and gamification strategies to enhance engagement, having an investigator in the loop, providing ongoing feedback to patients, and garnering support from communities and charities.
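Adherence in this sense is usually quantified directly from device wear logs. The sketch below (Python with pandas) is a minimal, hypothetical illustration of summarising the proportion of valid wear days by study week; the column names and the 600-minute validity threshold are illustrative assumptions, not values discussed at the event.

```python
import pandas as pd

def weekly_adherence(wear_log: pd.DataFrame, valid_minutes: int = 600) -> pd.DataFrame:
    """Proportion of participant-days meeting the wear-time criterion, by study week."""
    df = wear_log.copy()
    df["date"] = pd.to_datetime(df["date"])
    start = df.groupby("participant_id")["date"].transform("min")
    df["study_week"] = (df["date"] - start).dt.days // 7 + 1
    df["valid_day"] = df["wear_minutes"] >= valid_minutes
    return (df.groupby("study_week", as_index=False)["valid_day"].mean()
              .rename(columns={"valid_day": "adherence"}))

# Made-up example: one participant whose wear time drops off in week 3
log = pd.DataFrame({
    "participant_id": 1,
    "date": pd.date_range("2024-01-01", periods=21),
    "wear_minutes": [700] * 14 + [200] * 7,
})
print(weekly_adherence(log))
```

Plotting such adherence curves by site or by disease-severity stratum is one simple way to surface the engagement differences described above.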

Implementation scientists highlighted that the “effectiveness of an intervention does not guarantee its uptake in routine use” and presented frameworks, e.g. the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework [30], which can help researchers identify and assess the barriers and facilitators to adoption of digital technology in context. Implementation strategies can be used to tackle these barriers [31]; qualitative and quantitative approaches, including validated measures [32], can be used to assess implementation outcomes such as acceptability or adoption [32]. The discussion highlighted the scarcity of implementation science expertise in current trials research and the need to incorporate more of this expertise into research funding proposals.

Software and algorithms for devices

Digital endpoints which capture physiological characteristics typically use algorithms and software to convert sensor measurements into clinically meaningful outcomes. Several challenges around the validation of these algorithms and their use in specific study populations and contexts were highlighted. For example, researchers working on the validation of mobility and sleep-related outcomes discussed the complexities of validation in individuals’ free-living conditions. Compared to validation in lab settings, data from free-living conditions have increased inter-subject and within-subject variability. Further, they emphasised that important contextual factors may be unknown, such as whether an individual is typically in an area with open outdoor space or in an indoor space with obstructions [33], and these contextual factors can have an impact on mobility outcomes such as gait. Additional challenges included missing data, unequal representation of different groups in the data, and unexpected issues such as devices worn incorrectly.
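One common ingredient of such validation work is an agreement analysis between device-derived and reference measurements. The sketch below illustrates a basic Bland-Altman-style summary (mean bias and 95% limits of agreement) on simulated gait-speed data; the values are invented for illustration and do not come from any study discussed at the event.

```python
import numpy as np

rng = np.random.default_rng(42)
reference = rng.normal(1.2, 0.2, size=50)             # reference gait speed (m/s), simulated
device = reference + rng.normal(0.05, 0.10, size=50)  # device estimate with bias and noise

diff = device - reference
bias = diff.mean()                        # mean bias of the device relative to the reference
half_width = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
print(f"Bias: {bias:.3f} m/s, 95% limits of agreement: "
      f"[{bias - half_width:.3f}, {bias + half_width:.3f}] m/s")
```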

There are also difficulties with using ready-made algorithms for digital outcomes. Such algorithms are typically validated in one population and may not be suited for use in other populations. For example, researchers from Clinical Trials Units discussed the challenges of using thresholds based on healthy populations to quantify outcomes such as time spent asleep using activity monitors in a trial for stroke recovery. The majority of validation studies are conducted in healthy populations, and information on thresholds for activity monitoring data in chronic disease populations is lacking [34]. There is a need for further research on how to adapt these thresholds for specific populations, such as the work by Airlie et al. (2022) on adapting minimal wear time criteria in older care home residents [35].
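The sketch below illustrates one way such thresholds enter an analysis pipeline: epoch-level activity counts are classified into intensity categories using cut-points passed in as a parameter, so that population-specific values can be substituted. The cut-point values shown are placeholders for illustration, not recommended thresholds.

```python
import pandas as pd

def classify_epochs(counts: pd.Series, cut_points: dict) -> pd.Series:
    """Label each epoch with an intensity category based on count ranges."""
    labels = pd.Series("unclassified", index=counts.index)
    for label, (low, high) in cut_points.items():
        labels[(counts >= low) & (counts < high)] = label
    return labels

# Placeholder cut-points (counts per minute); a trial in a chronic disease
# population might replace these with values validated in that population.
example_cut_points = {
    "sedentary": (0, 100),
    "light": (100, 2000),
    "moderate_to_vigorous": (2000, float("inf")),
}

epochs = pd.Series([30, 150, 2500, 800, 0])
print(classify_epochs(epochs, example_cut_points))
```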

An important discussion point was that proprietary software has the disadvantage that the underlying algorithms are unknown to the researcher and can be changed by the developers without the researcher’s knowledge. Making the obligation to communicate any changes in software explicit, for example in the contract between researchers and private companies, was discussed, and the benefits of devices that allow extraction of raw data, as well as of open-source software, were also highlighted.

Design, analysis and conduct of clinical trials with digital endpoints

Using digital endpoints to evaluate health interventions leads to several open questions in the design, analysis and conduct of clinical trials. A key discussion point was on the choice of digital endpoint, since there is currently vast heterogeneity in digital endpoints and lack of standards in how they should be selected and reported [17]. There was a discussion about the potential for digital endpoints to be included in core outcome sets (COS), which are an agreed standard set of outcomes that should be measured and reported, as a minimum, in trials for a specific area of health or health care [27]. Core outcomes should be determined through a rigorous consensus process that involves people with lived experience of the health condition, healthcare professionals who care for those people, and researchers who would use the COS in their studies. Digital technology may be a viable option for measuring a core outcome, provided agreed standards for validity, reliability, feasibility and acceptability are met [36].

There were discussions on aspects of the design and analysis of trials with digital endpoints which lack clear guidance. Questions arose about the appropriate duration of the measurement period for individuals engaging with digital devices, whether there should be a gap between baseline and follow-up measurements, and if so, how long this gap should be. Clarity is needed regarding a sufficient amount of time for data collection, as investigators may wish to collect as much data as possible. An invited statistician emphasised that “we need to make sure that what we’re doing is not measuring more than we need to measure.” There was discussion on whether the data collected over time should be summarised into a single measure, or whether the entire time series should be analysed through, for example, Generalised Additive Models (GAMs) [37]. Certain endpoints, such as those relating to physical activity, can be strongly influenced by weather and seasons, which can lead to confounding. For example, physical activity has been shown to increase with increased daylight hours [38] and reduce with rainfall [39]. Statisticians mentioned possible approaches to mitigate the impact of seasonal effects through the recruitment of the trial as well as in the statistical analysis [40]. Finally, handling missing data for digital endpoints was a key discussion point, as digital endpoints may have complex missing data patterns which include missing not at random (MNAR) mechanisms [41,42,43].
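As an illustration of the GAM-based approach mentioned above [37], the sketch below fits smooth terms for the within-study trend and a seasonal (day-of-year) effect to simulated daily step counts. The choice of the pygam library and the invented data are assumptions made purely for illustration; this is not the analysis of any trial discussed at the event.

```python
import numpy as np
from pygam import LinearGAM, s  # pygam: one of several GAM implementations

rng = np.random.default_rng(0)
n_days = 180
study_day = np.arange(n_days)
day_of_year = (study_day + 100) % 365
# Simulated daily step counts: gradual decline plus a seasonal cycle plus noise
steps = (8000 - 5 * study_day
         + 1000 * np.sin(2 * np.pi * day_of_year / 365)
         + rng.normal(0, 500, n_days))

X = np.column_stack([study_day, day_of_year])
gam = LinearGAM(s(0) + s(1)).fit(X, steps)  # smooth terms for trend and season
gam.summary()
```

Modelling the full time series in this way, rather than collapsing it to a single summary per participant, is one of the analysis choices the discussion contrasted.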

Current operational practices on the management of data need adapting for digital endpoints. For example, an industry statistician noted that digital endpoint data are typically received directly from the vendor and do not go through the standard in-house data cleaning processes by data management that are typical of data entered at site. Therefore, there is a greater need to pre-specify how potential outliers/abnormalities will be handled in the Statistical Analysis Plans (SAPs), as it is more likely that they are dealt with at the analysis stage (rather than by data management). Further, the Study Data Tabulation Model (SDTM) [44], a common approach to structuring trial datasets, requires adjustment for digital endpoint data, which are not visit-based and produce high-frequency measurements over a longer period.
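As a rough illustration of the reshaping involved, the sketch below converts a wide, high-frequency device export into a long, one-row-per-measurement structure in the spirit of an SDTM findings domain. The variable names (USUBJID, DGTESTCD, DGORRES, DGDTC) follow SDTM naming conventions only loosely and are assumptions for illustration, not a validated mapping.

```python
import pandas as pd

# Wide, high-frequency device export (invented example data)
wide = pd.DataFrame({
    "subject": ["001", "001", "002"],
    "timestamp": pd.to_datetime(
        ["2024-03-01 10:00", "2024-03-01 10:01", "2024-03-01 10:00"]),
    "steps": [42, 55, 30],
    "heart_rate": [78, 81, 90],
})

# Long format: one record per subject, test and timestamp
long = (
    wide.melt(id_vars=["subject", "timestamp"],
              value_vars=["steps", "heart_rate"],
              var_name="DGTESTCD", value_name="DGORRES")
        .rename(columns={"subject": "USUBJID"})
)
long["DGDTC"] = long.pop("timestamp").dt.strftime("%Y-%m-%dT%H:%M")
print(long)
```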

Environmental impact of digital endpoints

Representatives from the Medical Research Council National Institute for Health and Care Research (MRC NIHR) Trials Methodology Research Partnership Greener Trials group discussed how data collection for digital endpoints may impact the carbon footprint of a trial. While the carbon footprint may decrease due to reduced travel by participants and reduced use and shipment of paper-based assessments, there is a need to consider and quantify the environmental impact of the manufacture, use, transport and disposal of digital devices and the storage of large amounts of data. A guidance document was presented which quantifies the carbon footprint of clinical trial activities, including carbon footprinting of digital devices, online questionnaires and data linkage [45]. A reflection posted on Slido stated that “reducing the carbon footprint [of clinical trials via de-centralisation] requires different stakeholders to work in synergy (methodologists, data analysts, implementation scientists, funders, trial managers, etc.).”
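No worked example was presented in detail at the event, but as a hedged illustration of the kind of per-participant tally such guidance enables, the sketch below sums hypothetical emission components. Every category and kgCO2e value is a placeholder; real figures would come from sources such as the guidance document presented [45].

```python
# All categories and values below are hypothetical placeholders for illustration only.
components_kg_co2e = {
    "device_manufacture_share": 5.0,   # allocation of device manufacture per participant
    "device_shipping": 1.0,
    "data_transfer_and_storage": 0.5,
    "avoided_clinic_travel": -8.0,     # saving from fewer in-person visits
}
net = sum(components_kg_co2e.values())
print(f"Net per-participant footprint (hypothetical): {net:+.1f} kg CO2e")
```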

Ongoing ethical support

Invited ethicists emphasised the need to integrate ethical reflection throughout the duration of studies utilising digital technology. Ongoing ethical reflection, as opposed to reflection only at the point of initial institutional approval, is important for two reasons, among others. Firstly, Research Ethics Committees are currently not equipped to assess these studies in a consistent way [46], and secondly, the remote and patient-dependent nature of these studies necessitates ongoing assessment of ethical issues. These studies require patients to engage with digital tools and monitoring in their daily spaces, which entails some level of responsibilisation of patients for the success of the study. This also raises issues around the role of family members and the privacy of bystanders (for example, when wearable cameras are used to collect data). While some issues can be envisioned at the stage of protocol design, real-world contexts may introduce unforeseen behavioural, cultural, and moral challenges that research teams must address. Ongoing ethical support can be delivered by involving ethicists throughout the study, from the stage of developing a project to the delivery and assessment of the study [47, 48]. This enables the research team not only to identify known ethical issues as they appear in the literature and mitigate them in the development of the study protocol, but also to anticipate issues that are specific to the study. “Ethics clubs or clinics”, held as part of regular meetings for clinical or research teams, provide opportunities to discuss emerging issues together with an ethicist and embed ongoing ethical support. Inclusion of ethicists in steering committees also ensures that ethical issues are addressed and acted upon not only bottom-up (from research practice) but also top-down (from the leadership team).

During the event, several issues were raised which required ethical reflection, including the type of feedback to provide patients regarding their data, and how Adverse Events and Serious Adverse Events identified by digital technology should be evaluated and managed.

Furthermore, issues of fairness were highlighted, such as the exclusion of certain groups from studies and the risk of producing results that are skewed towards certain populations. It was also pointed out that researchers using third-party devices are often dependent on tech companies’ extraction and interpretation of raw data, without being able to access the raw data and other relevant information themselves. The power imbalance between public research organisations and big tech corporations is an ongoing issue in digital endpoint research.

Need for multi-stakeholder collaboration

The Knowledge Exchange event highlighted the need for greater opportunities for open-forum discussions between stakeholders and the importance of consortia for development in this area, such as Mobilise-D [49], the Clinical Trials Transformation Initiative (CTTI) [50] and the Digital Medicine Society (DiMe) [51]. From an academic perspective, attendees called for funding organisations to offer additional support, guidance and opportunities for funding cross-disciplinary and cross-sector collaborative projects, as the existing system often prioritises single-discipline approaches [52]. The importance of early engagement with regulators was emphasised, as well as the need for academics and funders to understand the regulatory requirements, and the pathways involved, for operating in clinical trials versus clinical care. During the panel discussion, open forums with open dialogue, open-source software and standardised terminology were mentioned as important facilitators for digital endpoint development. Recent work by the European Federation of Pharmaceutical Industries and Associations [24] and the Digital Medicine Society [53] exemplifies efforts to develop harmonised terminology.

A summary of the discussed themes is provided in Table 2.

Table 2 Summary of key themes

Event feedback

Attendees noted in the feedback form that the event broadened their understanding of other stakeholder perspectives and disease areas and that these learnings would help inform their work. The rolling seating plan for enhanced networking, the small-group discussions and the use of Slido were well received. Organisers noted the challenge for speakers of communicating their message to a broad audience and recommended that future events undertake additional co-design work to support cross-sector and cross-disciplinary communication.

Additional feedback from attendees called for a presentation on the operational aspects of running digital trials, as well as more opportunities for the PPI perspective to be heard at the event, for example through a presentation given by a PPI representative.

Learnings from PPI contributors

Feedback from PPI contributors showed considerable variation in their experience of the event and the extent to which they felt they could contribute. Some PPI contributors felt that there was too much jargon, and there was a lack of clarity on expectations about how they should contribute to the event, while other PPI contributors enjoyed the opportunity to engage with experts. PPI contributors provided ideas on how the event structure could be improved to facilitate more interaction between the PPI contributors and other stakeholders, which included providing a glossary of key terms to the PPI contributors, and making the event longer to allow more time for discussion. See Fig. 2 for quotes from two PPI contributors on their reflections. The event highlighted a need for better guidance on how to design multi-stakeholder events which allow PPI contributors to engage meaningfully [54, 55]. Additional ideas on facilitators to PPI contributions included providing a plain English summary explaining the purpose of a Knowledge Exchange event and plain English summaries for each of the talks.

Fig. 2

Perspectives of two PPI contributors

Discussion

Key themes and under-explored challenges

The multi-stakeholder event highlighted emerging challenges and needs for digital endpoint development. These included: underexplored barriers to patient adherence, such as investigator enthusiasm towards digital endpoints, which was a novel insight uncovered during the event; the importance of implementation methods and ethical support throughout the entire lifecycle of research; the need for validation of algorithms in diverse populations; and the importance of consortium building across disciplines and sectors.

Limitations

Securing representation from certain stakeholder silos was challenging; for example, organisers were unable to find a regulator who was able to attend the event, but one attendee had previous experiences of working as a regulator, and several attendees had experiences of interacting with regulators.

Several key stakeholders were not represented on the day, as noted during and after the event. These included individuals working within governance, legal contracting and regulatory bodies, whose perspectives were noted as important to involve in future events due to their role as gatekeepers and facilitators for progress in the digital transformation of trials. There was also no funder representative in the room. Given that the need for change in infrastructures and incentives to allow more collaborative and cross-sector work was a key theme and challenge that emerged, the lack of a funder perspective was noted by an invited clinician as a “missed opportunity” due to funders’ key role in “shaping the landscape”.

There was also recognition that several important topics were not discussed on the day. These include, but are not limited to: situations where digital endpoints should not, or cannot, be used; the potential for an increased digital divide due to digital endpoints; and the challenge of meeting different priorities for each of the stakeholders in the research ecosystem. Further, we note a number of comprehensive papers covering the ethical topics around digital endpoints, which provide a basis for considering topics that were not covered on the day and could guide next-generation events [16, 56, 57].

After the event, organisers also became aware of methodological frameworks to support multi-stakeholder facilitation [58, 59], and encourage consideration and use of these frameworks in next-generation events.

Conclusion

The Knowledge Exchange event demonstrated that, in the space of digital endpoints, there is appetite for dynamic processes of exchanging and sharing knowledge from multiple sources [60, 61]. The event serves as an example of a multi-stakeholder event with co-designed features to explore key and under-explored challenges. Several learnings were taken on topic clusters and stakeholder perspectives that need representation in future events, and learnings were generated through discussion with PPI contributors on changes to the organisation and provision of materials which could improve PPI contributors’ experience and ability to contribute meaningfully. These learnings provide useful considerations for next generation multi-stakeholder events.

Availability of data and materials

Not applicable.

References

  1. National Health Service. The NHS Long Term Plan. 2019. Available from: https://www.longtermplan.nhs.uk/publication/nhs-long-term-plan/ [accessed Apr 12, 2024].

  2. Health and Social Care Committee. Digital transformation in the NHS, Eighth Report of Session 2022–23. 2023 Jun. Available from: https://publications.parliament.uk/pa/cm5803/cmselect/cmhealth/223/report.html [accessed Apr 12, 2024].

  3. Fazey I, Bunse L, Msika J, Pinke M, Preedy K, Evely AC, Lambert E, Hastings E, Morris S, Reed MS. Evaluating knowledge exchange in interdisciplinary and multi-stakeholder research. Glob Environ Chang. 2014;25:204–20. https://doi.org/10.1016/j.gloenvcha.2013.12.012.

  4. Tako A, Kotiadis K. A facilitation workshop for the implementation stage: A case study in health care. Loughborough University. Conference Contribution. ; 2016. Available from: https://hdl.handle.net/2134/21820 [accessed Apr 13, 2024].

  5. Prihodova L, Guerin S, Kernohan WG. Knowledge transfer and exchange frameworks in health and their applicability to palliative care: scoping review protocol. J Adv Nurs. 2015;71(7):1717–25. https://doi.org/10.1111/jan.12642.

  6. Dukhanin V, Wolff JL, Salmi L, Harcourt K, Wachenheim D, Byock I, Gonzales MJ, Niehus D, Parshley M, Reay C, Epstein S, Mohile S, Farrell TW, Supiano MA, Jajodia A, DesRoches CM. Co-Designing an Initiative to Increase Shared Access to Older Adults’ Patient Portals: Stakeholder Engagement. J Med Internet Res. 2023;22(25):e46146. https://doi.org/10.2196/46146.

  7. König LM, Allmeta A, Perski O, Smit ES, Newby K, Vandelanotte C, Poduval S, Gordon L, Gültzow T, Alblas M, Arden-Close E, Braun M, Greffin K, Hewitt RM, Knox L, Mccallum C, Mclaren T. Towards meaningful interdisciplinary and cross-sectoral digital health collaborations: Challenges and action-oriented solutions. 2024. https://doi.org/10.31219/osf.io/d9zx7.

  8. Prihodova L, Guerin S, Tunney C, Kernohan WG. Key components of knowledge transfer and exchange in health services research: Findings from a systematic scoping review. J Adv Nurs. 2019;75(2):313–26. https://doi.org/10.1111/jan.13836.

  9. Abdolkhani R, Gray K, Borda A, DeSouza R. Quality Assurance of Health Wearables Data: Participatory Workshop on Barriers, Solutions, and Expectations. JMIR Mhealth Uhealth. 2020;8(1):e15329. https://doi.org/10.2196/15329.

  10. Landers M, Dorsey R, Saria S. Digital Endpoints: Definition, Benefits, and Current Barriers in Accelerating Development and Adoption. Digit Biomark. 2021;5(3):216–23. https://doi.org/10.1159/000517885.

  11. Robertson L, Newman J, Clayton S, Ferguson M, Pepke-Zaba J, Cannon J, Sheares K, Taboada D, Bunclark K, Armstrong I, Ferrer Mallol E, Davies EH, Toshner M. The Digital 1-Minute Walk Test: A New Patient-centered Cardiorespiratory Endpoint. Am J Respir Crit Care Med. 2024. https://doi.org/10.1164/rccm.202310-1855LE.

  12. King CS, Flaherty KR, Glassberg MK, Lancaster L, Raghu G, Swigris JJ, Argula RG, Dudenhofer RA, Ettinger NA, Feldman J, Johri S, Fernandes P, Parsley E, Shah PS, Nathan SD. A Phase-2 Exploratory Randomized Controlled Trial of INOpulse in Patients with Fibrotic Interstitial Lung Disease Requiring Oxygen. Ann Am Thorac Soc. 2022;19(4):594–602 (PMID:34678128).

  13. Schwartzberg L. Electronic Patient-Reported Outcomes: The Time Is Ripe for Integration Into Patient Care and Clinical Research. Am Soc Clin Oncol Educ Book. 2016;36:e89–96. https://doi.org/10.1200/EDBK_158749.

  14. Jha A, Menozzi E, Oyekan R, Latorre A, Mulroy E, Schreglmann SR, Stamate C, Daskalopoulos I, Kueppers S, Luchini M, Rothwell JC, Roussos G, Bhatia KP. The CloudUPDRS smartphone software in Parkinson’s study: cross-validation against blinded human raters. NPJ Parkinsons Dis Nature Research. 2020;6(1). https://doi.org/10.1038/s41531-020-00135-w.

  15. Robertson L, Newman J, Clayton S, Ferguson M, Pepke-Zaba J, Cannon J, Sheares K, Taboada D, Bunclark K, Armstrong I, Mallol EF, Davies EH, Toshner M. The Digital 1-Minute Walk Test: A New Patient-centered Cardiorespiratory Endpoint. Am J Respir Crit Care Med. 2024. https://doi.org/10.1164/rccm.202310-1855LE.

  16. Vayena E, Blasimme A, Sugarman J. Decentralised clinical trials: ethical opportunities and challenges. Lancet Digital Health. Elsevier Ltd; 2023. p. e390–4. https://doi.org/10.1016/S2589-7500(23)00052-3.

  17. Graña Possamai C, Ravaud P, Ghosn L, Tran VT. Use of wearable biometric monitoring devices to measure outcomes in randomized clinical trials: a methodological systematic review. BMC Med. 2020;18(1):310.

  18. DiMasi JA, Dirks A, Smith Z, Valentine S, Goldsack JC, Metcalfe T, Grewal U, Leyens L, Conradi U, Karlin D, Maloney L, Getz KA, Hartog B. Assessing the net financial benefits of employing digital endpoints in clinical trials. medRxiv. 2024;2024.03.07.24303937. https://doi.org/10.1101/2024.03.07.24303937.

  19. Duran C, Fishburn N, Sample M. How clinical innovation is helping to reduce the environmental impact of clinical trials. 2023. Available from: https://www.astrazeneca.com/what-science-can-do/topics/clinical-innovation/clinical-innovation-driving-sustainable-clinical-trials.html [accessed Mar 14, 2024].

  20. European Medicines Agency. Draft Qualification Opinion for Stride velocity 95th centile as primary endpoint in studies in ambulatory Duchenne Muscular Dystrophy studies. 2023;31(February).

  21. Food and Drug Administration. Digital Health Technologies for Remote Data Acquisition in Clinical Investigations. 2023. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/digital-health-technologies-remote-data-acquisition-clinical-investigations [accessed Mar 18, 2024].

  22. Burton H, Brigden T, Flewitt A, Blandford A. Digital Health for Remote Monitoring and Self-Management : A Roadmapping Workshop. 2017;(November):1–33. Available from: http://www.fast-healthcare.org.uk/wp-content/uploads/2017/07/FAST-Digital-Health-Report.pdf.

  23. Krukowski RA, Ross KM, Western MJ, Cooper R, Busse H, Forbes C, Kuntsche E, Allmeta A, Silva AM, John-Akinola YO, König LM. Digital health interventions for all? Examining inclusivity across all stages of the digital health intervention research process. Trials. 2024;25(1):98. https://doi.org/10.1186/s13063-024-07937-w.

  24. Leyens L, Northcott CA, Maloney L, McCarthy M, Dokuzova N, Pfister T. Why Language Matters in Digital Endpoint Development: Harmonized Terminology as a Key Prerequisite for Evidence Generation. Digit Biomark. 2024;11:1–12. https://doi.org/10.1159/000534954.

  25. Servais L, Camino E, Clement A, McDonald CM, Lukawy J, Lowes LP, Eggenspieler D, Cerreta F, Strijbos P. First Regulatory Qualification of a Novel Digital Endpoint in Duchenne Muscular Dystrophy: A Multi-Stakeholder Perspective on the Impact for Patients and for Drug Development in Neuromuscular Diseases. Digit Biomark. 2021;5(2):183–90. https://doi.org/10.1159/000517411.

  26. Rodriguez-Villa E, Torous J. Regulating digital health technologies with transparency: the case for dynamic and multi-stakeholder evaluation. BMC Med. 2019;17(1):226. https://doi.org/10.1186/s12916-019-1447-x.

  27. Kirkham JJ, Williamson P. Core outcome sets in medical research. BMJ Medicine BMJ. 2022;1(1):e000284. https://doi.org/10.1136/bmjmed-2022-000284.

  28. Aiyegbusi OL, Cruz Rivera S, Roydhouse J, Kamudoni P, Alder Y, Anderson N, Baldwin RM, Bhatnagar V, Black J, Bottomley A, Brundage M, Cella D, Collis P, Davies E-H, Denniston AK, Efficace F, Gardner A, Gnanasakthy A, Golub RM, Hughes SE, Jeyes F, Kern S, King-Kallimanis BL, Martin A, McMullan C, Mercieca-Bebber R, Monteiro J, Peipert JD, Quijano-Campos JC, Quinten C, Rantell KR, Regnault A, Sasseville M, Schougaard LMV, Sherafat-Kazemzadeh R, Snyder C, Stover AM, Verdi R, Wilson R, Calvert MJ. Recommendations to address respondent burden associated with patient-reported outcome assessment. Nat Med. 2024. https://doi.org/10.1038/s41591-024-02827-9.

  29. Yorke J, Deaton C, Campbell M, McGowen L, Sephton P, Kiely DG, Armstrong I. Symptom severity and its effect on health-related quality of life over time in patients with pulmonary hypertension: A multisite longitudinal cohort study. BMJ Open Respir Res. 2018;5(1):e000263. https://doi.org/10.1136/bmjresp-2017-000263.

  30. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, Hinder S, Fahy N, Procter R, Shaw S. Beyond adoption: A new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. 2017;19(11):e367.

  31. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139. https://doi.org/10.1186/1748-5908-8-139.

  32. Khadjesari Z, Boufkhed S, Vitoratou S, et al. Implementation outcome instruments for use in physical healthcare settings: a systematic review. Implementation Sci. 2020;15:66. https://doi.org/10.1186/s13012-020-01027-6.

  33. Micó-Amigo ME, Bonci T, Paraschiv-Ionescu A, Ullrich M, Kirk C, Soltani A, Küderle A, Gazit E, Salis F, Alcock L, Aminian K, Becker C, Bertuletti S, Brown P, Buckley E, Cantu A, Carsin AE, Caruso M, Caulfield B, Cereatti A, Chiari L, D’Ascanio I, Eskofier B, Fernstad S, Froehlich M, Garcia-Aymerich J, Hansen C, Hausdorff JM, Hiden H, Hume E, Keogh A, Kluge F, Koch S, Maetzler W, Megaritis D, Mueller A, Niessen M, Palmerini L, Schwickert L, Scott K, Sharrack B, Sillén H, Singleton D, Vereijken B, Vogiatzis I, Yarnall AJ, Rochester L, Mazzà C, Del Din S. Assessing real-world gait with digital technology? Validation, insights and recommendations from the Mobilise-D consortium. J Neuroeng Rehabil. 2023;20(1):78 PMID:37316858.

  34. Van Remoortel H, Giavedoni S, Raste Y, Burtin C, Louvaris Z, Gimeno-Santos E, Langer D, Glendenning A, Hopkinson NS, Vogiatzis I, Peterson BT, Wilson F, Mann B, Rabinovich R, Puhan MA, Troosters T. Validity of activity monitors in health and chronic disease: a systematic review. Int J Behav Nutr Phys Act. 2012;9:84. Available from: http://www.ijbnpa.org/content/9/1/84.

  35. Airlie J, Forster A, Birch KM. An investigation into the optimal wear time criteria necessary to reliably estimate physical activity and sedentary behaviour from ActiGraph wGT3X+ accelerometer data in older care home residents. BMC Geriatr. 2022;22(1):136 PMID:35177023.

  36. Prinsen CAC, Vohra S, Rose MR, Boers M, Tugwell P, Clarke M, Williamson PR, Terwee CB. How to select outcome measurement instruments for outcomes included in a “Core Outcome Set” – a practical guideline. Trials. 2016;17(1):449. https://doi.org/10.1186/s13063-016-1555-2.

  37. Lisi E, Abellan JJ. Statistical analysis of actigraphy data with generalised additive models. Pharm Stat Wiley. 2023. https://doi.org/10.1002/pst.2350

  38. Harrison F, Goodman A, Sluijs E, Andersen L, Cardon G, Davey R, Janz K, Molloy L, Page A, Pate R, Puder J, Sardinha L, Timperio A, Wedderkopp N, Jones A. Weather and children’s physical activity; How and why do relationships vary between countries? Int J Behav Nutr Phys Act 2017;14. https://doi.org/10.1186/s12966-017-0526-7.

  39. Harrison F, Van Sluijs EMF, Corder K, Ekelund U, Jones A. The changing relationship between rainfall and children’s physical activity in spring and summer: a longitudinal study. Int J Behav Nutr Phys Act. 2015;12:41.

  40. Argha A, Savkin A, Liaw ST, Celler BG. Effect of seasonal variation on clinical outcome in patients with chronic conditions: Analysis of the commonwealth scientific and industrial research organization (csiro) national telehealth trial. JMIR Med Inform 2018;20(3). https://doi.org/10.2196/medinform.9680.

  41. Di J, Demanuele C, Kettermann A, Karahanoglu FI, Cappelleri JC, Potter A, Bury D, Cedarbaum JM, Byrom B. Considerations to address missing data when deriving clinical trial endpoints from digital health technologies. Contemp Clin Trials. 2022;113:106661. PMID:34954098.

  42. Tackney MS, Cook DG, Stahl D, Ismail K, Williamson E, Carpenter J. A framework for handling missing accelerometer outcome data in trials. Trials. 2021;22(1):1–18 (PMID:34090494).

  43. Tackney MS, Williamson E, Cook DG, Limb E, Harris T, Carpenter J. Multiple imputation approaches for epoch-level accelerometer data in trials. Stat Methods Med Res. 2023;32(10):1936–60. https://doi.org/10.1177/09622802231188518.

  44. Clinical Data Interchange Standards Consortium. SDTM. 2024. Available from: https://www.cdisc.org/standards/foundational/sdtm [accessed Apr 28, 2024].

  45. Griffiths J, Fox L, Williamson PR. Quantifying the carbon footprint of clinical trials: guidance development and case studies. BMJ Open. 2024;14(1):e075755. https://doi.org/10.1136/bmjopen-2023-075755.

  46. Muurling M, Pasmooij AMG, Koychev I, Roik D, Froelich L, Schwertner E, Religa D, Abdelnour C, Boada M, Almici M, Galluzzi S, Cardoso S, de Mendonça A, Owens AP, Kuruppu S, Gjestsen MT, Lazarou I, Gkioka M, Tsolaki M, Diaz A, Gove D, Visser PJ, Aarsland D, Lucivero F, de Boer C. Ethical challenges of using remote monitoring technologies for clinical research: A case study of the role of local research ethics committees in the RADAR-AD study. PLoS One. 2023;18(7):e0285807.

  47. Tigard DW, Braun M, Breuer S, Ritt K, Fiske A, McLennan S, Buyx A. Toward best practices in embedded ethics: Suggestions for interdisciplinary technology development. Rob Auton Syst. 2023;167:104467. https://doi.org/10.1016/j.robot.2023.104467.

  48. McLennan S, Fiske A, Celi LA, Müller R, Harder J, Ritt K, Haddadin S, Buyx A. An embedded ethics approach for AI development. Nat Mach Intell. 2020;2(9):488–90. https://doi.org/10.1038/s42256-020-0214-1.

  49. Rochester L, Mazzà C, Mueller A, Caulfield B, McCarthy M, Becker C, Miller R, Piraino P, Viceconti M, Dartee WP, Garcia-Aymerich J, Aydemir AA, Vereijken B, Arnera V, Ammour N, Jackson M, Hache T, Roubenoff R. A Roadmap to Inform Development, Validation and Approval of Digital Mobility Outcomes: The Mobilise-D Approach. Digit Biomark. 2020;4(suppl 1):13–27. https://doi.org/10.1159/000512513.

  50. Coran P, Goldsack JC, Grandinetti CA, Bakker JP, Bolognese M, Dorsey ER, Vasisht K, Amdur A, Dell C, Helfgott J, Kirchoff M, Miller CJ, Narayan A, Patel D, Peterson B, Ramirez E, Schiller D, Switzer T, Wing L, Forrest A, Doherty A. Advancing the Use of Mobile Technologies in Clinical Trials: Recommendations from the Clinical Trials Transformation Initiative. Digit Biomark. 2019;3(3):145–54. https://doi.org/10.1159/000503957.

  51. Goldsack JC, Dowling AV, Samuelson D, Patrick-Lake B, Clay I. Evaluation, Acceptance, and Qualification of Digital Measures: From Proof of Concept to Endpoint. Digit Biomark. 2021;5(1):53–64. https://doi.org/10.1159/000514730.

  52. Nordgreen T, Rabbi F, Torresen J, Skar YS, Guribye F, Inal Y, Flobakk E, Wake JD, Mukhiya SK, Aminifar A, Myklebost S, Lundervold AJ, Kenter R, Hammar Å, Nordby E, Kahlon S, Tveit Sekse RJ, Griffin KF, Jakobsen P, Pham MH, Côté-Allard U, Noori FM, Lamo Y. Challenges and possible solutions in cross-disciplinary and cross-sectorial research teams within the domain of e-mental health. J Enabling Technol Emerald Group Holdings Ltd.; 2021;15(4):241–251. https://doi.org/10.1108/JET-03-2021-0013.

  53. Goldsack JC, Coravos A, Bakker JP, Bent B, Dowling A V, Fitzer-Attas C, Godfrey A, Godino JG, Gujar N, Izmailova E, Manta C, Peterson B, Vandendriessche B, Wood WA, Wang KW, Dunn J. Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for Biometric Monitoring Technologies (BioMeTs). NPJ Digit Med Springer US; 2020;3(1). https://doi.org/10.1038/s41746-020-0260-4.

  54. Papoulias S, Callard F. “A limpet on a ship”: Spatio-temporal dynamics of patient and public involvement in research. Health Expect. 2021;24(3):810–8 PMID:33745192.

  55. Caron-Flinterman FJ, Broerse JEW, Bunders JFG. Patient partnership in decision-making on biomedical research: Changing the Network. Sci Technol Human Values. 2007;32(3):339–68. https://doi.org/10.1177/0162243906298354.

  56. Danish National Centre for Ethics. Guidance on decentralised clinical trials (DCT). Available from: https://nationaltcenterforetik.dk/Media/638001319248700745/Guidance%20on%20decentralised%20clinical%20trials%20Version%201%20Danish%20National%20Center%20for%20Ethics.pdf [accessed Feb 15, 2024].

  57. Petrini C, Mannelli C, Riva L, Gainotti S, Gussoni G. Decentralized clinical trials (DCTs): A few ethical considerations. 2022.

  58. Kotiadis K, Tako A. A Tutorial on Involving Stakeholders in Facilitated Simulation Studies. Proceedings of SW21 The OR Society Simulation Workshop Operational Research Society; 2021. https://doi.org/10.36819/SW21.005.

  59. Tako AA, Kotiadis K. PartiSim: A multi-methodology framework to support facilitated simulation modelling in healthcare. Eur J Oper Res. 2015;244(2):555–64. https://doi.org/10.1016/j.ejor.2015.01.046.

  60. Horst M, Davies SR, Irwin A. Reframing Science Communication. In: Felt U, Fouché R, Miller CA, Smith-Doerr L, editors. Handbook of science and technology Studies Fourth. Cambridge/London: MIT Press; 2017. p. 881–907.

  61. Ward V, Smith S, House A, Hamer S. Exploring knowledge exchange: A useful framework for practice and policy. Soc Sci Med. 2012;74(3):297–304 (PMID:22014420).

Acknowledgements

The authors would like to thank Cecilia Mascolo, Anne Blackwood, Evelyne Priestman and the patient and public contributors for their participation and contributions in the Knowledge Exchange event. The authors would like to thank Alison Quenault, the communications and events manager at the MRC-Biostatistics Unit, for her help with organising and running the event. Finally, the authors give special thanks to patient and public contributors Yazan Mehyar and Lorraine Hazlehurst for their contributions towards this paper.

Funding

The Knowledge Exchange event on Digital Endpoints was supported by funding from All Council Harmonised IAA Rapid Response Award; NIHR Cambridge Biomedical Research Centre (NIHR203312) and Cambridge Centre for Data-Driven Discovery.

MST and SSV acknowledge funding and support from the UK Medical Research Council (grant MC UU 00002/15 and MC UU 00040/03) and MT acknowledges Fellowship Track funding. JN received a British Heart Foundation Clinical Research Training Fellowship. MCF received funding from INTERVENE (INTERnational consortium for integratiVE geNomics prEdiction), a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101016775. MCF also received funding from BIOMAP (Biomarkers in Atopic Dermatitis and Psoriasis) as part of the Horizon 2020 Framework Programme: a project funded by the Innovative Medicines Initiative 2 Joint Undertaking under Grant Agreement No. 821511. H2020-EU.3.1.—SOCIETAL CHALLENGES—Health, demographic change and well-being. H2020-EU.3.1.7.—Innovative Medicines Initiative 2 (IMI2). Topic: IMI2-2017–13-02—Genome-Environment Interactions in Inflammatory Skin Disease. FL received funding from the RADAR-AD project, which received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No. 806999. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and EFPIA and Software AG. This communication reflects the views of the RADAR-AD consortium, and neither IMI nor the European Union and EFPIA are liable for any use that may be made of the information contained herein. JRC received funding from the UK Medical Research Council (grant MC UU 00004/07). CHLH is funded by IDEA-FAST. MAK is funded by a personal Clinical Research Fellowship from Alzheimer's Society. FV received an MRC Experimental Medicine Award (MR/W026279/1).

Author information

Authors and Affiliations

Authors

Contributions

MST, SSV and AS co-organised the Knowledge Exchange event and conceived the paper. MST, SSV, AS, MCF, FL, JN, and JL contributed to the design and scoping of the article. MST led the writing of the paper. MCF, AS, SSV, JN and FL contributed to co-writing of the paper. All authors contributed to knowledge generated from the event, reviewed and edited the manuscript and approved the final version.

Corresponding author

Correspondence to Mia S. Tackney.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

JN received a travel grant from Aparito Ltd to attend an educational event. RAA is an employee of ICON PLC. EHD is an employee and shareholder for Aparito. WGD has received consultancy fees from Google, unrelated to this work. KBES is an employee and shareholder of AstraZeneca.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1: Table A1. Schedule of Knowledge Exchange Event. Figure A1. Attendees were asked at the start of the day to state what core perspective they were bringing to the event. Responses that are repeated are indicated by larger fonts/distinct colours. Figure A2. Attendees were asked at the start of the day what they thought were the key challenges with digital endpoints. Responses that are repeated are indicated by larger fonts/distinct colours.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Tackney, M.S., Steele, A., Newman, J. et al. Digital endpoints in clinical trials: emerging themes from a multi-stakeholder Knowledge Exchange event. Trials 25, 521 (2024). https://doi.org/10.1186/s13063-024-08356-7
