Many of the topics deemed most important to LMIC researchers related to trial conduct rather than trial design or analysis. This could stem from resource constraints and may highlight the requirement for capacity development, stressing the need for cost- and time-effective methods. The majority (85%) of round-2 participants had been involved in the conduct of trials and, therefore, issues around trial conduct may have been more relevant to them.
The priority most commonly graded as critically important, choosing appropriate outcomes to measure, was also identified as a priority in the UK study, and there is ongoing work on this through the Core Outcome Measures in Effectiveness Trials (COMET) Initiative [7]. Launched in January 2010, COMET aims to optimise the choice of outcomes by providing standardised sets of outcomes (core outcome sets) for specific disease areas and/or populations. Up to December 2015, only 44/248 (18%) of completed core outcome set studies had involved participants from LMICs. Given that choosing appropriate outcomes to measure was the topic most important to researchers in LMICs, there is a need for greater involvement of these countries in the development of core outcome sets.
LMIC researchers also considered it important to prioritise research into methods for training research staff. One example of implementing and reviewing a training programme is the Good Health Research Practice (GHRP) initiative, which aims to train researchers in applying Good Clinical Practice (GCP). A short course using an experiential learning cycle, the process of conceptualising, applying, acting and reflecting, was piloted in LMICs between 2014 and 2015 [8]. New methods to improve the training programme were identified during the pilot phase and incorporated into subsequent programmes. Research should now identify training methods that are both accessible and effective in LMICs.
The results suggest that, although some research priorities appear applicable to both LMICs and high-income countries, differences may exist between these broad regions. For example, in the UK there was a greater emphasis on recruitment and retention, yet these topics did not appear among the top 10 most commonly graded as critically important by LMIC researchers. This may be because involvement in a trial guarantees access to more personalised healthcare which, outside a trial setting, could be limited in LMICs owing to capacity issues or the intervention not being available outside the trial [9, 10].
Although snowball sampling was used to disseminate the survey, an approach that can raise concerns about representativeness, information on the demographic details and professional backgrounds of the participants indicated that a wide spread of disciplines and countries was involved, strengthening the applicability of the results.
One limitation to note was that all participants were researchers and, therefore, there was no patient and public involvement (PPI). This was for pragmatic reasons, since it would have been difficult to identify participants from trials in LMICs; nevertheless, it would be useful to obtain their views. It is important that further research based on these results includes PPI, so that methodological research is also relevant and applicable to patients and the public.
Another limitation of the survey was the number of suggested topics deemed not applicable or beyond the survey scope. Those deemed not applicable were often too vague (for example, ‘trial logistics’, ‘statistical analysis’ and ‘improving trial efficiency’) or concerned a specific disease area (for example, ‘HIV’ and ‘malaria’); a full list of responses and groupings is provided in Additional file 2. Participants had space to report the reason for suggesting each topic, and where uncertainties about applicability arose, this information was used to aid decision-making. Some topics may have been deemed not applicable because of language barriers; however, translations of the invitation letters should have minimised this risk. The invitation letter was translated into French and checked by speakers who were fluent, although not native. The Chinese and Spanish versions were both translated and checked by native speakers. The survey itself was not translated.
Furthermore, it is possible that the 29% of people who completed the background information in round 1 but did not provide any priorities did not believe there were any priorities for methodological research. However, given the free-text nature of the survey, it is assumed that participants who completed the background questions and felt strongly that there were no priorities would have left a comment to indicate this.
Although LMICs share the common constraint of limited resources, it should be noted that the specific needs of different regions within LMICs could vary; for this reason, a wide spread of countries was included in the survey. An extension of this work, however, could target the priorities of specific countries or regions within LMICs.
A variety of disciplines was represented in the survey, but priorities may also vary depending on respondent affiliation (for example, private versus public sector).
These findings provide a preliminary step towards establishing the foundations of a methodological research agenda for global health trials, which we hope will foster methodological research in specific areas in order to increase and improve trials in LMICs.