
Table 1 Trial Forge Examples of trial challenges and how Trial Forge could help

From: Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform

Columns: General problem | Examples of how Trial Forge aims to help

Problem: Information is spread over many journals, websites, books and other publications, which makes it difficult to access and use in decision making. Finding and navigating the literature is therefore time-consuming and challenging.

Examples: Searching PubMed (searched 2 Jan 2015) using the phrase 'clinical trial recruitment', limited to reviews from the last 5 years, produces 252 hits, too many to sift through easily. A search on Google Scholar (searched 2 Jan 2015) using the same phrase (exact-phrase search) produces 1080 hits since 2010. Searching Amazon (searched 2 Jan 2015) for 'clinical trial recruitment' produces 525 hits; the first page of results includes books costing from less than £1 to over £900.

How Trial Forge could help:

- Collate, or link people to, existing high-quality evidence on key trial processes. For recruitment this would include: what influences recruitment; strategies that can improve recruitment; and how to tailor recruitment strategies to particular contexts.

- Develop targeted research agendas designed to fill gaps in knowledge around how best to recruit trial participants.

- Make it easier for teams to work together to address these research agendas.

- In the absence of high-quality evidence, provide a repository for the experience and knowledge of the community of trialists as to how they recruit participants.

Problem: There are substantial gaps in the evidence base for key issues that affect all trials and that are not being systematically targeted by methodology research.

Example: There is little published research evidence to inform decisions about trial management options such as how best to select clinical sites, how to maintain relationships with sites, how to model the movement of patients and staff through trial processes, or how to train trial and site staff effectively.

How Trial Forge could help:

- Develop targeted research agendas designed to fill gaps in knowledge about how to design, run, analyse and report trials.

- For trial management, develop methods that allow trial managers to share their solutions without the need for full publications, which are not generally part of trial managers' career development (i.e. there is no incentive to publish).

- Encourage systematic reviewers (e.g. of Cochrane reviews) to suggest concrete methodological studies that need to be done, and link these to initiatives such as SWATs [43, 44] to provide ready-made protocols for those studies.

- Systematically direct information about evidence gaps to funding agencies for consideration as part of their prioritisation process when selecting topics for funding calls.

Problem: Much trial knowledge is tacit and held by experienced staff working at trials units, other similar centres, or on individual trials.

Example: Although many research groups and units cost, manage and create data management systems for trials, there is little easily available information on effective ways to complete each of these processes.

How Trial Forge could help:

- In the absence of high-quality evidence, provide a repository for the experience and knowledge of the community of trialists as to how they design, run, analyse and report their trials.

- Collate and evaluate tools being used by groups that design and run trials, such as trials units and other similar centres.

- Develop targeted research agendas designed to move from tacit, often unevaluated knowledge to high-quality, evaluated evidence.

Problem: There is no easy way for individuals needing advice to access it from the potentially thousands of people whose knowledge might help them.

Example: If a trial data management team using the OpenClinica system encounters a technical problem, there is an active online community that provides help free of charge, and questions are answered quickly. There are few similar opportunities to quickly address questions on trial design, conduct, analysis or reporting.

How Trial Forge could help:

- Provide a repository for the experience and knowledge of the community of trialists as to how they design, run, analyse and report their trials.

- Support electronically linked communities of practice (e.g. through question-and-answer and discussion sections on its website).

- Learn from The Global Health Network how to build online communities in healthcare.

Problem: Information is not structured in a way that helps people find what they need to resolve their uncertainties. People working on trials have questions (such as 'Should I visit the sites to boost recruitment?', 'How much quality assurance do I need to do?', 'Will adding an extra outcome measure affect recruitment and retention?'), but guidance is rarely organised around questions and their answers.

Example: The Clinical Trials Toolkit provides regulatory and other information about drug trials in the UK. Although useful, the information is structured like a textbook. People visiting the site, however, are likely to have done so because they have a series of questions about their trial and are looking for answers. The textbook structure makes answering these questions slower than it could be.

How Trial Forge could help:

- Give Trial Forge a mixed structure, where much of the material is directly framed as questions and answers. Where evidence provides a clear answer, this information will be presented as the answer to a question.

- Work with trialists to present information in a way that enables them to find answers to their questions as quickly as possible.

Problem: There is no easy way to support collaborative trial methodology research to address evidence gaps and shortcomings.

Example: The 2010 Cochrane review of interventions to improve trial recruitment [7] includes 45 trials evaluating 46 interventions. Despite this, the review concludes that there is high-quality evidence supporting only three or so interventions; the effectiveness or otherwise of the other interventions remains unclear.

How Trial Forge could help:

- The initiatives listed above will help to identify gaps in the evidence. Trial Forge will then highlight these, including to funders, in an effort to focus researcher effort on important, known gaps.

- By supporting SWATs [43, 44], researchers wishing to fill at least some of these gaps will be able to use existing (and common) protocols to evaluate given interventions.

- Provide electronically linked communities that can agree to work together to fill a gap by, for example, evaluating the same intervention across many trials. A good example of this approach is the MRC START project for recruitment interventions.