We have demonstrated the feasibility of a clinically-integrated randomized trial. First, the trial was conducted at extremely low cost. Apart from protocol writing, data analysis, start-up meetings with surgeons, and ongoing trial monitoring, the only non-trivial expenditure of time or money was for consenting patients. Clinic staff could readily identify and flag eligible patients with negligible effort. We found that surgeons could usually explain the idea behind the trial to patients in 2 - 3 minutes. Consent paperwork was handled by research assistants concurrently with other standard consents, such as those for tissue use protocols. The process of randomization - stratification, faxing of consent documents, and communication of results to surgical fellows - took research assistants 5 - 10 minutes per patient. Trial data were downloaded directly from the clinical database by the trial statistician. In comparison with a traditional trial, our study avoided many costs - including those for research assistants to follow patients over time, as well as for data abstraction and data entry - and incurred no additional costs, as all aspects of the clinically-integrated trial (e.g. consent, start-up meetings) are also a necessary part of more traditional designs.
Second, randomization did indeed become a routine part of clinical practice, with approximately 80% of patients approached and close to 75% of those agreeing to participate. Third, we were able to obtain outcome data on a high proportion of patients, despite no attempt whatsoever to follow trial patients beyond routine clinical care.
Various lessons were learned during study start-up and implementation. Before the first patient was randomized, trial staff spent time with clinic patients discussing how different ways of describing the trial would affect their comfort in participating. Patients' major concern was whether they might receive substandard care if they took part. Patients appeared particularly averse to descriptions suggesting that treatment would be given at random. Accordingly, in both the written consent form (see additional file 2) and in oral presentations of the trial, the "uncertainty principle" was stressed. Patients were told that surgeons would always use their clinical judgment and choose the treatment approach expected to lead to the best outcome; the randomized allocation would be accessed if and only if the surgeon was genuinely unsure which approach to take. Discussions with patients also revealed that it was critical that the trial first be introduced to them by their surgeon; being approached by a research assistant or even a clinical fellow (a surgeon in training working with the attending surgeon) would lead patients to suspect that their surgeon was not fully confident about the trial.
Given the critical role that administrative clinic staff played in identifying eligible patients and bringing them to the attention of surgeons, we expended considerable effort involving clinic staff in trial start-up (see additional file 3). As each surgeon ran clinics in a slightly different way, we relied on clinic staff to suggest trial procedures. For example, one clinic nurse suggested including a brightly colored reminder notice in the case file given to the surgeon before entering the consultation room. Involving staff in this way increased "ownership" of the trial and provided incentives for high accrual rates. To complement this approach, we tracked consent rates for different clinics, bringing the results to the attention of surgeons and their clinic staff.
Surgical fellows play a key role in the everyday running of clinics; in most instances, they are the first doctor to see a patient considering surgery. Working closely with the surgical fellows therefore also became a central aspect of efficient trial management. Indeed, we often saw large changes in accrual rates within a particular clinic as fellows rotated. We were keen to emphasize the importance and novelty of the trial and to appeal to fellows' commitment to evidence-based medicine.
That said, encouragement of surgeons, surgical fellows and non-medical clinic staff may have been unproductive without the full support of the surgical leadership of the hospital. The co-principal investigator was the chair of the Department of Surgery. It is difficult to imagine that the trial would have accrued without this enthusiastic endorsement.
Nonetheless, further attention to routine systems of data gathering will be required before the methodology can be optimized. Recording of compliance with randomization was missing in about 20% of cases, so additional procedures clearly need to be established to ensure this key aspect of documentation. In particular, we propose adding easy-to-use "tick boxes" to the operative record. Doing so would not only ease documentation but would also allow the study team to conduct ongoing monitoring of compliance, both with documentation and with treatment allocation, so that surgeons with poor compliance could be identified and suitable interventions made.
Recording of patient outcome, while adequate, was also less than perfect. Since the protocol was opened, we have moved to entirely electronic reporting of patient outcomes, via emails to patients at home or iPads in the clinic. To assess how this new system affects patient reporting, we studied all patients treated by radical prostatectomy between January 2010 (towards the end of the trial, when electronic recording was fully implemented) and October 2010 (to allow all patients to have 14 months of follow-up). During this period, 599 patients were treated and we obtained data for urinary function at one year from 498, a data completion rate of 83%. We are also in the process of implementing a system that provides feedback to patients on the basis of their answers, for example, recommending referral to a voiding dysfunction specialist for patients who report urinary dysfunction. We anticipate that improving use of patient-reported outcomes in clinical practice - an approach that has been shown to improve doctor-patient communication and decrease symptom intensity - will also increase data completion rates in subsequent clinically-integrated trials. We also recommend the use of sensitivity analysis in any subsequent, fully powered trial, to determine whether missing data may have influenced the strength or direction of results.
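One simple form such a sensitivity analysis could take is a best-case/worst-case bound: recompute the outcome proportion assuming, in turn, that all non-responders did or did not experience the event. The sketch below uses the patient counts reported here (599 treated, 498 reporting at one year); the event count is purely hypothetical, and this is an illustration of the general technique rather than the trial's actual analysis plan.

```python
# Hedged sketch: best/worst-case sensitivity bounds for a binary outcome
# with missing data. Totals (599 treated, 498 reporting) are from the text;
# the observed event count is a hypothetical value for illustration only.

def sensitivity_bounds(n_total, n_observed, n_events_observed):
    """Return (lower, upper) bounds on the true event proportion, assuming
    all non-responders were non-events (lower) or all were events (upper)."""
    n_missing = n_total - n_observed
    lower = n_events_observed / n_total                # no missing patient had the event
    upper = (n_events_observed + n_missing) / n_total  # every missing patient had the event
    return lower, upper

n_total, n_observed = 599, 498   # 83% completion, as reported above
n_events = 100                   # hypothetical count of patients reporting dysfunction

lo, hi = sensitivity_bounds(n_total, n_observed, n_events)
print(f"Observed rate among responders: {n_events / n_observed:.1%}")
print(f"Bounds under extreme assumptions: {lo:.1%} to {hi:.1%}")
```

If the study's conclusions hold across this interval, missing data cannot have changed the direction of the result; narrower, model-based sensitivity analyses would tighten the bounds.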
That said, we are confident that the rates of data completion we report here - even if suboptimal - fully justify the clinically-integrated randomized trial methodology. With respect to missing documentation of surgical approach, we have no reason to believe that missing data reflected treatment choices: in discussion with clinicians, it seemed that documentation failures were inadvertent. Naturally, we cannot entirely rule out bias with respect to documentation. This is partly a function of the relatively wide confidence intervals around the estimates of differences between groups. Perhaps more importantly, the possibility of bias may change depending on the surgeons and comparison involved. For example, in some other implementation of the clinically-integrated trial methodology, a sub-group of surgeons with strong preferences might attempt to subvert the trial by selective documentation. As such, careful monitoring of documentation rates, and statistical comparison of patients with and without documentation of the procedure used, will be important in any clinically-integrated trial.
With respect to missing outcome data, we saw no evidence that this varied by patient characteristics. To determine whether our 29% rate of missing data is in any way extreme or outlying in the context of randomized trials in general, we examined typical rates in other fields. In two studies examining reports in major medical journals, about 20% of trials had a rate of missing data of more than 20% [16, 17]. However, rates do vary depending on the patient group and length of follow-up: a mean of 30% at one year in weight loss research; a mean of 37% for short-term studies of depression; and more than 30% missing data in 20% of rheumatology trials. It is of note that in each of these research areas, the likelihood of bias due to missing data is far higher than for the current trial. There are obvious reasons why drop-out would be associated with inefficacy in depression or weight-loss trials, and with medication side-effects in rheumatology trials. In contrast, it is hard to see how a patient's allocation or continence status would affect his propensity to continue with clinical follow-up: patients return for follow-up after radical prostatectomy to check for recurrence, and if a patient with urinary dysfunction were more or less likely to return for a cancer check, the mechanism would be far less obvious than that by which continuing depression affects a patient's willingness to continue on a drug study.
As such, missing outcome data are largely an issue of decreased sample size. A more traditional approach to the randomized trial, in which patients complete protocol-specific questionnaires under close monitoring by study staff, might well achieve a higher overall rate of data completion. However, given the expense of such trials, and the lower patient acceptance of and recruitment to studies that involve additional reporting burden, the overall number of patients providing data would likely be higher with a clinically-integrated trial approach. This might also be explained in a "value of information" context: the cost per data point is dramatically lower for the clinically-integrated trial, so given a fixed research budget, this approach will result in more information to help guide clinical practice.
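The fixed-budget argument can be made concrete with simple arithmetic. In the sketch below, every figure (budget, per-patient cost, completion percentages) is hypothetical and chosen only to illustrate the point that a much lower cost per patient can outweigh a lower completion rate; none of these numbers come from the trial.

```python
# Hedged illustration of the "value of information" point: under a fixed
# research budget, a lower cost per data point yields more completed
# observations even at a lower completion rate. All figures are hypothetical.

def completed_observations(budget, cost_per_patient, completion_pct):
    """Patients with usable outcome data under a fixed budget (integer math)."""
    enrolled = budget // cost_per_patient
    return enrolled * completion_pct // 100

budget = 500_000  # hypothetical fixed research budget

# Traditional trial: expensive per-patient follow-up, high completion.
traditional = completed_observations(budget, cost_per_patient=2_000, completion_pct=95)

# Clinically-integrated trial: data from routine care, lower completion.
integrated = completed_observations(budget, cost_per_patient=200, completion_pct=83)

print(f"Traditional design: {traditional} patients with data")
print(f"Clinically-integrated design: {integrated} patients with data")
```

Under these illustrative assumptions the integrated design yields roughly an order of magnitude more completed observations, which is the sense in which a fixed budget buys more information to guide practice.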