Strengths and limitations
This pilot project assessed the effect of psychostimulants on attention, hyperactivity and higher cerebral functions in children with traumatic brain injury (TBI). While stimulants are approved for children with ADHD, the children in this study had symptoms consistent with ADHD arising from trauma. A larger trial is required to assess the efficacy of stimulants in this group of children.
There were several methodological issues. The first is that we elected to treat with stimulants only those children with TBI who displayed symptoms similar to ADHD; those without such symptoms were not considered, so this pilot was conducted on a sub-population of children with TBI. We recognise that this method introduces a selection bias by enrolling those who appeared to derive benefit. However, in any trial it would be unethical to offer participation where there is little likelihood of benefit. The results of this pilot, and of the subsequent larger trial, can therefore only be generalised to children who show an apparent pre-trial clinical response to stimulant medication.
Secondly, previous experience with a trial of stimulants that used a pre-trial titration with fixed doses to identify the dose with the best apparent effect showed that large numbers of children were randomised but later withdrew because of the length of the titration. Further, several participants withdrew because their parents were concerned that their children would be receiving placebo.
It was considered that the best option was to titrate the stimulant dose in each individual, up to a ceiling dose, until a dose was found that appeared to produce positive behavioural effects and was well tolerated. We believe this approach is valid, in the same way that some RCTs escalate doses until either an effective dose or the maximum dose for the study is reached. The trial thus compares a clinically effective dose with placebo in each child.
Thirdly, there are no validated scales that measure hyperactivity in brain-injured children. The Conners 3 Rating Scales are designed for non-brain-injured children; however, given that current regulations for the prescription of stimulant medications in Australia require a diagnosis of ADHD, this is the only feasible scale. Normative data have yet to be created for this population. For the behaviours tested, most participants scored above the 97th percentile of normative data for children without TBI in both placebo and stimulant cycles, that is, at the very worst end of the scale for non-brain-injured children. A reasonable spread of raw scores maps to these high percentiles, so a differential between raw scores is possible and was in fact observed between stimulants and placebo. Therefore raw scores are reported here, and differences between trial arms are calculated as changes in these scores. This will be explored more thoroughly in the analysis of the full trial, which is currently underway, involves two other sites, and will recruit 42 children over two years.
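The ceiling effect that motivates reporting raw scores can be illustrated with a toy example. The cutoffs and scores below are invented for illustration only; they are not the Conners 3 norms.

```python
# Toy illustration (invented numbers, not Conners 3 norms) of the ceiling
# effect: several distinct raw scores all collapse into the top normative
# percentile band, so percentile scores cannot show a stimulant-placebo
# difference that the raw scores can.

def to_percentile(raw, cutoffs=((20, 50), (30, 85), (40, 97))):
    """Map a raw score to a hypothetical normative percentile band."""
    for limit, pct in cutoffs:
        if raw < limit:
            return pct
    return 98  # everything above the last cutoff collapses to ">97th"

placebo_raw, stimulant_raw = 55, 44
# The raw scores differ by 11 points...
assert placebo_raw - stimulant_raw == 11
# ...but both fall in the same ">97th percentile" band.
assert to_percentile(placebo_raw) == to_percentile(stimulant_raw) == 98
```

This is why a differential between trial arms remains detectable in raw scores even when both arms sit above the 97th percentile of the normative sample.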
Finally, we conducted the trial with either MPH or DEX, depending on patient and parent preference; clinically, the two medications are interchangeable. Children already on slow-release preparations of MPH completed the trial on that medication, as the half-life of the MPH released is identical to that of immediate-release preparations.
An evidence-based review conducted by the McMaster University Evidence-Based Practice Center Group examined 23 articles on specific drug-to-drug comparisons, including eight studies comparing MPH and DEX. Also included were studies comparing different formulations of the same drug: three studies compared regular and sustained-release formulations of MPH, one compared different isomers of MPH (L-MPH versus D-MPH), and one compared DEX and levoamphetamine. The stimulant-stimulant comparisons documented few, if any, differences between MPH and DEX, and the studies comparing different formulations of the same drug revealed no significant formulation effects.
With any crossover trial, the possibility of carryover effects needs to be considered. Swanson and Volkow investigated the pharmacological properties of MPH and, using positron emission tomography, reported uptake and clearance times of oral short-acting MPH from the brain. Uptake time is approximately 20 minutes and clearance time approximately 90 to 120 minutes. Regardless of dose, the time to maximum concentration (Tmax) is reported as 1.5 to 2 hours and the half-life (t1/2) as between 2 and 3 hours. The maximum behavioural effects (reductions in overactivity, impulsivity and inattention) occur about 1 to 2 hours after oral doses, and the effects dissipate significantly after 2 hours (3 to 4 hours after each immediate-release dose). Because of the small amount of available data, we did not conduct any formal carryover checks or analyses; instead we were guided by pharmacological and biological considerations, and data from the first two days of each seven-day period were omitted from the analyses to allow for washout. Informal graphical checks by day were made to visually identify any patterns or trends in the data beyond ordinary variability; none were seen. If carryover effects do exist, the treatment effect estimates might be underestimated, as these effects would spill into the placebo results. The randomisation of treatments within cycles would reduce this effect, should it exist. In a larger confirmatory trial, formal checks would need to be conducted.
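The washout rule described above can be sketched as a simple data filter. The field names and scores below are illustrative, not the trial's actual dataset.

```python
# Hypothetical sketch of the washout rule: within each seven-day treatment
# period, observations from the first two days are dropped before analysis
# so that the previous treatment has cleared. Data are made up.

def apply_washout(observations, washout_days=2):
    """Keep only observations recorded after the washout window.

    `observations` is a list of dicts with 'period', 'day' (1-7 within
    the period) and 'score' keys.
    """
    return [obs for obs in observations if obs["day"] > washout_days]

# Example: a single seven-day period; days 1 and 2 are excluded.
period = [{"period": 1, "day": d, "score": 50 + d} for d in range(1, 8)]
kept = apply_washout(period)
assert [obs["day"] for obs in kept] == [3, 4, 5, 6, 7]
```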
There are some obvious limitations of this pilot study. First, the sample size is small and the completion rate was only 50%; individual scores are therefore highly influential and there is reduced power to find important overall differences that may exist. Secondly, the distribution of the mean differences was assumed to be normal. No evidence existed to challenge this assumption and, invoking the central limit theorem, it is likely to be reasonable; in the larger trial it will be formally tested. Thirdly, children who had negative responses or adverse events during pre-trial titration are not included in the analysis. Finally, one child withdrew because of the correctly perceived success of the treatment. Because he or she did not complete all treatment cycles, there were relatively fewer data for this 'success' and thus more associated individual variability, which may have biased results towards the null difference between treatments.
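One simple way the normality assumption could be examined informally is via a sample skewness statistic for the per-child mean differences; values near zero are consistent with symmetry. This is a standard-library sketch with made-up numbers, not the trial's analysis, which would use a formal test in the full study.

```python
# Illustrative check of the normality assumption: adjusted Fisher-Pearson
# sample skewness of hypothetical per-child mean differences (active minus
# placebo). A value near zero is consistent with a symmetric distribution.
import statistics

def sample_skewness(xs):
    n = len(xs)
    mean = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in xs)

# Hypothetical mean differences for eight children.
diffs = [-2.0, 1.5, 3.0, 4.5, 0.5, 2.0, 3.5, 1.0]
skew = sample_skewness(diffs)
```

A formal test (for example Shapiro-Wilk, available in SciPy) would replace this informal check in the larger trial.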
Recruitment was affected by a substantial proportion of potentially eligible children (estimated at 17% of those screened) who were involved with the Department of Child Safety, which did not give consent for those children to participate. These were different from the five eligible patients (described in Figure 2) who did not enrol because teacher consent was not obtained. Research staff endeavoured to work with department staff to encourage participation by children involved with the department; however, permission was denied in all cases, even though this study was evaluating medication the child was already taking rather than introducing any new treatment.
Though there were five withdrawals, which is not unusual in this population, enough cycles (18) were completed to allow Bayesian analysis of the aggregated completed cycles.
The n-of-1 trial design has several strengths. n-of-1 studies provide the strongest possible evidence of the effect of a medicine on an individual. A report on the efficacy of the treatment for that individual is provided to patients and clinicians immediately after the trial concludes; this is not possible in a standard RCT. In addition, every participant receives both the active and placebo treatments, making participation more attractive than in an RCT, where there is a chance of being randomised to the placebo arm. Because the same person contributes multiple data points to both the active and placebo arms, the sample is perfectly matched. The sample size required is also considerably smaller than for a standard RCT, allowing credible data to be collected in small populations where a standard RCT is virtually impossible to conduct.
Finally, if a participant leaves the trial early, the completed cycles can still contribute to the final analysis. This contrasts with an RCT, where the data from people who leave the trial before completion are lost.
Validated ratings of concentration, executive functioning and behaviour were used in this pilot study, and a strength of the study is that ratings were obtained from both parents and class teachers. Given that neuropsychological assessments are time-consuming and therefore expensive, validated ratings are a practical method of measuring outcomes, and they directly capture a child's functioning as opposed to underlying impairments.
The difference between scores on stimulant medication and placebo was greater for teachers than for parents. A similar pattern was found by Bakker and Waugh [16, 17], although, by contrast, in the current study teachers were blinded. A possible explanation is that a child's difficulties with concentration and self-regulation are more evident in the school setting, where the child is expected to concentrate and regulate their behaviour for extended periods. At home, where formal learning is not expected, such difficulties may impact less on free play and may therefore be less evident.
Hierarchical Bayesian analysis
Hierarchical Bayesian models have been advocated for the analysis of n-of-1 trial data, especially when both patient-specific and population estimates are desired [30, 36, 37]. Detailed accounts of these models for normal and binary outcome data have been published previously. Advantages of this approach include the ability to incorporate prior information and coherently update it as new empirical information becomes available, to provide patient-specific and population-level probabilistic results, to allow covariate and participant subgroup structures, to accommodate the natural hierarchies and clusters within patient groups, and to handle unbalanced data easily. Exploiting these properties and the sequential nature of n-of-1 trials, the population effect size can be given by the posterior distributions of the aggregated n-of-1 trials. The addition of new trials will update and refine this posterior distribution, in a manner similar to how evidence is accumulated across studies in a meta-analysis.
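The core idea of aggregating n-of-1 trials hierarchically can be sketched with a minimal normal-normal model. This is an illustrative sketch under strong simplifying assumptions (known, common variances and a flat prior on the population mean), not the trial's actual analysis; all numbers are invented.

```python
# Minimal sketch of a two-level normal-normal hierarchical model: each
# child's observed mean treatment difference y_i ~ N(theta_i, sigma2),
# with child effects theta_i ~ N(mu, tau2). With known variances and a
# flat prior on mu, the population estimate is a precision-weighted
# average, and each child's estimate shrinks toward it.

def hierarchical_posterior(y_means, sigma2, tau2):
    """Return (population mean estimate, shrunken per-child estimates).

    y_means: observed per-child mean differences (active - placebo)
    sigma2:  within-child sampling variance (assumed common and known)
    tau2:    between-child variance (assumed known)
    """
    # Marginal precision of each y_i about mu is 1 / (sigma2 + tau2).
    w = 1.0 / (sigma2 + tau2)
    mu_hat = sum(w * y for y in y_means) / (w * len(y_means))
    # Per-child posterior means shrink toward mu_hat.
    shrink = tau2 / (tau2 + sigma2)
    theta_hat = [mu_hat + shrink * (y - mu_hat) for y in y_means]
    return mu_hat, theta_hat

# Three hypothetical children's mean differences.
mu_hat, theta_hat = hierarchical_posterior([2.0, 6.0, 4.0], sigma2=4.0, tau2=1.0)
```

Adding a new child's completed cycles to `y_means` updates both the population estimate and the shrinkage applied to every child, mirroring how the posterior is refined as further n-of-1 trials accumulate; a full analysis would place priors on the variances and estimate them, typically by MCMC.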