Monitoring adverse social and medical events in public health trials: assessing predictors and interpretation against a proposed model of adverse event reporting

Abstract

Background

Although adverse event (AE) monitoring in trials focusses on medical events, social outcomes may be important in public or social care trials. We describe our approach to reporting and categorising medical and other AE reports, using a case study trial. We explore predictors of medical and social AEs, and develop a model for conceptualising safety monitoring.

Methods

The Building Blocks randomised controlled trial of specialist home visiting recruited 1618 first-time mothers aged 19 years or under at 18 English sites. Event reports collected during follow-up were independently reviewed and categorised as either Medical (standard Good Clinical Practice definition) or Social (trial-specific definition). A system to classify AEs was developed retrospectively. Univariate analyses explored the association between baseline participant and study characteristics and the subsequent reporting of events. Factors significantly associated at this stage were progressed to binary logistic regressions to assess independent predictors.

Results

A classification system was derived for reported AEs that distinguished between Medical or Social AEs. One thousand, three hundred and fifteen event reports were obtained for mothers or their babies (1033 Medical, 257 Social). Allocation to the trial intervention arm was associated with increased likelihood of Medical rather than Social AE reporting. Poorer baseline psycho-social status predicted both Medical and Social events, and poorer psycho-social status better predicted Social rather than Medical events. Baseline predictors of Social AEs included being younger at recruitment (OR = 0.78 (CI = 0.67 to 0.90), p = 0.001), receiving benefits (OR = 1.60 (CI = 1.09 to 2.35), p = 0.016), and having a higher antisocial behaviour score (OR = 1.22 (CI = 1.09 to 1.36), p < 0.001). Baseline predictors of Medical AEs included having a limiting long-term illness (OR = 1.37 (CI = 1.01 to 1.88), p = 0.046), poorer mental health (OR = 1.03 (CI = 1.01 to 1.05), p = 0.004), and being in the intervention arm of the trial (OR = 1.34 (CI = 1.07 to 1.70), p = 0.012).

Conclusions

Continuity between baseline and subsequent adverse experiences was expected despite potentially beneficial intervention impact. We hypothesise that the excess of events reported for intervention-arm participants is likely attributable to surveillance bias. We interpreted our findings against a new model that explicates the processes that may drive event occurrence, presentation and reporting. Focussing only upon Medical events may miss the well-being and social circumstances that are important for interpreting intervention safety and participant management.

Trial registration

ISRCTN, ID: ISRCTN23019866. Registered on 20 April 2009.

Background

Adverse event (AE) reporting is an integral part of safety monitoring for clinical trials. However, the processes for collecting, recording, analysing, and reporting AEs are generally more complex and less well developed than those used when evaluating efficacy in a trial [1]. Safety monitoring in clinical trials has been standardised using AE and serious adverse event (SAE) reporting protocols; for example, The Medicines for Human Use (Clinical Trials) Regulations 2004, which focus on medical events of varying severity. Such AEs may or may not be associated with the intervention. In comparison to clinical trials of medicinal products, public health or social care trials will often evaluate complex interventions in populations with adverse social circumstances; for example, in deprived populations. Such interventions may still have unexpected and unwelcome effects. Monitoring unintended or unexpected outcomes in such trials, and participant well-being in general, will involve outcomes which are social and psychological in nature in addition to medical. Systems for monitoring these events are underdeveloped and inconsistent in public health, social care, and psychotherapy trials; for example, Duggan et al. (2014) found that the recording of AEs in a trial of a psychological intervention was either not attempted/reported, or used definitions not entirely suitable to the intervention or condition being studied [2]. While some authors have attempted to expand on the Good Clinical Practice (GCP) definition of AEs and SAEs to incorporate other types of events [3,4,5,6], none of these have included social events.

The Building Blocks randomised controlled trial evaluated the effectiveness and cost-effectiveness of the Family Nurse Partnership (FNP) home-visiting programme in England [7, 8]. Field and office-based researchers were responsible for the reporting of AEs on a site level to the trial team.

Monitoring events in the trial performed two functions. The first was to detect any undesirable consequence of the intervention. FNP is a supportive and voluntary home-visiting intervention which was not expected to produce harm, but with up to 64 home visits to women in often vulnerable circumstances, the intensive and structured approach could have been unwelcome to some families. The second purpose was to monitor in general the well-being of research participants in both trial arms. This included attempting to ensure that research processes did not add to participants’ distress if they were experiencing adverse social circumstances, and to facilitate optimal trial processes.

The value of monitoring AEs in trials is in detecting harmful effects attributable to an intervention. However, this signal may be obscured by other non-relevant factors that introduce unhelpful ‘noise’. For example, some studies have found reporting rates of AEs to vary by country [3], by reporter (e.g. clinician vs. participant) [9], and by reporting site. Reporting of AEs by health professionals may depend upon their awareness of the event, their judgement about the event, and their willingness to document the event [10]. Variation in reporting AEs driven by under-developed monitoring systems or inconsistent training reduces the potential to adequately monitor unintended effects of both public health and other interventions.

In summary, systems for AE monitoring in interventional studies in public health and social care are under-developed and variation in reports may be due to factors other than the intervention itself. In this paper our first aim is to describe our approach to reporting and categorising Medical and other AE reports in a large public health trial. Our second aim is to assess variability in safety reporting, and explore factors associated with the nature (i.e. the type of event reported), level (i.e. the level of seriousness) and quality of reporting (for example, any differences between study sites) in our study sample.

Methods

The Building Blocks trial evaluated the effectiveness of the FNP programme. The intervention consisted of up to 64 home visits from a specially trained Family Nurse during pregnancy and in the 2 years after birth, with the aim of improving outcomes for the health, well-being and social circumstances of young, first-time mothers and their children. The intervention covered core content areas of personal and environmental health, life course development, maternal rôle, family and friends, and access to health and social services, including promoting healthy behaviours. The control group did not receive the intervention and instead received usual services. This included the Healthy Child Programme (universally offered screening, education, immunisation, and support from birth to the child's second birthday) delivered by specialist community public health nurses, and maternity care appropriate to clinical need. Following birth, the control group continued to receive postnatal midwifery care and care from existing child health services available locally, including an allocated health visitor. Details of the intervention and control conditions, as well as the full Building Blocks trial methods, can be found in the trial protocol and results papers [7, 8]. Trial outcome data was collected during face-to-face interviews by local researchers and through telephone interviews by staff located in Cardiff who were also responsible for the reporting of AEs to the trial team. From the outset, while a primary focus for safety monitoring was on Medical AEs, other concerns could have been noted by both field and office-based researchers. The collection of AEs was also intended to monitor the general well-being of research participants in both trial arms. For example, we intended to collect information to allow the trial team to have prior knowledge if they were contacting participants at difficult times (e.g. if either a mother or child was undergoing formal safeguarding procedures).
Similarly, during the 24-month follow-up interview, scoring positively for items indicating serious abuse on a domestic abuse scale also triggered the completion of an AE form [11]. Detection of domestic abuse via this scale triggered notification of the family's health visitor and, if the abuse was ongoing and newly disclosed, a mandatory referral to social services.

Participants: participants in the Building Blocks trial were 1618 women aged 19 years or under at recruitment and expecting their first child. Young maternal age was used as a programme proxy for a range of poor longer-term outcomes for both child and mother and is also associated with socioeconomic deprivation. It was expected that many trial participants would face challenging individual personal and social circumstances. Baseline characteristics of the participants were collected through a home-based interview prior to randomisation.

Setting: 18 sites in England each comprising partnerships between primary healthcare organisations and local authorities for the purposes of delivering the FNP programme.

Adverse event reporting: AEs were reported during the approximately 2.5-year follow-up period by field and office-based researchers. Field researchers were usually trained midwives or nurses. They collected trial information on outcomes from medical notes as well as in face-to-face interviews (at baseline and final 24 months' follow-up). They also had a remit to maintain contact with participants for the purposes of data collection. The office-based researchers collected self-report data via telephone interview at late pregnancy and at 6, 12 and 18 months following birth. In both telephone and face-to-face interviews, AE reports were triggered by participant responses to other open-ended questions or were reported directly by a participant un-prompted. AEs could also be reported by any other health professional associated with the trial, including Family Nurses (intervention group only) and general practitioners (GPs). To report AEs, a form was completed and sent to the trial team via secure fax, or emailed to the Data Manager. The Building Blocks Trial Manager or the Chief Investigator and one clinical member of the research team jointly assessed each form to ascertain the nature, seriousness, causality and expectedness of the AE. Following receipt of the initial form, the trial team could request follow-up data from the reporting site or researcher. Some pregnancy-related events, such as hospitalisation due to childbirth, and termination of pregnancy for foetal anomaly, were anticipated in the context of the trial and were, therefore, not reportable as AEs.

Training: prior to the start of recruitment, field and office-based researchers were trained to collect AEs using a standardised reporting form and following GCP guidance. Instructions were included in the data collection forms (e.g. for the telephone interviews), that reminded interviewers to enquire about participant well-being at the start of the interview (as an open question). Any issues related to well-being at this stage would have been reported as AEs if appropriate to do so. After variations in AE reporting rates were observed during the course of the trial follow-up, additional face-to-face training was provided to all field researchers.

Aim 1: classifying and coding AEs

For the current analyses we retrospectively developed a system to classify reported AEs. The Chief Investigator (MR), Trial Manager (EO-J), Data Manager (GM), Senior Clinical Researcher (JS), a clinical co-investigator on the Building Blocks trial (JK), and a clinically qualified qualitative researcher (CW) met to develop a classification system following some iterative discussions and review of a sample of submitted AE forms.

Developing the classification: the GCP definitions of AEs and SAEs were used to initially classify forms. A distinction was then made between physical and mental GCP AEs and SAEs as the trial team was interested in distinguishing between participants’ mental and physical well-being. Events that did not fit under the GCP definitions but were considered of particular relevance to the trial were then classified as ‘Social AEs’. These included safeguarding issues, information related to the child being fostered or adopted which in these circumstances may be a proxy for adversity [12], incidents of violence or aggression towards Family Nurses or field researchers, and issues that would be important for researchers to know about before speaking to a participant, such as social circumstances (both at baseline and any changes during the course of the trial), and instances when a participant scored positively for serious abuse on the domestic abuse scale. Events that were recorded on AE forms but did not meet the criteria for any of the above categories were classified as ‘Other events’.
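As an illustration, the classification hierarchy described above can be expressed as a simple decision function. This is a sketch only; the boolean inputs are hypothetical simplifications of the judgements made by the reviewing clinicians, not fields from the trial's actual data dictionary.

```python
def classify_event(meets_gcp_definition: bool, serious: bool,
                   physical: bool, trial_relevant_social: bool) -> str:
    """Illustrative sketch of the Building Blocks AE classification hierarchy.

    Events meeting the GCP definition are Medical, split into Physical or
    Mental and by seriousness (AE vs. SAE); remaining events of particular
    relevance to the trial are Social AEs; anything else is an 'Other event'.
    """
    if meets_gcp_definition:
        nature = "Physical" if physical else "Mental"
        level = "SAE" if serious else "AE"
        return f"Medical {nature} {level}"
    if trial_relevant_social:
        return "Social AE"
    return "Other event"
```

Note that Social AEs sit below the GCP check: an event is only considered for the trial-specific Social category once the standard GCP definitions have been ruled out, mirroring the order in which the forms were reviewed.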

Defining unique events: during classification it was important to define what constituted a discrete ‘event’ as some forms were essentially updates to previous reports. An event was defined as starting from the point of presentation, and continued to be consistently the same ‘condition’ until the end of the event. The end of the event was defined as when the participant had been either discharged from hospital, there was no further attendance or visit required, or no follow-up form was sent. When forms were sent in relation to the same event, the first form sent (by date) was classified, and the rest of the forms were marked as ‘follow-up’. All forms related to the same event were reviewed before classifying an event as ‘follow-up’ as any form could include details that would change an event’s classification. If this was the case the rater would then classify the event using the most serious classification, and thus these events were analysed on the basis of the greater degree of severity. Where more than one event was reported on a form, each event was classified separately.
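The follow-up rule above can be sketched in code. The record fields and the severity ordering here are illustrative assumptions, not the trial's actual data structure: the earliest form defines the event, later forms are marked follow-up, and the event keeps the most serious classification seen on any of its forms.

```python
from collections import defaultdict

# Assumed severity ordering: a follow-up form can upgrade an AE to an SAE
SEVERITY = {"AE": 0, "SAE": 1}

def group_unique_events(forms):
    """Group submitted forms into unique events.

    `forms` is a list of dicts with hypothetical keys: 'event_id' (same
    participant, same ongoing condition), 'date' (sortable), and
    'classification'. Returns one record per unique event.
    """
    by_event = defaultdict(list)
    for form in forms:
        by_event[form["event_id"]].append(form)

    events = []
    for event_id, group in by_event.items():
        group.sort(key=lambda f: f["date"])
        first, followups = group[0], group[1:]
        # Review all forms: any form may change (upgrade) the classification
        worst = max(group, key=lambda f: SEVERITY[f["classification"]])
        events.append({
            "event_id": event_id,
            "date": first["date"],
            "classification": worst["classification"],
            "n_followup_forms": len(followups),
        })
    return events
```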

Coding forms: after the final classification system was agreed, the AE forms were coded by a clinically qualified qualitative researcher (CW) from outside the research team but who had been involved in developing the classification system. A second rater (GM) coded a 10% random selection of events to ascertain reliability of the classification system using Cohen’s Kappa [13].
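For reference, Cohen's Kappa corrects the observed agreement between the two raters for the agreement expected by chance from each rater's marginal category frequencies. A minimal stdlib-only sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters assigning categories to the same events.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e is the chance agreement implied by each rater's
    marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

A Kappa of 0.925, as reported below for this trial's classification system, therefore indicates near-perfect agreement after chance agreement is discounted.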

Aim 2: exploring sources of variation in rate of reporting AEs

We hypothesised that:

  • Poorer psycho-social status and health at baseline will be associated with higher reported rates of both Medical and Social AEs (baseline variables thought to reflect poorer psycho-social status listed below)

  • Poorer psycho-social status at baseline will more likely be associated with Social rather than Medical AEs

  • AEs reports will be more likely for those in the trial intervention arm (hypothesised to be due to surveillance bias, having received up to 64 visits from a family nurse)

  • Rate of AE reporting will vary by trial site (due to various system-level differences between sites, which could include variability in research nurse approach, e.g. actual funded time, total number of participants being monitored at a site, and quality of links to local Family Nurses or other local staff). Site was a predictor we sought to modify during the course of the trial, but despite our efforts differences between sites were not eradicated.

Baseline variables that we considered to indicate poorer psycho-social status were younger age at recruitment, the woman’s status being classified as NEET (Not in Education, Employment, or Training), being in receipt of benefits, having ever been homeless, having lower socio-economic status (Index of Multiple Deprivation score), lower family and lower personal subjective social status, lower relationship quality, lower social support, lower family resources, lower self-efficacy, and lower adaptive functioning.

All participants were categorised as having experienced either no, or at least one, Social AE. They were also categorised as having experienced either no, or at least one, Medical AE (regardless of severity). These formed the two dependent variables in subsequent analyses. For each dependent variable the following sets of analyses were performed. Baseline characteristics were summarised for those who experienced either no or at least one AE (Social and Medical) using number (%), mean alongside standard deviation (SD), and median alongside the 25th to 75th centiles. Baseline characteristics included the socio-demographics listed above, e.g. age; health (e.g. health status, psychological distress); and group allocation. Logistic regression models were run to examine univariable associations between baseline characteristics and AEs. Baseline characteristics that were associated at the 10% significance level were retained and entered as candidate predictors for the multivariable model, to detect all characteristics independently predictive of AEs at the 0.05 significance level. Trial site was adjusted for by its inclusion as a random effect in all models. Multi-collinearity between candidate predictors in each model was assessed by computing the tolerance and its reciprocal, the Variance Inflation Factor (VIF). As a rule of thumb, a VIF of 1 indicates no collinearity, a VIF greater than 4 (equivalently, a tolerance below 0.25) might warrant further investigation, and a VIF greater than 10 would indicate that multi-collinearity is problematic.
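For the two-predictor case, tolerance and VIF reduce to simple functions of the Pearson correlation r between the predictors: tolerance = 1 − r², VIF = 1 / (1 − r²). A stdlib-only sketch with illustrative data follows; with more than two predictors, each VIF is instead obtained from the R² of regressing that predictor on all the others.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x, y):
    """VIF for one of two predictors: 1 / tolerance, tolerance = 1 - r^2."""
    r2 = pearson_r(x, y) ** 2
    return 1.0 / (1.0 - r2)
```

Uncorrelated predictors give a VIF of 1 (no shared variance), while strongly correlated predictors inflate the VIF well above the rule-of-thumb thresholds mentioned above.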

Results

Aim 1: classification system for reported AEs

A classification system was derived for reported AEs (Fig. 1). This distinguished between Medical AEs and Social AEs. The former were further classified into Physical or Mental and by severity (i.e. whether or not serious; seriousness was determined following the GCP definition). Social AEs encompassed several distinct categories, such as safeguarding, but were not further distinguished by severity. The reliability of coding reports to the classification system was high (Table 1), with an overall Cohen's Kappa [13] rating of 0.925. Of the 1315 uniquely reported events, 78.6% were Medical AEs (552 SAEs, 481 AEs), 19.5% were coded as Social AEs, and a further 25 (1.9%) were coded as 'Other' events.

Fig. 1
figure1

Adverse event (AE) classification in the Building Blocks trial

Table 1 Reliability of Building Blocks adverse event (AE) classification system

The number of unique events reported by trial site and their classification, whether the event was related to the mother or baby, source of notification, and trial arm are described in the following paragraphs.

One thousand, three hundred and fifteen completed forms were sent to the trial team, relating to 667/1618 (41.2%) participants (or their baby(ies)). The number of events per participant varied considerably from 0 to 27. On average, 0.81 events were reported for each participant (Table 2). For Physical SAEs the rates of reported events ranged from 0.07 to 1.53 per participant (a more than 20-fold difference in similarly sized trial sites).

Table 2 Number of events per participant within each site

None of the Social AEs related to violence or aggression towards Family Nurses or researchers (as self-reported by the professionals), and most events related to safeguarding (Table 3).

Table 3 Details of events classified as Social adverse events (AEs)

Events related to mothers accounted for 36.7% of events, events related to baby(ies) for 42.7%, and events related to both mother and baby(ies) for 20.6%. 614/1315 (46.7%) of events were recorded as occurring before the birth of the Building Blocks baby(ies).

Over 90% of events were reported by field and office-based researchers as opposed to other health professionals involved with the trial (Table 4).

Table 4 Source of event notification

After variations in AE reporting rates were observed during the course of the trial follow-up, additional face-to-face training was provided on two dates to all field researchers. The number of events reported before the first training day was 1030 (78.3%), the number reported between the two training dates (including the first training date) was 14 (1.1%), and the number reported post training (including the second training date) was 109 (8.3%); a further 162 (12.3%) event reports did not contain an event date. The date referenced here is the event date, rather than the reporting date; caution is therefore needed, as an event may have taken place some time before it was reported.

Aim 2: analysis of variation in rate of reporting AEs

Baseline characteristics were compared for participants with and without at least one Social AE (Table 5) and for participants with and without at least one Medical event (either AE or SAE) (Table 6).

Table 5 Baseline characteristics of participants with and without at least 1 Social adverse event (AE)
Table 6 Baseline characteristics of participants subsequently with and without at least 1 Medical adverse event (AE) or serious adverse event (SAE)

Numerous baseline characteristics were found to be associated with Social AEs, including younger age, lower family and personal subjective social status, NEET status, being in receipt of benefits, having ever been homeless, lower self-efficacy and social support, difficulty in at least one basic skill, lower quality of life, having a limiting long-term illness, substance abuse, antisocial behaviour, lower relationship quality and family resources, and worse psychological distress (Table 5). No multi-collinearity was found between any of the candidate predictors in the multivariable model (VIF = 1.26). Three predictors were independently associated with Social AEs at the 0.05 significance level after adjusting for all other candidate predictors. Participants with at least one Social AE were more likely to be younger at recruitment (odds ratio (OR) = 0.78 (CI = 0.67 to 0.90), p = 0.001), to receive welfare benefits (OR = 1.60 (CI = 1.09 to 2.35), p = 0.016), and to have a higher score on a measure of antisocial behaviour (OR = 1.22 (CI = 1.09 to 1.36), p < 0.001) (Table 5).

For Medical S/AEs, fewer predictors were apparent at the univariable level, including a higher deprivation score, less than perfect health, a limiting long-term illness, difficulty in at least one basic skill, having at least one adaptive functioning burden, antisocial behaviour, greater psychological distress, and randomisation to receive FNP (Table 6). Again, no multi-collinearity was found between any of the candidate predictors in the multivariable model (VIF = 1.09). Three predictors of Medical S/AEs remained at the 0.05 significance level after adjusting for all other candidate predictors in the model (Table 6).

Participants with at least one Medical S/AE were more likely to have a limiting long-term illness (OR = 1.37 (CI = 1.01 to 1.88), p = 0.046), were more likely to score higher on a measure of psychological distress/mental health (OR = 1.03 (CI = 1.01 to 1.05), p = 0.004), and were more likely to be in the intervention arm of the trial (OR = 1.34 (CI = 1.07 to 1.70), p = 0.012).

Missing data were limited as baseline trial data were well completed, apart from two variables (NEET and relationship quality), which were omitted from the multivariable analyses.

Discussion

Most AEs reported to the Building Blocks trial were classified as being Medical SAEs or AEs of a physical nature. However, our finding that over 19% of events were Social AEs supports the idea that the GCP definition of AEs and SAEs cannot capture all events related to well-being and social circumstances that might be important for a public health or social care trial.

Reporting of AEs in trials requires a number of inter-related processes to occur (Fig. 2). First, there has to be a reportable event; therefore, an 'event' needs to be defined. Pre-existing factors related to the individual may affect this; for example, ongoing or intermittent ill-health, which may or may not be related to the individual's trial eligibility. Factors arising during the course of the trial will also affect this, perhaps most notably, but not solely, exposure to the intervention. Second, events need to be recognised as reportable, either by the individual participant or by a relevant professional. The pivotal factors at this stage are how observable the event is, and its severity. Third, a decision needs to be made to formally report. This may involve decision-making by the participant as well as a professional, and key to this will be an assessment of relevance (i.e. is the event of sufficient importance?). This is of course a judgement that can depend on many factors, e.g. the value placed on the particular event and whether it falls within the trial's pre-defined scope of interest. Finally, a mechanism needs to facilitate capture of the event. As we have seen in our trial, mechanisms for capture include direct reporting (e.g. to field or office-based researchers using standardised forms), identification through review of routine records, or identification via screening questions.
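The stages above can be thought of as sequential filters: an event reaches the trial dataset only if it passes every stage. A toy sketch illustrates how attrition compounds across stages; all probabilities here are hypothetical, chosen purely for illustration.

```python
# Hypothetical per-stage pass rates for a reportable event, following the
# four stages of the model: occurrence, recognition, decision, capture.
STAGES = {
    "occurs": 1.00,           # a reportable event exists
    "recognised": 0.80,       # observable/severe enough to be noticed
    "judged_relevant": 0.70,  # participant/professional decides to report
    "captured": 0.90,         # a reporting mechanism records it
}

def capture_rate(stage_probs):
    """Overall chance a reportable event is captured: product of stage rates."""
    rate = 1.0
    for p in stage_probs.values():
        rate *= p
    return rate
```

Under these illustrative figures, barely half of reportable events would ever reach the trial team, even though no single stage loses more than 30%, which is why optimisation efforts need to address every stage rather than any one in isolation.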

Fig. 2
figure2

A proposed model of adverse event presentation and reporting

How well a trial system can capture with precision all relevant events will depend on adequate progression through each of the stages described above. Clinical trials of investigational medicinal products, which may be most concerned with reporting serious Medical AEs, may fare better in adequately progressing through these required processes than trials of complex interventions where unexpected and undesirable impacts may be less tangible and arise within a broader social context. Defining undesirable impacts may be more complex in public health or social care trials, and the use of Public and Patient Involvement to assist with definitions may be particularly useful in some cases.

We hypothesised that lower baseline psycho-social status or poorer health status may increase the likelihood of both Medical and Social AEs. This relates to the first step of our model (i.e. pre-existing factors). Participants with existing conditions are more likely to continue or repeat experience of that condition. We also hypothesised that poorer psycho-social status would be a better predictor of Social AEs than of Medical AEs or SAEs, and this hypothesis was also supported. Our third hypothesis was that participants with at least one reported AE or SAE, regardless of whether a Medical or Social event, would be more likely to be in the intervention arm. This relates to the Recognition and Decision-making steps in our model. Women in receipt of the intervention were regularly in contact with a health professional who in turn also promoted their access to supportive services. The personal relationship between a participant and her Family Nurse would have meant an increased number of opportunities for observing events, and would also have increased the likelihood of the women disclosing a concern which they may not otherwise have presented to another health professional or researcher. We found trial arm to be a predictor of Medical S/AEs, but not of Social AEs, thus providing partial support for our hypothesis. It is possible that expected social concerns may have simply been addressed within the routine remit of the Family Nurse's work, rather than being documented or reportable as a trial AE. Our final hypothesis was that site-level differences would affect reporting of both Medical S/AEs and Social AEs. While we have been unable to fully explore this facet of the process in our analysis, factors that may vary by site cumulatively impact upon successive stages of event processing and are discussed more fully below. These factors could include local capacity, experience of field researchers, and the adequacy of training cascaded to local professional and research staff.
Table 7 summarises our hypotheses in relation to our results.

Table 7 Results in relation to hypotheses

Ways in which trial teams can optimise the capture of AEs are represented to the right of our model (Fig. 2). These include well-established practices such as having a clearly defined set of criteria for reportable events, awareness-raising amongst key stakeholders, and provision of accessible reporting forms. While for clinical trials of medical interventions the scope of reportable events is well established, this will need to be expanded for trials of complex public health and social care interventions. Adherence to these processes will need to be supported through training, performance monitoring and feedback mechanisms, which could involve one-to-one review of reported events and/or systematic assessment of sets of reported events. These combined processes are most likely to impact upon the Recognition, Decision-making and Reporting stages of the model.

Taking our trial as an example, process optimisation would involve training field and office-based researchers to ensure that AEs were collected in a standardised manner. It is important to collect AE data in a standardised manner to enable researchers to pool evidence from large trials [3], and standardisation also allows researchers to compare efficacy outcomes with reported AEs. There were some variations reported in the ways that AEs were collected in the Building Blocks trial, and this may have had a bearing on the proportion of AEs collected from each site. While advice was provided at the outset about what was reportable as an AE (i.e. a clear definition of what an event is), we revised this advice based on early experiences in the trial. Researchers were responsible for asking local health professional teams, for example, Family Nurses, to alert them to any AEs concerning Building Blocks participants. Stickers were also placed inside participant hospital notes alerting hospital staff to contact the researcher with details of any AEs. Having accessible reporting forms and other guidance defining what is reportable is key. Even though field and office-based researchers were trained in the collection of AEs, verbal reports alluded to some variation in the way that AEs were collected in practice. Some researchers reviewed hospital notes for AEs when collecting data for the birth data collection phase of the trial. While this was valuable for identifying some otherwise unreported events, clearer direction at the outset to target this activity would have reduced some apparent unhelpful variation by site. Scoring positively for items indicating serious abuse on a domestic abuse scale also triggered the completion of an AE form, and formally triangulating between data sources to identify AEs where possible might be another way to improve the collection of AEs.
It should also be noted that some events have a subjective element; for example, events related to mental health are probably more subjective than those relating to physical health, and the recognition of an event may be affected by its subjectivity. Other researchers have written about the importance of systematic collection of events in medical trials to produce reliable data [14] and to prevent biased reporting [15]. The subjectivity of mental health events may be a reason for the slightly lower agreement during classification when compared to physical events, which include more objectively observable descriptions (symptoms/signs/diagnoses); and for AEs (rather than SAEs) the rôle of subjective decision-making may be greater as the apparent severity is lower. In this study, although reporting systems for SAEs were systematic (i.e. using a common reporting form) and reporting came via multiple routes, there was not a wholly systematic process for their identification. Data collection could have been made more systematic in a number of ways in the current trial. For example, we could have asked all researchers either to periodically search notes for AEs (this was done by a proportion of researchers) or to do this at the end of the trial. While domestic violence was systematically screened for at the end of the study period and, where applicable, reported as an SAE, other items specifically designed to collect AE data could have been included at the various data collection stages. Tools such as MedDRA have been used for safety monitoring in drug trials; something similar could be used here, with supplementary items designed to capture social events. These amendments, however, would have increased costs and participant burden, and doing so would have to be balanced against the risk of missing such harms.

Improvements could also have been made in the training given to field and office-based researchers to ensure that AE forms were completed in a standardised manner. The quality of an individual case safety report depends on the accuracy and completeness of the information gleaned about the case [16], and the same applies to AE reporting in the Building Blocks trial. The need for training on completing a form should be balanced with ensuring that the forms are self-explanatory, as many health professionals completing them will do so without receiving any formal training. For example, as well as field and office-based researchers, other health professionals and even participants may provide information on AEs in the trial. Guidance on determining the expectedness of events was provided during training; however, some events reported as 'unexpected' were subsequently reclassified given the context of the Building Blocks trial.

Horigian et al. [17] listed five principles for defining AEs in behavioural research, and our own study can be viewed in light of these. Firstly, definitions should be grounded in previous research; secondly, queries on AEs should include domains plausibly affected by the interventions being tested. The current study also defined AEs in light of previous research but was more open in what was accepted as an AE report. This may have caused some problems, with too much interpretation by Research Nurses and too much variation in reporting by site; this issue was responded to with further training. Perhaps a framework of possible AEs should be put in place a priori which still allows unanticipated AEs to be observed and reported. Compared to some psycho-therapeutic settings, home-visiting is a more complex intervention and may impact on a broader range of outcomes, not solely for the participant (for example, there could be an impact on a partner, parent, etc.). Even though a logic model and previous literature can inform in advance what AEs may be likely, some flexibility within an overarching framework is helpful. Thirdly, monitoring should attempt to assess relatedness between interventions and AEs. We agree, but relatedness is perhaps even harder to establish when an intervention is delivered over such a long period (2.5 years) and where the intervention (in this case FNP) is also seeking to engage the client with a range of other services and social and family support; this simply adds to the complexity of causation. We agree with both the fourth principle, that systematic monitoring is essential for identifying unexpected events, and the fifth, that effective monitoring is a shared responsibility. In summary, the current piece of work provides support for Horigian's model in a different setting (community-based public health within families of young children).
As they comment in their paper on the need to test the utility of these principles in other settings, we provide some evidence of that generalisability. A robust theory identifying broad AE domains, in addition to more specific AEs, is essential to capture unexpected AEs, and training is essential to ensure that this happens in practice. Our study provides an example of where we aimed to capture AEs, specifically Medical or Social, although the approach of Horigian et al. [17] would probably address both. The approach to monitoring AEs in social and public health research is still limited and variable; our study perhaps identifies the need to better train staff to monitor AEs in more complex intervention settings rather than only with clinical patients.

Strengths and limitations

We have developed a simple classification scheme for monitoring reports of AEs, which explicitly accommodates social as well as medical events. This was developed over the course of an ongoing trial and, therefore, benefits from review and assessment of actual reports rather than hypothetical examples. Constructing the classification has benefited from the input of the trial team tasked with AE monitoring (including clinical input), which has also been involved in training research staff in collating reports in the field. The experience of discussing the purpose and practice of AE monitoring with this specific trial population has helped to clarify the purpose and scope of event monitoring. While the classification reflects a particular public health intervention and trial population, it nevertheless provides an example of how the existing GCP standard approach to reporting Medical AEs can be expanded to reflect the needs of a specific trial. Finally, while our classification distinguished reliably between Medical and Social AEs, a small number of 'Other events' were categorised as neither and excluded from further analysis. It is possible that further details of the reported event, or further review of the report received, would have resulted in reclassification as either a Medical or a Social event. However, it is probable that reports of other circumstances affecting trial participants would still be of some logistical or clinical value and, therefore, important to monitor.
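The reliability of independent reviewers classifying reports as Medical or Social can be quantified with Cohen's kappa for nominal scales [13]. A minimal sketch follows; the example ratings are hypothetical and do not reproduce the trial's classification data:

```python
# Sketch: Cohen's kappa for two raters classifying AE reports.
# Example labels and ratings below are hypothetical illustrations.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions, summed.
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in labels) / n**2
    return (observed - expected) / (1 - expected)

a = ["Medical", "Medical", "Social", "Social", "Medical", "Other"]
b = ["Medical", "Medical", "Social", "Medical", "Medical", "Other"]
print(round(cohens_kappa(a, b), 3))  # → 0.714
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which matters here because Medical reports dominate the sample (1033 of 1315) and chance agreement on the majority category is therefore high.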

The presented analysis benefited from a large sample that was well characterised at baseline, with dependent outcomes produced by a reliable coding process. Our examination of predictors was limited by the large number of levels for the 'principal site' variable. We are therefore unable to conclude whether apparent variation in reporting by site was due to differences in trial participants between sites or to site-level factors such as the local researcher. Given the large variation in event reporting rates between sites with similarly sized participant samples, non-participant-related factors are likely to be influencing reporting rates. This is important as such factors would represent unhelpful noise in investigators' attempts to accurately monitor safety and well-being for trial participants.
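The odds ratios and confidence intervals reported for the baseline predictors follow directly from the fitted logistic-regression coefficients. The sketch below shows that conversion; the coefficient and standard error values are hypothetical, not the trial's estimates:

```python
# Sketch: odds ratio and 95% Wald CI from a logistic-regression
# coefficient (beta) and its standard error (se).
# The numeric inputs below are hypothetical illustrations only.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR = exp(beta); 95% CI = exp(beta ± z·se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

or_, lo, hi = odds_ratio_ci(beta=0.47, se=0.11)
print(f"OR = {or_:.2f} (CI = {lo:.2f} to {hi:.2f})")
```

Because the confidence limits are exponentiated from a symmetric interval on the log-odds scale, the reported CIs are asymmetric around the odds ratio, as seen in the estimates quoted in the Results.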

Conclusions

Active, systematic safety monitoring that additionally focuses on Social AEs is rarely reported in public health and social care trials. In such trials, there are likely to be adverse experiences that are not medical but reflect social circumstances. A system of safety monitoring should be considered which includes both Medical and Social AEs. We recognise that this may result in a valid decision not to actively monitor AEs, based, for example, on likely frequency and severity. Collecting social events needs to be tailored to the circumstances of the trial and to reflect how the information is likely to be used. This could include assessing any unexpected adverse consequences of the intervention, more general safeguarding of participant well-being during a trial, identifying matters that need to be considered in running the trial (e.g. to avoid contacting participants in distress), and exploring more broadly the mechanism and wider impacts of an intervention (Fig. 2). How information about AEs will be used should be clearly stated by researchers and should guide decision-making about how best to resource and support high-quality data capture.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request, although additional processing would be required to ensure confidentiality.

References

  1. Allen EN, Chandler CI, Mandimika N, Barnes K. Eliciting adverse effects data from participants in clinical trials: The Cochrane Library; 2013. https://doi.org/10.1002/14651858.MR000039.pub2.
  2. Duggan C, Parry G, McMurran M, Davidson K, Dennis J. The recording of adverse events from psychological treatments in clinical trials: evidence from a review of NIHR-funded trials. Trials. 2014;15(1):335.
  3. Joelson S, Joelson IB, Wallander MA. Geographical variation in adverse event reporting rates in clinical trials. Pharmacoepidemiol Drug Saf. 1997;6(S3):31–35.
  4. Malone DG, Baldwin NG, Tomecek FJ, Boxell CM, Gaede SE, Covington CG, Kugler KK. Complications of cervical spine manipulation therapy: 5-year retrospective study in a single-group practice. Neurosurg Focus. 2002;13(6):1–8.
  5. Thiel HW, Bolton JE, Docherty S, Portlock JC. Safety of chiropractic manipulation of the cervical spine: a prospective national survey. Spine. 2007;32(21):2375–8.
  6. Carnes D, Mullinger B, Underwood M. Defining adverse events in manual therapies: a modified Delphi consensus study. Man Ther. 2010;15(1):2–6.
  7. Owen-Jones E, Bekkers MJ, Butler CC, Cannings-John R, Channon S, Hood K, Gregory JW, Kemp A, Kenkre J, Martin BC, Montgomery A, Moody G, Pickett KE, Richardson G, Roberts Z, Ronaldson S, Sanders J, Stamuli E, Torgerson D, Robling M. The effectiveness and cost-effectiveness of the Family Nurse Partnership home visiting programme for first time teenage mothers in England: a protocol for the Building Blocks randomised controlled trial. BMC Pediatr. 2013;13(1):114.
  8. Robling M, Bekkers MJ, Bell K, Butler CC, Cannings-John R, Channon S, Martin BC, Gregory JW, Hood K, Kemp A, Kenkre J, Montgomery A, Moody G, Owen-Jones E, Pickett KE, Richardson G, Roberts Z, Ronaldson S, Sanders J, Stamuli E, Torgerson D. Effectiveness of a nurse-led intensive home-visitation programme for first-time teenage mothers (Building Blocks): a pragmatic randomised controlled trial. Lancet. 2016;387(10014):146–55.
  9. Atkinson TM, Li Y, Coffey CW, Sit L, Shaw M, Lavene D, Bennett AV, Fruscione M, Rogak L, Hay J, Gönen M. Reliability of adverse symptom event reporting by clinicians. Qual Life Res. 2012;21(7):1159–64.
  10. Wasson JH, MacKenzie TA, Hall M. Patients use an Internet technology to report when things go wrong. Qual Saf Health Care. 2007;16(3):213–5.
  11. Hegarty K. Composite Abuse Scale manual. Melbourne: Department of General Practice, University of Melbourne; 2007.
  12. Hall D, Hall S. The ‘Family-Nurse Partnership’: developing an instrument for identification, assessment and recruitment of clients; 2007. p. 115. https://dera.ioe.ac.uk/6740/1/DCSF-RW022.pdf.
  13. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.
  14. Mayo-Wilson E, Fusco N, Li T, Hong H, Canner JK, Dickersin K. Harms are assessed inconsistently and reported inadequately; part 1: systematic adverse events. J Clin Epidemiol. 2019;113:20–27.
  15. Mayo-Wilson E, Fusco N, Hong H, Li T, Canner JK, Dickersin K. Opportunities for selective reporting of harms in randomized clinical trials: selection criteria for nonsystematic adverse events. Trials. 2019;20(1):553.
  16. Klepper MJ, Edwards B. Individual case safety reports—how to determine the onset date of an adverse reaction. Drug Saf. 2011;34(4):299–305.
  17. Horigian VE, Robbins MS, Dominguez R, Ucha J, Rosa CL. Principles for defining adverse events in behavioral intervention research: lessons from a family-focused adolescent drug abuse trial. Clin Trials. 2010;7:58–68.

Acknowledgements

The authors would like to thank:

• All Building Blocks core project team members: Marie-Jet Bekkers, Kerry Bell, Kristina Bennert, Christopher C Butler, Sue Channon, Belen Corbacho Martin, John W Gregory, Kerry Hood, Alison Kemp, Joyce Kenkre, Lesley Lowes, Alan A Montgomery, Eleri Owen-Jones, Kate Pickett, Gerry Richardson, Zoë E S Roberts, Sarah Ronaldson, Eugena Stamuli, Jackie Swain, David Torgerson

• Participating local centres and all others who reported AEs to the Building Blocks trial

• The Building Blocks trial participants

• Eleri Owen-Jones, Katy Addison, and Jackie Swain for their help in processing AEs

• Staff of the Department of Health Policy Research Programme Central Commissioning Facility

• The Centre for Trials Research (CTR) is funded through the Welsh Government by Health and Care Research Wales, and by Cancer Research UK; the authors gratefully acknowledge the CTR’s contribution to study implementation and the funding

Funding

This is an independent report commissioned and funded by the Policy Research Programme in the Department of Health. The views expressed are not necessarily those of the Department. The Policy Research Programme commissioned and funded the Building Blocks trial but was not involved in the design of the trial; the collection, analysis, or interpretation of the data; or the writing of the manuscript.

Author information

Affiliations

Authors

Contributions

Study conception: MR, RCJ, JS, GM. Drafting manuscript: GM. Statistical lead: RCJ. Statistical analysis: GM. Coding of AE forms: CW. MR, RCJ, JS, GM, CW, and KA critically reviewed and approved the final version of the submitted manuscript. MR is the Chief Investigator of the Building Blocks trial.

Authors’ information

Not applicable

Corresponding author

Correspondence to Gwenllian Moody.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Research Ethics Committee for Wales (ref. no. 09/MRE09/8). Informed consent was obtained from each participant before data collection and randomisation.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Moody, G., Addison, K., Cannings-John, R. et al. Monitoring adverse social and medical events in public health trials: assessing predictors and interpretation against a proposed model of adverse event reporting. Trials 20, 804 (2019). https://doi.org/10.1186/s13063-019-3961-8

Keywords

  • Safety monitoring
  • Adverse event
  • Serious adverse event
  • Public health
  • Clinical trials
  • Home visiting