Effects of the Informed Health Choices podcast on the ability of parents of primary school children in Uganda to assess the trustworthiness of claims about treatment effects: one-year follow up of a randomised trial

Abstract

Introduction

Earlier, we designed and evaluated an educational mass media intervention for improving people’s ability to think more critically and to assess the trustworthiness of claims (assertions) about the benefits and harms (effects) of treatments. The overall aims of this follow-up study were to evaluate the impact of our intervention 1 year after it was administered, and to assess retention of learning and behaviour regarding claims about treatments.

Methods

We randomly allocated consenting parents to listen to either the Informed Health Choices podcast (intervention) or typical public service announcements about health issues (control) over 7–10 weeks. Each intervention episode explained how the trustworthiness of treatment claims can be assessed using relevant key concepts of evidence-informed decision-making. Participants listened to two episodes per week, delivered by research assistants. We evaluated outcomes immediately and 1 year after the intervention. Primary outcomes were the mean score and the proportion with a score indicating a basic ability to apply the key concepts (at least 11 of 18 correct answers) on a tool measuring people’s ability to critically appraise the trustworthiness of treatment claims. Skills decay/retention was estimated by calculating the relative difference between the follow-up and initial results in the intervention group, adjusting for chance. Statistical analyses were performed using R (R Core Team, Vienna, Austria; version 3.4.3).

Results

After 1 year, the mean score for parents in the intervention group was 58.9% correct answers, compared to 52.6% in the control group (adjusted mean difference 6.7% (95% CI 3.3% to 10.1%)). In the intervention group, 47.2% of 267 parents had a score indicating a basic ability to assess treatment claims, compared to 39.5% of 256 parents in the control group (adjusted difference of 9.8% more parents (95% CI 0.9% to 18.9%)). Relative to the results immediately after the intervention, these represent reductions of 29% in the mean score and 33% in the proportion of parents with a score indicating a basic ability to assess the trustworthiness of claims about treatment effects.

Conclusions

Although listening to the Informed Health Choices podcast initially led to a large improvement in parents’ ability to assess claims about the effects of treatments, these skills decreased substantially over 1 year. More active practice could help address this skills decay.

Trial registration

Pan African Clinical Trial Registry (www.pactr.org), PACTR201606001676150. Registered on 12 June 2016.

What is already known?

In a trial conducted in 2016, the Informed Health Choices podcast was effective in improving people’s ability to critically assess the trustworthiness of claims about treatment effects immediately after the intervention.

What are the new findings?

The effect of the Informed Health Choices podcast on people’s ability to appraise the trustworthiness of claims about treatment effects reduced markedly in the year after implementation of the intervention, indicating a substantial skills decay.

What do these findings imply?

The effect of the Informed Health Choices podcast on people’s ability to think critically about claims about the effects of treatments likely reduces markedly with time in the absence of additional intervention or regular practice. For learning to be sustained, consideration should be given to reinforcing the messages of the podcast.

Background

Many countries and societies today face an overabundance of claims (things people say) about the effects of treatments and advice about what we should do to improve or maintain our health [1,2,3,4,5,6]. Some of these concern medical or surgical interventions; others concern preventative or palliative individual and public health interventions. These claims have increased in frequency, geographical reach and speed of spread as access to information, the Internet and social media has grown [7,8,9,10]. Many of these claims are not based on trustworthy evidence [11,12,13,14] and represent a portion of what some people term “fake” health news, advice or stories. Many people lack the skills to critically appraise the trustworthiness of claims about the effects of treatments, yet often act on them when making choices about treatments [15,16,17,18,19,20,21,22,23,24]. Poorly informed health choices can result in overuse of ineffective or harmful treatments (actions intended to maintain or improve the health of individuals or communities), underuse of effective treatments, waste and unnecessary suffering [25,26,27,28,29]. Making well-informed choices about treatments is especially important in low-income countries, which have few resources to waste and where the repercussions of poor health choices are likely to be greater [30,31,32,33,34]. However, there are few resources for teaching people without a health or research background to think more critically about claims about treatments, and few studies have evaluated the effects of interventions to teach patients or the public to think critically about health choices [35, 36]. As part of the Informed Health Choices (IHC) project [37], we developed a mass media intervention (an edutainment podcast) to help fill this gap.

We began by identifying key concepts that people must understand and apply when assessing claims about treatments [38, 39]. We call these the Informed Health Choices (IHC) key concepts. Together with journalists in Uganda, we assessed which key concepts are most important for the public to understand [40]. Our mass media intervention was developed to teach 9 of the (now 49) IHC key concepts (Table 1) to parents of primary school children [41].

Table 1 Key concepts included in the IHC mass media (podcast) and primary school resources

Description of the intervention (the Informed Health Choices podcast)

We developed pre-recorded audio messages teaching how to critically appraise the trustworthiness of claims about the effects of treatments. The podcast had 13 episodes in both English and Luganda, a local language widely spoken in the study area: an introduction to the series; eight main episodes; three short recap episodes, each of which summarised two of the first six main episodes; and a conclusion. Each of the eight main episodes included a short story with an example of a treatment claim, an explanation of the IHC key concept applied to the claim, and another example within the same story illustrating the concept. The examples of claims were identified by scanning recent mass media reports and by interviewing parents. We also gave the parents a checklist summarising the key messages of the podcast and a song (the IHC theme song) to reinforce those messages [42]. The podcast is available online at https://www.youtube.com/watch?v=_QVdkJIdRA8&list=PLeMvL6ApG1N0ySWBxPNEDpD4tf1ZxrBfv.

As described in the report of the initial results, research assistants delivered the intervention to parents on multimedia players in the participants’ workplaces and/or homes over a period of 7–10 weeks. Parents listened to two new episodes each week and a recap of the previous episodes. Following this observed listening, they were given the content of the podcast on portable multimedia players to listen to on their own before they completed the evaluation tool [43]. This information has been presented before [43]; mindful of self-plagiarism, we repeat it here only for the reader’s convenience, in case it is not easily found in the previous publications.

In 2016, we conducted a randomised trial to evaluate the effects of the IHC podcast on the ability of parents in Uganda to apply key concepts of evidence-informed decision-making when appraising the trustworthiness of claims about the effects of treatments. The trial showed that parents who listened to the IHC podcast had a large improvement in their ability to assess claims about treatment effects shortly after listening to all of the episodes [43]. We also developed learning resources to teach 12 of the key concepts (Table 1) to children in the fifth year of primary school in Uganda. A linked cluster-randomised trial showed that the IHC primary school intervention also had a large effect on the ability of the children to apply those IHC key concepts [44].

Outcomes were assessed immediately after the 7–10-week intervention period and again after 1 year. In this report we present the methods and results of the 1-year follow-up study of the effects of the educational podcast. The main aim of the follow-up study was to evaluate parents’ ability to assess the trustworthiness of claims about the effects of treatments a year after listening to the podcast. This would enable us to determine how much of the critical appraisal skill learned was retained, overall and for each IHC key concept. Many clinical trials have short follow-up periods and are implemented in highly controlled environments with highly selective outcomes prespecified by investigators. Although follow-up studies can be logistically challenging, they can provide valuable information about the longer-term effects (benefits and harms) and costs of health interventions that investigators were not able to obtain during the initial trial follow-up period. We also aimed to assess if and how parents were able to apply their newly learned key concepts in making decisions about treatments in the year following the intervention, and their intended behaviours going forward. The results of the sister studies - the 1-year follow-up study of the primary school resources and the process evaluations of the podcast and the primary school resources - are reported in companion articles elsewhere [45,46,47].

Methods

This was a follow-up assessment of a parallel-group randomised trial comparing the IHC podcast for teaching critical appraisal skills with a series of recordings designed to sound like typical public service announcements about health issues. Details of the study methods can be found in the trial protocol [48] and the report of the initial results [43]. Some of the information in this section has been presented in some form in our earlier publications [43, 48]; mindful of self-plagiarism, we reuse it here only to provide clarity for readers who may have difficulty accessing those publications, and we have done our best to acknowledge and reference it appropriately.

Eligibility

Parents in the 1-year follow-up study were those who participated in the randomised trial that evaluated the impact of the IHC podcast in 2016 [43]. To participate in that study, parents had to understand English or Luganda and provide written consent. We excluded parents who were unable to hear or were not contactable by telephone, health researchers, and those who had participated in the development of the podcast. Parents of children who participated in the development of the primary school resources were also excluded.

Participants

The study was conducted in central Uganda. As reported previously [43, 48], we recruited parents and guardians of children in the fifth year of primary school who were participating in the IHC primary school intervention trial [44]. Parents were recruited from both intervention and control schools. We recruited a convenience sample of participants at parent meetings held at 20 intervention schools and 15 control schools, between late August and early November 2016. Of the 675 parents who consented and were randomised, 561 (83%) completed the test used to measure their ability to assess claims about the effects of treatments shortly after listening to the podcast, in 2016. We attempted to follow up all 561 parents 1 year after they completed the test. We contacted those who were still reachable by phone and asked them to complete the test again.

Randomisation and masking

We stratified the parents by highest level of formal education attained (primary school, secondary school or tertiary education) and the allocation of their children’s school in the trial of the primary school resources (intervention or control). We generated randomisation sequences with block sizes of four and six with equal allocation ratios within each block, using www.sealedenvelope.com. A statistician who was not a member of the research team generated the allocation sequence, and together with his team prepared six randomisation lists (one for each combination of the two stratification variables) with unique codes. They labelled opaque envelopes with the unique codes, inserted slips of paper with the study group allocated to each code and sealed them. We allocated groups of participants at the end of each day on which a meeting was held. Upon return to the trial management office, the research assistant responsible for allocation opened the next available envelope in the stratum corresponding to each parent’s education level and whether the child of that parent went to a school in the intervention or control arm of the primary school resources trial.

The research assistants who delivered the podcast, the principal investigators supervising them (DS and AN), the study participants and the statistician who analysed the data all knew whether the participants received the IHC podcast or the public service announcements. To ensure uniform performance in delivering the podcast and the public service announcements and in assessing outcomes, all study staff were trained before the start of the trial and received refresher training during the trial.

Procedures

Participants could choose whether they wanted to listen to the podcast or the announcements in English or Luganda. Participants in the control group listened to typical public service announcements about the same conditions that were used in the IHC podcast. The podcast and the public service announcements were produced in collaboration with a Ugandan radio producer and actors. Research assistants helped with recruitment, delivery of the podcast, follow up, and administration of the test used as the outcome measure. They delivered episodes of the podcast or the public service announcements to the participants over a period of 7–10 weeks. To ensure that the participants listened to each episode (or announcement), the research assistants visited each participant once per week, delivering two episodes via a portable media player and speaker. In addition to listening to the episodes delivered by the research assistants, we provided participants with the complete podcast and the IHC theme song on MP3 players, so that they could replay them at their convenience.

The test included 18 multiple-choice questions from the Claim Evaluation Tools database [49,50,51] - two for each of the nine IHC key concepts (Additional file 1). Because many parents did not have English as their first language and many had poor reading skills, we developed a Luganda audio version of the test to be administered by an interviewer [52]. We were careful to ensure that the examples used in the questions were different from what was used in the podcast, and that participants would be able to understand the language that was used without having listened to the podcast. For the 1-year follow up, participants answered the same 18 questions that they answered initially. Research assistants visited the participants individually and administered the tests.

The questions had between two and four response options, with an overall probability of answering 37% of the questions correctly by chance alone. We used an absolute (criterion-referenced) standard to set a cutoff for a passing score (11 out of 18 questions (61%) answered correctly) and a mastery score (15 out of 18 questions (83%) answered correctly) [53].
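
As an illustration of where a figure like 37% comes from (a minimal sketch in R; the exact mix of two-, three- and four-option questions is not reported here, so the counts below are hypothetical):

```r
# Expected score from guessing alone: the sum of 1/k over questions with k
# response options. The mix of option counts below is hypothetical, chosen
# only to show how 18 questions with 2-4 options can give ~37% by chance.
n_options <- c(rep(2, 6), rep(3, 8), rep(4, 4))
expected_correct <- sum(1 / n_options)          # about 6.7 of 18 questions
round(expected_correct / length(n_options), 2)  # 0.37
```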

The test included eight additional multiple-choice questions, making 26 in total. These questions addressed four IHC key concepts not covered by the podcast (Table 1). They were included because the same test was used in the linked randomised trial evaluating the primary school resources, in which those key concepts were covered [44]. Responses to these eight questions were not included in the primary analyses of the podcast trial. The test also included questions assessing intended behaviours and self-efficacy.

We calculated retention of what was learned by parents in the podcast group. Retention is reported as the test scores in the podcast group after 1 year relative to their test scores shortly after listening to the podcast. Retention of the mean score is adjusted for chance by subtracting the probability of answering questions correctly by chance (37%) from both means. These analyses were not specified in the protocol, but we conducted them to help interpret the results.
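
As a worked illustration (a minimal sketch in R, using the podcast group's mean scores reported in the Results):

```r
# Chance-adjusted retention: the share of the initial above-chance score that
# remains after 1 year. 37% is the expected score from guessing alone.
chance   <- 37.0
initial  <- 67.8  # mean score (%) shortly after listening to the podcast
followup <- 58.9  # mean score (%) 1 year later

retention <- (followup - chance) / (initial - chance)
round(retention, 2)      # 0.71, i.e. 71% retention
round(1 - retention, 2)  # 0.29, i.e. a 29% relative reduction
```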

In the test taken after 1 year, we also collected data on self-reported behaviours. We made the comparisons shown in Tables 2, 3 and 4, with the hypotheses shown in Table 2. These also were not specified in the original protocol for the trial but were planned prior to collecting the 1-year follow-up data.

Table 2 Comparisons related to self-reported behaviours in the 1-year follow up
Table 3 Consistent (correct) answers regarding certainty about treatment claimsa
Table 4 Exclusion criteria for self-reported behaviours

The trial employed 29 research assistants, each of whom was allocated up to 25 participants to follow up and deliver the interventions to. They were allocated either control or intervention participants but not both. The research assistants kept logs, including reasons for dropping out, and they recorded any unexpected adverse events. We also collected in-depth qualitative data from interviews and focus group discussions on potential adverse effects in the process evaluation [46].

The investigators conducted the follow-up assessment, with the help of research assistants. Given the nature of the intervention it was not possible to blind the outcome assessors.

Outcomes

The primary outcomes were:

  1. Mean score (percent of correct answers) on the test taken a year after listening to all the podcast episodes or all the public service announcements

  2. Proportion of participants with a score indicating a basic understanding and ability to apply the key concepts

Secondary outcomes were:

  1. Retention of what was learned

  2. Proportion of participants with a score indicating mastery of the concepts

  3. For each IHC key concept, the proportion of participants answering both questions correctly

  4. Intended behaviours and self-efficacy

  5. Self-reported behaviours

  6. Mean scores for the parents whose children were included in the intervention arm of the trial of the IHC primary school resources (to assess any effect of having a child in the intervention arm of a related trial teaching children the same concepts)

Statistical analysis

We estimated that 397 participants were needed to detect an improvement of 10% in the podcast group, based on a method described by Donner [54] and as described previously [43]. Allowing for 20% loss to follow up, we estimated that we would need a sample size of 497 participants. Participants’ data were analysed according to their allocated group (intention to treat).
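
The allowance for loss to follow up is simple arithmetic (a sketch):

```r
# Inflate the required sample size to allow for an anticipated 20% loss to
# follow up: 397 / (1 - 0.20) = 496.25, rounded up to 497.
n_required <- 397
loss       <- 0.20
ceiling(n_required / (1 - loss))  # 497
```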

For the primary and secondary outcomes, we modelled the two stratification variables (education level and child’s school allocation in the IHC primary school trial) as fixed effects, using logistic regression for dichotomous outcomes and linear regression for continuous outcomes. Missing values were counted as wrong answers. For intended behaviours and self-efficacy, we dichotomised each outcome by combining categories, for example (1) “very likely” with “likely” and (2) “very unlikely”, “unlikely” and “don’t know” with missing responses. We reported the proportion in each category and in the combined categories (“likely or very likely” in this example).
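
A minimal sketch of these models in R (the data frame `parents` and the variable names `score`, `pass`, `group`, `education` and `school_arm` are hypothetical; the actual analysis code is not reproduced here):

```r
# Dichotomous outcome (e.g. having a passing score): logistic regression with
# the allocated group and the two stratification variables as fixed effects.
fit_pass <- glm(pass ~ group + education + school_arm,
                data = parents, family = binomial)

# Continuous outcome (e.g. percent of correct answers): linear regression
# with the same fixed effects.
fit_score <- lm(score ~ group + education + school_arm, data = parents)

summary(fit_pass)
summary(fit_score)
```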

For comparisons of how frequently participants reported hearing treatment claims, we analysed the ordinal data using ordinal logistic regression. We also dichotomised the responses (one claim or more most days or most weeks versus most months, almost never, do not know or missing) and analysed them using logistic regression. The responses for the other comparisons were also dichotomised (Table 2).
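
A sketch of the ordinal analysis using the MASS package (listed among the packages used), again with hypothetical variable names:

```r
library(MASS)

# Ordinal outcome: reported frequency of hearing treatment claims.
# claim_frequency must be an ordered factor, e.g.
# ordered(..., levels = c("almost never", "most months", "most weeks", "most days")).
fit_freq <- polr(claim_frequency ~ group + education + school_arm,
                 data = parents, Hess = TRUE)

# Dichotomised version of the same outcome, analysed with logistic regression.
parents$hears_often <- parents$claim_frequency %in% c("most days", "most weeks")
fit_bin <- glm(hears_often ~ group + education + school_arm,
               data = parents, family = binomial)
```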

Because these questions had not been validated previously, we used open-ended questions to validate the answers to the preceding question about the type of treatment and to check that participants understood what a treatment claim is. We coded answers to these questions as correct or incorrect and excluded from the comparisons in Table 2 all participants who did not correctly identify the type of treatment (Table 4) or who did not report a treatment claim. We also excluded participants who responded: “I have never heard of any treatment claims.” For the comparisons about a claim about a treatment for which they made a decision, we excluded participants who responded: “I have never decided to use or not to use a treatment.” We assessed the consistency of answers by matching participants’ responses with the basis of the treatment claim, as shown in Table 3. Additionally, we developed exclusion criteria for consistent responses across behaviour-related questions, as outlined in Table 4.

To explore the risk of bias due to attrition, which was larger in the control group than in the podcast group, we conducted two sensitivity analyses. First, we calculated Lee’s treatment effect bounds [55] on the mean difference in test scores, which provide worst-case and best-case estimates of the difference under extreme assumptions about the effect of possible non-random attrition. The bounds are constructed by trimming the group with less attrition at the upper and lower tails of the outcome (test score) distribution, respectively. In this analysis, the sample was trimmed in the podcast (intervention) group so that the proportion of parents included in the analysis was equal in both groups. We did not adjust for covariates in this analysis. Second, we reanalysed the results for the primary outcomes on the initial test, excluding parents who did not complete the 1-year follow-up test.
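
A minimal sketch of the trimming procedure (our illustration of Lee's method under the stated assumptions, not the analysis code itself):

```r
# Lee (2009) bounds on the unadjusted difference in mean test scores.
# The group with less attrition (here the podcast group) is trimmed at the
# top of its score distribution for the worst case and at the bottom for the
# best case, so the retained share matches the control group's.
lee_bounds <- function(score_podcast, score_control, share_podcast, share_control) {
  trim <- (share_podcast - share_control) / share_podcast  # share to trim away
  k <- floor(length(score_podcast) * trim)
  s <- sort(score_podcast)
  n <- length(s)
  c(lower = mean(s[1:(n - k)]) - mean(score_control),  # drop the k highest scores
    upper = mean(s[(k + 1):n]) - mean(score_control))  # drop the k lowest scores
}

# e.g. with 80% follow up in the podcast group and 75% in the control group:
# lee_bounds(score_podcast, score_control, 0.80, 0.75)
```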

We explored whether there were differences in the effects of the podcast on parents depending on whether they had a primary, secondary or tertiary education level. We also explored whether there were differences in the effects of the podcast on parents who had a child in a school that received the IHC primary school resources and those whose children were in a control school. These analyses were adjusted for whether the child was in an intervention school and the parent’s level of formal education, respectively, which were our stratification variables at randomisation. Odds ratios from the logistic regression analyses were converted to risk differences using the control group odds as the reference, multiplying that by the odds ratio to estimate the intervention group odds, and converting the control and intervention group odds to probabilities to calculate the difference.
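
The conversion described in the last sentence can be written as a small helper function (a sketch):

```r
# Convert an adjusted odds ratio to a risk difference, using the control
# group's proportion as the reference, as described above.
or_to_rd <- function(or, p_control) {
  odds_control <- p_control / (1 - p_control)     # control proportion -> odds
  odds_podcast <- or * odds_control               # apply the odds ratio
  p_podcast <- odds_podcast / (1 + odds_podcast)  # odds -> proportion
  p_podcast - p_control                           # risk (proportion) difference
}
```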

We calculated the adjusted standardised mean difference (Hedges’ g) [56] for comparison to effect sizes reported in a meta-analysis of the effectiveness of other interventions to improve critical thinking [57]. The statistical analyses were performed using R (R Core Team, Vienna, Austria; version 3.4.3; using packages MASS, tidyverse, compute.es, knitr, kableExtra, scales, and digest).
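
Hedges' g is the difference in means divided by the pooled standard deviation, multiplied by a small-sample correction; a sketch (the analysis itself used the compute.es package):

```r
# Hedges' g: Cohen's d with the small-sample correction factor J.
hedges_g <- function(m1, m2, sd1, sd2, n1, n2) {
  sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d <- (m1 - m2) / sd_pooled
  d * (1 - 3 / (4 * (n1 + n2) - 9))  # correction factor J
}
```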

Patient and public involvement

We constituted an advisory panel of members of the public, who advised on the design of the intervention (the IHC podcast). We worked with members of the public to refine prototypes of the podcast through iterative processes of human-centred design. Members of the public contributed ideas for the drama skits, the presentation, episode stories, explanations and examples, among other elements. We conducted user tests and used the feedback from members of the public to improve the podcast. Some participants helped with recruitment by inviting their colleagues to recruitment meetings. The results will be shared with and explained to the parents.

Results

Out of 675 parents who agreed to participate and could be reached by phone, 334 were randomly allocated to listen to the podcast and 341 to the public service announcements (control) group (Fig. 1). In the podcast group, 288 parents (86.2%) completed the test initially and 267 parents (80%) completed the test again after 1 year. In the control group, 273 (80.1%) completed the test initially and 256 parents (75%) completed the test again after 1 year. Education, sex, sources of health care and sources of advice about treatments were similar for parents in the podcast and control groups, both initially and at the 1-year follow up (Table 5).

Fig. 1 Informed Health Choices (IHC) podcast trial profile

Table 5 Characteristics of the participants

After 1 year, more parents in both the podcast and control groups responded that they had training in research. There was a larger increase in the number of parents who reported prior participation in research in the control group (from 27% to 60%) than in the podcast group (from 25% to 35%). This likely reflects participation in this study, and possibly a difference in whether they perceived their participation in this study as participation in research.

Nearly half the parents had no more than primary school education. About three quarters were women. The median age was 36 years (25th to 75th percentile, 31–43) in the podcast group and 38 years (25th to 75th percentile, 32–45) in the control group. The participants reported most commonly seeking health care at government or private for-profit facilities and they were most likely to seek advice about treatments from health workers.

Primary outcomes and sensitivity analyses

After 1 year, the mean score for parents in the podcast group had fallen from 67.8% immediately after listening to the podcast to 58.9%, whereas there was little change in the control group, which scored 52.6% after 1 year (up from 52.4%) (Table 6 and Fig. 2). The adjusted difference in mean scores between the podcast and control groups was 6.7% (95% CI 3.3% to 10.1%; p = 0.0001) after 1 year, compared to 15.5% immediately after listening to the podcast.

Table 6 Main results
Fig. 2 Test score distributions. Distribution of participants’ test scores from the test performed immediately after the intervention and that performed 1 year later

In the podcast group, 47.2% of the parents had a pass score after 1 year (down from 70.5%), compared to 39.5% in the control group (up from 37.7%) (Table 6). The adjusted difference (based on the odds ratio from the logistic regression analysis) was 9.8% more parents who passed (95% CI 0.9% to 18.9%; p = 0.03) in the podcast group than in the control group (compared to 34.0% more parents initially).

We conducted two sensitivity analyses to assess the potential risk of bias from attrition - parents who did not take the 1-year follow-up test. First, we calculated Lee’s treatment effect bounds for the mean difference in test scores. This resulted in a lower (worst case) and upper (best case) mean difference of 6.2% and 6.7%, respectively (95% CI 1.8% to 9.3%) (Table 7). This indicates that in the worst-case scenario, parents who listen to the podcast would be expected to score at least 6.2% higher on the test compared to parents who listen to typical public service announcements about health issues, and that this difference is statistically significant. Second, we calculated the adjusted mean difference and the adjusted difference in the proportion of parents with a passing score shortly after listening to the podcast (initial test), excluding participants lost to 1-year follow up. There was little difference between these analyses and the primary analyses, again indicating that there was little bias from attrition.

Table 7 Sensitivity analyses

Secondary outcomes

Skills retention: there was a 29% relative reduction in the average ability of the parents in the podcast group over the year after listening to the podcast (71% retention, adjusted for chance) (Table 8). The relative reduction in the proportion of parents with a passing score was 33% (67% retention). For comparison, we present the results for the parents together with those from the sister trial involving their children.

Table 8 Skill retention of parents and children

In the podcast group, 19.5% of the parents had a score indicating mastery of the nine IHC key concepts after 1 year (down from 31.6%) compared to 10.5% of the parents in the control group (up from 6.2%). The adjusted difference was 9.8% more parents with a mastery score (95% CI 2.8% to 19.6%; p = 0.003) in the podcast group than in the control group (compared to 26.0% initially).

The proportion of parents who answered both questions correctly for each IHC key concept addressed in the podcast was higher in the podcast group than in the control group for eight of the nine key concepts (Additional file 2: Table S1). However, the differences were small for seven of those key concepts (3.3% to 9.4%; p = 0.03 to 0.43) compared to the initial results. There was no clear difference for the key concept that treatments have both beneficial and harmful effects (adjusted difference 0.0%; 95% CI − 8.4% to 9.0%; p = 0.99), whereas for the closely related key concept that treatments can harm, 19.5% more participants in the podcast group answered both questions correctly (95% CI 10.4% to 28.6%; p < 0.0001). In contrast, the proportions of parents who answered both questions correctly were between 13% and 35% higher in the podcast group for all nine concepts initially.

We detected no clear difference after 1 year between the podcast and control groups in how likely they would be to find out the basis for a claim about treatment effects or to find out if the claim was based on research (Additional file 2: Table S2). Parents in the podcast group were 12.6% less likely than parents in the control group to agree to participate in research about an illness they might get (95% CI − 22.3% to − 4.8%; p = 0.0005), whereas there was little if any difference initially. Most parents in both groups (65–86%) responded positively to all three of these questions.

Initially, parents in the podcast group were more likely than parents in the control group to respond that they found it easy or very easy to assess whether a treatment claim is based on research; to find research-based information about treatments; to assess how confident they could be about research results; and to assess the relevance of research. After 1 year, the proportion of parents in the podcast group who found these tasks to be easy or very easy decreased and there was no clear difference between the podcast and control groups (Additional file 2: Table S3).

There was little difference in how frequently parents in the podcast and control groups heard treatment claims (Additional file 2: Table S4). In the podcast group 62.2% of the parents reported hearing one or more claims most days or most weeks compared to 55.5% in the control group (adjusted difference 7.6% more in the podcast group; 95% CI − 1.0% to 15.4%; p = 0.08). The proportion of parents who responded that they thought about the basis for the last claim they heard was lower in the podcast group than in the control group (adjusted difference 8.2% less in the podcast group; 95% CI − 17.3% to 0.0%; p = 0.05) (Additional file 2: Table S5). However, parents in the podcast group were less likely to be very sure or not to know how to assess how sure they should be (adjusted difference 20.9% less in the podcast group; 95% CI − 29.9% to − 2.0%; p < 0.0001) (Additional file 2: Table S6). Parents in the podcast group were also less likely to be very sure about the advantages and disadvantages of the most recent treatment they used (adjusted difference 13.3% less in the podcast group; 95% CI − 19.9% to − 5.5%; p = 0.001) (Additional file 2: Table S7).

There was no clear difference in the proportion of parents whose assessment of the trustworthiness of the last claim they heard was consistent with what they identified as the basis for the claim (adjusted difference 3.8% more in the podcast group; 95% CI − 2.8% to 12.3%; p = 0.30). There was also little if any difference in the proportion of parents who responded that they were not sure because they did not know about the disadvantages.

The standardised mean difference (Hedges’ g) was 0.32 (95% CI 0.15 to 0.50). None of the parents or research assistants who delivered the podcasts reported any adverse effects.

Subgroup analyses

The podcast was effective across parents with different levels of education (Additional file 2: Table S8). However, there was an interaction between parental education and the size of the podcast’s effect: the effect was largest for parents with tertiary education and smallest for parents with secondary education. There was also an interaction between having a child in a school that used the IHC primary school resources and the size of the effect (Additional file 2: Table S9): the effect of the podcast was smaller in parents who had a child in an intervention school. Neither of these interactions was consistent with what we had hypothesised, and we did not detect interactions for either of these factors in the initial results [44].

Overall, the mean score (percentage of correct answers) for parents with a child in an intervention school was 4.2% higher than that for parents with a child in a control school (95% CI 0.7% to 7.7%; p = 0.02), and 11.9% more such parents had a passing score (95% CI 2.8% to 21.2%; p = 0.01) after 1 year (Additional file 2: Table S10). This is in contrast to the initial results, when we did not find an association between having a child in a school that used the primary school resources and parents’ test scores.

Discussion

The size of the effect of the IHC podcast decreased substantially over 1 year, largely because the parents did not retain what they had learned. In contrast, the children who were in schools that used the IHC primary school resources retained what they learned [44]. Moreover, after 1 year, parents who listened to the podcast were less likely than parents in the control group to have thought about the basis for the last claim that they heard and less likely to agree to participate in research; and their subjective ability to assess the trustworthiness of claims had decreased. On the other hand, they were less likely to be very sure or not to know how to assess how sure they should be about the last treatment claim that they heard.

There are several possible explanations for these findings. The decrease in scores in the podcast group might be due to the parents not regularly using what they had learned 1 year previously. Results of other studies on skill retention and skill decay have identified substantial skill loss with non-practice [58, 59].

There was a 33% relative decrease in the proportion of parents who had a passing score compared to a 16% relative increase for the children in intervention schools in the IHC primary school trial [44]. Differences between the interventions and differences between adults and children might explain this difference in retention.

We expected a larger effect for the children, because the primary school intervention was multifaceted, actively engaged the children and involved more time (about 12 h over 10 to 12 weeks compared to about 1.5 h over 7–10 weeks). Active, collaborative learning is generally more effective than passive learning and may improve retention [60]. Spaced practice, with intervals between learning sessions, has been found to improve long-term retention [59, 61]. Listening to the podcast did not include any practice, other than encouraging the parents to think carefully when they hear a claim. Learners need immediate practice to move information from working memory to long-term memory [62]. Just seeing or hearing new concepts may not be enough for learning. The mind has to do some work with new information before it is reliably stored in memory.

Another potentially important difference between the podcast and the primary school interventions is that parents listened to the podcast alone. People learn from one another [63]. The research assistants who delivered the podcast episodes did not discuss the podcast with the parents. Although some parents shared the podcast with others [47], we did not actively encourage discussion of it. In contrast, the primary school intervention took place in classrooms with discussion, modelling and opportunities for observation and imitation. Teachers were also able to make adjustments to help ensure the children’s understanding by asking questions, using additional examples, providing additional explanations, working through activities and reviewing exercises together.

In addition to differences between the interventions, there are differences in learning between children and adults. Children are expected to be in school learning, whereas adults have other responsibilities. Adults are also more likely to have well-established routines, they may expect learning to come effortlessly, and they may be less able than children to learn cognitive skills [62]. It can take time and many demonstrations to convince adult learners of the superiority of new routines over old-established ones. They have had their misconceptions for longer than children, and may not recognise them or see them as dysfunctional. For example, some parents who participated in the process evaluation had strong prior beliefs and remained steadfast in those beliefs after listening to the podcast, even when those beliefs were in conflict with a podcast message [46].

Adults may expect learning to come effortlessly, forgetting how they worked as children to learn new concepts. When new cognitive skills are learned, it may take a lot of thought and effort, because initially they are stored in declarative (factual) rather than procedural memory [63]. Some aspects of memory, reasoning, problem solving and intellectual tasks may begin to deteriorate when people are in their 30s [64, 65]. The median age of the participants in the podcast group was 35 years (25th to 75th percentile, 31–43).

The findings for each key concept were largely consistent with the overall results and with what we found initially after listening to the podcast. The scores decreased for all of the concepts. Both the initial test and the test after 1 year showed the largest effect for the concept that treatments may be harmful, and the smallest effect (no clear effect in this study) for the concept that treatments usually have both beneficial and harmful effects. These two concepts are closely related, but these findings support considering them separately, and suggest that the first may be more of a problem than the second. People often exaggerate the benefits of treatments and ignore, downplay or underestimate potential harms [23]. On the other hand, people are generally aware that it is important to consider the balance between benefits and harms when making a decision. It is also possible that the difference we found between these two key concepts was influenced by the nature and difficulty of the questions that were used.

We know from the process evaluation that at least some parents shared the IHC podcast with neighbours [46], but we do not know the extent to which parents in the podcast group shared the podcast with parents in the control group. Nonetheless, given that there was little change in the scores of the control group from the first to the second test, if there was contamination, it is unlikely to have had a substantial effect on the scores of the parents in the control group after 1 year.

So far as we are aware, this is the first randomised trial of the use of a podcast for non-formal education or health education, other than a podcast to aid weight loss [6, 7, 27, 37–49]. Few other interventions to improve the ability of non-health-professionals to think critically about treatments have been evaluated [35, 36]. A systematic review of strategies for teaching critical thinking more broadly, which included 308 studies, found an average effect size (Hedges’ g) of 0.33 [57]. The average effect size for interventions targeted at graduate and adult students was 0.21, as was the average effect size for interventions in health or medical education. The effect size for our intervention shortly after listening to the podcast (0.83) was large in comparison. The effect size after 1 year was 0.32, which is closer to the average effect size for interventions targeted at adults. However, such comparisons must be made cautiously because of differences in the interventions compared in these studies, the outcome measures and the methods used.

Strengths

It is unlikely that the main findings of this study can be explained by random errors. We also believe there is a low risk of systematic errors (bias). The comparison groups were similar at the start of the study, they were managed similarly apart from the intervention, and outcomes were measured in the same way in both groups. There was more loss to follow up in the control group than in the podcast group (25% versus 20%), but there were no clear differences between those who completed the tests and those who dropped out. Although loss to follow up affected the precision of our estimates and may have introduced some bias, it seems unlikely to have had an important impact on the main findings of the study.

Limitations

The applicability of our findings is limited by the nature of the intervention and the outcome measure that we used. The podcast was tailored to a specific target audience - parents of primary school children in Uganda. For a podcast to be effective in another audience, it would need to be tailored to that audience [66]. Although we were careful to ensure the reliability and validity of our primary outcome measure, it was designed to measure the ability to apply the concepts that the podcast was designed to teach (“treatment inherent”). Treatment-inherent outcome measures are associated with larger effect sizes than independent measures [57, 67]. In addition, we cannot be certain about the extent to which this outcome reflects how people apply the IHC key concepts when they hear health claims in their daily lives. Our findings on actual claim assessment and decision-making behaviours are based on self-report, are inconsistent and may be unreliable. Furthermore, the parents in the podcast trial volunteered to participate. Consequently, the effect estimates from this trial indicate the potential effects of the podcast amongst parents who choose to listen to it, not the effect of simply offering the podcast to a group of parents.

Implications of these findings

Currently, many interventions for equipping people with the skills to think more critically about treatments are focused on health profession students, health workers and researchers. Findings from our initial study suggest that developing mass media programmes to improve people’s ability to think more critically about treatments could be a beneficial investment. However, as demonstrated by the decay shown in the current study, for this investment to yield sustainable learning outcomes, such interventions should not be one-off and perhaps should not be passive. Our assessment is that passive dissemination of media interventions is unlikely to be as effective as our intervention was immediately after it was delivered, and certainly not a year later. Future research could include developing a spiral curriculum for teaching the IHC key concepts to adults, investigating how to engage stakeholders in supporting the teaching of critical thinking about treatments to adults, developing outcome measures for research on making decisions about treatments, and conducting systematic reviews of outcome assessment tools, frameworks and teaching strategies for critical thinking about treatments.

Conclusions

Critical health literacy is essential for informed health choices. Yet, despite worldwide recognition of the need to improve health literacy, up to now there have been only a handful of evaluations of interventions to improve health literacy in community populations [43]. We have shown that it is possible for adults in a low-income country, mostly with no more than primary school education, to improve their short-term ability to assess claims about treatment effects by listening to a podcast. However, more active, collaborative learning strategies with spaced practice are likely to be needed to address the substantial decay that we found in these skills and in self-efficacy over 1 year. In contrast to this decay in skills, we found an increase in the same skills among children in the intervention group of the IHC primary school trial [44]. Taken together, these findings provide further support for the importance of beginning to teach these skills at a young age.

Availability of data and materials

The data files for the 1-year follow up are available from the Norwegian Centre for Research Data (http://www.nsd.uib.no/nsd/english/index.html).

References

  1. Mugyenyi P. Genocide by denial. How profiteering from HIV/AIDS killed millions. Kampala: Fountain Publishers; 2008.

  2. Blumenkrantz D. Thirty tons of soil: Nanyonga’s divine panacea. The New Vision. 11 November 1989. Available from: http://david-blumenkrantz.squarespace.com/new-page-3/. Cited 23 Nov 2018.

  3. Kunihira NR, Nuwaha F, Mayanja R, Peterson S. Barriers to use of antiretroviral drugs in Rakai district of Uganda. Afr Health Sci. 2010;10:120–9 Makerere Medical School.

  4. Casiday R, Cresswell T, Wilson D, Panter-Brick C. A survey of UK parental attitudes to the MMR vaccine and trust in medical authority. Vaccine. 2006;24:177–84.

  5. Hadjikoumi I, Niekerk KV, Scott C. MMR catch up campaign: reasons for refusal to consent [3]. Arch Dis Child. 2006;91:621–2.

  6. Mills E, Jadad AR, Ross C, Wilson K. Systematic review of qualitative studies exploring parental beliefs and attitudes toward childhood vaccination identifies common barriers to vaccination. J Clin Epidemiol. 2005;58:1081–8.

  7. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proc Natl Acad Sci. 2016;113:554–9 National Academy of Sciences.

  8. McMullan M. Patients using the Internet to obtain health information: how this affects the patient-health professional relationship. Patient Educ Couns. 2006;63(1-2):24–8. https://doi.org/10.1016/j.pec.2005.10.006.

  9. Ullrich PF Jr, Vaccaro AR. Patient education on the internet: opportunities and pitfalls. Spine (Phila Pa 1976). 2002;27(7):E185–E188. https://doi.org/10.1097/00007632-200204010-00019.

  10. Wald HS, Dube CE, Anthony DC. Untangling the Web--the impact of Internet use on health care and the physician-patient relationship. Patient Educ Couns. 2007;68(3):218–24. https://doi.org/10.1016/j.pec.2007.05.016.

  11. Schwitzer G. A guide to reading health care news stories. JAMA Intern Med. 2014;174(7):1183–6. https://doi.org/10.1001/jamainternmed.2014.1359.

  12. Bonevski B, Wilson A, Henry DA. An analysis of news media coverage of complementary and alternative medicine. PLoS One. 2008;3:e2406 Public Library of Science.

  13. Motosko CC, Ault AK, Kimberly LL, Zakhem GA, Gothard MD, Ho RS, et al. Analysis of spin in the reporting of studies of topical treatments of photoaged skin. J Am Acad Dermatol. 2019;80(2):516–522.e12. https://doi.org/10.1016/j.jaad.2018.04.034.

  14. Haneef R, Lazarus C, Ravaud P, Yavchitz A, Boutron I. Interpretation of results of studies evaluating an intervention highlighted in Google health news: a cross-sectional study of news. PLoS One. 2015;10:1–15 Public Library of Science.

  15. Woloshin S, Schwartz LM, Moncur M, Gabriel S, Tosteson ANA. Assessing values for health: numeracy matters. Med Decis Mak. 2001;21:382–90.

  16. Sillence E, Briggs P, Harris PR, Fishwick L. How do patients evaluate and make use of online health information? Soc Sci Med. 2007;64:1853–62.

  17. Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Ann Intern Med. 1997;127:966–72.

  18. Lokker N, Sanders L, Perrin EM, Kumar D, Finkle J, Franco V, et al. Parental misinterpretations of over-the-counter pediatric cough and cold medication labels. Pediatrics. 2009;123:1464–71.

  19. Rothman RL, Housam R, Weiss H, Davis D, Gregory R, Gebretsadik T, et al. Patient understanding of food labels: the role of literacy and numeracy. Am J Prev Med. 2006;31:391–8.

  20. Eysenbach G, Köhler C. How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. BMJ. 2002;324(7337):573–7. https://doi.org/10.1136/bmj.324.7337.573.

  21. Glenton C, Nilsen ES, Carlsen B. Lay perceptions of evidence-based information - a qualitative evaluation of a website for back pain sufferers. BMC Health Serv Res. 2006;6:34.

  22. Oxman AD, Austvoll-Dahlgren A, Garratt A, Rosenbaum S. Understanding of key concepts relevant to assessing claims about treatment effects: a survey of Norwegian adults. IHC Working Paper 2017. ISBN 978-82-8082-819-4. Available online at https://www.informedhealthchoices.org/wp-content/uploads/2016/08/CURE-survey-report-2017-03-01.pdf. Accessed 29 Jan 2020.

  23. Hoffmann TC, Del Mar C. Patients’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2015;175:274–86.

  24. Hoffmann TC, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2017;177:407–19.

  25. Brownlee S, Chalkidou K, Doust J, Elshaug AG, Glasziou P, Heath I, et al. Evidence for overuse of medical services around the world. Lancet. 2017;390:156–68.

  26. Frass M, Strassl RP, Friehs H, Müllner M, Kundi M, Kaye AD. Use and acceptance of complementary and alternative medicine among the general population and medical personnel: a systematic review, vol. 12. New Orleans: the Academic Division of Ochsner Clinic Foundation; 2012. p. 45–56.

  27. Glasziou P, Straus S, Brownlee S, Trevena L, Dans L, Guyatt G, et al. Evidence for underuse of effective medical services around the world. Lancet. 2017;390:169–77.

  28. Jones G, Steketee RW, Black RE, Bhutta ZA, Morris SS. How many child deaths can we prevent this year? Lancet. 2003;362:65–71.

  29. Pierce H, Gibby AL, Forste R. Caregiver decision-making: household response to child illness in sub-Saharan Africa. Popul Res Policy Rev. 2016;35:581.

  30. WHO (World Health Organisation). 10 facts on health inequities and their causes. 2017. Available from: https://www.who.int/features/factfiles/health_inequities/en/. Cited 15 Nov 2019.

  31. Lynch J, Smith GD, Harper S, Hillemeier M, Ross N, Kaplan GA, et al. Is income inequality a determinant of population health? Part 1. A systematic review. Milbank Q. 2004;82:5–99.

  32. Lynch J, Smith GD, Harper S, Hillemeier M. Is income inequality a determinant of population health? Part 2. U.S. national and regional trends in income inequality and age- and cause-specific mortality. Milbank Q. 2004;82:355–400.

  33. Cutler D, Deaton A, Lleras-Muney A. The determinants of mortality. J Econ Perspect. 2006;20:97–120.

  34. Messias E. Income inequality, illiteracy rate, and life expectancy in Brazil. Am J Public Health. 2003;93:1294–6.

  35. Castle JC, Chalmers I, Atkinson P, Badenoch D, Oxman AD, Austvoll-Dahlgren A, et al. Establishing a library of resources to help people understand key concepts in assessing treatment claims - the “Critical thinking and Appraisal Resource Library” (CARL). PLoS One. 2017;12:e0178666.

  36. Cusack L, Del Mar CB, Chalmers I, Gibson E, Hoffmann TC. Educational interventions to improve people’s understanding of key concepts in assessing the effects of health interventions: a systematic review. Syst Rev. 2018;7:68.

  37. The Informed Health Choices Group. The informed health choices project: using evidence to change the world. 2017. Available from: www.informedhealthchoices.org. Cited 20 Jun 2017.

  38. Austvoll-Dahlgren A, Oxman AD, Chalmers I, Nsangi A, Glenton C, Lewin S, et al. Key concepts that people need to understand to assess claims about treatment effects. J Evid Based Med. 2015;8:112–25.

  39. Chalmers I, Oxman AD, Austvoll-Dahlgren A, Ryan-Vig S, Pannell S, Sewankambo N, et al. Key concepts for Informed Health Choices: a framework for helping people learn how to assess treatment claims and make informed choices. BMJ Evid Based Med. 2018;23:29–33.

  40. Semakula D, Nsangi A, Oxman AD, Sewankambo NK. Priority setting for resources to improve the understanding of information about claims of treatment effects in the mass media. J Evid Based Med. 2015;8:84–90.

  41. Semakula D, Nsangi A, Oxman M, Rosenbaum S, Oxman A, Austvoll-Dahlgren A, et al. Development of mass media resources to improve the ability of parents of primary school children in Uganda to assess the trustworthiness of claims about the benefits and harms of treatments. In: IHC Working Paper; 2018. Report No: ISBN: 978-82-8082-903-0.

  42. The Informed Health Choices Group. Learning resources. 2017. Available from: http://www.informedhealthchoices.org/learning-resources/. Cited 28 Jun 2017.

  43. Semakula D, Nsangi A, Oxman AD, Oxman M, Austvoll-Dahlgren A, Rosenbaum S, et al. Effects of the Informed Health Choices podcast on the ability of parents of primary school children in Uganda to assess claims about treatment effects: a randomised controlled trial. Lancet. 2017;390:389–98 Elsevier.

  44. Nsangi A, Semakula D, Oxman AD, Austvoll-Dahlgren A, Oxman M, Rosenbaum S, et al. Effects of the Informed Health Choices primary school intervention on the ability of children in Uganda to assess the reliability of claims about treatment effects: a cluster-randomised controlled trial. Lancet. 2017;390:374–88 Elsevier.

  45. Nsangi A, Semakula D, Oxman AD, Austvoll-Dahlgren A, Oxman M, Rosenbaum S, et al. Effects of the Informed Health Choices primary school intervention on the ability of children in Uganda to assess the reliability of claims about treatment effects, 1-year follow-up: a cluster-randomised trial. Trials. 2020;21:27. https://doi.org/10.1186/s13063-019-3960-9.

  46. Semakula D, Nsangi A, Oxman AD, Glenton C, Lewin S, Rosenbaum S, et al. Informed Health Choices media intervention for improving people’s ability to critically appraise the trustworthiness of claims about treatment effects: a mixed-methods process evaluation of a randomised trial in Uganda. BMJ Open 2019;9:e031510. https://doi.org/10.1136/bmjopen-2019-031510.

    Article  PubMed  PubMed Central  Google Scholar 

  47. Nsangi A, Semakula D, Oxman AD, Glenton C, Lewin S, Rosenbaum S, et al. Informed health choices intervention to teach primary school children in low-income countries to assess claims about treatment effects: process evaluation. BMJ Open. 2019;9(9):e030787. Published 2019 Sep 11. https://doi.org/10.1136/bmjopen-2019-030787.

    Article  PubMed  PubMed Central  Google Scholar 

  48. Semakula D, Nsangi A, Oxman M, Austvoll-Dahlgren A, Rosenbaum S, Kaseje M, et al. Can an educational podcast improve the ability of parents of primary school children to assess the reliability of claims made about the benefits and harms of treatments: study protocol for a randomised controlled trial. Trials. 2017;18:31.

    Article  PubMed  PubMed Central  Google Scholar 

  49. Austvoll-Dahlgren A, Semakula D, Nsangi A; The IHC Group, et al. Measuring ability to assess claims about treatment effects: the development of the ‘Claim Evaluation Tools’BMJ Open 2017;7:e013184. https://doi.org/10.1136/bmjopen-2016-013184.

    Article  PubMed  PubMed Central  Google Scholar 

  50. Austvoll-Dahlgren A, Guttersrud Ø, Nsangi A, Semakula D, Oxman AD. Measuring ability to assess claims about treatment effects: a latent trait analysis of items from the 'Claim Evaluation Tools' database using Rasch modelling. BMJ Open. 2017;7(5):e013185. Published 2017 May 25. https://doi.org/10.1136/bmjopen-2016-013185.

    Article  PubMed  PubMed Central  Google Scholar 

  51. The Informed Health Choices Group. The claim evaluation tools. 2018.

    Google Scholar 

  52. Semakula D, Nsangi A, Guttersrud Ø, Oxman AD, Sewankambo NK, Austvoll-Dhalgren A. Measuring ability to assess claims about treatment effects in English and Luganda: evaluation of multiple-choice questions from the “Claim Evaluation Tools” database using Rasch modelling. IHC Working Paper. 2017. Available from: http://www.informedhealthchoices.org/wp-content/uploads/2016/08/Claim-2nd-Rasch-analysis-in-Uganda-2017-03-17.pdf. Cited 15 Apr 2018.

    Google Scholar 

  53. Davies A, Gerrity M, Nordheim L, Peter O, Opiyo N, Sharples J, et al. Measuring ability to assess claims about treatment effects: establishment of a standard for passing and mastery. IHC Working Paper. 2017. Available from: http://www.informedhealthchoices.org/wp-content/uploads/2016/08/Claim-cut-off-IHC-Working-Paper-2017-01-09.pdf. Cited 15 Apr 2018.

    Google Scholar 

  54. Donner A. Sample size requirements for stratified cluster randomization designs. Stat Med. 1992;11:743–50.

    Article  CAS  PubMed  Google Scholar 

  55. Lee DS. Training, wages, and sample selection: estimating sharp bounds on treatment effects. Rev Econ Stud. 2009;76:1071–102.

    Article  Google Scholar 

  56. White IR, Thomas J. Standardized mean differences in individually-randomized and cluster-randomized trials, with applications to meta-analysis. Clin Trials. 2005;2:141–51.

    Article  PubMed  Google Scholar 

  57. Abrami PC, Bernard RM, Borokhovski E, Waddington DI, Wade CA, Persson T. Strategies for teaching students to think critically: a meta-analysis. Rev Educ Res. 2015;85:275–314.

    Article  Google Scholar 

  58. Arthur W Jr, Bennett W Jr, Stanush PL, McNelly TL. Factors that influence skill decay and retention: a quantitative review and analysis. Hum Perform. 1998;11:57–101.

    Article  Google Scholar 

  59. Custers EJFM. Long-term retention of basic science knowledge: a review study. Adv Heal Sci Educ. 2010;15:109–28.

    Article  Google Scholar 

  60. Prince M. Does active learning work? A review of the research. J Eng Educ. 2004;93:223–31.

    Article  Google Scholar 

  61. Ritter F, Baxter G, Kim JW, Srinivasmurthy S. Learning and retention. In: Lee JD, Kirlik A, Dainoff MJ, editors. The Oxford handbook of cognitive engineering. Oxford: Oxford University Press; 2013.

    Google Scholar 

  62. Cromley J. Learning to think, learning to learn: what the science of thinking and Learning has to offer adult education. Washington, DC: National Institute for Literacy; 2000.

    Google Scholar 

  63. David L. Social learning theory (Bandura). In: Learning theories; 2015. Available from: https://www.learning-theories.com/social-learning-theory-bandura.html. Cited 15 Apr 2018.

    Google Scholar 

  64. Smith J, Baltes P. A life-span perspective on thinking and problem-solving. In: Schwebel M, Maher CA, Fagley NS, editors. Promoting cognitive growth over the life span. Hillsdale: Lawrence Erlbaum Associates; 1990.

    Google Scholar 

  65. Hartshorne JK, Germine LT. When does cognitive functioning peak? The asynchronous rise and fall of different cognitive abilities across the life span. Psychol Sci. 2015;26:433–43.

    Article  PubMed  Google Scholar 

  66. The Informed Health Choices Group. Guide for translating and adapting the Informed Health Choices (IHC) podcast. In: Informed health choices working paper. Oslo; 2017. Available from: http://www.informedhealthchoices.org/wp-content/uploads/2016/08/GUIDE-for-translating-and-adapting-the-IHC-Podcast.pdf. Cited 15 Apr 2018.

  67. Slavin R, Madden NA. Measures inherent to treatments in program effectiveness reviews. J Res Educ Eff. 2011;1:370–80.

    Google Scholar 

Download references

Acknowledgements

We are grateful to the Research Council of Norway for supporting the project, to the English National Institute for Health Research for supporting Iain Chalmers and the James Lind Initiative, and to the Wellcome Trust and the DELTAS Africa Initiative grant #DEL-15-011 to THRiVE-2 (Training Health Researchers into Vocational Excellence in East Africa) for supporting Nelson Sewankambo and Daniel Semakula. This work was also partially supported by a Career Development Award from the DELTAS Africa Initiative grant #DEL-15-011 to THRiVE-2. The DELTAS Africa Initiative is an independent funding scheme of the African Academy of Sciences (AAS) Alliance for Accelerating Excellence in Science in Africa (AESA), supported by the New Partnership for Africa’s Development Planning and Coordinating Agency (NEPAD Agency) with funding from Wellcome Trust grant #107742/Z/15/Z and the UK government. The views expressed in this publication are those of the authors and not necessarily those of the AAS, the NEPAD Agency, the Wellcome Trust or the UK government. Jan Odgaard-Jensen helped plan the statistical analyses. Øystein Guttersrud helped with the development of the outcome measure. Alun Davies, Martha Gerrity, Lena Nordheim, Peter Okebukola, Newton Opiyo, Jonathan Sharples, Helen Wilson, and Charles Wysonge determined the cut-off scores for passing and mastery. We thank Margaret Nabatanzi, Martin Mutyaba, Esther Nakyejwe and Solomon Segawa for their help with data management, and all the research assistants who helped with recruitment, delivery of the podcast, and data collection and entry. We also thank the producers and musicians who helped produce the Informed Health Choices theme song and the podcast, particularly Abraham Jjuko and Christopher Kiwanuka, as well as the Informed Health Choices advisory group. We are especially grateful to all the parents and journalists who helped with the development of the Informed Health Choices podcast and to those who participated in this trial.

Funding

This trial was funded by the Research Council of Norway, project number 220603/H10. The funder had no role in the study design, data collection, data analysis, data interpretation or writing of the report. The 1-year follow-up study was partially supported by a Career Development Award from THRiVE-2, a DELTAS Africa Initiative grantee (grant #DEL-15-011). The principal investigator had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Author information

Contributions

DS and AN are the principal investigators. They drafted the protocol with the help of the other investigators and were responsible for the day-to-day management of the trial. NS and AO had primary responsibility for overseeing the trial. All the investigators except CR reviewed the protocol, provided input and agreed on this version. MO, together with DS, had primary responsibility for developing the podcast, and all the investigators contributed to its development. AA-D had primary responsibility for developing and validating the outcome measure. DS and AN had primary responsibility for data collection. AO, SR, AA-D and IC were principal members of the coordinating group for the trial and, together with NS and the principal investigators, acted as the steering committee for the trial; they were responsible for final decisions about the protocol and reporting of the results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Andrew D. Oxman.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was obtained from the School of Medicine institutional review board at Makerere University College of Health Sciences and from the Uganda National Council for Science and Technology at the beginning of the study, and renewal of approval was sought for the follow-up study. Participants consented to both the initial assessment and the 1-year follow-up at the beginning of the study; only consenting participants were included. During the follow-up study, participants were included only if they were willing to continue. Participants could not be identified from the information they provided.

Consent for publication

Consent to publish was obtained at the beginning of the study, together with consent to participate, as described above.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1. The Claim Evaluation Tools.

Additional file 2.

Table S1. Results for each concept one year after listening to the podcast.
Table S2. Intended behaviours.
Table S3. Self-efficacy.
Table S4. Self-reported behaviour - awareness of treatment claims.
Table S5. Self-reported behaviour - assessment of the basis of treatment claims.
Table S6. Self-reported behaviour - assessment of trustworthiness of treatment claims.
Table S7. Self-reported behaviour - assessment of advantages and disadvantages of treatments.
Table S8. Subgroup analyses - education.
Table S9. Subgroup analyses - child in school that used IHC primary school resources.
Table S10. Effect of IHC primary school resources on parents.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Semakula, D., Nsangi, A., Oxman, A.D. et al. Effects of the Informed Health Choices podcast on the ability of parents of primary school children in Uganda to assess the trustworthiness of claims about treatment effects: one-year follow up of a randomised trial. Trials 21, 187 (2020). https://doi.org/10.1186/s13063-020-4093-x

