Creating concise and readable patient information sheets for interventional studies in Australia: are we there yet?
Trials volume 23, Article number: 794 (2022)
Participant information sheets and consent forms (PICFs) used in interventional studies are often criticised as hard to read and understand. We assessed the readability, and its correlates, of a broad range of Australian PICFs.
We analysed the participant information sheet portion of 248 PICFs. Readability scores were measured using three formulae: the Flesch Reading Ease, the Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook (SMOG). We investigated how various features (including sponsor type and PICF type) correlated with PICF length and readability and examined compliance with other measures known to improve readability.
For a sample of 248 PICFs, the mean (standard deviation) Flesch Reading Ease score was 49.3 (5.7), the mean (SD) Flesch-Kincaid Grade Level was 11.4 (1.1), and the mean (SD) SMOG score was 13.2 (0.9). The median document length was 3848 words (8 pages). Commercial PICFs were more than twice as long as non-commercial PICFs, yet significantly more readable (p = 0.03) when analysed with the SMOG formula. Subgroup analyses indicated that PICFs written for self-consenters were significantly more readable than those written for proxy consenters. The use of tables, but not of illustrations, was associated with better readability scores.
The PICFs in our sample are long and complex, and only 3 of the 248 achieved the recommended readability score of grade 8 or below. The broader use of best practice principles for writing health information for consumers and the development of more context-sensitive templates could improve their utility.
Informed consent is central to the ethical conduct of research, and participant information sheets and consent forms (PICFs) are a key component of the process. PICFs often contain complex scientific information. Well-written PICFs facilitate discussion, prompt questions, and support prospective participants’ understanding of a study’s nature and purpose, as well as its risks, benefits, and alternatives.
The National Health and Medical Research Council (NHMRC) in Australia has published national templates for interventional studies and states that PICFs should be written in plain language (at a school grade 8 equivalent or below) and should contain sufficient information for decision-making without being excessively long [1, 2]. Complex PICFs confuse rather than inform [3, 4], and, when long and legalistic, are less likely to be fully read [5,6,7]. Moreover, patients prefer shorter forms [8,9,10,11].
The latest quantification of literacy levels in Australia confirmed that 44% of Australians aged 15 to 74, rising to 65% of those aged 60 to 74, do not have the literacy skills needed to meet the demands of daily life. Literacy skills are measured in terms of reading levels, and ‘readability’ describes how easy a text is to read and understand. Readability scores are one of a number of tools recommended to encourage the development of simpler, shorter, more appealing PICFs, which, combined, may improve a person’s understanding of the information presented [14,15,16,17].
Although several studies have examined the length and readability of PICFs, few are Australian-based, and those that exist are small and limited to specific therapeutic areas [14, 18] or evaluated PICFs from a single source [3, 19]. Therefore, the authors conducted a national project to assess the length and readability of Australian PICFs. They also examined whether sponsor type and PICF type correlated with document length and readability scores, and whether illustrations and tables improved these scores. Finally, as readability scores are only one indicator of how well a document reads, the authors examined compliance with other best-practice measures known to improve PICF readability.
PICFs used between 2015 and 2020 for human interventional studies were obtained from a convenience sample of research organisations (32 were contacted, with 21 providing PICFs) and from the Australian and New Zealand Clinical Trials Register. PICFs written for self-consenters or for proxy-consenters (parents/legal representatives) from all therapeutic areas and sponsor types were included. PICFs written for children or participants with learning disabilities, PICFs from non-interventional studies, and PICFs written in a language other than English were excluded. To maximise generalisability, the organisations sampled were located in all Australian states and territories and included coordinating centres, industry sponsors, public and private hospitals, medical research institutes, trial networks, and research groups. To minimise selection bias, random samples of up to 25 PICFs were requested from research offices in large universities or teaching hospitals that typically host well over 25 interventional studies per year. These organisations were asked to select PICFs from their database using an online random number generator, a link to which was provided. Organisations with fewer than 25 studies, typically trial networks, trial units, and individual research teams, were asked to provide all available PICFs.
A total of 289 PICFs were collected and coded by PICF type (self-consent versus proxy consent), sponsor type (commercial versus non-commercial), study characteristics, illustrations, tables, and other elements related to document format, layout, and language use. Duplicates and ineligible PICFs were removed, leaving 278 PICFs. As we received a higher-than-anticipated response from non-commercial oncology networks/units, our sample substantially overrepresented oncology trial activity in Australia. A random number generator was therefore used to select 30 non-commercial oncology PICFs for removal, leaving a total of 248 PICFs for analysis.
Consent forms were removed before the page and word counts were recorded, and documents were then prepared for the calculation of readability scores. The online program ‘ReadablePro’ (formerly ‘Readability-Score’) was used to calculate the readability scores. PICFs were prepared in accordance with the program’s guidelines, including the removal of titles, headings, bulleted lists, tables, and any full stops embedded within sentences.
Readability formulae are a widely accepted method for assessing how well an average reader will comprehend a text. For our analysis, we selected three well-established formulae. The primary outcome measure was the Flesch Reading Ease score, a continuous variable with possible scores of 0 to 100, where a higher score indicates easier readability. A score between 70 and 80 is equivalent to a grade 8 reading level.
The secondary analysis was based on the Flesch-Kincaid Grade Level and the Simple Measure of Gobbledygook (SMOG) for the total sample and then for each group, with between-group comparisons. These measures estimate the years of education a person needs to understand a piece of writing. The Flesch formulae calculate scores based on word and sentence length and are built into most word processing programs. The SMOG score is derived from the proportion of words with three or more syllables.
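The three formulae can be sketched in code. The coefficients below are the standard published ones; the syllable counter, however, is a naive vowel-group heuristic, so the resulting scores are approximations only (tools such as ReadablePro use more sophisticated syllable counting).

```python
import math
import re

def count_syllables(word):
    """Approximate syllable count via vowel groups (a rough heuristic)."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    # Crude silent-'e' adjustment
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade, SMOG) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word

    flesch_ease = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    smog = 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
    return flesch_ease, fk_grade, smog
```

Note how the two Flesch formulae depend only on word and sentence length, while SMOG depends only on the density of polysyllabic words, which is why the three can rank the same document differently.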
Regarding the rationale for using these formulae: the Flesch formulae are widely recommended in government health literacy and plain language guidance and are built into word processing programs. The SMOG formula is also easily accessible and is the most suitable for assessing health literature [25,26,27]: unlike other formulae, which test for 50–75% comprehension, SMOG tests for 100% comprehension. This is considered important for documents informing healthcare decisions, as these documents are not intended to be skim-read. Consequently, SMOG tends to produce scores 1–2 grades higher than the Flesch formulae.
Both the Flesch-Kincaid Grade Level and SMOG correlate significantly with expert ratings of readability conducted by health literacy experts. However, readability scores have limitations: they do not measure factors such as cohesion between sentences, typography, and word choice [25, 28]. To extend our readability analysis, we selected ten additional best-practice measures for writing for consumers from the NHMRC PICF guidance and the Australian Commission on Safety and Quality in Health Care (ACSQH) guidance [29, 30] and analysed each PICF for their presence. Although not research-specific, the ACSQH guidance documents were considered relevant because Australian hospital accreditation against ACSQH Standards now extends to a hospital’s clinical trial activity. Three measures (words per sentence, sentences per paragraph, and the use of passive voice) were calculated by the ReadablePro program. To provide an objective measure of ‘word choice’, we selected seven complex words (listed in Table 3) for which simpler alternatives are recommended in government guidance and searched each PICF for the inclusion of at least one of these words. These words were selected because either they, or a simpler alternative, were likely to be present in PICFs. For example, a PICF may state that ‘additional blood’ will be taken, when ‘extra blood’ is the recommended alternative. Although the use of scientific terms or measurements is sometimes unavoidable, they should be explained. We therefore searched PICFs for technical/medical terms or symbols used without a lay explanation that were likely to be unfamiliar to a lay audience (e.g. assay, subcutaneously, pharmacokinetics, peripheral vasodilatation, < 0.4/> 1.0 u/ml) or for which simpler alternatives are recommended (e.g. biopsy, inflammation).
Descriptive summary statistics (mean [SD] and median [IQR]) were used as appropriate.
Readability scores were near normally distributed, so Student’s t-test (unpaired) was used for comparison. Page and word count were non-normally distributed, so the Mann-Whitney U test was used for comparisons. All statistical analyses were performed using Stata version 15 (StataCorp, College Station, TX, USA), and p values < 0.05 were considered statistically significant.
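The two comparison tests described above can be illustrated with a short sketch using `scipy.stats`; the score and page-count values below are purely hypothetical illustrations, not the study’s data.

```python
from scipy import stats

# Hypothetical Flesch Reading Ease scores for two sponsor groups
# (illustrative numbers only, not the study's data).
commercial = [50.1, 48.7, 51.2, 49.9, 50.5, 49.3]
non_commercial = [49.1, 47.8, 48.5, 50.0, 48.9, 47.5]

# Readability scores were near-normally distributed:
# unpaired Student's t-test.
t_stat, t_p = stats.ttest_ind(commercial, non_commercial)

# Page counts were non-normally distributed: Mann-Whitney U test.
pages_commercial = [18, 22, 15, 30, 19, 17]
pages_non_commercial = [7, 6, 9, 8, 5, 10]
u_stat, u_p = stats.mannwhitneyu(pages_commercial, pages_non_commercial)
```

The design choice mirrors the text: a parametric test where the distributional assumption is plausible, and a rank-based test for the skewed length data.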
Table 1 shows the study characteristics.
Figure 1 illustrates the breakdown of studies by therapeutic area. Our sample reflects national estimates of trial activity, with oncology, as expected, the dominant therapeutic area in Australia.
Table 2 shows the length and readability scores for the entire sample and by sponsor type.
Length: The included PICFs (after removal of the consent forms, which were up to 5 pages long) ranged from 2 to 32 pages, and more than 10% had 20 or more pages. The median length was 8 pages (3848 words). PICFs for commercial studies were more than twice as long (18 pages) as those for non-commercial studies (7 pages, p < 0.001).
Readability scores: The mean Flesch Reading Ease score was 49.3 which equates to text that is difficult to read. The mean Flesch-Kincaid Grade Level was 11.4, equating to a late secondary school reading level. The mean SMOG score was nearly two grades higher, at 13.2, equating to the expected reading level of a university student.
Commercial study PICFs were more readable than non-commercial PICFs on all three formulae: numerically but non-significantly with the Flesch Reading Ease (50.1 vs 49.1) and the Flesch-Kincaid Grade Level (11.3 vs 11.4), and significantly with SMOG (12.9 vs 13.3, p = 0.03).
Table 3 describes the correlation of four key features of interest with readability as measured by the Flesch Reading Ease formula (the primary outcome). The inclusion of tables was significantly associated with improved readability, but the inclusion of illustrations was not. Readability scores for commercial and non-commercial PICFs did not differ significantly on this measure, but PICFs designed for self-consenters had significantly higher scores than those for proxy-consenters.
Table 4 shows the proportion of PICFs that complied with best practice recommendations for document readability contained in NHMRC and ACSQH guidance. Overall, compliance was poor. Notably, only 1% of PICFs complied with the target reading grade level of ≤ 8.
There was no correlation between the length of a document in pages and its Flesch Reading Ease score (Spearman’s rho = 0.05, p = 0.46), or between the number of words in a document and its Flesch Reading Ease score (Spearman’s rho = 0.03, p = 0.69). However, there was a non-significant negative correlation (r = − 0.11, p = 0.08) between the number of words in a document and its SMOG score (Fig. 2).
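A rank correlation of this kind can be computed in one call with `scipy.stats.spearmanr`; the word counts and scores below are hypothetical illustrations, not the study’s data.

```python
from scipy import stats

# Hypothetical document lengths (words) and Flesch Reading Ease scores
# (illustrative values only, not the study's data).
word_counts = [2500, 3800, 5200, 7000, 9500, 12000, 4100, 6200]
flesch_scores = [48.0, 50.5, 47.2, 51.0, 49.8, 48.5, 50.1, 46.9]

# Spearman's rho: a rank-based measure, robust to the skewed
# distribution of document lengths.
rho, p_value = stats.spearmanr(word_counts, flesch_scores)
```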
In this large convenience sample of Australian PICFs, very few PICFs met the recommended reading level of grade 8; the mean reading grade levels were grade 11 (Flesch-Kincaid) and grade 13 (SMOG). Given that 44% of the adult population has a reading level below grade 11, a large proportion of the population would have difficulty reading and understanding these PICFs.
There is evidence that, in an educational context, people are unlikely to fully read documents that contain more than 1000 words. The median length in our sample is over 3000 words, suggesting that many PICFs may be too long to be read in full. They also appear to be getting longer: compared with the last Australian evaluation in 2014, which analysed a similar proportion of commercial and non-commercial studies, the mean word count has increased by 24%.
As commercial studies tend to involve novel interventions and/or unregistered drugs, it is not surprising their PICFs contain considerably more information. However, the finding that commercial PICFs are more readable was contrary to expectations. One possible explanation is that commercial sponsors may be more likely to use professional medical writers to develop their PICFs, while investigators tend to write these documents themselves.
The reason PICFs are easier to read when designed for self-consenters is unclear. Although this finding is in keeping with a prior analysis, it may simply be due to the higher proportion of non-commercial PICFs in the proxy-consenter cohort.
Our analysis confirms that compliance with best practice recommendations for plain language writing is patchy and appears to depend on whether the recommendation is reflected in the NHMRC PICF templates. The use of tables was significantly associated with improved readability scores, and this design feature was used effectively in many PICFs, especially those with extensive safety information. Illustrations did not improve readability scores, perhaps because they were used to support text rather than replace it; however, their inclusion could improve understanding in ways that readability scores cannot detect.
Despite their limitations, readability formulae are useful tools. Of the three formulae, SMOG has some key advantages: it is widely accessible and, as the only formula based on 100% expected comprehension, it is the most suitable for assessing documents in which patients must confirm that they have read and understood every word. Readability formulae, however, cannot assess the many other factors that contribute to a document’s utility, such as its design, layout, and language choice, so other best-practice recommendations for writing in plain language should supplement their use.
To ensure a participant’s decision-making capacity is not overwhelmed by the sheer volume of information in a PICF, more imaginative ways of presenting this information are required. Countries with flexible consent policies have shorter PICFs. The UK RECOVERY trial, for example, a platform trial of therapeutics for COVID-19 (ClinicalTrials.gov: NCT04381936), has recruited more than 40,000 patients using a three-page information sheet that accurately reflects the risks of a study involving repurposed therapies. In the USA, the revised Common Rule requires researchers to consider what information a ‘reasonable person’ would want in order to decide whether to participate. Conversely, Australian templates are criticised for their rigidity and their focus on mitigating medico-legal exposure. Some commentators suggest that an excessive focus on risk can harm study participants through a phenomenon known as ‘the nocebo effect’, whereby excessive risk information leads participants to expect, and consequently experience, side effects. Others suggest that legalistic PICFs can lead to the inappropriate rejection of studies due to an exaggerated perception of risk [37, 38]. Although one-size-fits-all templates can facilitate ethical review, they can also inhibit the critical thinking needed to determine what content is most appropriate.
Another option would be to provide plain-language guidance on best practice principles for the development of PICFs, illustrated with examples of well-written PICFs or optional templates that consumers and researchers can use to co-design these critical documents.
Finally, the best way to confirm that a PICF is fit for purpose is to seek advice from the people it has been prepared for. In Australia, operational requirements and infrastructure are being implemented that encourage higher levels of consumer involvement in research, which should enable greater end-user involvement in the development of PICFs, as advocated by commentators [10, 39].
The study’s strengths were the large sample size and the diversity of the PICFs sourced compared to previous studies. In addition, our assessment extended beyond readability scores to include several features recommended by governments and national health agencies to improve document performance.
Our study has several limitations. We provided organisations with written guidance on how to obtain a random sample of PICFs but did not monitor compliance with this requirement. Our proportion of commercial PICFs (31%) was small compared with national statistics for public health organisations (48% commercial), which may have led us to underestimate PICF length in the combined analysis. Furthermore, although we confirmed that oncology dominates trial activity in Australia, we were unable to find precise estimates of that activity, so even after reducing our sample, oncology may still be overrepresented. As oncology trial PICFs were significantly longer than those from other therapeutic areas, this may have led to an overestimation of PICF length; if so, it would partially offset the underestimation due to the smaller-than-expected cohort of commercial PICFs. Another limitation is that the parameters assessed do not encompass the entire consent process. For example, a high-quality consent discussion is likely to contribute to participants’ understanding and may well mitigate any inadequacy in the form itself. Finally, our analysis evaluated only written PICF documents, without considering advances in electronic consent or the use of multimedia to improve consent processes.
Few Australian PICFs in our sample of interventional studies are written at a reading level the population can understand, and most also contain considerably more information than a person is likely to read in full. Consequently, patients may miss important details, which diminishes the value of the PICF as an instrument to support informed decision-making.
The present study suggests there is a need for a more context-based approach to PICF development. Although ‘one-size-fits-all standard wording’ templates are comforting for both researchers and ethics committees, their rigid application can result in the description of risks being overinflated and patients inappropriately rejecting studies. Instead, PICF guidance could be revised to incorporate existing best practice principles for creating plain language health information, with templates or exemplary PICFs used to illustrate how context-sensitive documents could be written for various study types and risk levels. The knowledge that participants prefer simpler forms is surely reason enough to redouble efforts to improve the utility of PICFs.
Availability of data and materials
The dataset generated and analysed during the current study (the PDF reports of PICFs created by the ReadablePro program) is not publicly available, as it contains proprietary information that we do not have the rights to share. However, a summary spreadsheet of all non-proprietary data is available from the corresponding author on request.
National Health and Medical Research Council. Standardised participant information and consent forms. 2012.
National Health and Medical Research Council, Australian Research Council and Universities Australia. National Statement on Ethical Conduct in Human Research 2007 (updated 2018). Canberra: Commonwealth of Australia; 2018.
Taylor HE, Bramley DEP. An analysis of the readability of patient information and consent forms used in research studies in anaesthesia in Australia and New Zealand. Anaesth Intensive Care. 2012;40(6):995.
Sugarman J, McCrory DC, Hubal RC. Getting meaningful informed consent from older adults: a structured literature review of empirical research. J Am Geriatrics Soc. 1998;46(4):517–24.
Sharp MS. Consent documents for oncology trials: does anybody read these things? Am J Clin Oncol. 2004;27(6):570–5.
Dresden GM, Levitt MA. Modifying a standard industry clinical trial consent form improves patient information retention as part of the informed consent process. Acad Emerg Med. 2001;8(3):246–52.
Cassileth BR, et al. Informed consent—why are its goals imperfectly realized? N Engl J Med. 1980;302(16):896–900.
Nathe JM, Krakow EF. The challenges of informed consent in high-stakes, randomized oncology trials: a systematic review. MDM Policy Pract. 2019;4(1):2381468319840322.
Krishnamurti T, Argo N. A patient-centered approach to informed consent: results from a survey and randomized trial. Med Decis Mak. 2016;36(6):726–40.
Knapp P, et al. Performance-based readability testing of participant materials for a phase I trial: TGN1412. J Med Ethics. 2009;35(9):573–8.
Antoniou EE, et al. An empirical study on the preferred size of the participant information sheet in research. J Med Ethics. 2011;37(9):557–62.
Australian Bureau of Statistics (ABS). Programme for the International Assessment of Adult Competencies, Australia, 2011–2012. 2013.
The Australian Government Style Manual: literacy and access. Available at https://www.stylemanual.gov.au/accessible-and-inclusive-content/literacy-and-access. Accessed 18 January 2021.
Beardsley E, Jefford M, Mileshkin L. Longer consent forms for clinical trials compromise patient understanding: so why are they lengthening? J Clin Oncol. 2007;25(9):e13–4.
Tait AR, et al. Informing the uninformed: optimizing the consent message using a fractional factorial design. JAMA Pediatr. 2013;167(7):1–7.
Freer Y, et al. More information, less understanding: a randomized study on consent issues in neonatal research. Pediatrics (Evanston). 2009;123(5):1301–5.
Kim EJ, Kim SH. Simplification improves understanding of informed consent information in clinical trials regardless of health literacy level. Clin Trials (London, England). 2015;12(3):232–6.
Buccini L, et al. An Australian based study on the readability of HIV/AIDS and type 2 diabetes clinical trial informed consent documents. J Bioethical Inquiry. 2010;7(3):313–9.
Biggs JSG, Marchesi A. Information for consent: too long and too hard to read. Res Ethics. 2015;11(3):133.
ReadablePro. Available at https://readable.com/features/. Accessed 28 May 2021.
Schutten M, McFarland A. Readability levels of health-based websites: from content to comprehension. Int Electron J Health Educ. 2009;12.
Flesch R. A new readability yardstick. J Appl Psychol. 1948;32(3):221–33.
Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of new readability formulas for Navy enlisted personnel. Millington: Navy Research Branch; 1975.
McLaughlin GH. SMOG grading: a new readability formula. J Read. 1969;12(8):639–46.
Meade CD, Smith CF. Readability formulas: cautions and criteria. Patient Educ Couns. 1991;17(2):153–8.
Fitzsimmons PR, et al. A readability assessment of online Parkinson’s disease information. J R Coll Physicians Edinburgh. 2010;40(4):292–6.
Wang L-W, et al. Assessing readability formula differences with written health information materials: application, results, and recommendations. Res Soc Adm Pharm. 2013;9(5):503–16.
Kandula S, Zeng-Treitler Q. Creating a gold standard for the readability measurement of health texts. AMIA Annu Symp Proc. 2008:353–7.
Australian Commission on Safety and Quality in Health Care. Tip Sheet 5: Preparing written information for consumers that is clear, understandable and easy to use. Available at https://www.safetyandquality.gov.au/sites/default/files/migrated/Standard-2-Tip-Sheet-5-Preparing-written-information-for-consumers-that-is-clear-understandable-and-easy-to-use.pdf. Accessed 21 May 2021.
Australian Commission on Safety and Quality in Health Care. Health Literacy Fact Sheet 4: Writing health information for consumers. Available at https://www.safetyandquality.gov.au/sites/default/files/2019-05/health-literacy-fact-sheet-4-writing-health-information-for-consumers.pdf. Accessed 21 March 2021.
Tasmanian Department of Health. Health Literacy Workplace Toolkit - Written Communication. Plain English Word and Phrase Swap (2018). Available at https://www.dhhs.tas.gov.au/__data/assets/pdf_file/0011/166565/B_04_2018_1213_Word_and_phrase_swap.pdf. Accessed 21 March 2021.
Royal Melbourne Hospital. Guidance for writing participant information and consent forms (PICF) in plain English. Available at https://www.thermh.org.au/file/108. Accessed June 2021.
Kass N, et al. Length and complexity of US and international HIV consent forms from federal HIV network trials. J Gen Intern Med. 2011;26(11):1324–8.
Code of Federal Regulations. Federal Register Volume 82, Number 12. U.S. Federal Policy for the Protection of Human Subjects (§___ 116(a)(4)) 2017.
Jefford MD, Moore R. Improvement of informed consent and the quality of consent documents. Lancet Oncol. 2008;9(5):485–93.
Kirby N, et al. Nocebo effects and participant information leaflets: evaluating information provided on adverse effects in UK clinical trials. Trials. 2020;21(1):658.
Modi N. Ethical pitfalls in neonatal comparative effectiveness trials. Neonatology. 2014;105(4):350–1.
Snowdon C, Elbourne D, Garcia J. Declining enrolment in a clinical trial and injurious misconceptions: is there a flipside to the therapeutic misconception? Clin Ethics. 2007;2(4):193–200.
Greenlee R, et al. Measuring the impact of patient-engaged research: how a methods workshop identified critical outcomes of research engagement. J Patient Centered Res Rev. 2017;4(4):237–46.
Australian Government Department of Health. Clinical trials in Australian public health institutions 2018-19 (NAS 4 report). Available at https://www.health.gov.au/resources/publications/clinical-trials-in-australian-public-health-institutions-2018-19-nas-4-report. Accessed 15 January 2021.
Flory J, Emanuel E. Interventions to improve research participants’ understanding in informed consent for research: a systematic review. JAMA. 2004;292(13):1593–601.
The authors received no financial support for the research, authorship, and/or publication of this article.
Ethics approval and consent to participate
The study was reviewed by a sub-committee of the Northern Sydney Local Health District Human Research Ethics Committee, which confirmed that the project met the criteria for a quality improvement study. Consent was not applicable, as the study was not human research.
Competing interests
TS provides clinical research consulting services in Australia and the UK; however, no organisation controlled or influenced the development of this manuscript. JD declares no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Symons, T., Davis, J.S. Creating concise and readable patient information sheets for interventional studies in Australia: are we there yet?. Trials 23, 794 (2022). https://doi.org/10.1186/s13063-022-06712-z