Healthcare-associated infections: getting the balance right in safety and quality v. public reporting
Brett G. Mitchell A B D, Anne Gardner B and Alistair McGregor C
A School of Nursing, University of Tasmania, Private Bag 135, Hobart, TAS. 7001, Australia.
B School of Nursing, Midwifery and Paramedicine, Australian Catholic University, PO BOX 256, Canberra, ACT, Australia. Email: anne.gardner@acu.edu.au
C Department of Infectious Diseases and Microbiology, Royal Hobart Hospital, Liverpool St, Hobart, TAS. 7001, Australia. Email: alistair.mcgregor@dhhs.tas.gov.au
D Corresponding author. Email: bgmitc001@myacu.edu.au
Australian Health Review 36(4) 365-366 https://doi.org/10.1071/AH11200
Submitted: 22 July 2012 Accepted: 30 August 2012 Published: 15 October 2012
Journal Compilation © AHHA 2012
Abstract
Healthcare settings are dangerous places. For those receiving care, the risk of unintended harm from healthcare failures continues to be significant. Given this, there is a need to monitor standards in healthcare, not only to identify potential issues, but also to plan and evaluate interventions aimed at improving healthcare standards. Public reporting of performance standards is one aspect of monitoring standards, but not the only one. Public reporting also brings challenges of its own. This perspective explores the recent move to publicly report one healthcare-associated infection (HAI) on the MyHospitals website and comments on the broader issue of using existing HAI data for the purposes of public reporting.
Risks and adverse outcomes associated with events such as falls, medication errors and healthcare-associated infection (HAI) have spawned an entire industry charged with attempting to keep people safe from the dangers associated with consuming healthcare. These points are demonstrated in the landmark paper ‘To err is human: building a safer health system’.1 In Australia, the themes of this paper are reflected in the establishment and work of the Australian Commission on Safety and Quality in Health Care (ACSQHC), an independent statutory authority established under the National Health and Hospitals Network Act 2011. The purpose of the ACSQHC is to lead and coordinate improvements in the safety and quality of healthcare across Australia. The ACSQHC’s work includes the prevention of HAIs. Healthcare-associated infection is the contemporary term used to refer to infections acquired in healthcare facilities and those that occur as a result of healthcare interventions.2
The work of the ACSQHC in the area of HAIs is to be commended, with programs such as the national hand hygiene initiative being rolled out nationwide. Australia is arguably a world leader in hand hygiene initiatives and research is underway to evaluate the program.3,4 Other HAI prevention activities have included the development of national surveillance definitions for two infections, Staphylococcus aureus bacteraemia (SAB) and Clostridium difficile infection (CDI). This was a crucial step in being able to reliably monitor these infections, make valid comparisons and plan prevention strategies.
Largely through the work of the ACSQHC, the prevention of HAIs is becoming increasingly recognised as an important health issue in Australia. The increased profile of HAIs is demonstrated by the Council of Australian Governments (COAG) agreement between the Commonwealth and the States and Territories. The COAG agreement includes a requirement for monitoring healthcare-associated (HCA) SAB, and a target for the reduction of HCA SAB has also been set.5 More recently, the MyHospitals website has been introduced, and individual hospital rates of HCA SAB and hand hygiene compliance have been published.6 Further, the national surveillance definition for CDI, developed by the ACSQHC, has been endorsed by Australian Health Ministers.7
It is important to note that the ACSQHC is not responsible for the management of HAI data on the MyHospitals website; rather, the data presented on this website were derived from processes established by the ACSQHC, working in collaboration with relevant infection control and infectious disease experts across Australia. For this purpose, national surveillance definitions for HCA SAB and CDI were developed in the context of a safety and quality framework, and subsequently used to inform and evaluate interventions to reduce HAIs. The subsequent use of HAI data for public reporting, and therefore as performance indicators, brings with it challenges.
The use of HCA SAB data as a performance indicator may be appropriate, given that it is possible to identify HCA cases of SAB8 and that many of the factors associated with this infection can be prevented or modified. For example, it has been shown that improved hand hygiene compliance and management of intravascular devices in hospitals can prevent cases of HCA SAB.9–11 The same, however, cannot necessarily be said for CDI.
The development of a national surveillance definition for CDI is welcomed, but it is important to note that this definition is based on where the infection was identified (i.e. hospital-identified CDI),12 which is not necessarily where the infection originated. In other words, unlike SAB, cases of CDI are not defined by the location at which the infection originated or whether the infection was associated with being a recipient of healthcare.
As it stands, the national CDI definition is a surrogate marker for the incidence of CDI in a particular catchment area, not a specific marker for cases of CDI that can be prevented and controlled by an individual hospital or healthcare institution. A definition that identifies cases of HCA CDI is possible and is included as an extension of the current national definition.12 However, applying this extended definition requires additional resources or data linkage for each case of CDI, something that was not possible Australia-wide at the time the CDI surveillance definitions were developed, and arguably still is not possible in all hospitals. The additional use of resources in pursuit of a ‘perfect’ indicator is not unique to HAIs. As Ibrahim explains, ‘as the degree of reliability, breadth, detail and clinical relevance of performance indicators increase, so does the cost of data collection’ (p. 432).13
Simply because hospital-identified CDI surveillance data are now available as a result of the work of the ACSQHC and infection control professionals across Australia, it does not automatically follow that the data are suitable as a performance indicator, for target setting or for public reporting on a website such as MyHospitals. On the issue of HAIs and public reporting, the Centers for Disease Control and Prevention (CDC) suggest that, as a first step, the goals, objectives and priorities of a public reporting system should be clearly specified, with the information monitored being measurable, to ensure that the system can be held accountable by stakeholders.14 Taking note of the CDC comments, and for the reasons described earlier, we would caution against a move to include CDI data, as they currently stand, in a public reporting arena. To do so undermines the original intent of the development of a national CDI surveillance definition. More fundamentally, cases of hospital-identified CDI cannot always be directly prevented by the actions of an individual hospital, and acquisition may have occurred before hospitalisation.12 In short, holding senior health managers to account for hospital-identified CDI is inherently flawed, as the outcome is arguably outside the control of a healthcare institution.
The examples of CDI and SAB surveillance definitions demonstrate the need to understand limitations of HAI surveillance data and their subsequent use. Such an issue extends beyond HAI surveillance to that of performance indicators and public reporting in healthcare more generally. Performance indicators are complex as they are often considered to be a quantitative measure of quality.13 As the impetus to develop and report outcomes in healthcare (such as HAIs) continues, understanding the complexities of performance indicators and public reporting is paramount. In the case of developing consensus on future HAI performance indicators, engagement with clinicians, data managers and epidemiologists may assist in the development of an indicator with suitable rigour. The examples of CDI and HCA SAB surveillance provided in this article highlight some of the intricacies involved in this area.
Competing interests
Two of the authors are members of committees convened by the ACSQHC. The views expressed in this article are only those of the listed authors. The ACSQHC played no role in the development of this article. The authors have no other conflicts to declare. No funding was received in relation to the development or submission of this article.
References
[1] Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: Institute of Medicine, National Academy Press; 1999.
[2] National Health and Medical Research Council. Australian Guidelines for the Prevention and Control of Infection in Healthcare (2010). Canberra: National Health and Medical Research Council; 2010.
[3] Russo P, Pittet D, Grayson L. Australia: a leader in hand hygiene. Healthc Infect 2012; 17 1–2.
[4] Graves N, Barnett A, White K, Jimmieson N, Page K, Campbell M, Stephens E, Rashleigh-Rolls R, et al. Evaluating the economics of the Australian National Hand Hygiene Initiative. Healthc Infect 2012; 17 5–10.
[5] COAG Reform Council. Healthcare 2010–11: comparing performance across Australia. Sydney: COAG Reform Council; 2012.
[6] Australian Institute of Health and Welfare. MyHospitals. Australian Government; 2012. Available at http://www.myhospitals.gov.au/ [Verified 17 April 2012]
[7] Mitchell B, McGregor A, Brown S. Clostridium difficile Infection (CDI) Surveillance Protocol. V3.0. Tasmanian Infection Prevention and Control Unit, editor. Hobart: Department of Health and Human Services; 2011.
[8] Mitchell B, Gardner A, Collignon P, Stewart L, Cruickshank M. A literature review supporting the proposed national Australian definition for Staphylococcus aureus bacteraemia. Healthc Infect 2010; 15 105–13.
[9] Grayson L, Russo R, Cruickshank M, Bear J, Gee C, Hughes C, Johnson PDR, McCann R, et al. Outcomes from the first 2 years of the Australian National Hand Hygiene Initiative. Med J Aust 2011; 195 615–19.
[10] Collignon P, Cruickshank M, Dreimanis D. Staphylococcus aureus bloodstream infections: an important indicator for infection control. Chapter 2: bloodstream infections – an abridged version. Healthc Infect 2009; 14 165–71.
[11] Dreimanis D, Beckingham W, Collignon P, Roberts J. Staphylococcus aureus bacteraemia surveillance: a relatively easy to collect but accurate clinical indicator on serious health-care associated infections and antibiotic resistance. Australian Infection Control 2005; 10 127–30.
[12] Van Gessel H, McCann R, Peterson A, Cope C, Wilkinson I, Mitchell B, Wells A, Kennedy B, et al. Implementation guide for surveillance of Clostridium difficile infection. Sydney: Australian Commission on Safety and Quality in Health Care; 2011.
[13] Ibrahim J. Performance indicators from all perspectives. Int J Qual Health Care 2001; 13 431–2.
[14] McKibben LHT, Tokars JI, Fowler G, Cardo DM, Pearson ML, Brennan PJ. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Am J Infect Control 2005; 33 217–26.