Furthering the quality agenda in Aboriginal community controlled health services: understanding the relationship between accreditation, continuous quality improvement and national key performance indicator reporting
Beverly Sibthorpe A C, Karen Gardner A and Daniel McAullay B
A Research School of Population Health, College of Medicine, Biology & Environment, The Australian National University, ACT 2601, Australia.
B Kurongkurl Katitjin, Edith Cowan University, 2 Bradford Street, Mount Lawley, WA 6050, Australia.
C Corresponding author. Email: bsibthorpe@bigpond.com
Australian Journal of Primary Health 22(4) 270-275 https://doi.org/10.1071/PY15139
Submitted: 4 September 2015 Accepted: 17 December 2015 Published: 6 May 2016
Journal Compilation © La Trobe University 2016
Abstract
A rapidly expanding interest in quality in the Aboriginal-community-controlled health sector has led to widespread uptake of accreditation using more than one set of standards, a proliferation of continuous quality improvement programs and the introduction of key performance indicators. As yet, there has been no overarching logic that shows how they relate to each other, with consequent confusion within and outside the sector. We map the three approaches to the Framework for Performance Assessment in Primary Health Care, demonstrating their key differences and complementarity. There needs to be greater attention in both policy and practice to the purposes and alignment of the three approaches if they are to embed a system-wide focus that supports quality improvement at the service level.
What is known about the topic?
• There has been rapid expansion of accreditation, continuous quality improvement and key performance indicator reporting in Aboriginal primary health care but no formal analysis of the relationship between them.
What does this paper add?
• This paper provides the first systematic analysis of the relationship between these three approaches and considers the implications for policy and practice.
Introduction
It is widely known that there are significant disparities between the health and social status of Aboriginal and Torres Strait Islander people and other Australians. Primary health care has a key role to play in achieving health parity for these populations. Aboriginal Community Controlled Health Services (ACCHSs) are a major provider of primary health care, with 150 services delivering care to an estimated 50% of Aboriginal and Torres Strait Islander people nationally (NACCHO 2014). There has been increasing interest in the quality of services for Indigenous people, reflected in the Council of Australian Governments' (COAG) 'Closing the Gap' (CtG) strategy introduced in 2008 (COAG 2008). As a consequence, there has been widespread uptake within the community controlled sector of accreditation using more than one set of standards, a proliferation of continuous quality improvement (CQI) and other quality improvement programs, and the introduction of key performance indicator (KPI) reporting in the Northern Territory (the Northern Territory Aboriginal Health Key Performance Indicators – NTAHKPIs) (Northern Territory Government 2015) and subsequently nationally (the nKPIs).
Accreditation, CQI and performance reporting are widely recognised as vehicles for achieving improvements in the quality and safety of health services (WHO 2003). Accreditation is now a requirement for many healthcare organisations and the Australian Government has provided funding to ACCHSs over the past 10 years to support uptake. CQI has been voluntary, unevenly taken up and not well sustained, but it will soon be linked to funding through implementation of a national CQI Framework. Also, standards relating to CQI have been incorporated into accreditation. The submission of nKPI data is compulsory for all primary healthcare services receiving Australian Government funding for services to Indigenous clients, and the results are publicly reported.
While all three are concerned with quality, these key approaches – accreditation, CQI and nKPI reporting – have different underlying philosophies, and use different methods and tools to stimulate improvement. As yet, there has been no overarching logic that shows how they relate to each other. As a consequence, there is confusion within and outside the sector about their differences and whether there is complementarity or unacceptable overlap between them, as well as staggered investment in implementation. We asked: 'What are the key features of the three approaches, what are the critical differences between them, and what are the implications for policy and practice?'
To understand the focus of each approach, we mapped the measures used in each against the Framework for Performance Assessment in Primary Health Care (FPA_PHC) (Sibthorpe and Gardner 2007). Based on Donabedian’s (1988) now classic ‘structure’, ‘process’, ‘outcome’ model for assessment of quality of care, the FPA_PHC specifies measurement at four levels: the stewardship role of governments (Level 1); local health services’ organisational structures and processes (Level 2); processes of care (Level 3); and intermediate outcomes (Level 4). Importantly, processes are split across two levels – processes of care delivered belong with structures in Level 2 while processes of care received belong in Level 3. Thus, the denominator at Level 3 is always clients.
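To make the distinction between levels concrete, the hypothetical sketch below (in Python) contrasts a Level 2 question about organisational systems – the kind of yes/no structural question accreditation standards ask – with a Level 3 measure of care received, where the denominator is always clients. The client records, field names and six-month testing window are illustrative assumptions only, not part of the FPA_PHC or any of the programs discussed here.

```python
from datetime import date, timedelta

# Illustrative client records; field names and values are assumed for this sketch.
clients = [
    {"id": 1, "diabetic": True,  "last_hba1c_test": date(2015, 3, 10)},
    {"id": 2, "diabetic": True,  "last_hba1c_test": None},
    {"id": 3, "diabetic": False, "last_hba1c_test": None},
    {"id": 4, "diabetic": True,  "last_hba1c_test": date(2015, 7, 2)},
]

# Level 2 (organisational systems): a yes/no structural question of the kind
# accreditation asks - does the service operate a recall system?
service_profile = {"has_recall_system": True}
level_2_result = service_profile["has_recall_system"]

# Level 3 (processes of care received): the denominator is always clients.
# Here, the proportion of diabetic clients with an HbA1c test in the past 6 months.
today = date(2015, 8, 1)
diabetic_clients = [c for c in clients if c["diabetic"]]
tested_recently = [
    c for c in diabetic_clients
    if c["last_hba1c_test"] and today - c["last_hba1c_test"] <= timedelta(days=182)
]
level_3_result = len(tested_recently) / len(diabetic_clients)

print(f"Level 2 - recall system in place: {level_2_result}")
print(f"Level 3 - diabetic clients tested in past 6 months: {level_3_result:.0%}")
```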
Results
Mapping of the standards and measures for each of the three approaches is shown in Fig. 1. Below, we describe the key features of each approach and the critical differences between them, before turning to the policy and practice implications for the quality improvement environment within the sector over coming years.
Accreditation
‘Accreditation is public recognition by a health care accreditation body of the achievement of accreditation standards by a health care organisation, demonstrated through an independent external peer assessment of that organisation’s level of performance in relation to the standards’ (Australian Council of Healthcare Standards 2015). While accreditation is a developmental process, ultimately there is a yes/no outcome – either an organisation is accredited against one or more sets of standards or it is not. Further, accreditation does not concern itself with results. For example, it considers whether or not there is an appropriate client record system and recall of diabetic clients but not the rate of routine testing or levels of blood glucose for those clients. Data are used for certification, and accreditation and re-accreditation occur over long cycles of several years.
The three most commonly adopted accreditation standards in the ACCHS sector are International Organization for Standardization (ISO), Quality Improvement Council (QIC) and Australian General Practice Accreditation Limited (AGPAL).
ISO 9001:2008 sets out 46 requirements for a quality management system across the five sections shown in Fig. 1 (ISO 2008).
The QIC Health and Community Services Standards has 18 standards in three sections (see Fig. 1). Designed for self-assessment or as a basis for external review and to reflect ‘continuous quality improvement principles’ (QIC 2013), these standards can be applied in organisations based in the public, commercial or community sectors.
The Royal Australian College of General Practitioners’ Standards for General Practices are used by AGPAL to accredit general practices. The Standards ‘are designed to be a template for safe and high quality care in the increasingly complex environment of Australian general practice’ (RACGP 2015). In total, there are 41 criteria for 15 standards in five sections (see Fig. 1).
All three sets of standards relate to Level 2 of the FPA_PHC – that is, they focus on assessing the quality of organisational systems for the provision of care. ISO standards relate more to the business end and AGPAL standards more to the clinical end. Importantly, both ISO and AGPAL include standards that require CQI processes, and there is increasing recognition both within and outside the sector that formal CQI processes constitute best practice (Rubenstein et al. 2014). QIC standards sit somewhere between ISO and AGPAL on the continuum between business and clinical processes. Only QIC concerns itself with organisational governance – a critical factor in the function and sustainability of ACCHSs, with their community-based boards.
Continuous quality improvement
Internationally, CQI in health care is defined as ‘a structured organisational process for involving personnel in planning and executing a continuous flow of improvements to provide quality health care that meets or exceeds expectations’ (Sollecito and Johnson 2013, p. 4). It involves frequent, routine plan–do–study–act (PDSA) cycles that provide a structure for iterative testing of changes to improve the quality of service systems (Taylor et al. 2013). Importantly, CQI focuses on services’ changing priorities, reflecting the different needs of their clients and communities over time. It examines results, and data are used for internal dialogue among health service teams and within supportive external provider networks. CQI cycles typically occur over much shorter periods than accreditation.
Its scope and internal flexibility mean that CQI typically uses a wide range of measures that may include benchmarks (reference levels) or targets (aspirational levels) that may change over time. These are drawn from the many thousands of measures in services’ electronic health records and administrative systems. Some of the measures will be performance indicators – that is, measures widely recognised as providing an evidence-based snapshot of the quality of service systems, care and outcomes in priority areas. (The measures for the three formal CQI programs operating within the ACCHS sector are shown in Fig. 1.)
One21seventy commenced as the Audit and Best Practice for Chronic Disease (ABCD) program in 2006. It now has a menu of eight audit tools that cover areas such as child health, and vascular and metabolic syndrome. Each tool has a comprehensive set of measures covering processes of care (Level 3) and intermediate outcomes (Level 4), as shown in Fig. 1. The audits are combined with a systems assessment, generally completed through a facilitated discussion with the whole service or program, which covers five domains at Level 2. Some of the Level 3 and 4 measures are in the nKPI indicator set, as are measures from the other audit tools (One21seventy 2015).
The Improvement Foundation’s 2010 CtG Collaborative was part of Wave Three of the broader Australian Primary Care Collaborative (APCC) program that focused on care for Aboriginal and Torres Strait Islander people in the areas of access and chronic disease. As shown in Fig. 1, the CtG collaborative had a comprehensive set of measures focusing on Level 3 and Level 4, with a small number focusing on Level 2 (Improvement Foundation 2010). Again, some of the Level 3 and 4 measures are in the nKPI indicator set.
The Queensland Aboriginal and Islander Health Council (QAIHC) developed a set of primary healthcare measures (see Fig. 1). In partnership with the Improvement Foundation and general practice, QAIHC implemented the QAIHC CtG Collaborative (Panaretto et al. 2013). Unlike the measures for One21seventy and the APCC CtG Collaborative, QAIHC developed a smaller, more focused set of measures in priority areas (called the ‘QAIHC core indicators’) that are also in the nKPIs. QAIHC’s primary purpose was to use these core indicators for CQI purposes, while also providing an overview of how well its services were performing at the state level.
As shown in Fig. 1, the measures used in these CQI programs are all quantitative, and focus on client care and outcomes (Levels 3 and 4). However, through PDSA cycles, there can be some interest in service systems (Level 2). For example, a service wanting to improve the proportion of its diabetic clients who have good blood glucose control may develop progress measures relating to the completeness of its client data, the effectiveness of recall or the availability of dietetic services. Such upstream measures are necessary but not sufficient to monitor improvements in service delivery. While they focus at Level 2, they do not overlap with accreditation, which, for example, is interested only in whether there is a recall system, not in how effective the service has been in getting diabetic clients back for routine care.
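As a sketch of how such progress measures might be tracked within a PDSA cycle, the hypothetical Python example below computes the two upstream measures described above (data completeness and recall effectiveness) alongside the downstream outcome the service ultimately wants to shift. The client data, field names and the HbA1c target of 7.0% are invented assumptions for illustration, not drawn from any of the programs described here.

```python
# Hypothetical PDSA progress measures for improving blood glucose control
# among diabetic clients. All figures are invented for illustration.

def progress_measures(clients):
    """Return upstream progress measures and the downstream outcome measure."""
    diabetic = [c for c in clients if c["diabetic"]]
    with_hba1c = [c for c in diabetic if c["hba1c"] is not None]
    recalled = [c for c in diabetic if c["attended_recall"]]
    in_target = [c for c in with_hba1c if c["hba1c"] <= 7.0]  # assumed target
    return {
        "data_completeness": len(with_hba1c) / len(diabetic),
        "recall_effectiveness": len(recalled) / len(diabetic),
        "good_control": len(in_target) / len(diabetic),
    }

# Snapshot of the diabetic client list at the end of one PDSA cycle (invented data).
cycle_1 = [
    {"diabetic": True, "hba1c": 6.8, "attended_recall": True},
    {"diabetic": True, "hba1c": None, "attended_recall": False},
    {"diabetic": True, "hba1c": 8.4, "attended_recall": True},
    {"diabetic": True, "hba1c": 6.5, "attended_recall": True},
]

for name, value in progress_measures(cycle_1).items():
    print(f"{name}: {value:.0%}")
```

Recomputing the same measures at the end of each cycle gives the team the short feedback loop that distinguishes CQI from the long, yes/no cycles of accreditation.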
The critical differences between CQI and accreditation are summarised in Table 1. While accreditation is an organisational ‘health check’ against best practice standards and is critical as a stocktake exercise, CQI offers a process for continuous internal improvement that engages teams in reflective practice to continually adapt care in relation to client expectations.
National key performance indicator reporting
Performance indicators are ‘measurable elements of practice performance for which there is evidence or consensus that they can be used to assess the quality, and hence change of quality, of care provided’ (Crampton et al. 2004, p. 3). Always limited in number and strongly evidence/best practice based, they focus on health priorities, with a small number of indicators for each priority. The use of performance indicators in health care for both internal self-assessment and governance, and external evaluation, is internationally widespread (WHO 2003).
In 2009, the Australian Government made a commitment to the ACCHS sector to implement several changes to alleviate the widely recognised contract-reporting burden (Dwyer et al. 2011). This was to be done ‘while at the same time maintaining the supply … of health outcome and performance data for the Australian Government’ (Office of Aboriginal and Torres Strait Islander Health 2011). As part of this commitment, nKPI reporting was implemented. The first set of indicators was introduced in 2012 and the second in 2013. There are currently 24 indicators, of which 19 are being reported (AIHW 2014). More indicators are planned. As shown in Fig. 1, all indicators are quantitative and relate only to Levels 3 and 4, although it would be possible to add some at Level 2.
nKPI reports describe services’ performance based on the indicator data. The Australian Institute of Health and Welfare (AIHW) produces both individual service and national summary reports. While initially framed in the context of maintaining the supply of performance data, the nKPI reports state that ‘the purpose of the nKPIs is to improve the delivery of primary health-care services by supporting continuous quality improvement (CQI) activity among service providers’ (AIHW 2014, p. viii). That nKPIs can support quality is generally not in question; whether the current reports can support local CQI is more problematic.
First, the lag between data reporting periods and report release is too long from a CQI perspective, and the online platform that supports data collection does not allow services to independently access their own trend information or comparisons with peer services in real time. Additionally, some services use a different denominator and so are working from different results. Second, while reports provide high-level comparisons (state and geographic regions) that may be useful for sector-level discussions, they are less useful for services because groupings are not necessarily of peers. Third, report production is distant from context, and context is critical to understanding the meaning of indicator results. Fourth, the national summary reports are publicly available and closely considered by governments, but the dialogues they stimulate are predominantly external to services.
For nKPI reporting to contribute effectively to the quality agenda, there needs to be a high level of trust and agreement between the ACCHS sector and government about: which indicators should be included; the quality of the data obtained; and how the data will be interpreted and reported, by whom and for what purpose. The critical differences between CQI and performance reporting are summarised in Table 2.
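To make concrete the denominator issue noted above, the brief hypothetical arithmetic below (in Python) shows how the same numerator can yield quite different reported results. Both client counts and both denominator definitions are invented for illustration and are not taken from the nKPI specifications.

```python
# Invented figures illustrating how denominator choice changes an indicator result.
# Neither denominator definition is drawn from the nKPI specifications.

clients_tested = 300              # numerator: clients with the relevant test recorded

all_clients_seen_24_months = 520  # one possible denominator definition
regular_clients_only = 410        # another possible denominator definition

print(f"Using all clients seen:  {clients_tested / all_clients_seen_24_months:.0%}")
print(f"Using regular clients:   {clients_tested / regular_clients_only:.0%}")
# 58% vs 73% - the same care delivered, reported as quite different performance.
```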
Discussion and conclusions
This analysis shows that accreditation and CQI are distinct but complementary quality improvement approaches that focus at different levels of the primary healthcare continuum. Both are necessary in a modern healthcare organisation.
There has been significant Australian Government investment in the uptake of accreditation across the ACCHS sector and most services are accredited or on the path to accreditation. However, this analysis demonstrates that neither of the two main programs – ISO and AGPAL – appears sufficient on its own to cover the full scope of organisational and clinical quality necessary to run a comprehensive primary healthcare service. As a result, many services get accredited against both. Double accreditation is fostered by the Australian Government’s practice incentive payment to AGPAL-accredited private general practices, for which AGPAL-accredited ACCHSs are also eligible. Given the considerable time and effort involved in accreditation, getting accredited against two different sets of standards is inefficient. Further, neither set of standards deals with governance issues, despite these being key to the sustainability of the ACCHS sector. This is a strength of the less widely used QIC standards. The Australian Government program ‘Establishing Quality Health Standards’, which supported accreditation, has now ceased, and it is unclear to what extent services have the capacity to absorb its costs and what a long-term policy response might be. Consideration needs to be given to how accreditation within the sector might be rationalised.
Until recently, uptake of CQI in the sector has been patchy and poorly sustained, often hindered by a belief that services are ‘already doing CQI’ because they are accredited. However, expectations – of systematic CQI activity (including for accreditation) and of delivery on health outcomes to help close the gap for Aboriginal and Torres Strait Islander people – have risen steeply. This is reflected in the recent commissioning by the Department of Health of a 10-year CQI Framework for Aboriginal and Torres Strait Islander Primary Health Care. Major Australian Government investment in the Framework within the ACCHS sector ($40 million over 3 years, 2015–2018) has been committed and spending has commenced. Clear sector and government articulation of the differences between accreditation and CQI, and of their complementarity, will be critical to a smooth transition to universal uptake of CQI.
While the nKPI measures can be – and are – used for CQI at the local level, the role of nKPI reporting in meeting quality objectives is much less clear. Unresolved issues with nKPI reporting currently limit its effectiveness for quality improvement. There is a need to review the role of nKPI reporting if services and their peak bodies are to view it as more than an accountability chore, and as a process that enhances rather than undermines their quality agenda. In addition, there are gaps in the current set of indicators – for example, in health priority areas such as mental health, otitis media and sexually transmitted infections, and in client satisfaction. Some additional Level 2 indicators would also be useful – for example, indicators relating to workforce.
In summary, there needs to be greater attention in both policy and practice to the purposes and alignment of accreditation, CQI and key performance indicator reporting if a system-wide focus on quality that supports quality improvement at the service level is to be embedded in the ACCHS sector.
Conflicts of interest
None declared.
References
Australian Council of Healthcare Standards (2015) What is accreditation? Available at http://www.achs.org.au/about-us/what-we-do/what-is-accreditation/ [Verified 3 August 2015]
Australian Institute of Health and Welfare (AIHW) (2014) National Key Performance Indicators for Aboriginal and Torres Strait Islander primary health care: first national results June 2012 to June 2013. Available at http://www.aihw.gov.au/publication-detail/?id=60129546941&tab=2 [Verified 5 August 2015]
Council of Australian Governments (COAG) (2008) Closing the gap in Indigenous disadvantage. Available at https://www.coag.gov.au/closing_the_gap_in_indigenous_disadvantage [Verified 15 June 2015]
Crampton P, Perera R, Crengle S, Dowell A, Howden-Chapman P, Kearns R, Love T, Sibthorpe B, Southwick M (2004) What makes a good performance indicator? Devising primary care performance indicators for New Zealand. The New Zealand Medical Journal 117, 1–12.
Donabedian A (1988) The quality of care: how can it be assessed? Journal of the American Medical Association 260, 1743–1748.
Dwyer J, Lavoie J, O’Donnell K, Marlina U, Sullivan P (2011) Contracting for Indigenous health care: towards mutual accountability. Australian Journal of Public Administration 70, 34–46.
Improvement Foundation (2010) ‘APCC Closing the Gap collaborative handbook.’
International Organization for Standardization (ISO) (2008) ISO 9001: Quality management systems – requirements. Available at http://cucqae.cu.edu.eg/materials/ISO_9001_2008.pdf [Verified 26 August 2015]
National Aboriginal Community Controlled Health Organisation (NACCHO) (2014) Investing in healthy futures for generational change: NACCHO 10 point plan 2013–2030. Available at http://www.naccho.org.au/download/naccho_health_futures/NACCHO%20Healthy%20Futures%2010%20point%20plan%202013-2030.pdf [Verified 3 August 2015]
Northern Territory Government (2015) Aboriginal Health Key Performance Indicators. Available at http://www.nt.gov.au/health/ahkpi/ [Verified 5 August 2015]
Office of Aboriginal and Torres Strait Islander Health (2011) OCHREStreams fact sheet: reporting of national Key Performance Indicators. Available at http://www.ochrestreams.org.au/Fact_Sheets/20111115%20-%20FINAL%20-%20FactSheet%20-%20OATSIH%20Reporting%20of%20nKPIs%20-%20v5.0.pdf [Verified 5 August 2015]
One21seventy (2015) CQI information. Available at http://www.one21seventy.org.au/cqi-information/ [Verified 26 August 2015]
Panaretto K, Gardner K, Button S, Carson A, Shibasaki R, Wason G, Baker D, Mein J, Dellit A, Lewis D, Wenitong M, Ring I (2013) Prevention and management of chronic disease in Aboriginal and Islander Community Controlled Health Services in Queensland: a quality improvement study assessing change in selected clinical performance indicators over time in a cohort of services. BMJ Open 3, e002759
Quality Improvement Council (QIC) (2013) Quality Improvement Health and Community Services Standards: 6th Edition Standards overview. Available at https://www.qip.com.au/wp-content/uploads/QIC_Standards_Overview.pdf [Verified 26 August 2015]
Royal Australian College of General Practitioners (RACGP) (2015) Standards for general practices: 4th Edition. Available at http://www.racgp.org.au/your-practice/standards/standards4thedition/ [Verified 5 August 2015]
Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, Foy R, O’Neill S, Dalal S, Shekelle P (2014) How can we recognize continuous quality improvement? International Journal for Quality in Health Care 26, 6–15.
Sibthorpe B, Gardner K (2007) A conceptual framework for performance assessment in primary health care. Australian Journal of Primary Health 13, 96–103.
Sollecito W, Johnson J (Eds) (2013) ‘Continuous quality improvement in health care: theory, implementations, and applications.’ (Jones and Bartlett: USA).
Taylor M, McNicholas C, Nicolay C, Darzi A, Bell D, Reed R (2013) Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Quality & Safety 23, 290–298.
World Health Organization (WHO) (2003) Quality and accreditation in health care services: a global review. Available at http://www.who.int/hrh/documents/en/quality_accreditation.pdf [Verified 3 August 2015]