Screening for important unwarranted variation in clinical practice: a triple-test of processes of care, costs and patient outcomes
Andrew Partington A, Derek P. Chew B, David Ben-Tovim B, Matthew Horsfall C, Paul Hakendorf B and Jonathan Karnon A D
A School of Public Health, University of Adelaide, Level 7, 178 North Terrace, Adelaide, SA 5044, Australia. Email: a.r.partington@gmail.com
B School of Medicine, Flinders University, Sturt Road, Bedford Park, SA 5042, Australia. Email: derek.chew@flinders.edu.au; david.ben-tovim@flinders.edu.au; paul.hakendorf@sa.gov.au
C South Australian Health and Medical Research Institute, North Terrace, Adelaide, SA 5000, Australia. Email: Matthew.Horsfall@sa.gov.au
D Corresponding author. Email: jonathan.karnon@adelaide.edu.au
Australian Health Review 41(1) 104-110 https://doi.org/10.1071/AH15101
Submitted: 2 June 2015 Accepted: 1 February 2016 Published: 3 March 2016
Journal Compilation © AHHA 2017 Open Access CC BY-NC-ND
Abstract
Objective Unwarranted variation in clinical practice is a target for quality improvement in health care, but there is no consensus on how to identify such variation or to assess the potential value of initiatives to improve quality in these areas. This study illustrates the use of a triple test, namely the comparative analysis of processes of care, costs and outcomes, to identify and assess the burden of unwarranted variation in clinical practice.
Methods Routinely collected hospital and mortality data were linked for patients presenting with symptoms suggestive of acute coronary syndromes at the emergency departments of four public hospitals in South Australia. Multiple regression models analysed variation in re-admissions and mortality at 30 days and 12 months, patient costs and multiple process indicators.
Results After casemix adjustment, an outlier hospital with statistically significantly poorer outcomes and higher costs was identified. Key process indicators included admission patterns, use of invasive diagnostic procedures and length of stay. Performance varied according to patients’ presenting characteristics and time of presentation.
Conclusions The joint analysis of processes, outcomes and costs as alternative measures of performance informs assessment of the importance of reducing variation in clinical practice and identifies specific targets for quality improvement along clinical pathways. Such analyses could be undertaken across a wide range of clinical areas to inform the potential value and prioritisation of quality improvement initiatives.
What is known about the topic? Variation in clinical practice is a long-standing issue that has been analysed from many different perspectives. It is neither possible nor desirable to address all forms of variation in clinical practice: the focus should be on identifying important unwarranted variation to inform actions to reduce variation and improve quality.
What does this paper add? This paper proposes the comparative analysis of processes of care, costs and outcomes for patients with similar diagnoses presenting at alternative hospitals, using linked, routinely collected data. This triple test of performance indicators extracts maximum value from routine data to identify priority areas for quality improvement to reduce important and unwarranted variations in clinical practice.
What are the implications for practitioners? The proposed analyses need to be applied to other clinical areas to demonstrate the general application of the methods. The outputs can then be validated through the application of quality improvement initiatives in clinical areas with identified important and unwarranted variation. Validated frameworks for the comparative analysis of clinical practice provide an efficient approach to valuing and prioritising actions to improve health service quality.
Introduction
Variation in clinical practice remains a widely acknowledged barrier to the equitable and efficient provision of health care.1 Some variation is warranted, reflecting heterogeneity in the clinical symptoms and preferences of individual patients, but there is also unwarranted variation, which results in the inefficient use of scarce healthcare resources. Unwarranted variation has been broadly defined as reflecting ‘the limits of professional knowledge and failures in its application’.2 Quality improvement to reduce unwarranted variation in clinical practice is not a trivial task,3 so healthcare providers should focus on priority areas in which the expected net benefits are greatest.
The identification of important and unwarranted variation in clinical practice necessitates some form of comparative assessment of hospital performance. The Australian Commission on Safety and Quality in Health Care (ACSQHC) has published Clinical Care Standards for a range of key clinical areas,4 with associated sets of process indicators to assist quality improvement. A limitation of process indicators is their focus on the components of care pathways that can be measured. Important aspects of a care pathway may not be measurable because of data system limitations, as well as the non-deterministic and qualitative nature of the processes involved.5 This means that process indicators alone provide only a partial analysis of quality.
The ACSQHC is also promoting the use of hospital mortality indicators as a screening tool to identify high- and low-performing areas of clinical activity.6 Lilford et al. cite the poor correlation between outcomes and the quality of care,7 while noting that the problems associated with outcome measures are reduced when they are used not to judge performance, but to inform improvement in a non-punitive manner.
Alternatively, activity-based funding aims to inform healthcare improvements through analyses of cost differences in the provision of similar services. The Independent Hospital Pricing Authority is developing methods to incorporate measures of quality within an activity-based funding framework, but currently no adjustments are made for safety and quality.8
As the above examples indicate, the alternative forms of performance measurement are generally considered in isolation. This paper presents a case study application of a triple test to screen for important variations in processes of care, costs and outcomes for patients presenting with symptoms suggestive of acute coronary syndromes (ACS) at four large public hospitals in South Australia (SA).
Methods
Routinely collected hospital data were used to inform comparative analyses of processes of care, costs and outcomes for patients presenting at the emergency department (ED) with symptoms suggestive of ACS. The following sections describe the definition of the eligible population, the data sources and the data analysis methods.
Eligible population
The eligible population comprised all patients attending the ED of one of the four main public hospitals in SA in the year to 30 June 2010 with an ED diagnosis of either chest pain (International Classification of Diseases, 10th revision (ICD-10) code R07), unstable angina (ICD-10 code I20) or myocardial infarction (MI; ICD-10 code I21), and who received at least one troponin assay (a diagnostic indicator of cardiac muscle injury) during their hospital episode.
Data sources
The four study hospitals each maintain a suite of local data warehouses containing comprehensive patient-level information that describes key procedures, pathology test results, movement between hospital departments and wards etc., as well as automated links to population-based mortality data. These local systems have comparable nomenclature and collection practices, and are collated by the state health department in the form of a single, state-wide repository.
Separate administrative data, submitted to the state health department for every in-patient separation at all public and private SA hospitals, were available from 2003 to June 2011. These data include variables such as age, gender and postcode of normal residence (to inform Socioeconomic Indexes for Areas (SEIFA) scores), as well as comorbidities.
Probabilistic data linkage methods using name, gender and date of birth were used to group public hospital separations by patient. Private hospital separations were assigned to these groups on the basis of matching Medicare numbers.
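The linkage itself was undertaken within the state health department. Purely as an illustrative sketch of the general approach (not the study's actual procedure), a blocked probabilistic deduplication over hypothetical identifier columns (surname, given_name, sex, dob) could be set up with the open-source Python recordlinkage package as follows.

```python
import networkx as nx
import recordlinkage

# 'separations' is a hypothetical DataFrame of public-hospital separations with
# columns surname, given_name, sex and dob; column names are assumptions.
indexer = recordlinkage.Index()
indexer.block('dob')                          # only compare records sharing a date of birth
pairs = indexer.index(separations)            # candidate record pairs within the single file

compare = recordlinkage.Compare()
compare.string('surname', 'surname', method='jarowinkler', threshold=0.9, label='surname')
compare.string('given_name', 'given_name', method='jarowinkler', threshold=0.9, label='given_name')
compare.exact('sex', 'sex', label='sex')
scores = compare.compute(pairs, separations)

# Treat pairs agreeing on all three comparisons as the same patient, then take the
# connected components of the match graph to form patient groups.
matches = scores[scores.sum(axis=1) == 3].index
graph = nx.Graph()
graph.add_nodes_from(separations.index)
graph.add_edges_from(matches)
group_of = {rec: gid
            for gid, component in enumerate(nx.connected_components(graph))
            for rec in component}
separations['patient_group'] = [group_of[i] for i in separations.index]
```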
Comorbidities were coded on the basis of principal and additional diagnoses in the 12 months preceding the index ED presentation.9 The cost of the index hospital episode was estimated for every eligible patient, covering both the ED component and, for admitted patients, the in-patient component. Detailed patient-level costs, estimated by each hospital and submitted to the state health department, were available for all in-patient separations. Outcomes were specified as a related re-admission (for unstable angina, MI or stroke) or mortality within 30 days and within 12 months.
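For illustration, the 30-day and 12-month outcome flags could be derived from the linked separation and mortality records roughly as follows. The table and column names (index_df, seps, deaths) and the stroke ICD-10 codes shown are assumptions rather than the study's actual coding.

```python
import pandas as pd

RELATED = ('I20', 'I21', 'I63', 'I64')   # unstable angina, MI and (assumed) stroke codes

# index_df: one row per index presentation (patient_id, index_date)
# seps:     all linked separations (patient_id, admit_date, principal_diagnosis)
# deaths:   linked mortality records (patient_id, death_date)
readmit = seps.merge(index_df, on='patient_id')
readmit['days'] = (readmit['admit_date'] - readmit['index_date']).dt.days
readmit = readmit[(readmit['days'] > 0)
                  & readmit['principal_diagnosis'].str.startswith(RELATED)]

died = deaths.merge(index_df, on='patient_id')
died['days'] = (died['death_date'] - died['index_date']).dt.days

# Flag a related re-admission or death within each follow-up window.
for window, col in [(30, 'event_30d'), (365, 'event_12m')]:
    hit = set(readmit.loc[readmit['days'] <= window, 'patient_id']) \
        | set(died.loc[died['days'] <= window, 'patient_id'])
    index_df[col] = index_df['patient_id'].isin(hit).astype(int)
```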
Variables relating to the process of care were reviewed with clinical experts to select a set of process variables with the greatest potential for insight into variations in healthcare costs and patient outcomes across hospitals. The selected process indicators included the proportion of presenting patients admitted to hospital, the time to admission (i.e. length of stay (LOS) in the ED), the proportion of patients undergoing an invasive diagnostic procedure and the proportion of those who went on to receive an invasive management procedure, as well as total in-patient LOS for admitted patients.
Data analysis
To identify variation between the hospitals, separate multiple regression models were fitted to the data for each of the specified cost, outcome and process of care dependent variables. Binary hospital attendance covariates were used to test for hospital effects, with hospital interaction terms used to identify patient subgroups that may be driving variation observed at the aggregate hospital level. Other model covariates were selected from patient-level variables (age, gender, troponin test result (positive or negative), SEIFA score), as well as a wide range of binary comorbidity and recent hospital admission variables. Interactions between key patient-level covariates were also tested.
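A minimal sketch of how such models might be specified is shown below using Python's statsmodels. The covariate names, the gamma/log-link choice for the cost model and the single interaction term are illustrative assumptions, not the study's exact specification.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per eligible presentation, with hypothetical columns hospital (1-4),
# age, male, troponin_pos, seifa, prior_circulatory, event_30d and inpatient_cost.
covars = 'age + male + troponin_pos + seifa + prior_circulatory'

# Binary outcome: logistic regression with hospital dummies and a hospital x
# troponin interaction to probe subgroup-level variation.
logit_fit = smf.logit(
    f'event_30d ~ C(hospital) + {covars} + C(hospital):troponin_pos',
    data=df).fit()

# Continuous outcome (index episode cost): a gamma GLM with a log link is a common
# choice for right-skewed cost data (statsmodels >= 0.14; use links.log() on older versions).
cost_fit = smf.glm(
    f'inpatient_cost ~ C(hospital) + {covars}',
    data=df, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(logit_fit.summary())
print(cost_fit.summary())
```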
For binary dependent variables, logistic regression models were fitted; model specification was tested using the link test, and goodness-of-fit was assessed using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. For the continuous dependent variables, generalised linear models (GLMs) were fitted. Diagnostic tests included the modified Park test (for the GLM family) and the Pearson correlation test, the Pregibon link test and the modified Hosmer-Lemeshow test (for the GLM link),10 as well as visual inspection of the residuals.
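Not all of these diagnostics are built into standard Python libraries. Continuing the previous sketch, a hedged illustration of how the Hosmer-Lemeshow test, ROC area, Pregibon link test and a simplified modified Park test might be computed for the fitted models is given below.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-squared statistic and p-value over decile-of-risk groups."""
    d = pd.DataFrame({'y': np.asarray(y), 'p': np.asarray(p)})
    d['g'] = pd.qcut(d['p'], groups, labels=False, duplicates='drop')
    obs = d.groupby('g')['y'].sum()
    exp = d.groupby('g')['p'].sum()
    n = d.groupby('g')['y'].count()
    hl = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return hl, stats.chi2.sf(hl, len(obs) - 2)

p_hat = np.asarray(logit_fit.predict(df))
print('Hosmer-Lemeshow p:', hosmer_lemeshow(df['event_30d'], p_hat)[1])
print('Area under ROC curve:', roc_auc_score(df['event_30d'], p_hat))

# Pregibon link test: refit on the linear predictor and its square; a significant
# squared term suggests model mis-specification.
lt = pd.DataFrame({'y': df['event_30d'].values, 'xb': np.log(p_hat / (1 - p_hat))})
lt['xb2'] = lt['xb'] ** 2
print('Link test p (squared term):',
      smf.logit('y ~ xb + xb2', data=lt).fit(disp=0).pvalues['xb2'])

# Simplified modified Park test for the GLM family: regress log squared residuals
# on log fitted values; a slope near 2 supports the gamma family for the cost model.
park = pd.DataFrame({'lr2': np.log((df['inpatient_cost'] - cost_fit.fittedvalues) ** 2),
                     'lmu': np.log(cost_fit.fittedvalues)})
print('Park test slope:', smf.ols('lr2 ~ lmu', data=park).fit().params['lmu'])
```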
For each fitted model, the mean covariate values were applied to generate predicted outputs for each hospital at an aggregate level and for each patient subgroup (as defined by the hospital interaction terms in each model). Relative risks (RRs) were estimated for binary dependent variables and mean differences were determined for continuous dependent variables. To represent the uncertainty around the mean results, 1000 bootstrap samples of the dataset were generated, stratified by hospital. The regression models were refitted for each bootstrap sample, and outputs generated for a hypothetical patient with mean values for each of the covariates included in the models (e.g. mean age, proportion of male patients etc.). The bootstrap outputs informed confidence intervals around each cost, outcome and process variable for the aggregate and subgroup analyses. Using the joint cost and outcomes outputs of the bootstrap analysis, cost-effectiveness acceptability curves were generated to represent the probability of each hospital being the benchmark performer across a range of threshold values.
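Continuing the earlier sketches, the hospital-stratified bootstrap and the 'mean patient' predictions might look like the following. The 1000 replicates and mean-covariate profile follow the description above, while the model formulas and column names remain illustrative; the joint cost and risk draws are the inputs to the acceptability analysis in Fig. 1.

```python
import numpy as np
import pandas as pd

hospitals = sorted(df['hospital'].unique())
n_boot = 1000
risk_draws, cost_draws = [], []

for _ in range(n_boot):
    # Resample patients with replacement within each hospital stratum.
    bs = df.groupby('hospital', group_keys=False).sample(frac=1.0, replace=True)

    event_b = smf.logit(f'event_12m ~ C(hospital) + {covars}', data=bs).fit(disp=0)
    cost_b = smf.glm(f'inpatient_cost ~ C(hospital) + {covars}', data=bs,
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    # Predict for a hypothetical 'average patient' attending each hospital.
    profile = bs[['age', 'male', 'troponin_pos', 'seifa', 'prior_circulatory']].mean().to_dict()
    newdata = pd.DataFrame([{**profile, 'hospital': h} for h in hospitals])
    risk_draws.append(np.asarray(event_b.predict(newdata)))
    cost_draws.append(np.asarray(cost_b.predict(newdata)))

risk_draws = np.vstack(risk_draws)   # n_boot x n_hospitals
cost_draws = np.vstack(cost_draws)

# Percentile confidence intervals for each hospital's adjusted cost and event risk.
print(np.percentile(cost_draws, [2.5, 97.5], axis=0))
print(np.percentile(risk_draws, [2.5, 97.5], axis=0))
```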
Ethics committee approval was granted by the SA Health Human Research Ethics Committee.
Results
The analysis included 7950 eligible patients, ranging from 1527 patients at Hospital 2 to 2368 patients at Hospital 3. Table 1 describes the key characteristics of the patients presenting at the comparator hospitals. There were statistically significant differences in some key baseline characteristics, including age, socioeconomic status, objective risk markers (troponin test) and existing circulatory conditions and diabetes.
All the fitted regression models for the alternative process of care, cost and patient outcome dependent variables, which adjusted for differences in baseline characteristics, passed the a priori specified tests for goodness-of-fit and model specification. The following sections describe the model outputs for patient outcomes, costs and processes of care, respectively.
Patient outcomes
Outcome events were analysed at 30 days and 12 months, with regard to hospital admissions for cardiovascular events and mortality. Table 2 describes event rates at each hospital, as well as RRs compared with the hospital with the highest event rates (Hospital 2). Across all patients, the 30-day event rate ranged from 0.9% to 2.1%. The RRs ranged from 0.45 to 0.83, but the RR was statistically significantly <1 only at Hospital 1. The subgroup analysis by age suggests that Hospital 1 achieved particularly good outcomes in younger patients.
At 12 months, the event rate for re-admissions or mortality was significantly higher at Hospital 2 compared with all other hospitals. The mean RRs ranged from 0.64 to 0.72. Separate analysis of mortality and re-admissions at 12 months showed increased event rates for both outcomes at Hospital 2, with all RRs either at or approaching statistical significance.
Index presentation costs
Table 3 presents differences in in-patient costs associated with the index chest pain presentations across the study hospitals. The cost per presenting patient is reported, based on the proportion of patients who were admitted at each hospital. Across all patients, Hospital 2 reported the highest standardised costs per presenting patient, which were over A$600 (and statistically significantly) higher per patient than at Hospitals 1 and 4. Across the 1527 patients presenting at Hospital 2 over the study year, these additional costs sum to almost A$1 million.
Costs at Hospital 2 are particularly high in the subgroup of presenting patients with prior experience of a circulatory condition, which may be linked to the increased in-patient admission rate for this patient group at Hospital 2. Conversely, Hospital 3 has significantly increased costs in the subgroup of patients without an existing circulatory condition.
Figure 1 combines the above analyses of costs and patient outcomes in the form of cost-effectiveness acceptability planes, which represent the probability that each hospital is the most cost-effective (and thus the benchmark) hospital at different equivalent monetary values for avoiding admissions and mortality at 12 months (as represented on the x-axes). As an example, if we assign an equivalent monetary value of A$100 000 to avoiding a death at 12 months and a value of A$50 000 to avoiding an admission, Hospitals 1, 2, 3 and 4 have probabilities of being the most cost-effective hospital of 30%, 0%, 32% and 38%, respectively. The choice of the benchmark hospital varies considerably between Hospitals 1, 3 and 4 according to the values associated with the avoidance of mortality and hospital admissions. However, there is only a very small likelihood that Hospital 2 is the benchmark hospital, which is consistent with the significantly increased costs and outcome events at Hospital 2 (Tables 2, 3).
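As a worked illustration of how such probabilities can be read off the bootstrap output, the sketch below assumes the stratified bootstrap described in the Methods also produced per-hospital 12-month death and re-admission risks (death_draws and adm_draws, each an n_boot x 4 array alongside cost_draws); the monetary values reproduce the A$100 000 and A$50 000 example in the text.

```python
import numpy as np

value_death_avoided = 100_000   # A$ per 12-month death avoided
value_adm_avoided = 50_000      # A$ per related re-admission avoided

# Net monetary benefit per bootstrap replicate and hospital: lower costs and lower
# event risks give a higher (less negative) net benefit.
nmb = -(cost_draws
        + value_death_avoided * death_draws
        + value_adm_avoided * adm_draws)

best = nmb.argmax(axis=1)                                   # benchmark hospital per replicate
prob_benchmark = np.bincount(best, minlength=nmb.shape[1]) / nmb.shape[0]
for h, p in enumerate(prob_benchmark, start=1):
    print(f'Hospital {h}: {p:.0%} probability of being the benchmark')
```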
Process indicators
Table 4 presents comparative data on three process indicators after adjustment for observed confounders, namely the proportions of presenting patients who were admitted as an in-patient, who underwent an invasive revascularisation procedure (e.g. percutaneous coronary intervention (PCI)) and who underwent PCI following an invasive diagnostic procedure (angiography). Results for Hospitals 1, 3 and 4 are reported relative to Hospital 2.
Across all patients, Hospital 2 admitted the lowest proportion of presenting patients (67%), which was significantly lower than for all the other hospitals. Subgroup analyses indicated the lower aggregate admission rate at Hospital 2 was driven by a particularly low admission rate for patients with no existing circulatory condition and a negative troponin test result on presentation. The admission rates at the other hospitals were between 27% and 41% higher in this group.
The proportion of patients undergoing PCI was significantly lower at Hospital 2 compared with Hospitals 1 and 3. No consistent pattern was observed across the subgroups, indicating variation in practice across and within hospitals; for example, Hospital 1 had higher PCI rates during the week, whereas Hospital 3 had higher PCI rates for patients with positive troponin test results.
The highest proportion of patients undergoing angiography followed by PCI was at Hospital 2 (34%), compared with between 15% and 25% at the other hospitals. Given the lower aggregate rate of PCI at Hospital 2, this result is driven by a low rate of angiography. Subgroup analyses showed that Hospital 2 had a higher conversion rate from angiography to PCI regardless of initial troponin test result or day of presentation.
Table 5 reports differences in two LOS variables. For patients admitted as an in-patient, the mean time to admission was shortest at Hospital 3 by between 3.9 and 5.5 h. There was less variation between the other hospitals, and the timings did not vary greatly by patient subgroup. Hospital 4 reported the shortest mean in-patient LOS. Hospitals 2 and 3 reported significantly longer LOS for patients with a positive troponin test.
Discussion
Electronic hospital data systems capture substantial amounts of data describing the processes of care experienced in hospital, as well as the resources used during hospital encounters. Modern data systems also better facilitate the linkage of data across the healthcare system, so that patient outcomes with regard to re-admissions and mortality beyond discharge can be measured. This paper has presented comparative analyses of processes of care, costs and patient outcomes using routinely collected data to inform the potential value of quality improvement around the diagnosis and management of suspected ACS.
Statistically significant casemix-adjusted differences were observed across providers in mean in-patient costs (up to A$669 extra per presenting patient) and in 30-day and 12-month cardiovascular re-admission or mortality event rates (up to 122% and 56% higher, respectively). The analysis of costs and patient outcomes did not identify a single benchmark hospital, but rather an apparent outlier hospital that incurred higher costs and poorer patient outcomes than the other hospitals.
Looking at the processes of care, the outlier hospital had the lowest in-patient admission rate, driven by much lower admission rates for patients presenting with negative troponin tests and no existing circulatory condition. This suggests that admission decisions for this seemingly low-risk patient subgroup may warrant review. The outlier hospital made the least use of invasive management options (PCI), but a higher proportion of its patients undergoing an invasive diagnostic procedure (angiography) proceeded to PCI. Interpreting this finding in conjunction with the poorer outcomes observed at this hospital implies high specificity (few false positives) but low sensitivity (more false negatives) in the use of angiography. Both areas of process variation may be affected by variations in capacity (e.g. access to in-patient beds and the cardiac catheterisation laboratory), as well as by underlying differences in clinical decision making and hospital-specific protocols. Two hospitals reported significantly longer in-patient LOS for high-risk patients, which may provide another priority area for quality improvement.
The present study is subject to limitations with regard to the representation of processes of care and patient outcomes, as well as to potential confounding. Additional process data would provide a clearer indication of the causes of observed variation in costs and outcomes (e.g. ED and ward staffing levels, in-patient operating capacity, medication use, allied health and rehabilitation service use and discharge referrals).
The routine collection of patient-reported outcome measures (PROMs) would improve the reported outcomes.11 However, the use of linked data informed patient outcomes after discharge, which improves on the main in-hospital measure of outcome (mortality), which has been criticised for its ‘low sensitivity (most quality problems do not cause death) and low specificity (most deaths do not reflect poor-quality care)’.12
Casemix adjustment was informed by a wide range of clinical data, including pathology results, but the statistical analyses would be improved if other diagnostic indicators, such as electrocardiogram results, were available, as well as links to ambulance data to describe pathways to hospital. The ongoing development of electronic patient records and costing systems should enable more detailed casemix adjustment and process analyses, and possibly PROMs, over time. Such data should improve the sensitivity and specificity of the presented comparative analyses in identifying important areas of unwarranted variation in clinical practice; however, the perfect should not become the enemy of the good, and the best data currently available should be used to identify important areas of existing unwarranted variation.
The identification of important variation does not necessarily mean that attempts to reduce variation will be a cost-effective use of scarce resources.13 The estimation of ‘policy cost-effectiveness’ incorporates the costs and effects of actions to change the delivery of care, as well as the costs and benefits of improved quality of care. In areas in which important variation is suspected, ACSQHC guidelines describe the conduct of a thorough review of data sources, casemix, hospital structures and resources, processes of care and professional issues to inform subsequent actions to improve quality.6 Lilford et al. also define an involved improvement process, comprising multiple stages, as follows: (1) investigation of the causes of variation from benchmark practice; (2) identification of potential barriers and facilitators to quality improvement; (3) decisions regarding appropriate actions; (4) implementation of the defined improvements; and (5) post-implementation evaluation.3
These are not trivial processes, which emphasises the need for careful consideration of both the importance of any observed variation and the expected effectiveness of actions taken to reduce variation and improve quality. As illustrated by the present case study, the joint interpretation of variation in processes of care, costs and outcomes informs discussions around both aspects of policy cost-effectiveness. The potential benefits of quality improvement at each hospital are informed by analyses of costs and outcomes. The analyses of processes of care identify specific areas of focus for quality improvement along the clinical pathway, which may usefully inform the costs and likelihood of benefits of a quality improvement initiative.
The reported analysis of processes of care, costs and patient outcomes could be applied to a wide range of clinical areas as a form of screening to identify clinical areas and hospitals for which further analysis and intervention is justified to diagnose (confirm) and treat (improve) important unwarranted variation in clinical practice.
Competing interests
None declared.
Acknowledgements
This research was supported by funding from the Health Contribution Fund (HCF) Research Foundation and the National Health and Medical Research Council. The authors thank Tina Hardin and Graeme Tucker for arranging data access and data linkage.
References
[1] Australian Commission on Safety and Quality in Health Care (ACSQHC). Medical practice variation: background paper. Sydney: ACSQHC; 2013.
[2] Mulley AJ. Improving productivity in the NHS. BMJ 2010; 341: c3965.
[3] Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004; 363: 1147–54.
[4] Australian Commission on Safety and Quality in Health Care (ACSQHC). FAQs clinical care standards. Sydney: ACSQHC; 2014.
[5] Pidun T, Felden C. Limitations of performance measurement systems based on key performance indicators. In: Proceedings of the Seventeenth Americas Conference on Information Systems, Detroit, Michigan, 4–7 August 2011. Paper 14. Available at: http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1013&context=amcis2011_submissions [verified 12 February 2016].
[6] Australian Commission on Safety and Quality in Health Care (ACSQHC). Using hospital mortality indicators to improve patient care: a guide for boards and chief executives. Sydney: ACSQHC; 2014.
[7] Lilford RJ, Brown CA, Nicholl J. Use of process measures to monitor the quality of clinical practice. BMJ 2007; 335: 648–50.
[8] Independent Hospital Pricing Authority (IHPA). Pricing framework for Australian public hospital services 2015–16. Sydney: Independent Hospital Pricing Authority; 2014.
[9] Duckett S, Coory M, Kamp M, Collins J, Sketcher-Baker K, Walker K. VLADs for dummies. Milton, Qld: Wiley Publishing Australia Pty Ltd; 2008.
[10] Glick HA, Doshi JA, Sonnad SS, Polsky D. Economic evaluation in clinical trials. Oxford: Oxford University Press; 2007.
[11] Nelson EC, Eftimovska E, Lind C, Hager C, Wasson JH, Lindblad S. Patient reported outcome measures in practice. BMJ 2015; 350: g7818.
[12] Scott IA, Brand CA, Phelps GE, Barker AL, Cameron PA. Using hospital standardised mortality ratios to assess quality of care: proceed with extreme caution. Med J Aust 2011; 194: 645–8.
[13] Mason J, Freemantle N, Nazareth I, Eccles M, Haines A, Drummond M. When is it cost-effective to change the behaviour of health professionals? JAMA 2001; 286: 2988–92.