Should diagnosis codes from emergency department data be used for case selection for emergency department key performance indicators?
Stuart C. Howell A, Rachael A. Wills A and Trisha C. Johnston A,B
A Statistical Analysis and Linkage Team, Queensland Health, GPO Box 48, Brisbane, Qld 4001, Australia.
B Corresponding author. Email: trisha_johnston@health.qld.gov.au
Australian Health Review 38(1) 38-43 https://doi.org/10.1071/AH13026
Submitted: 22 January 2013 Accepted: 30 September 2013 Published: 6 December 2013
Abstract
Objective The aim of the present study was to assess the suitability of emergency department (ED) discharge diagnosis for identifying patient cohorts included in the definitions of key performance indicators (KPIs) that are used to evaluate ED performance.
Methods Hospital inpatient episodes of care with a principal diagnosis that corresponded to an ED-defined KPI were extracted from the Queensland Hospital Admitted Patient Data Collection (QHAPDC) for the year 2010–2011. The data were then linked to the corresponding ED patient record and the diagnoses applied in the two settings were compared.
Results The asthma and injury cohorts produced favourable results with respect to matching the QHAPDC principal diagnosis with the ED discharge diagnosis. The results were generally modest when the QHAPDC principal diagnosis was upper respiratory tract infection, poisoning and toxic effects or a mental health diagnosis, and were quite poor for influenza.
Conclusions There is substantial variation in the capture of patient cohorts using discharge diagnosis as recorded on Queensland Hospital Emergency Department data.
What is known about the topic? There are several existing KPIs that are defined according to the diagnosis recorded on ED data collections. However, there have been concerns over the quality of ED diagnosis in Queensland and other jurisdictions, and the value of these data in identifying patient cohorts for the purpose of assessing ED performance remains uncertain.
What does this paper add? This paper identifies diagnosis codes that are suitable for use in capturing the patient cohorts that are used to evaluate ED performance, as well as those codes that may be of limited value.
What are the implications for practitioners? The limitations of diagnosis codes within ED data should be understood by those seeking to use these data items for healthcare planning and management or for research into healthcare quality and outcomes.
Introduction
There are several existing and proposed key performance indicators (KPIs) at the state and national level that are defined according to diagnosis data collected in the Emergency Department (ED; Table 1). Administrative data can provide a rich resource for healthcare planning and management, as well as for research into healthcare quality and outcomes. However, the quality of administrative data, particularly those collected in the ED, has not been well explored or documented. Of particular relevance to the present study, there has been no formal assessment of ED diagnosis data in Queensland despite problems identified in previous data quality reports; for example, a diagnosis code of ‘Z53’ (‘procedure not required’) was used in up to 7% of cases.1 This raises concerns about the quality of discharge diagnosis as assigned in the ED and its suitability for use in the assessment of ED performance. Data quality statements from other jurisdictions raise similar concerns.2–4
In the present study we sought to examine the quality of discharge diagnosis as recorded on the Queensland ED data collection, the Emergency Department Information System (EDIS), and its value as the basis for case selection for existing and proposed ED-based KPIs. Comparisons were made between the discharge diagnosis on the ED records and the principal diagnosis on the Queensland Hospital Admitted Patient Data Collection (QHAPDC). Although it is unreasonable to expect that all patients receive the same diagnosis in both settings, most diagnoses would be expected to be consistent, so any substantial differences between ED and inpatient diagnoses would raise questions over the suitability of ED data for identifying patient cohorts to assess KPIs.
Methods
The data were sourced from EDIS and QHAPDC. The EDIS data capture one diagnosis per patient, representing the diagnosis at discharge from the ED.5 This is coded using a substantially abridged set of International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM)6 codes (~1000 codes compared with ~16 500 in the QHAPDC). The principal diagnosis in the QHAPDC is defined as 'the diagnosis established after study to be chiefly responsible for occasioning an episode of inpatient care' (Australian Coding Standard (ACS) 0001).7 This is determined at separation from the episode of care and often follows a more detailed clinical assessment than occurs in the ED. The principal diagnosis is recorded along with an unlimited number of 'other' diagnoses8 and is coded using the ICD-10-AM according to a stringent set of rules as defined by ACS 0001.
Hospital inpatient episodes of care with a principal diagnosis from Table 1 were extracted from the QHAPDC for the year 2010–2011. The ICD code definitions for some indicators were extended to encompass episodes of care with related diagnoses; these extensions are documented in Table 1. Inpatient episodes of care were then matched to the associated EDIS record using the patient's unit record (UR) identifier, along with the dates of inpatient admission and ED presentation. Owing to known issues with the recording of times in both ED and admitted patient data, the following criteria were applied to determine a match between an ED presentation and the corresponding hospital admission (a sketch of this matching logic follows the list):
- The ED presentation occurred in the same facility as the inpatient admission.
- Discharge from the ED must have taken place within 2 calendar days before the hospital admission.
- Presentation to the ED must not have taken place more than 1 calendar day after the hospital admission.
- Where the hospital admission could be matched to more than one ED presentation, the ED presentation with the shortest time between presentation and hospital admission and with a discharge code of 'Admit to Hospital' was chosen.
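As a minimal illustration only, the following Python sketch shows how these matching criteria might be implemented against extracts of the two collections using pandas. All column names (ur_number, facility_id, episode_id, the date fields and ed_discharge_code) are hypothetical and do not reflect actual EDIS or QHAPDC field names, and the reading of 'within 2 calendar days before' as a 0–2 day gap is an interpretation.

```python
# Minimal sketch of the ED-to-inpatient record matching criteria.
# Column names are illustrative only, not actual EDIS/QHAPDC fields.
import pandas as pd


def link_ed_to_admissions(ed: pd.DataFrame, qhapdc: pd.DataFrame) -> pd.DataFrame:
    # Candidate pairs: same patient (UR identifier) at the same facility.
    pairs = qhapdc.merge(ed, on=["ur_number", "facility_id"])

    # ED discharge within 2 calendar days before hospital admission
    # (interpreted here as a 0-2 day gap).
    gap_before = (pairs["admission_date"] - pairs["ed_discharge_date"]).dt.days
    # ED presentation no more than 1 calendar day after hospital admission.
    gap_after = (pairs["ed_presentation_date"] - pairs["admission_date"]).dt.days
    pairs = pairs[gap_before.between(0, 2) & (gap_after <= 1)].copy()

    # Where an admission matches several presentations, prefer a discharge
    # code of 'Admit to Hospital', then the shortest presentation-to-admission
    # interval, keeping one ED record per inpatient episode.
    pairs["is_admit"] = pairs["ed_discharge_code"].eq("Admit to Hospital")
    pairs["interval"] = pairs["admission_date"] - pairs["ed_presentation_date"]
    pairs = pairs.sort_values(
        ["episode_id", "is_admit", "interval"], ascending=[True, False, True]
    )
    return pairs.drop_duplicates(subset="episode_id", keep="first")
```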
Case ascertainment was based entirely on the principal diagnosis (PD) as recorded on the QHAPDC. The outcome measure was the percentage of admitted patient records where the principal diagnosis was consistent with the diagnosis as recorded on the ED presentation record. This was assessed at two levels (a sketch illustrating both follows the list):
- exact match: the EDIS diagnosis matches the QHAPDC principal diagnosis at the three-digit ICD-10-AM code level
- range match: the ICD-10-AM code on the EDIS does not exactly match the principal diagnosis on the QHAPDC, but falls within the code range that defines the condition of interest; for example, an ICD code of J03 on the EDIS would not match a code of J02 on the QHAPDC, but falls within the ICD code range for the definition of upper respiratory tract infection (URTI; J00–J06).
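To make the two match levels concrete, the sketch below classifies a linked pair of diagnosis codes as an exact three-digit match, a range match, or neither. The URTI range (J00–J06) and the J03/J02 example come from the text; the function names and the code normalisation are illustrative assumptions, not part of the study's software.

```python
# Sketch of the two-level diagnosis comparison: an exact match at the
# three-digit ICD-10-AM level versus a match within the code range that
# defines the condition of interest.


def three_digit(code: str) -> str:
    """Truncate an ICD-10-AM code (e.g. 'J03.9') to its three-digit category ('J03')."""
    return code.replace(".", "").upper()[:3]


def in_range(code: str, low: str, high: str) -> bool:
    """True if the three-digit category falls within an inclusive code range."""
    return low <= three_digit(code) <= high


def classify_match(ed_code: str, pd_code: str, low: str, high: str) -> str:
    if three_digit(ed_code) == three_digit(pd_code):
        return "exact"
    if in_range(ed_code, low, high) and in_range(pd_code, low, high):
        return "range"
    return "none"


# The example from the text: J03 on the EDIS does not exactly match J02 on
# the QHAPDC, but both fall within the URTI range J00-J06.
print(classify_match("J03", "J02", "J00", "J06"))  # -> 'range'
```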
Results
In total, there were 87 428 hospital episodes of care with a principal diagnosis from a selected condition listed in Table 1 that could be matched to an EDIS record. The most common conditions (as defined by inpatient PD) were injury (55.9%), mental and behavioural disorders (21.9%) and poisoning and toxic effects (7.6%); the least common were influenza (0.5%), viral infections (2.4%), asthma (5.6%) and upper respiratory tract infections (6.0%).
The asthma cohort produced the most favourable results with respect to matching the QHAPDC principal diagnosis with the corresponding EDIS diagnosis. In total, 84% of cases had a diagnosis of asthma recorded on both collections (Table 2), with an exact match at the three-digit ICD code level in 79% of cases. The most common non-asthma EDIS diagnosis for this cohort was 'pneumonia, unspecified' (J18.9), which accounted for 18% of all non-asthma diagnoses.
Matching for URTI was generally modest by comparison (Table 2). The EDIS captured 61% of cases with a QHAPDC principal diagnosis of URTI by range matching, whereas an exact three-digit match occurred in 57% of cases. However, consistency between QHAPDC and EDIS varied by type of infection: the most favourable results were observed for 'Acute obstructive laryngitis [croup] and epiglottitis' and 'Acute tonsillitis', whereas the poorest were for 'Acute nasopharyngitis [common cold]' and 'Acute upper respiratory infections of multiple and unspecified sites'.
The consistency between EDIS and QHAPDC was also quite poor when the principal diagnosis on the QHAPDC was influenza (Table 2). This was range matched to an EDIS influenza diagnosis for only 13% of patients, and an exact three-digit match occurred in only 10% of this cohort. The most common ICD codes recorded in EDIS for influenza patients were pneumonia (J18; 27.8%) and symptoms of influenza, particularly fever (R50.9; 9.3%). Viral infections performed marginally better, with range and exact matches both at 40%.
The mental and behavioural disorders cohort was the second largest in the study, accounting for 19 177 inpatient episodes of care. Consistency between EDIS and QHAPDC was reasonable at the range level; however, matching at the three-digit level was generally quite poor (Table 2). The most common ICD code recorded on EDIS for patients in this cohort was X84: ‘Intentional self-harm by unspecified means’.
Of the 6667 admitted patient episodes with a principal diagnosis of poisoning and toxic effects, 65% had a poisoning code in EDIS (Table 2). Performance was better for codes in the range T36–T50 (poisoning by drugs, medicaments and biological substances) than for those in the range T51–T65 (toxic effects of substances chiefly non-medicinal as to source). The ICD code X84 ('Intentional self-harm by unspecified means') accounted for 13% of total episodes (38% of those without a poisoning code in the EDIS), followed by codes for mental and behavioural disorders (9% of total poisoning episodes, 27% of those without a matching poisoning code).
Patients with an injury PD were generally captured as an injury in the EDIS (Table 3). Injuries were coded to the same body region in 75% of cases, although consistency between EDIS and QHAPDC varied by body site. The poorest matches were seen for injuries to an 'unspecified part of the trunk, limb or body region' and for 'injuries involving multiple body regions', whereas the proportion of matches was higher for codes where a specific body region was nominated (e.g. head, neck, thorax, limb).
Discussion
In the present study, we used the consistency of the diagnosis assigned in the ED with that assigned in admitted patient data as an indicator of how well patient cohorts can be defined using ED diagnosis information. Our results show that there is substantial variation in the capture of patient cohorts using diagnostic codes in ED data. Consistency appears to be highest for conditions that are well defined, easily recognisable and/or where a potential diagnosis can be confirmed from information provided by the patient. For example, asthma is easily recognisable by its core symptoms, which include laboured breathing, wheezing, coughing and a sense of constriction in the chest; the diagnosis can often be supported by an assessment of the patient's medical history. Similarly, external injuries, such as lacerations, abrasions or burns, are easily identified by visual examination, whereas bone fractures can be identified by radiological examination. Not surprisingly, asthma and injury were the top performers in the present study.
Consistency was poorer for conditions whose symptom profiles overlap with those of other illnesses. These included URTI, influenza, pneumonia and other viral infections, where common features may include fever and respiratory manifestations, such as nasal and/or chest congestion. In these cases, differential diagnosis may require a level of clinical assessment that is not possible within the time available during an ED presentation. Thus, in circumstances where alternative diagnoses can reasonably be applied, physicians may opt for a diagnosis that 'best fits' the symptom profile, or merely record the predominant symptom as the diagnosis in order to expedite a patient's passage through the hospital system. Indeed, a Chapter 18 'R' code (symptoms, signs and abnormal clinical and laboratory findings) was recorded as the ED discharge diagnosis in 15% of cases where the principal diagnosis on the QHAPDC inpatient record was either a Chapter 10 'J' code (diseases of the respiratory system) or a Chapter 2 'B' code (certain infectious and parasitic diseases).6 Any limitations associated with time pressures within the ED are likely to worsen following the introduction of the National Emergency Access Target (NEAT) by 2015, whereby 90% of patients will be required to have left the ED within 4 h. That is, as patients spend less time in the ED, it will become less feasible to obtain a meaningful diagnosis from ED data for conditions where diagnosis is not straightforward.
These results suggest that the validity of many of the proposed KPIs is questionable, because the specific codes used to define patient cohorts are unlikely to capture all patients of interest. Broader code ranges may improve the capture of patients of interest, but this is likely to result in overinclusion of patients with more or less severe conditions. This issue is particularly relevant for KPIs related to respiratory conditions; further work is recommended to determine whether it is possible to identify this patient group among ED presentations and, if so, whether it is necessary to restrict analyses to this exact patient group or whether performance across a broader spectrum of respiratory symptoms is of interest.
The results also have implications for the use of ED diagnosis in the definition of severity (urgency related groups (URGs)) for the allocation of hospital funding by the Independent Hospital Pricing Authority (IHPA).12 Some flexibility is being allowed in the coding of diagnoses (a URG Grouper has been developed to map diagnosis codes from multiple versions and editions of the ICD, and from Systematised Nomenclature of Medicine-Clinical Terms, to 6th edition ICD-10-AM diagnosis codes13), and broad code ranges are being used to identify patients with diagnoses of interest. Nevertheless, the difficulty in identifying subgroups included in the URG classification, such as respiratory conditions, may require further investigation.
ICD codes have been used to capture diagnosis in EDs because limited alternatives were available. However, ICD codes were not developed for the purpose of recording diagnoses in the ED, and the modified ICD code sets that have been implemented are not consistent across EDs; no coding rules are applied, and ED practitioners do not receive the level of training required to apply the codes reliably. As a result, there is no standardised approach to the collection and coding of ED diagnosis data at the national level, and variations are evident both across and within jurisdictions.1–4 These factors, combined with the present results suggesting that the capture of patient cohorts varies with the condition of interest, have major implications for the development and use of KPIs that rely on diagnostic data collected in the ED. Although it is important to identify indicators that provide clinically relevant and meaningful measures of the performance under investigation, it is equally important to ensure that the indicators are reproducible and reliable. Because evaluation of KPIs at the state and national level is generally based on data aggregated across time and settings, any regional and temporal variations in admission criteria and/or diagnostic practices may affect the validity of conclusions drawn from the data. That is, apparently higher or lower performance for individual facilities or jurisdictions could not be interpreted as indicative of true performance, because it is not clear that the same patient groups, in terms of diagnosis and severity, are being compared.
It has been suggested recently that alternatives to ICD codes may improve the recording of diagnoses within the ED environment. For example, the Emergency Department Reference Set (EDRS), which makes use of 7000 SNOMED CT terms, has been proposed as an alternative to ICD codes to improve the capture of information about why people present to EDs and to better understand their diagnoses.14 National implementation of such an alternative may also facilitate consistency across jurisdictions. However, much further refinement is required before such systems are operational, and a system of this kind will not improve the recording of conditions for which diagnosis in the ED is not feasible.14
A crucial first step in the development of ED-based performance indicators should be a standardised approach to the formulation and recording of ED diagnosis at the national level. An evaluation of each indicator in the ED setting should then follow. This would involve assessing the level of agreement between ED physicians in assigning a diagnosis to patients presenting with a similar clinical picture, as well as determining whether individual physicians apply the same diagnosis consistently across such patients. Any condition that failed to achieve a reasonable level of reliability should not be used to assess performance; alternatively, more labour-intensive methods of patient identification, such as clinical information audit or individual patient follow-up, should be used.
The present study has several limitations. The study was limited to patients who were admitted, and the diagnoses of these patients, who are at the more severe end of the spectrum, may differ from those of patients who were not admitted from the ED. For example, a diagnosis of pneumonia in the ED may be more common among patients who are admitted with influenza, whereas influenza may be a more common diagnosis among patients who are not admitted. This issue would be more problematic for cohorts with lower admission rates (viral infections, acute URTI) than for those with higher admission rates (mental health and poisoning cohorts). Further analysis would be required to more fully understand the potential for over- and underinclusion of relevant cases under alternative inclusion criteria for KPIs.
Finally, some variation in the coding of patient diagnosis across the two settings is to be expected because there are differences in the code sets available and the coding standards applied. For example, the ACS 0001 provides detailed rules and criteria for determining the principal diagnosis in an inpatient episode of care and, in order to apply these correctly, the coder is trained in the use of the ICD-10-AM, the Australian Classification of Health Interventions (ACHI)15 and the coding rules described in the ACS 0001 standard. Practitioners within the ED are unlikely to have received formal training in clinical coding and, as a result, coding practices within the ED are likely to be less rigorous. Thus, for some conditions, the proportion of patients identified by the admitted patient ICD codes who would also be identified in ED data cannot be interpreted as a direct measure of the ability of the ED diagnosis code to capture the patient cohorts of interest. However, in most cases the ICD-10-AM code ranges used in the present study are broader than those specified in the indicator definitions, to allow for the restricted ICD codes available in EDIS and their less rigorous application. The results are therefore likely to be broadly indicative of areas where diagnosis coding in the ED is problematic and warrants further investigation before being used in the assessment of performance.
Conclusions
Administrative data are a valuable resource for healthcare planning and management and for research into healthcare quality and outcomes; however, it is important that the limitations of these data are understood in order to inform their appropriate use. This study provides valuable information on the limitations of the current diagnosis data for ED patients. The study highlights specific conditions for which diagnosis data are particularly problematic and identifies issues that healthcare planners, managers and researchers should consider when using and interpreting analyses based on these data.
Competing interests
The authors declare there are no competing interests.
Acknowledgements
The authors acknowledge the contribution of Sue Cornes, Dr Anthony Bell and the Clinical Information Management Unit within Queensland Health, as well as the Data Acquisition Section of the Independent Hospital Pricing Authority for their assistance in the preparation of this manuscript and/or for providing comments on earlier drafts.
References
[1] Johnston T, Endo T. Data quality issues impacting on reporting on presentations to emergency departments in Queensland hospitals: data quality issues in emergency department data 2007/08 update. Brisbane: Health Statistics Centre, Queensland Health; 2009. Available at http://www.health.qld.gov.au/hic/tech_report/ED2.pdf [verified 18 October 2013]
[2] Emergency Department Data Collection. Data quality statement. Adelaide: SA Health; 2010. Available at https://www.santdatalink.org.au/available_datasets [verified 18 October 2013]
[3] Information Management and Reporting Directorate. Emergency department data collection data dictionary, Version 1.0. Perth: WA Health; 2007. Available at http://health.wa.gov.au/healthdata/docs/EDDC_dictionary.pdf [verified 18 October 2013]
[4] Centre for Health Record Linkage. Emergency department data collection. Sydney: NSW Department of Health; 2011. Available at http://www.cherel.org.au/data-dictionaries#section2 [verified 18 October 2013]
[5] Access Improvement Service. EDIS reference tables [draft]. Brisbane: Queensland Health; 2012.
[6] National Casemix and Classification Centre. The International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Australian Modification (ICD-10-AM). Wollongong: National Casemix and Classification Centre; 2012.
[7] National Centre for Classification in Health. Australian coding standards for ICD-10-AM and ACHI, 7th edn. Lidcombe: National Centre for Classification in Health; 2010.
[8] Data Collections Unit. 2011–2012 Queensland Hospital Admitted Patient Data Collection (QHAPDC). Manual of instructions and procedures for the reporting of QHAPDC data, Version 1. Brisbane: Queensland Health; 2011.
[9] Australasian College for Emergency Medicine and Australian Council on Healthcare Standards Performance and Outcomes Service. Draft emergency medicine indicators: clinical indicators user’s manual v5.0. 2011.
[10] Access Improvement Service. XXX. Brisbane: Queensland Health; 2012. Available at http://qheps.health.qld.gov.au/patientflow/performance.htm [verified 18 October 2013]
[11] National Mental Health Performance Subcommittee, Mental Health Alcohol and Other Drugs Directorate. The Fourth National Mental Health Plan Measurement Strategy. Brisbane: Queensland Health; 2011. Available at http://www.health.gov.au/internet/mhsc/publishing.nsf/Content/BDA139CFA06F6EC8CA257A61000078A3/$File/meas.pdf [verified 18 October 2013]
[12] Independent Hospital Pricing Authority. The pricing framework of Australian Public Hospital services. 2012. Available at http://www.ihpa.gov.au/internet/ihpa/publishing.nsf/Content/pricing-framework-lp [verified 18 October 2013]
[13] Independent Hospital Pricing Authority. Activity based funding. URG Grouper user guide. ABF data grouping and modelling. 0000. Available at http://www.ihpa.gov.au/internet/ihpa/publishing.nsf/Content/45151D40E8573B1CCA25796E0013F415/$File/URG%20Grouper%20-%20User%20documentation%20v1.2.0.0.pdf [verified 18 October 2013]
[14] Hansen DP, Kemp ML, Mills SR, Mercer MA, Frosdick PA, Lawley MJ. Developing a national emergency department data reference set based on SNOMED CT. Med J Aust 2011; 194: S8–10. PubMed PMID: 21401491.
[15] National Casemix and Classification Centre. Australian Classification of Health Interventions (ACHI). Wollongong: National Casemix and Classification Centre; 2011.