RESEARCH ARTICLE (Open Access)

External quality control processes for infectious disease testing

Wayne Dimech https://orcid.org/0000-0003-3425-9419 A , Guiseppe Vincini https://orcid.org/0000-0003-0972-0066 A and Belinda McEwan https://orcid.org/0000-0003-1631-316X B

A National Serology Reference Laboratory, Melbourne, Vic., Australia. Email: wayne@nrlquality.org.au, joe@nrlquality.org.au

B Royal Hobart Hospital, Pathology, Hobart, Tas., Australia. Email: belinda.mcewan@ths.tas.gov.au

Microbiology Australia 45(1) 41-43 https://doi.org/10.1071/MA24013
Published: 4 March 2024

© 2024 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of the ASM. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

Introduction

Historically, serological testing for infectious diseases was performed using biological assays such as complement fixation or haemagglutination inhibition. These assays utilised the agglutination or haemolysis of red blood cells as biological indicators for the presence or absence of antibodies.1 Generally, a four-fold difference in doubling dilution titres was required before a difference in antibody levels was considered significant. During the 1990s, with the advent of enzyme immunoassays (EIAs), testing for infectious disease antibodies moved away from biological assays, which were labour intensive and difficult to control, first to microtitre plate EIAs and then to automated platforms.1,2 The output of these tests is reported in a unit of measure calculated from the intensity of the signal produced by the reaction, be it colorimetric, immunofluorescent or chemiluminescent. This signal, often expressed as a signal-to-cut-off ratio (S/Co), is an arbitrary unit based on comparison of the signal produced by the patient sample with a cut-off determined by the manufacturer, e.g. a multiplier of the negative control signal or the mean value of particular calibrators. The cut-off value effectively becomes the assay decision point, separating the population of samples containing the target analyte from those that do not. Although the S/Co or other arbitrary unit will generally increase as the amount of antibody in the tested sample increases, the test system is not measuring the quantity of antibodies present; it measures the amount of binding of the antibody in the patient sample to the antigen on the solid phase of the assay.1,3
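As an illustration only (the cut-off formula and the multiplier below are hypothetical; each manufacturer defines its own), the S/Co calculation amounts to a simple ratio:

```python
def signal_to_cutoff(sample_signal: float, cutoff: float) -> float:
    """Return the arbitrary S/Co value for a patient sample."""
    return sample_signal / cutoff

# Hypothetical assay: the manufacturer defines the cut-off as
# 2.5 x the negative control signal (the multiplier is assay-specific).
negative_control_signal = 0.08
cutoff = 2.5 * negative_control_signal   # 0.2

print(signal_to_cutoff(0.05, cutoff))    # S/Co < 1.0: non-reactive
print(signal_to_cutoff(1.40, cutoff))    # S/Co >= 1.0: reactive
```

Note that a higher S/Co reflects more binding, not a calibrated quantity of antibody.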

Testing for infectious diseases using immunoassays gradually became available on high-throughput immunoassay platforms that also test for clinical chemistry markers. In many countries, infectious disease testing has consequently moved away from the microbiology laboratory into ‘core laboratories’, where the instruments and associated processes, including the quality control processes, are managed as a single system within the same laboratory, typically using the traditional approaches applied to clinical chemistry testing. However, tests for inert chemicals such as glucose and potassium measure the amount of analyte in the patient sample. In these situations, the test system is calibrated to a standard, often an international standard, and the results are expressed in SI units, which lends itself to certain statistical methodologies. By contrast, the arbitrary S/Co result obtained in infectious disease testing is influenced by a range of factors relating to the antibodies being detected, including their avidity or affinity, the genotype or subtype of the causative agent, the stage of disease progression and the immune status of the patient, as well as by factors relating to the assay itself, such as the target antigens, the antibodies utilised in the conjugate, and the chemistry applied to create and detect the signal.4

Quality control

The use of a quality control (QC) sample is a requirement for laboratories accredited to ISO 15189, which defines it as an ‘internal procedure which monitors the testing process to decide if the system is working correctly and gives confidence that the results are reliable enough to be released’ (section 3.11, p. 35). The standard further states, ‘The procedure should also allow for the detection of lot-to-lot reagent and/or calibrator variation of the examination method’ (section 7.2.7.2(a), p. 255). The National Association of Testing Authorities (NATA) ISO 15189 Standard Application Document (SAD) addresses QC processes as follows: ‘A system must be established for the long-term monitoring of internal quality control results to assess method performance’ (section 5.6.2, p. 116). Laboratories frequently interpret the standard as meaning that the use of a kit control alone is adequate to fulfil this requirement.

Kit controls

Kit controls in infectious disease testing have the purpose of validating the test run. Generally, kit controls are tested, and their results accepted, prior to testing patient samples. The manufacturer provides the kit controls and the associated acceptance criteria. These acceptance criteria were developed by the manufacturer in pre-market clinical trials, and results within the established range can be taken as evidence that the test kit is performing as expected and that the sensitivity and specificity claimed by the manufacturer can be assured. It is often pointed out that the acceptance ranges for kit controls are wide. This is the case because infectious disease serology assays tolerate significant changes in signal before clinical sensitivity and specificity are compromised; recall that the historical biological assays allowed a four-fold change in dilution before a difference was considered significant. Kit controls are required to be tested when stated in the manufacturer’s instructions for use (IFU). All infectious disease assays are listed on the Australian Register of Therapeutic Goods (ARTG) as class 3 or class 4 in vitro diagnostic devices (IVDs).7 Laboratories reporting clinical results are required to follow the IFU without deviation. Any modification to the IFU, such as not using a specified kit control, means the assay is being used ‘off licence’ and becomes an ‘in-house IVD’, which must be registered as such with the Therapeutic Goods Administration (TGA). In cases where the manufacturer’s IFU does not require testing of the kit controls, their use is highly recommended as best practice. Kit controls should not be replaced with third-party controls but used in conjunction with them.

Third party controls

The ISO 15189 standard states that, to enable this (i.e. the detection of lot-to-lot variation), ‘the use of third-party IQC material should be considered, either as an alternative to, or in addition to, control material supplied by the reagent or instrument manufacturer’ (section 7.2.7.2(a), p. 255). Whereas kit controls are designed to validate the assay at the time of testing, they are not designed to monitor the performance of the assay over time, and they are generally not sensitive to changes in the test system. Well-designed third-party controls are IVDs manufactured by companies other than the test kit manufacturer and are specifically designed to monitor variation.4,8 These controls should be reactive at a level that can detect variation. The ISO 15189 standard states that ‘the IQC material provides a clinically relevant challenge to the examination method, has concentration levels at or near clinical decision points’ (section 7.2.7.2(b), p. 265). However, immunoassays do not have a linear dose-response curve; that is, as the amount of analyte being detected increases, the signal does not increase proportionally. In most immunoassays, the dose-response curve is sigmoidal. Initially, as the amount of analyte increases, there is only a small increase in signal. As the analyte concentration increases further, the curve becomes linear, until one or more of the reaction components is exhausted, after which the curve plateaus. Third-party controls must therefore be reactive in the linear part of the curve to be effective in detecting variation, and the linear part of the curve may not necessarily be close to the cut-off of the assay.1
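The sigmoidal dose-response described here is often modelled with a four-parameter logistic (4PL) function; the parameter values below are invented purely to illustrate the shape.

```python
def four_pl(x: float, a: float, d: float, c: float, b: float) -> float:
    """Four-parameter logistic (4PL) dose-response curve.

    a = response at zero analyte, d = plateau response,
    c = analyte level at the inflection point, b = slope factor.
    """
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical parameters: signal rises slowly at low analyte levels,
# is roughly linear around the inflection point (c), then plateaus.
a, d, c, b = 0.05, 3.0, 10.0, 2.0
for analyte in (1, 5, 10, 20, 100):
    print(f"analyte {analyte:>3}: signal {four_pl(analyte, a, d, c, b):.2f}")
```

A control sitting on the plateau (analyte well above c) barely moves when the assay drifts, which is why reactivity in the linear region matters for detecting variation.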

The NATA SAD states, ‘Numerical QC results should be presented graphically to assist in the early detection of trends’ (section 5.6.2, p. 116). Infectious disease test results have a numerical value (the S/Co or other arbitrary unit); although these numbers are not a measure of an amount of antibody but of binding activity, they can be plotted on a Levey-Jennings chart to effectively monitor variation in the test system. If the supplier of the third-party control ensures minimal lot-to-lot variation, the results of multiple lots of the same third-party QC can be used to monitor the assay over many years, providing the laboratory with good insight into the assay’s long-term precision and bias.9
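A minimal sketch of this kind of monitoring (all S/Co values invented; real limits should be derived from validated historical data) could look like:

```python
from statistics import mean, stdev

def levey_jennings_flags(results, target, sd):
    """Flag each QC result falling outside target +/- 2 s.d.,
    a conventional warning limit on a Levey-Jennings chart."""
    return [abs(r - target) > 2 * sd for r in results]

# Hypothetical historical S/Co values for one third-party QC sample.
history = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 1.8, 2.0]
target, sd = mean(history), stdev(history)

# New QC results: the last value drifts well away from the target.
print(levey_jennings_flags([2.0, 2.2, 2.9], target, sd))
# → [False, False, True]
```

In practice the flagged points would be plotted against the limit lines rather than printed, but the underlying comparison is the same.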

The use of a third-party QC optimised for the assay being monitored is highly encouraged. It serves a different, but complementary, purpose to the kit controls, and laboratories should use both the kit control and the third-party control. At a minimum, the use of either a kit control or a third-party control is mandatory for laboratories accredited to ISO 15189.5

Acceptance ranges for third party controls

Guidance on how QC results should be managed is limited. The NATA SAD states, ‘A system must be established for the long-term monitoring of internal quality control results to assess method performance’ (section 5.6.2, p. 116). The quality control section of the National Pathology Accreditation Advisory Council (NPAAC) Requirements for Quality Control, External Quality Assurance and Method Evaluation document states ‘For quantitative assays, target values and SDs must be determined using laboratory data’, but does not specify how these ranges are to be determined.10 Traditionally, clinical chemists have used the mean ± x standard deviations (s.d.) of a small data set (e.g. 20–30 results) and applied Westgard rules to identify unexpected variation.11,12 As infectious disease testing moved from the microbiology laboratory to the ‘core laboratory’, it is unsurprising that these traditional methods were applied to the S/Co or other arbitrary values expressed by the immunoassays. However, it has long been recognised anecdotally, and recently been published, that infectious disease testing experiences significant reagent lot-to-lot variation, and that the use of traditional QC methods causes unacceptable numbers of false rejections.4,13 When an acceptance range is established from 20–30 QC results, new reagent lots therefore frequently cause the QC to fall out of range and be rejected. The laboratory is then faced with a dilemma. Does it reject the reagent based on the QC result, noting that the kit controls are usually within the manufacturer’s acceptance criteria, indicating no change in sensitivity or specificity? Or does it re-calculate the range using the next 20–30 results, knowing that the introduction of the next new reagent lot will repeat the same situation? The laboratory would also need to justify why it is appropriate to release patient results using multiple acceptance criteria over time.

It should be noted that recalculating the mean and s.d. on a new reagent lot only re-establishes the imprecision of the assay. The change in reactivity of the QC is not due to a change in imprecision but to a bias introduced by the new reagent lot. Recalculating the mean and s.d. therefore ignores the root cause of the change and does not address the fundamental question of how much variation due to a reagent lot change is acceptable.
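The distinction between imprecision and reagent-lot bias can be shown numerically (all values invented): shifting every result by a constant bias leaves the s.d. untouched, yet pushes the results outside limits derived from the previous lot.

```python
from statistics import mean, stdev

# Hypothetical S/Co results for one QC sample on the original reagent lot.
old_lot = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.0,
           2.1, 2.0, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 1.9, 2.0]
target, sd = mean(old_lot), stdev(old_lot)
low, high = target - 2 * sd, target + 2 * sd

# A new reagent lot introduces a constant bias: same imprecision,
# shifted mean (the +0.5 shift is invented for illustration).
new_lot = [x + 0.5 for x in old_lot]

print(f"imprecision unchanged: {stdev(new_lot):.3f} vs {sd:.3f}")
rejected = sum(1 for x in new_lot if not low <= x <= high)
print(f"{rejected}/{len(new_lot)} new-lot results outside the old limits")
```

Every new-lot result is rejected even though the spread of the results, and hence the assay's imprecision, has not changed at all.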

Irrespective of the method used to establish the acceptance range for each QC sample, the methodology must be based on scientific evidence using data from the same test process being controlled, rather than assuming commutability of methodology. As the ISO 15189 standard states, when selecting a QC methodology, ‘The intended clinical application of the examination should be considered, as the performance specifications for the same measurand may differ in different clinical settings’ (section 7.2.7.2(a), p. 115). This evidence should be made available to an auditor on request.

Infectious disease specific QC requirements

The ISO 15189 standard does not specify which quality control methodologies should be employed.5 Like most standards designed for a broad set of disciplines, it is not prescriptive. The same is true of the NATA SAD and the NPAAC Requirements for Quality Control, External Quality Assurance and Method Evaluation documents, unlike the UK-equivalent Standards for Microbiology Investigations, Quality Assurance in the Diagnostic Infection Sciences Laboratory document, which implies the use of traditional methods, including Westgard rules, for infectious disease serology.14 This UK standard, however, has recently been modified to acknowledge that traditional methods are not perfect, and now includes alternative QC methods, including the use of QConnect limits.

The NATA SAD does provide additional discipline-specific QC requirements for cartridge-based assays, chemical pathology, cytology, haematology and histopathology. To address the points raised above relating to QC methods for infectious disease testing, an additional infectious disease discipline-specific section will be added to the NATA SAD. These clauses will be included in the accreditation of medical testing laboratories in Australia.

The inclusion states:

  • Controls provided by the manufacturer (kit controls) must be used if the manufacturer’s instructions for use (IFU) state that their use is required.

  • If the use of kit controls is not required by the manufacturer’s IFU, then a laboratory must use at least one of a kit control or a third-party external quality control (EQC) to validate the test each day the test is used.

  • Use of both kit controls and EQC is recommended. Where suitable EQC specimens are available, their use in maintaining QC is recommended.

  • If the laboratory uses the kit controls to validate the test, they must use the validation rules specified by the manufacturer.

  • If the laboratory uses EQCs to validate the test, the EQC must be validated by the laboratory for use on that test.

  • The laboratory must have a documented method for establishing acceptance criteria for an EQC based on scientific evidence that is validated using infectious disease data.

  • The laboratory must have a documented procedure for when the controls are outside the established acceptance criteria.

Disclosure statement

Belinda McEwan is the ASM-authorised representative on the National Association of Testing Authorities and Human Pathology Accreditation Advisory Committee of the National Pathology Accreditation Advisory Council.

References

1  Dimech W (2021) The standardization and control of serology and nucleic acid testing for infectious diseases. Clin Microbiol Rev 34, e00035-21.

2  Prechl J (2021) Why current quantitative serology is not quantitative and how systems immunology could provide solutions. Biol Futur 72, 37-44.

3  Baylis S, et al. (2021) Standardization of Diagnostic Assays. In Encyclopedia of Virology. Vol. 5, 4th edn. pp. 52–63. Elsevier.

4  Dimech WJ, et al. (2023) Time to address quality control processes applied to antibody testing for infectious diseases. Clin Chem Lab Med 61, 205-212.

5  International Organization for Standardization (2022) Medical laboratories — requirements for quality and competence. ISO 15189:2022. ISO, Geneva, Switzerland. https://www.iso.org/standard/76677.html

6  National Association of Testing Authorities, Australia (2023) General Accreditation Criteria: ISO 15189 Standard Application Document. NATA. https://nata.com.au/files/2021/05/ISO-15189-Application-Document-Medical-Testing-Supplementary-Requirements-for-Accreditation.pdf

7  Therapeutic Goods Administration (2020) Classification of IVD medical devices. Version 3.0, December 2020. Australian Government, Canberra, ACT, Australia. https://www.tga.gov.au/sites/default/files/classification-ivd-medical-devices.pdf

8  Vincini GA, Dimech WJ (2023) What is the best external quality control sample for your laboratory? Clin Chem Lab Med 61, e50-e52.

9  Dimech W, et al. (2015) Determination of quality control limits for serological infectious disease testing using historical data. Clin Chem Lab Med 53, 329-336.

10  National Pathology Accreditation Advisory Council (2018) Requirements for Quality Control, External Quality Assurance and Method Evaluation, 6th edn. Commonwealth of Australia, Department of Health, Canberra, ACT, Australia. https://www.safetyandquality.gov.au/publications-and-resources/resource-library/requirements-quality-control-external-quality-assurance-and-method-evaluation-sixth-edition-2018

11  Westgard JO (1994) Selecting appropriate quality-control rules. Clin Chem 40, 499-501.

12  Westgard JO (2003) Internal quality control: planning and implementation strategies. Ann Clin Biochem 40, 593-611.

13  Dimech W, et al. (2018) Comparison of four methods of establishing control limits for monitoring quality controls in infectious disease serology testing. Clin Chem Lab Med 56, 1970-1978.

14  Public Health England (2021) UK Standards for Microbiology Investigations: quality assurance in the diagnostic virology and serology laboratory. Standards Unit, National Infection Service, London, UK. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1005438/Q_2i8.pdf