The impact of self-assessment and surveyor assessment on site visit performance under the National General Practice Accreditation scheme
David T. McNaughton, Paul Mara and Michael P. Jones
Abstract
There is a need for more proactive and in-depth analyses of general practice accreditation processes. Two areas highlighted as potentially inconsistent are the self-assessment and the surveyor assessment of indicators.
The data encompass 757 accreditation visits made between December 2020 and July 2022. A mixed-effect multilevel logistic regression model determined the association between attempting the self-assessment and indicator conformity at the surveyor assessment. We also contrast the rate of indicator conformity between surveyors as an approximation of inter-assessor consistency at the site visit.
Two hundred and seventy-seven (37%) practices did not attempt the self-assessment or did not report conformity to any indicators. The association between attempting the self-assessment and the rate of indicator non-conformity at the site visit failed to reach statistical significance (OR = 0.90 [95% CI = 0.72–1.14], P = 0.28). A small number of surveyors (9 of 34) demonstrated statistically significant differences in the rate of indicator conformity compared with the mean of all surveyors.
Attempting the self-assessment did not predict indicator conformity at the site visit overall. Appropriate levels of consistency of indicator assessment between surveyors at the site visit were identified.
Keywords: accreditation, audit and feedback, general practice, health policy, health services, primary health care, quality and safety, reliability.
Introduction
Accreditation, according to the Royal Australian College of General Practitioners (RACGP) Standards for general practices, serves as a quality improvement tool in Australia.1 The current 5th edition standards encompass three modules (Core, Quality Improvement, and General Practice) that assess several aspects of contemporary general practice, reflecting components such as clinical safety, staff and practitioner education, and quality assurance processes.2 While the standards have undergone several iterations since 1991, there is a need to undertake more proactive and in-depth analyses of the accreditation process and outcomes to improve the quality and safety within this healthcare sector.3–5
Australian general practices are highly engaged with the existing voluntary accreditation program (84% in 2020),4 and this is, in part, assisted by financial incentive payments.6 The process of accreditation is associated with positive leadership behaviours, cultural characteristics, and improved clinical performance;7 however, there are concerns regarding the underlying process of accreditation and the specificity of quality and safety outcomes, particularly within general practice.8 As such, a recent consulting review was conducted,4 which highlighted stakeholder perspectives of the accreditation process and suggested several areas in need of critical evaluation. Two areas highlighted in this review were the engagement with the accreditation process by general practices and the consistency of assessment between, and within, surveyors from accreditation agencies.
Within the accreditation scheme, general practices are required to submit a self-assessment of the indicator standards prior to the accreditation visit.2 This provides an opportunity for internal quality assurance and facilitated learning with agency client support officers to answer questions about the standards and potentially improve areas of indicator non-conformity. While this process of self-assessment exists, the level of engagement with self-assessment has not been investigated until now, and the association between self-assessment performance and indicator conformity during the site visit has not been empirically tested.
It has further been suggested that inconsistency of assessment exists between, and within, surveyors at site visits.4 Accreditation visits are completed by a two-person team that includes at least one general practitioner (GP) surveyor.1 Inter-assessor reliability in the general practice accreditation context lacks empirical evidence,9 notwithstanding previous calls to ensure assessor judgements are consistent.10 Low levels of assessor reliability with accreditation programs have been suggested to reduce engagement in the process by GPs and practice managers.4
In summary, while it is generally accepted that accreditation programs improve quality and safety in healthcare organisations,11 transparent examination of different aspects of accreditation and publication of these results continues to be lacking. Utilising data from an accrediting agency, this paper aims to (1) determine how predictive the practice self-assessment is of the rate of indicator conformity at the accreditation site visit, and (2) determine the consistency of the rate of indicator conformity between GP surveyors at accreditation site visits.
Materials and methods
Data sources and study population
The data were collected from consecutive Australian general practice accreditation assessments made between December 2020 and July 2022 by a single accrediting agency. The accrediting agency at this time was responsible for ~30% of all general practice accreditation assessments conducted. Data were recorded from the practice self-assessment prior to the accreditation visit and from surveyor assessments during or soon after the accreditation visit using a proprietary web-based application. As part of the National General Practice Accreditation (NGPA) Scheme, these data are routinely reported to the Commission, in collaboration with the General Practice Accreditation Coordinating Committee, for performance monitoring of accreditation agencies.12
Study variables
Up to 6 months prior to the scheduled date of (re)accreditation, practices are required to submit a self-assessment of the 124 indicators from the 5th edition of the standards.12 Using a web-based interface, practices identify indicators that they believe are met. At present there is no option to identify an indicator as not met; a missing value may therefore indicate non-attempt or, potentially, non-conformity for a given indicator. This variable was thus synthesised as a dichotomous variable reflecting the practice’s assessment that the indicator was either met or was not marked. We identified general practices that did not attempt the self-assessment process, or could not accurately report conformity to any indicators, as those that completed 0 of the 124 indicators at the self-assessment.
‘Met’ and ‘not met’ compliance scores for each indicator from the site visit were reported by the visiting survey team. The indicator conformity rate was aggregated across all indicators and practices, as well as displayed by individual indicator.
During the on-site accreditation visit, both a GP and a co-surveyor (usually an experienced practice manager, nurse, or other senior member of a practice team) determined the compliance status of all 124 indicators. In assessing an indicator at the survey visit, triangulation of findings from both external surveyors may factor into the decision. Because the accreditation process is completed by a two-person team and a single consensus conformity outcome is recorded for each indicator, it is not possible to determine which indicators at the visit were assessed by each surveyor or to assess concordance between the two surveyors. Inter-rater reliability analyses between surveyors at the indicator level were therefore not possible with the data available. However, since it is mandatory for a GP to be present at each accreditation visit and GPs are randomly allocated to practice visits, we used the variation between GPs in indicator conformity rates as an approximation to inter-rater reliability that can be viewed as consistency between surveyors. We further investigated the effect of GP surveyor accreditation experience (number of previous accreditation visits completed) on indicator conformity rates, as this is a potential confounding factor.
Statistical approach
The first aim was assessed via two approaches. First, the association between practice attempt of the self-assessment (binary: no = 0 indicators completed; yes = at least one indicator completed at self-assessment) as the independent variable and the rate of indicator conformity at the survey visit as the dependent variable was evaluated via a mixed-effect multilevel logistic regression model. The individual practice was used as the sampling unit.
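In model terms, this first analysis can be written as follows (a sketch in our own notation; the symbols are not taken from the paper):

```latex
\operatorname{logit} P(y_{ij} = 1) \;=\; \beta_0 + \beta_1\,\mathrm{SelfAssess}_j + \boldsymbol{\gamma}^{\top}\mathbf{x}_j + u_j,
\qquad u_j \sim N(0, \sigma_u^2)
```

where $y_{ij}$ indicates conformity of indicator $i$ at practice $j$, $\mathrm{SelfAssess}_j$ is the binary attempt variable, $\mathbf{x}_j$ holds the practice-level covariates (previous accreditation cycles, location, GP head count), and $u_j$ is a practice-level random intercept capturing the clustering of indicators within practices.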
Second, to determine whether performance at the self-assessment alters the rate of indicator conformity at the site visit, a mixed-effect multilevel logistic regression model was conducted in the subset of practices that attempted the self-assessment (at least one indicator completed), with the rate of indicator conformity at the site visit as the dependent variable and the rate of self-assessment conformity as the independent variable.
For the second aim, mixed-effect multilevel logistic regression models were conducted with the conformity rate of indicators as the dependent variable, the GP surveyor as the independent variable, and the practice as the sampling unit. The extent to which each GP surveyor differed from the mean of all GP surveyors was estimated via deviation contrasts, expressed as odds ratios. A value at or close to one indicates that the GP surveyor was close to average, values above one indicate a higher-than-average assessment of conformity, and values below one indicate a lower-than-average assessment of conformity.
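The deviation-contrast logic can be illustrated with a small sketch (the counts below are hypothetical, not the study data, and the actual models also adjust for covariates and practice-level clustering, which this omits):

```python
import math

# Hypothetical conformity counts per GP surveyor: (indicators met, not met).
# Illustrative numbers only - not the study data.
counts = {
    "A": (950, 50),
    "B": (880, 120),
    "C": (990, 10),
}

# Log-odds of conformity for each surveyor.
log_odds = {s: math.log(met / not_met) for s, (met, not_met) in counts.items()}

# Deviation contrast: each surveyor's log-odds relative to the grand mean
# log-odds across all surveyors, re-expressed as an odds ratio.
grand_mean = sum(log_odds.values()) / len(log_odds)
deviation_or = {s: math.exp(lo - grand_mean) for s, lo in log_odds.items()}

for s, odds_ratio in deviation_or.items():
    print(f"Surveyor {s}: OR vs grand mean = {odds_ratio:.2f}")
```

An OR near one marks an average surveyor; here surveyor C (OR > 1) assesses conformity more often than average and surveyor B (OR < 1) less often. A useful check is that the deviations cancel: the product of the ORs across all surveyors is exactly one.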
Multilevel models were employed because the data arise from a multilevel sampling design in which repeated measures (indicators) are taken on each practice and a sample of practices has been recruited. Covariates included in all statistical models for both aims were the number of previous accreditation cycles each practice had undergone, urban or rural location, and the GP head count of the practice. All analyses were conducted in Stata v17,13 and statistical significance was set at P < 0.05.
Results
A total of 757 general practice accreditation visits were recorded between December 2020 and July 2022. Table 1 displays the available characteristics of the general practices and the 34 GP surveyors included in the current analyses. The aggregated conformity rate of all indicators across all practices was 94% (n = 93 862 indicators).
| | N (%) |
|---|---|
| State | |
| ACT | 7 (1) |
| NSW | 246 (32) |
| NT | 12 (2) |
| QLD | 205 (27) |
| SA | 45 (6) |
| TAS | 13 (2) |
| VIC | 169 (22) |
| WA | 60 (8) |
| Urban location | 489 (65) |
| GP head count, m (s.d.) | 5.90 (4.18) |
| Number of previous GP accreditation cycles, m (s.d.) | 3.13 (1.79) |
| Number of previous accreditation visits from surveyors, m (s.d.) | 214.75 (152.28) |
| Number of practices with zero indicators marked on self-assessment | 277 (37) |
| Number of indicators conformant at site assessments, m (s.d.) | 113.69 (8.16) |
Number of previous accreditation cycles is based only on assessments by a single accreditation agency. It is possible practices have had previous accreditation cycles with alternative accreditation agencies.
m, mean; s.d., standard deviation.
Determination of how predictive the practice self-assessment is of the rate of indicator conformity at the accreditation site visit
Two hundred and seventy-seven practices (37%) did not attempt the self-assessment or did not report conformity to any indicators. Practices that did not complete the self-assessment had slightly more accreditation cycles with the accrediting agency (mean = 3.32, s.d. = 1.74) than those that did (mean = 3.01, s.d. = 1.81; Mann–Whitney test: z = 2.59, P = 0.01). There were no statistically significant differences in GP head count or urban/rural location between the two groups. Fig. 1 displays the average number of indicators assessed as conformant at the site visit for practices that either completed or did not complete the self-assessment. The association between the rate of indicator conformity at the site visit and completion of the self-assessment failed to reach statistical significance (OR = 0.99, z = −1.07, P = 0.28), suggesting that completion of the self-assessment does not alter the rate of indicator conformity at the site visit. In the subset of practices that attempted the self-assessment (n = 480), performance at the self-assessment (rate of indicator conformity) was not associated with the rate of indicator conformity at the site visit (OR = 0.82, z = −1.81, P = 0.07). There is thus no clear evidence that reporting higher indicator conformity at self-assessment is associated with higher indicator conformity at the site visit.
Determination of the consistency of the rate of indicator conformity between GP surveyors at the site visits
Table 2 displays the consistency of the indicator conformity rate between the 34 GP surveyors. Five GP surveyors had a statistically significant odds ratio (OR) indicating an elevated rate of indicators assessed as ‘met’ at the accreditation visit. In contrast, four GP surveyors had a statistically significant OR indicating a reduced rate, relative to the average surveyor, of indicators assessed as ‘met’ at the accreditation visit.
| GP surveyor | Contrast with grand mean: OR (95% CI) | Contrast with grand mean: model statistics |
|---|---|---|
| 1 | 1.02 (0.85–1.32) | χ²(1) = 0.11, P = 0.74 |
| 2 | 0.43 (0.32–0.57) | χ²(1) = 17.36, P < 0.005 |
| 3 | 0.88 (0.66–1.16) | χ²(1) = 0.38, P = 0.54 |
| 4 | 3.36 (2.21–5.12) | χ²(1) = 10.88, P = 0.001 |
| 5 | 0.45 (0.36–0.59) | χ²(1) = 25.59, P < 0.005 |
| 6 | 0.70 (0.55–0.90) | χ²(1) = 3.54, P = 0.06 |
| 7 | 2.12 (1.59–2.81) | χ²(1) = 12.71, P < 0.005 |
| 8 | 0.79 (0.48–1.32) | χ²(1) = 0.57, P = 0.45 |
| 9 | 1.34 (0.81–2.23) | χ²(1) = 0.57, P = 0.45 |
| 10 | 0.74 (0.55–1.00) | χ²(1) = 3.44, P = 0.06 |
| 11 | 1.54 (1.12–2.12) | χ²(1) = 5.00, P = 0.02 |
| 12 | 0.58 (0.42–0.81) | χ²(1) = 6.99, P = 0.008 |
| 13 | 1.17 (1.03–1.34) | χ²(1) = 1.05, P = 0.31 |
| 14 | 5.62 (3.73–8.48) | χ²(1) = 53.74, P < 0.005 |
| 15 | 2.12 (1.53–2.92) | χ²(1) = 13.65, P = 0.002 |
| 16 | 0.82 (0.67–1.00) | χ²(1) = 1.91, P = 0.17 |
| 17 | 1.04 (0.89–1.22) | χ²(1) = 0.08, P = 0.78 |
| 18 | 1.16 (1.02–1.31) | χ²(1) = 0.85, P = 0.36 |
| 19 | 0.72 (0.41–1.30) | χ²(1) = 1.32, P = 0.25 |
| 20 | 0.90 (0.58–1.41) | χ²(1) = 0.14, P = 0.71 |
| 21 | 3.39 (2.15–5.37) | χ²(1) = 20.67, P < 0.005 |
| 22 | 0.90 (0.57–1.44) | χ²(1) = 0.11, P = 0.74 |
| 23 | 0.57 (0.21–1.53) | χ²(1) = 0.72, P = 0.40 |
| 24 | 1.11 (0.79–1.56) | χ²(1) = 0.15, P = 0.71 |
| 25 | 0.57 (0.32–1.26) | χ²(1) = 1.74, P = 0.19 |
| 26 | 1.12 (0.81–1.54) | χ²(1) = 0.24, P = 0.62 |
| 27 | 1.65 (0.90–3.03) | χ²(1) = 1.30, P = 0.25 |
| 28 | 0.42 (0.26–0.68) | χ²(1) = 6.78, P = 0.009 |
| 29 | 0.96 (0.62–1.47) | χ²(1) = 0.01, P = 0.91 |
| 30 | 0.38 (0.19–0.79) | χ²(1) = 2.05, P = 0.15 |
| 31 | 1.17 (0.74–1.88) | χ²(1) = 0.19, P = 0.66 |
| 32 | 0.76 (0.52–1.12) | χ²(1) = 0.94, P = 0.33 |
| 33 | 0.60 (0.34–1.04) | χ²(1) = 1.76, P = 0.18 |
| 34 | 1.05 (0.62–1.78) | χ²(1) = 0.01, P = 0.91 |
Mixed-effect multilevel logistic regression models with the indicator pass rate as the dependent variable and GP surveyor ID as the independent variable. Each individual GP surveyor's conformity rate was contrasted with the grand mean indicator pass rate of all GP surveyors. Coefficients are odds ratios (OR) with 95% CIs. Statistically significant ORs indicate a rate of indicator conformity that differs from the mean of all surveyors.
The variable indicating the number of site visits each GP surveyor had previously completed could not be included in the models due to collinearity. In a separate mixed-effect multilevel logistic regression model with the rate of indicator conformity as the dependent variable and the number of previous site visits completed by the GP surveyor as the independent variable (divided by 10 to aid interpretation of the coefficient), the relationship was small but statistically significant (OR = 0.99, z = −2.97, P = 0.003). This suggests that greater surveyor experience is associated with lower indicator conformity at the site visit.
Discussion
The purpose of the current manuscript was to investigate two components of the process for the accreditation of general practices according to the RACGP standards and to serve as a basis for empirical quality assurance review of assessment processes at one accrediting agency. Completion of the self-assessment was not associated with a higher rate of indicator conformity at the site visit, and in the subset of practices that reported conformity to at least one indicator at the self-assessment, higher self-reported indicator conformity was not associated with higher indicator conformity at the site visit. Finally, the rate of indicator conformity was generally consistent between GP surveyors, although a minority (9 of 34) differed from the overall indicator conformity rate of all GP surveyors.
Prior to the site visit, we identified a high proportion of practices that did not engage with the self-assessment or did not indicate conformity to any of the standards, even though this is a mandatory component of the accreditation process.12 Completion of the self-assessment, however, did not predict a higher rate of indicator conformity at the site visit. This result may call into question the value of the self-assessment for a practice’s site visit performance. These results should be interpreted cautiously: there is qualitative evidence that a self-assessment process is valuable for accreditation preparedness,14 but our preliminary results question the magnitude of this effect on site visit performance. Furthermore, we have not investigated other outcomes of self-assessment, such as the time or resources required to close non-conformities, which may be salient alternative outcomes within the accreditation process.
The consistency of the surveyors at site visits is a critical component of any accreditation process, with inconsistent surveying by assessors potentially undermining health services’ confidence in accreditation programs.9 We identified an acceptable level of consistency between site GP surveyors with respect to indicator conformity assessment. As there is limited empirical research in the Australian general practice accreditation context, and to our knowledge none related to the new 5th edition standards, we are unable to identify a direct comparison for this result. Regular auditing and review of inter-assessor reliability from accrediting agencies is to be encouraged.
Several mechanisms aimed at ensuring consistency in accreditation approach and outcomes exist in the current NGPA scheme.9 These include requiring agencies to demonstrate their frameworks and capabilities to execute accreditation, setting out standardised and high-level assessment processes, and regularly monitoring the performance of accrediting agencies.12 Our results highlight an opportunity for further embedded, proactive, and empirically driven reflection on accreditation practices. While the overall consistency between surveyors was acceptable, further improvement may be possible via additional training of surveyors by the RACGP, the Commission, and/or the accreditation agency to ensure consistent application of the standards for general practices.
There are several limitations to our study. First, the data were sourced from one accrediting agency and the findings may not generalise to other accrediting agencies. Second, there was a high rate of non-attempt of the self-assessment, so the association between self-assessment completion and site visit performance must be interpreted cautiously. Third, although we identified acceptable consistency between GP surveyors, the site visit is generally completed by a two-person team; because a single consensus outcome is recorded, we were unable to evaluate inter-rater agreement within a single survey visit. However, as surveyors are randomly allocated to practice visits, there is no reason to believe that the between-surveyor consistency reported here does not also reflect inter-rater agreement.
Accreditation to the RACGP Standards for general practices has been reported to ‘raise the bar’ in this healthcare sector.7 There has been, however, a lack of critical and empirically driven evaluation of the process of general practice accreditation in Australia. Our findings identified that the completion of the self-assessment does not predict higher indicator conformity at the site visit, and data from one accrediting agency highlighted acceptable levels of consistency of indicator assessment between GP surveyors.
Data availability
Data may be obtained from a third party and are not publicly available. Deidentified participant data may be made available from (https://www.qpa.health) under conditions outlined by Quality Practice Accreditation Pty Ltd (QPA).
Conflicts of interest
Dr Mara is the managing director of Quality Practice Accreditation and Professor Jones is a member of its advisory board. Dr McNaughton was remunerated by QPA for formal data analyses and writing of the manuscript.
Author contributions
David McNaughton: substantial contributions to the conception of the manuscript. Data analyses. Interpretation of data for the work. Drafting of manuscript. Final approval of the version to be published. Paul Mara: substantial contributions to the conception of the manuscript and acquisition of data. Drafting of manuscript and revising it critically for important intellectual content. Final approval of the version to be published. Michael Jones: substantial contributions to the conception of the manuscript. Analysis and interpretation of data. Drafting of manuscript. Final approval of the version to be published. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Acknowledgements
The authors thank Jasmitha Venna for extracting and compiling the data prior to formal analyses.
References
1 The Royal Australian College of General Practitioners. Standards for General Practices. 5th edn. RACGP; 2020. Available at https://www.racgp.org.au/running-a-practice/practice-standards/standards-5th-edition/standards-for-general-practices-5th-ed-1
2 RACGP. Changes to the RACGP Standards for general practices, 5th edn. Available at https://www.racgp.org.au/running-a-practice/practice-standards/standards-5th-edition/standards-for-general-practices-5th-ed/introduction-to-the-standards-for-general-practice/changes-from-the-previous-edition
3 Jones M, McNaughton D, Mara P. General practice accreditation–does time spent on-site matter? Aust Health Rev 2023; 47(6): 689-693.
4 MP Consulting. Review of General Practice Accreditation Arrangements. 2021. Available at https://consultations.health.gov.au/primary-health-networks-strategy-branch/review-of-general-practice-accreditation-arrangeme/user_uploads/review-of-the-ngpa-scheme---consultation-paper---final---110821--004-.pdf
5 Khoury J, Krejany CJ, Versteeg RW, et al. A process for developing standards to promote quality in general practice. Fam Pract 2019; 36(2): 166-171.
6 Service Australia. Practice Incentives Program. 2023. Available at https://www.servicesaustralia.gov.au/practice-incentives-program
7 Debono D, Greenfield D, Testa L, et al. Understanding stakeholders’ perspectives and experiences of general practice accreditation. Health Policy 2017; 121(7): 816-822.
8 Braithwaite J, Greenfield D, Westbrook J, et al. Health service accreditation as a predictor of clinical and organisational performance: a blinded, random, stratified study. Qual Saf Health Care 2010; 19(1): 14-21.
10 Greenfield D, Pawsey M, Naylor J, Braithwaite J. Are accreditation surveys reliable? Int J Health Care Qual Assur 2009; 22(2): 105-116.
11 Greenfield D, Braithwaite J. Developing the evidence base for accreditation of healthcare organisations: a call for transparency and innovation. Qual Saf Health Care 2009; 18(3): 162-163.
12 Australian Commission on Safety and Quality in Healthcare. Policy - Approval under the National General Practice Accreditation (NGPA) Scheme to conduct assessments. 2022. Available at https://www.safetyandquality.gov.au/publications-and-resources/resource-library/policy-approval-under-national-general-practice-accreditation-ngpa-scheme-conduct-assessments
14 Erwin PC. A Self-Assessment Process for Accreditation Preparedness: A Practical Example for Local Health Departments. J Public Health Manag Pract 2009; 15(6): 503-508.