Australian Health Review
Journal of the Australian Healthcare & Hospitals Association
RESEARCH ARTICLE

Improving the utility of multisource feedback for medical consultants in a tertiary hospital: a study of the psychometric properties of a survey tool

Helen Corbett A E , Kristen Pearson B , Leila Karimi C and Wen Kwang Lim D

A Medical Services, Northern Health, Cooper Street, Epping, Vic. 3076, Australia.

B Quality Unit, Northern Health, Cooper Street, Epping, Vic. 3076, Australia. Email: kristen.pearson@nh.org.au

C School of Psychology and Public Health, La Trobe University, Plenty Road, Melbourne, Vic. 3083, Australia. Email: l.karimi@latrobe.edu.au

D Department of Medicine, Royal Melbourne Hospital, The University of Melbourne, Vic. 3050, Australia. Email: kwang.lim@mh.org.au

E Corresponding author. Email: helen.corbett@nh.org.au

Australian Health Review 43(6) 717-723 https://doi.org/10.1071/AH17219
Submitted: 6 October 2017 | Accepted: 21 September 2018 | Published: 22 November 2018

Abstract

Objective The aim of this study was to investigate the psychometric properties of a multisource review survey tool for medical consultants in an Australian health care setting.

Methods Two sets of survey data from a convenience sample of medical consultants were analysed using SPSS, comprising self-assessment data from 73 consultants and data from 734 peer reviewers. The 20-question survey consisted of three subscales, plus an additional global question for reviewers. Analyses included internal consistency (Cronbach's α) for the full scale and each of the three subscales, Pearson correlations between the single global question, the total performance score and the three survey subfactors, within-group interrater agreement (rWG(J)), model-based reliability (ρ), confirmatory factor analysis of the validity of the model and estimation of the optimal number of peer reviewers required.
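The abstract does not reproduce the underlying computations, but the within-group agreement statistic rWG(J) (James, Demaree and Wolf) can be sketched as follows. This is an illustrative sketch only: the function name rwg_j, the five-point response scale and the uniform "no agreement" null distribution are assumptions for the example, not details taken from the study.

```python
import numpy as np

def rwg_j(ratings: np.ndarray, n_options: int = 5) -> float:
    """Within-group interrater agreement rWG(J) for one ratee.

    ratings: raters x items matrix of Likert responses.
    n_options: number of response options A; a uniform "no agreement"
    null distribution has variance (A**2 - 1) / 12.
    """
    j = ratings.shape[1]                    # number of items, J
    s2 = ratings.var(axis=0, ddof=1)        # observed per-item variance
    sigma_eu2 = (n_options**2 - 1) / 12     # null (uniform) variance
    ratio = s2.mean() / sigma_eu2           # observed / expected variance
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Example: 8 hypothetical reviewers rating one consultant on 20 items
rng = np.random.default_rng(0)
ratings = rng.integers(3, 6, size=(8, 20))  # clustered high ratings
print(round(rwg_j(ratings), 2))             # high agreement, close to 1
```

Higher observed variance among reviewers pushes the ratio towards 1 and rWG(J) towards 0; perfectly clustered ratings give rWG(J) = 1.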

Results The global question, total performance score and the three subfactors were strongly correlated (general scale r = 0.81, clinical subscale r = 0.78, humanistic subscale r = 0.74, management subscale r = 0.75; two-tailed P < 0.01 for all). The scale showed very good internal consistency, except for the five-question management subscale. Model-based reliability was excellent (ρ = 0.93). Confirmatory factor analysis showed the model fit using the 20-item scale was not satisfactory (minimum discrepancy/d.f. = 7.70; root mean square error of approximation = 0.10; comparative fit index = 0.79; Tucker–Lewis index = 0.76). A modified 13-item model provided a good fit. Using the 20-item scale, a 99% level of agreement could be achieved with eight to 10 peer reviewers; for the same level of agreement, the number of reviewers increased to >10 using a revised 13-item scale.
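As a quick consistency check on the reported fit statistics (assuming, which the abstract does not state, that the confirmatory factor analysis used all 734 peer-reviewer responses), one common form of the RMSEA point estimate follows directly from the reported discrepancy ratio:

```python
import math

def rmsea(chi2_over_df: float, n: int) -> float:
    """One common form of the RMSEA point estimate, from the
    discrepancy ratio chi2/d.f. and sample size n:
    sqrt(max(0, (chi2/d.f. - 1) / (n - 1)))."""
    return math.sqrt(max(0.0, (chi2_over_df - 1) / (n - 1)))

# Reported minimum discrepancy/d.f. = 7.70; taking n = 734 reviewers
# reproduces the reported root mean square error of approximation.
print(round(rmsea(7.70, 734), 2))  # -> 0.1
```

That the two reported figures are mutually consistent under this assumption is reassuring, although the exact estimation sample is not given in the abstract.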

Conclusions Overall, the 20-item multisource review survey tool showed good internal consistency reliability for both self and peer ratings; however, further investigation using a larger dataset is needed to analyse the robustness of the model and to clarify the role that a single global question may play in future multisource review processes.

What is known about the topic? Defining and measuring the skills and behaviours that reflect competence in the health setting has proven complex, and this has resulted in the development of multisource feedback surveys specific to individual medical specialities. Because little literature exists on multisource reviews in an Australian context, a pilot study of a revised survey tool was undertaken at an Australian tertiary hospital.

What does this paper add? This study investigated the psychometric properties of a generic tool (used across specialities) by assessing the validity, reliability and interrater reliability of the scale, and considered the contribution of a single global question to the overall multisource feedback process. The study provides evidence of the validity and reliability of the survey tool under investigation. The strong correlation between the global item, the total performance score and the three subfactors suggests that further investigation is needed to determine the role that a robust single global question may play in future multisource review surveys. Our five-question management skills subscale addresses questions relevant to the specific organisation surveyed, and we anticipate that it may stimulate further exploration in this area.

What are the implications for practitioners? The survey tool may provide a valid and reliable basis for performance review of medical consultants in an Australian healthcare setting.

Additional keywords: governance, human resource management, performance and evaluation, quality and safety, workforce.

