Journal of Primary Health Care
Journal of The Royal New Zealand College of General Practitioners
RESEARCH ARTICLE (Open Access)

Exploring how a patient encounter tracking and learning tool is used within general practice training: a qualitative study

Michael Bentley https://orcid.org/0000-0003-3016-6194 1, Jennifer Taylor https://orcid.org/0000-0002-5075-6629 2, Alison Fielding https://orcid.org/0000-0001-5884-3068 1,3, Andrew Davey https://orcid.org/0000-0002-7547-779X 1,3, Dominica Moad https://orcid.org/0000-0002-2593-6038 1,3, Mieke van Driel https://orcid.org/0000-0003-1711-9553 4, Parker Magin https://orcid.org/0000-0001-8071-8749 1,3, Linda Klein https://orcid.org/0000-0002-2063-1518 1,3 *
Author affiliations

1 GP Training Research Department, The Royal Australian College of General Practitioners, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia.

2 GP Synergy, NSW & ACT Research and Evaluation Unit, Level 1, 20 McIntosh Drive, Mayfield West, NSW 2304, Australia.

3 Discipline of General Practice, School of Medicine & Public Health, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia.

4 General Practice Clinical Unit, Faculty of Medicine, The University of Queensland, 288 Herston Road, Brisbane, Qld 4006, Australia.

* Correspondence to: linda.klein@racgp.org.au

Handling Editor: Felicity Goodyear-Smith

Journal of Primary Health Care 16(1) 41-52 https://doi.org/10.1071/HC23082
Submitted: 27 July 2023  Accepted: 28 October 2023  Published: 27 November 2023

© 2024 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of The Royal New Zealand College of General Practitioners. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

Abstract

Introduction

In Australian general practitioner (GP) training, feedback and reflection on in-practice experience are central to developing GP registrars’ (trainees’) clinical competencies. Patient encounter tracking and learning tools (PETALs) that encompass an audit of consecutive patient consultations, feedback, and reflection are used to determine registrars’ in-practice exposure and have been suggested as a tool for learning within a programmatic assessment framework. However, there is limited qualitative literature on the utility of PETALs in GP training.

Aim

To provide greater understanding of how PETALs are used in GP training, using Registrars’ Clinical Encounters in Training (ReCEnT) as a case study.

Methods

Medical educators, supervisors, and registrars from two Australian regional GP training organisations participated in focus groups and interviews, designed to explore participants’ perceptions of ReCEnT’s utility. Data were analysed using reflexive thematic analysis.

Results

Eight themes were identified that enhance our understanding of: how ReCEnT reports are used (reassuring registrars, facilitating self-reflection, identifying learning needs), what enables ReCEnT to reach its full potential (a culture of reflection, meaningful discussions with supervisors and medical educators, valuing objective data), and differences in understanding about ReCEnT’s role in a programmatic assessment framework (as a tool for learning, as ‘one piece of the puzzle’).

Discussion

The findings were used to develop a Structure–Process–Outcomes model that demonstrates how ReCEnT is currently used and explores how it can be used for assessment for learning, rather than of learning, in a programmatic assessment framework for GP training. ReCEnT’s longitudinal format has potential for enhancing learning throughout training.

Keywords: clinical practice, general practice registrars, healthcare education, patient encounter data, performance and evaluation, primary healthcare, professional education, programmatic assessment, reflective practice.

WHAT GAP THIS FILLS
What is already known: In Australia’s apprenticeship-style model of GP training, in-practice learning and experience for GP registrars is central to the development of their confidence and clinical competencies. The Registrar Clinical Encounters in Training (ReCEnT) project is a patient encounter tracking and learning tool (PETAL) that has been shown to provide opportunities for GP registrars along with their supervisors and medical educators (MEs) to reflect on registrars’ clinical practice and identify their learning needs, leading to change in practice. There is limited qualitative literature on how registrars, supervisors and MEs describe the utility of workplace-based assessment tools such as PETALs in general practice training, particularly within proposed programmatic assessment frameworks.
What this study adds: This study builds on previous survey findings, providing greater depth of understanding from the perspective of GP registrars, supervisors and MEs regarding how ReCEnT can be useful as an educational and reflective PETAL tool for GP registrars during their training. Meaningful engagement between GP registrars and their supervisors and MEs enables ReCEnT to be more effective as a tool for learning in general practice training. More work needs to be done on how PETALs, such as ReCEnT, best fit within a programmatic assessment framework for general practice training.

Introduction

In Australian general practitioner (GP) training, GP registrars (trainees) undertake their in-practice learning under the guidance of experienced GP supervisors.1 Central to this relationship are feedback and reflection on registrars’ in-practice exposure as they develop their clinical competencies.2 Patient encounter tracking and learning (PETAL) tools that encompass an audit of consecutive patient consultations, feedback, and reflection are one method used to determine GP registrars’ in-practice exposure.3 PETALs, such as Registrars’ Clinical Encounters in Training (ReCEnT), offer opportunities for GP registrars, along with their supervisors and medical educators (MEs), to reflect on registrars’ practice, on their educational needs, and to encourage quality improvement.4 ReCEnT is a formative assessment with the potential to be learner-led; it focuses on providing feedback, with no stakes or consequences if the feedback is not used.

There is a move in Australia towards programmatic assessment of clinical competencies in general practice training,5 and towards using PETALs within a programmatic assessment framework.6 In programmatic assessment, multiple low-stakes methods (eg PETALs) are used to provide assessment for learning while reducing or removing high-stakes assessments of learning (eg exams).7,8 Instead, multiple low-stakes assessments are aggregated to make high-stakes decisions.7 Such information is gathered longitudinally to document and support trainee learning, incorporating feedback, reflection, and mentorship.8,9 However, there is a tension between using feedback to improve clinical competencies and using the same feedback in assessment of these competencies.10 In this context, ReCEnT, as a formative assessment, is not currently used as a low-stakes assessment. In this study, we sought to explore stakeholders’ perceptions of how ReCEnT should be incorporated into programmatic assessment.

A challenge in implementing programmatic assessment, and choosing appropriate assessment tools, is the interplay between learner agency and assessment culture.11,12 In other words, an important feature of an assessment is the quality of the tool (whether it is fit-for-purpose) in conjunction with the ability of the registrar to use it accordingly.7 Further, a key component of an assessment’s utility is its educational impact.13 Exploring how a PETAL is used by registrars in practice might shed light on its educational impact.

PETALs offer a systematic approach to determining clinical exposure in general practice, but to our knowledge, published research investigating the utility of PETAL data is limited to our 2023 study.14 Reviews of older methods, such as logbooks that provide information on patient mix and learning15 and audits that compare clinical practice to established guidelines,16 have shown that, with feedback, some changes in learning outcomes15 and in clinical practice16 can occur. A qualitative study of established GPs found feedback and peer discussion about audits provided motivation for practice change.17 However, there is limited qualitative literature on how PETALs are used as assessment for learning or change in general practice training, even though they are suggested for inclusion in the recommended framework for workplace-based assessment.6

Study aim

This study aimed to provide greater understanding of how GP registrars, MEs, and supervisors use a PETAL tool within Australian general practice training, using a case study of the longitudinal Registrars’ Clinical Encounters in Training (ReCEnT) project. The study builds on quantitative survey findings regarding perceptions of registrars, MEs, and supervisors about the utility of ReCEnT for reflection on, and change to, registrar learning and clinical practice.14

Context

In 2020–21, ReCEnT was used in three regional training organisations (RTOs), accounting for 44% of all registrars in Australian GP training.18 ReCEnT, as a formative educational tool in use since 2010, is designed to assist GP registrars, along with their supervisors and MEs, to reflect on their practice, on their educational needs, and to encourage quality improvement.4 In brief, once in each of their three 6-month mandatory general practice training terms, registrars complete details about 60 consecutive consultations, documenting information about themselves, their patients and the consultations (including registrars’ clinical actions during the consultations). In each term, registrars receive an individualised feedback report summarising this information. Each report provides comparisons of registrars’ results with their own results over time (ie term-to-term), with aggregate registrar data, and with previously published national data for established GPs, where available.19 Prompts within the report ask registrars to critically reflect on the findings presented – particularly considering: how typical the consultations were of their usual practice; and how their practice’s demographics, policies, and procedures might have influenced the findings. Reports are delivered within 3 weeks of data being provided by registrars to facilitate timely reflection.20–22 The ReCEnT project is described more fully elsewhere.23 In 2020, ReCEnT was included in a pilot programmatic assessment program, implemented in one RTO. Thus, questions about ReCEnT’s potential role in programmatic assessment were included in this research.

Methods

Study design

An interpretivist perspective was chosen as an appropriate theoretical framework24 to understand and explain the phenomenon of interest, namely perceptions of how ReCEnT, as a PETAL tool, is used for learning and how it might be used in programmatic assessment. Initially, focus groups were held with key informants (drivers of the use of ReCEnT within GP registrar education, with knowledge of the programmatic assessment pilot) to identify pre-existing assumptions and to inform interview guides (Supplementary File S1) for a broader sample of registrars, MEs, and supervisors.

Ethics approval was obtained from the University of Newcastle Human Research Ethics Committee (approval number H-2020-0103).

Participants and procedure

Participants were from two Australian RTOs: GP Synergy (covering NSW and ACT) and General Practice Training Tasmania. Inclusion criteria for participation were: 2020 GP registrars who had completed two or more rounds of ReCEnT and had completed their final round of ReCEnT (in General Practice Term 3) before the onset of the coronavirus disease 2019 (COVID-19) pandemic; and MEs and supervisors who had a registrar(s) complete ReCEnT in 2019. Key informants for focus groups were identified by the two RTOs and directly invited. For interviews, three avenues of invitation were deployed: (1) key informants who could not attend a focus group (eg unsuitable time) were automatically invited; (2) participants who completed a quantitative survey (conducted as part of a larger study)14 were invited to express interest in an interview; and (3) an email invitation was sent by each RTO to further seek interest in interviews by registrars, supervisors and MEs who met the inclusion criteria but did not respond to the quantitative survey. Among those who expressed an interest (avenues 2 and 3), purposive selection was undertaken to obtain a broad sample based on demographic variables (eg age, gender, country of primary medical degree, location of practice). Participants gave signed consent to take part in the study and received an AU$50 gift voucher for their time (except MEs who were employees of the RTO). The study’s chief investigator (LK) conducted the focus groups, and the Senior Qualitative Researcher (JT) conducted the interviews. Data collection was via video conference or phone. Interviews continued until no new ideas were shared.

Analysis and reflexivity

Qualitative data collection and analysis were concurrent and iterative, with the interview guide modified as needed in response to findings. NVivo 12 (QSR International) was used to organise and model the data. Inductive reflexive thematic analysis25 was conducted by two researchers (JT, LK), employing a process of constant comparison. Comparative parallel coding produced an initial coding framework, which was then applied to transcripts and regularly modified during the iterative, concurrent data collection/analysis process. First-order codes were organised into second-order themes. Codes and themes were collated and then abstracted to form a theoretical description.

Analysis meetings were held regularly with all authors to discuss coding and interpretation. Research team members were from both RTOs, had health professional backgrounds (eg medicine, psychology), and had expertise in GP-related qualitative research, clinical practice, and medical education. Some members were involved in day-to-day ReCEnT project management and so provided insights based on their knowledge of the tool. The interviewers/analysts (JT, LK) were independent. During analysis and interpretation, a process of reflexivity was used, addressing each investigator’s pre-existing assumptions, experiences, and personal interests in the study.26,27

Results

A total of 101 MEs, 818 supervisors and 187 registrars were eligible and invited to participate in the study. Fifty-seven participants consented and attended either one of four focus groups (n = 12) or an interview (n = 45). The focus group key informants comprised nine MEs, two supervisors, and one ME with a dual role as supervisor. The interviewees comprised 14 MEs, 16 supervisors, and 15 registrars. International medical graduates were represented in all interviewee groups (two of 15 registrars, four of 16 supervisors, and two of 14 MEs), as were a spread of age groups, rurality of practice, and gender.

Fig. 1 depicts the eight identified themes presented under three headings that align with the key questions of the study.

Fig. 1.

Model summarising the eight themes under three headings that align with the research questions.



Illustrative quotes comparing perspectives of registrars, supervisors and MEs on the eight themes are presented in Table 1.

Table 1. Comparing registrars’, supervisors’, and MEs’ perspectives on the eight themes, grouped by headings that align with the research questions.

Quotes are grouped by theme; each quote is attributed to a registrar, supervisor, or medical educator.
Utility of registrars’ ReCEnT reports
Reassuring registrars
You obviously have to take it with a grain of salt because it’s just one point in time… but I found it interesting to compare… with other registrars… (Registrar, R06)
What I do remember was that she [registrar] felt a bit distressed by the results because she found she was quite an outlier. (Supervisor, S12)
…often there’s really nice things to tell registrars about it. Like, you’re seeing a great range of patients. Look at how much chronic disease you’ve seen, that’s great for exams. So, there’s lots of good things to draw on as well. (Medical Educator, M13)
Whereas the ReCEnT’s quite a broad overview of all the registrars, this is what’s happening, and this is where you fit. But it doesn’t take into account those individual circumstances that might explain things. (Registrar, R15)
Facilitating reflection
I would definitely like it separated out into rural or regional to metro because I consider them completely different. … So, comparing myself to metro GPs didn’t seem like a fair comparison. It’s not like apples and apples… I would see a finding on the report and think, That doesn’t apply to me because that’s a city person thing. (Registrar, R06)
…it’s a good reflective process for the registrar, and even for us to think about the patients that they’re seeing and try and teach them in areas that they might not be seeing as much of. (Supervisor, S15)
I think it’s really variable depending on the registrar. I think some registrars are very able to do that and very reflective and will take on all these tools and see how they can work on that. Other registrars are much more likely to brush it off… I think there’s just different attitudes generally towards it… (Medical Educator, M06)
I think different registrars are going to respond in different ways to self-reflection because our personalities are very different, and our learning styles are very different. Some will find this [ReCEnT] a useful learning tool and some won’t. I think it’s a good, useful learning tool to help with that long term thought of self-reflection, but I think it would be more useful to be able to compare like with like [referring to rural/metro differences]. (Supervisor, S13)
Identifying learning needs
I think when you start out you don’t really know what you don’t know and ReCEnT sort of brings a bit of attention to those areas that you are not seeing but you don’t know that you are meant to be seeing them… so then you can go back and say, Well, I’m going to need to see that patient cohort to get experience in it and also be able to pass my exams. (Registrar, R02)
I’ve had one registrar for two terms and I’ve looked at the ReCEnT data-wise and it was particularly helpful in comparing the number or amount of investigation and pathology ordering my registrar was doing compared to the norm…she was ordering a lot of tests. That was helpful. (Supervisor, S07)
…it was a good springboard for discussions about what they are seeing, and it was also helpful for me for any registrars that I had concerns about prior to the ReCEnT data coming out. (Medical Educator, M10)
Watching those graphs in the ReCEnT studies for what imaging I order, definitely in GPT1 and GPT2, I did notice that I do a lot of imaging compared to my peers… Definitely much more than experienced doctors and that actually led me to do an online course through the College of Radiologists. (Registrar, R05)
I use it as part of the foundations for them to set up their training and learning plan, and to a degree, training advisory to help see where they’re at and what they’re seeing and what they’re not seeing. (Medical Educator, M08)
Enabling ReCEnT to reach its full potential
A culture of reflection
I went through [university] and that was a lot of reflective practice, and probably a lot of the local MEs went there as well…so I guess they might have more time to develop those reflective thoughts. Whereas in the practice with your supervisor, you’ve got a half hour meeting and you’ve got questions you need to ask them… you sort of feel really under the pump just talking about day-to-day questions… (Registrar, R15)
It’s us [the GP practice] saying to them [registrars], “We think this is valuable… We’ll give you an extra half hour of non-contact to do that…” If you give them the extra time, I think psychologically they feel like they’ve got bit of a breather, so it makes it easier and I find it helps that engagement a bit more… (Medical Educator and Supervisor, M05)
I think good mentoring and good modelling are probably the most powerful ways of doing it because just telling people doesn’t really help. They need to experience that there actually is a different way of doing it. (Medical Educator, M01)
I think it is a great idea because that will give you a better idea of what is happening at that practice comparing more senior GPs to more junior GPs… it will overcome that problem of the supervisor seeing all the old patients, you will be able to see that there are old patients at the practice – they’re just not seen by the registrars… But I think again, time will be the problem… I think that would be a really good way of doing a proper impact evaluation of what happens at a practice to really improve things for the registrar. (Supervisor, S11)
Having a discussion about, This is what ReCEnT is, this is the data we have found…This is what you can do for your registrar… If you did something like that and made it something that was available to Medical Educators and supervisors all over the state to come together, that would be great because then we can see that our colleagues are interested, so we’re more likely to be interested… (Medical Educator, M04)
Meaningful discussions
Because you don’t know what you don’t know, and you can’t reflect on something you don’t understand. So, if you don’t have someone helping you facilitate that reflection, I think that would make it difficult. (Registrar, R08)
I usually have the registrar with me in a sort of mentoring-type role where they bring issues that they have concerns about and discuss the cases with me and that’s I think that’s the most useful self-reflection because they’ve really come up with the issue that they have and they can discuss it with me and reflect back… That’s on a case-by-case basis rather than IT tools that identify a sort of general practice… It’s easier to reflect on a single case rather than stats about your whole practice. (Supervisor, S09)
I think asking her [registrar] to have a look at the report and reflect on it, I think that was important but then the self-reflection part of it, she was only able to be like, Oh wow! I’m spending a lot more time than other registrars. But talking about strategies to actually address that and look at why she was doing that needed the conversation. (Medical Educator, M10)
I think the individual conversation is key… We have a requirement as part of our training plans that we actually make sure it [ReCEnT] is discussed… (Medical Educator and Supervisor, M05)
Valuing objective data
I probably felt… that I was seeing a lot more skin than I actually was. That was a bit surprising for me… The ReCEnT data is a more objective tool in assessing that. (Registrar, R05)
I would say that we tend to pat ourselves on the back and have a tendency towards complacency, and I think it’s probably good for us to get more objective feedback like ReCEnT. (Supervisor, S14)
…the ReCEnT data is almost the only snapshot or window I have to give an objective representation of how they’re [registrars] doing in practice… I guess I feel like I can rely on that information a little bit more and take it at that face value for what is presented to me. Whereas sometimes when I’m getting information from a report or a supervisor or from a CT visit, I’ll be like, This was mentioned… do you have a different perspective? … I can almost act off that [ReCEnT] a little bit quicker rather than having to get two sides… (Medical Educator, M10)
ReCEnT’s role within a programmatic assessment framework
ReCEnT as a tool for learning
I think the ReCEnT data is more a learning tool. I don’t think it should or can be used for assessment purposes. (Registrar, R05)
With my understanding of ReCEnT, I mean, I presume it’s not really a pass/fail thing, then because it’s not really something that could be quantified in that way. So, I wouldn’t understand how it would become, like an assessable component, if that makes sense? (Supervisor, S15)
I think it’s a good idea, but I think it would have to go beyond just completing ReCEnT, because… that’s just showing that the registrars get their paperwork done on time and are diligent in that way. But I think, to harness it more for programmatic assessment, it would have a formal sort of process… literally one paragraph just saying, What stands out for you in this report? … How will this assist you in your learning and training? (Medical Educator, M02)
I guess if you discuss it with someone and then write it down it will actually impact your practice a bit more. You’re more likely to follow through with it. (Registrar, R10)
I think you have to choose whether it’s going to be a self-reflection or whether it’s going to be an assessment. I don’t think it could be both… (Supervisor, S13)
I think people struggle with ReCEnT looking at it as an assessment because it’s not actually assessing particular clinical competencies. Whenever we think about assessments, we routinely go back to thinking about are they doing well or are they not doing well. If we keep reminding ourselves that this is an assessment of the demographics that the registrars are being exposed to and what they’re actually seeing… That’s definitely giving something that not any other assessment gives us. (Medical Educator, FGP10)
…the idea is great, but the problem is formalising it becomes a paper exercise rather than actually being any help to people because all you end up is writing essays and stories. I’ve done hundreds of Alice in Wonderland stories about reflection while I was in [country] because that was what was expected. But that made me learn absolutely nothing whatsoever apart from how to write stories. (Supervisor, S01)
I don’t necessarily think practicing by having to submit something would improve some of their reflection skills… I don’t see how helpful it would be versus how much extra workload that would generate for both the registrars and whoever was going to have to review and mark that… I don’t think it would change what we already do. (Medical Educator, M10)
 One piece of the puzzle’ in programmatic assessmentApart from ReCEnT itself, [RTO] have different modules and workshops for us and we are also encouraged with our supervisor to identify areas of need, particularly when we have a supervisor sit in to watch some of our consultations… if you see presentations that I’m not very confident in managing… that gets jotted down… there’s a lot of those CPD-directed activities and learning modules that you can work through… In a way, learning still has to happen… regardless of whether ReCEnT was there or not… (Registrar, R05)

There’s some useful pixels and in a kaleidoscope of assessment, I’m sure there will be stuff on ReCEnT, because it’s like NAPLAN, if there’s something going right across, we can see registrars from some years ago across term 1, term 2, term 3s and there’s some useful stuff in that. (Supervisor, S14)

I use the [ReCEnT] results heavily. I definitely have a look at all of those as well as any other information such as CTV reports, Term 2 quizzes, IOS reports, all the different information and I kind of make sure I’ve looked over all of that to get a bit of a feel for where a registrar is at and be able to then have a conversation with the registrar and reflect back that information and help them to identify where their gaps are and map out how to plug those gaps and move forward with their training. (Medical Educator, M08)

Utility of registrars’ ReCEnT reports

Three themes centred on how ReCEnT reports were used following distribution to GP registrars, their supervisors and MEs. These themes were how ReCEnT reports: reassured registrars; facilitated reflection; and helped to identify registrar learning needs.

Reassuring registrars

Participants reported that ReCEnT feedback reports provided ‘interesting’ information about how registrars were tracking compared with previous terms, their peers, and established GPs. This was important to registrars, who often used the comparisons for reassurance that their practice and experience were ‘normal’.

I think it was more reassurance that I’m not doing the wrong thing. (Registrar, R10)

There was a caveat that ReCEnT data did not consider registrars’ individual circumstances or location (eg differences between urban and rural practices). For example, a rural/remote-located registrar argued that their ReCEnT report did not compare well with the ‘average’ registrar. Further, comparisons could potentially distress registrars who differed substantially from their peers.

What I do remember was that she [registrar] felt a bit distressed by the results because she found she was quite an outlier. (Supervisor, S12)

Facilitating reflection

Many participants, particularly registrars and MEs, described how ReCEnT reports facilitated registrars’ self-reflection. Sections of the report covered many different aspects of practice (eg patients seen including diagnoses/problems, registrars’ management actions and in-consultation assistance- and information-seeking), each providing opportunities to reflect.

…it’s a good reflective process for the registrar, and even for us to think about the patients that they’re seeing and try and teach them in areas that they might not be seeing as much of. (Supervisor, S15)

However, a few educators noted that some registrars might lack the experience to self-reflect on their own without assistance from their ME or supervisor.

It’s really just about prodding them to think that way and to raise their awareness that there is that form of reflection to learn…reflective learning. (ME, M01)

In addition to the reports, in-consultation entering of ReCEnT data gave registrars opportunities to immediately reflect on their practice and their patient management options.

While I was filling it out, I found, to reflect on what I’d just done and sometimes it made me practice better! (Registrar, R01)

Identifying learning needs

ReCEnT feedback reports had utility in understanding and identifying registrars’ learning needs. Most MEs reported that ReCEnT was a valuable tool to start a conversation, to help identify learning needs, or to flag potential problems in registrars’ training experiences. MEs reported they used ReCEnT reports to assist with registrar remediation.

…it helps me with people that need remediation. People [registrars] that are progressing well, I don’t find it that useful. It confirms that they’re seeing a broad range of people [patients]. When somebody shows a deficit then I go looking with a fine-tooth comb back through their ReCEnT reports. (ME, M12)

Reviewing ReCEnT reports helped registrars, MEs and supervisors to address any learning or clinical gaps in registrars’ training experiences. Both MEs and supervisors played a significant role. Supervisors could provide immediate teaching to fill gaps in registrars’ clinical experience, whereas MEs could take a broader perspective, such as discussing choice of future practices, learning goals, and exam preparation.

I use it as part of the foundations for them to set up their training and learning plan, and to a degree, training advisory to help see where they’re at and what they’re seeing and what they’re not seeing. (ME, M08)

There were mixed views on whether reflecting on ReCEnT reports helped registrars change their clinical practice. Although some registrars reported change following reflection, others reported that ReCEnT did not influence change or that change was influenced by multiple factors and not ReCEnT alone.

…It was a good reflection point, but I didn’t necessarily change my practice based just on that [ReCEnT report]. I didn’t change my practice because I didn’t want to change my practice. I feel more comfortable when I’m being thorough… (Registrar, R15)

MEs and supervisors reported that they often did not know whether a registrar had implemented change in response to ReCEnT feedback, largely because registrars regularly changed practice locations and supervisors each term. Supervisors, in particular, also reported that their registrars did not need to change.

I think it [ReCEnT] has the potential to give a registrar the information they need to make a change. I haven’t actually seen that because it hasn’t been applicable. There’s been no need to change for my registrars. (Supervisor, S16)

Enabling ReCEnT to reach its full potential

Three themes arose where participants shared their insights into key enablers that enhanced ReCEnT’s potential as a reflection and learning tool, specifically: where there is a culture of reflection; where meaningful discussions with supervisors and MEs occur; and where objective data are valued.

A culture of reflection

Participants reported that the culture of medical training was slowly changing to value reflection more, and that this change was progressing from the university system through to GP training.

I went through [university] and that was a lot of reflective practice, and probably a lot of the local MEs went there as well…so I guess they might have more time to develop those reflective thoughts. (Registrar, R15)

MEs recognised this transition to becoming better reflective practitioners.

Registrars now, are more reflective than they were when I was a registrar, who are probably more reflective than the registrars who came years before them. I think that we are slowly fostering reflective practice more and more in our trainees, but it’s a slow process and some people are more amenable to that than others. (ME, M04)

Supervisors reported the importance of giving registrars extra time to enter ReCEnT data to make registrars feel comfortable and safe to reflect on their practice. MEs went further to suggest the importance of having a community of support for MEs and supervisors so that ideas could be shared about how to use feedback from ReCEnT to improve learning opportunities for registrars.

Meaningful discussions

Participants reported that ReCEnT was better utilised for reflection when meaningful discussions about feedback reports occurred between registrars and educators. Discussions were key because even when registrars had reflected on, or noticed, that they were outliers, they needed educators’ expertise to appreciate whether differences were substantive in their circumstances, and then to make changes.

Because you don’t know what you don’t know, and you can’t reflect on something you don’t understand. So, if you don’t have someone helping you facilitate that reflection, I think that would make it difficult. (Registrar, R08)

In recognising the importance of discussing ReCEnT, some MEs reported contacting both registrars and supervisors to ensure ReCEnT was discussed.

We have a few meetings… just gauging what other supervisors are doing, let them speak about that. You can obviously see the cogs turning in other people’s heads, so it seems to be a useful way of finding out what other people in this community of practice are actually doing. (ME and Supervisor, M05)

Valuing objective data

Valuing the objective data in ReCEnT, irrespective of whether it confirmed or disconfirmed what participants thought, was a factor in ongoing engagement with ReCEnT.

I probably felt… that I was seeing a lot more [problems relating to] skin than I actually was. That was a bit surprising for me… The ReCEnT data is a more objective tool in assessing that. (Registrar, R05)

As supervisors, we get taught how to try and get our registrar to self-reflect. …. ReCEnT is a wonderful tool to help that process on, because there is no denying it…it’s on black and white there. (Supervisor, S11)

Some participants reported that ReCEnT feedback reports confirmed what they already knew but this still had value as objective data.

I do like finding out that what I think is happening is actually what’s happening. I do think it’s [ReCEnT] useful even if there are no surprises. (Supervisor, S16)

The longitudinal nature of ReCEnT was also appreciated for providing evidence of progress.

I think the more ReCEnTs you’ve done, the more valuable the reports have been in looking at progress. (Registrar, R03)

ReCEnT’s role within a programmatic assessment framework

A programmatic approach to assessment was first introduced in the two participating RTOs in 2020. Although MEs knew of programmatic assessment, registrars and supervisors were less familiar with it. Consequently, participants were given an explanation about programmatic assessment as part of the interview. The ensuing discussion identified two themes about ReCEnT’s role in programmatic assessment: as a tool for learning; and as one piece of the puzzle.

ReCEnT as a tool for learning

Some MEs described ReCEnT as a suitable component of programmatic assessment because it provided feedback for learning, rather than assessment of learning.

So, if you are thinking about this [ReCEnT] as a formative assessment… you’re looking at the registrar’s self-reflection, their ability to self-reflect and put that towards their learning and improvement. (ME, FGP05)

If we keep reminding ourselves that this is an assessment of the demographics that the registrars are being exposed to and what they’re actually seeing… That’s definitely giving something that not any other assessment gives us. (ME, FGP10)

In contrast, supervisors and registrars had trouble locating ReCEnT within a programmatic assessment framework because they viewed the word ‘assessment’ as only representing an assessment of learning, rather than an assessment for learning.

I think you have to choose whether it’s going to be a self-reflection or whether it’s going to be an assessment. I don’t think it could be both… (Supervisor, S13)

I think the ReCEnT data is more a learning tool. I don’t think it should or can be used for assessment purposes. (Registrar, R05)

While maintaining a summative, rather than formative, view of assessment, participants suggested additions to ReCEnT to meet perceived ‘assessment’ requirements, such as a reflective essay. However, they acknowledged that the additional workload would be impractical and potentially unhelpful.

…the idea [of ReCEnT] is great but the problem is formalising it becomes a paper exercise rather than actually being any help to people because all you end up is writing essays and stories. I’ve done hundreds of Alice in Wonderland stories about reflection while I was in [country] because that was what was expected. But that made me learn absolutely nothing whatsoever apart from how to write stories. (Supervisor, S01)

I don’t see how helpful it would be versus how much extra workload that would generate for both the registrars and whoever was going to have to review and mark that… (ME, M10)

‘One piece of the puzzle’ in programmatic assessment

A premise of programmatic assessment is to have multiple assessments for learning, across various time points in training, to develop the competencies to become a GP. ReCEnT was perceived as one source of information about registrars’ progress.

I use it [ReCEnT] as just one piece of the puzzle and all the other tools, are… adding in to get the global picture… I don’t think that ReCEnT alone, as a one-only chore is going to give you all the information you would like… (ME, M08)

Discussion

Main findings

This study further clarifies the utility of ReCEnT as a PETAL tool, as established in the quantitative arm of the project,14 and offers greater understanding of the potential of ReCEnT as a tool that not only helps to identify GP registrars’ learning needs, but also provides objective data on registrars’ clinical exposure. Although ReCEnT feedback reports aid in providing reassurance and promoting self-reflection by registrars on their practice, ReCEnT has greater potential where there is a culture of reflection leading to opportunities for meaningful discussions with supervisors and MEs. What is less clear is the perceived role of this PETAL tool within a programmatic assessment framework.

A structure–process–outcome model for using a PETAL tool in programmatic assessment

We propose an approach based on the synthesis of our results that draws upon Donabedian’s Structure–Process–Outcomes model.28 The model has been used in Australian general practice.29 Our approach is depicted in Fig. 2.

Fig. 2.

Using a PETAL tool in programmatic assessment: structure, process, outcome.



In Australia’s apprenticeship-style model of GP training, in-practice learning and experience is central to the development of confidence and clinical competencies.1 With changes in case complexity,30 it is imperative that GP training provides opportunities for a broad patient mix.15,31 Clinical performance can manifest over a series of encounters,32 as captured using a PETAL tool. However, using a PETAL tool as an assessment in GP training is problematic if a summative, rather than formative, view of assessment is maintained (our study participants tended to use ‘formative/summative’ rather than the ‘low-stakes/high-stakes’ terminology used in the programmatic assessment literature).7,33 Participants recognised that ReCEnT reports can provide feedback for learning, consistent with the view ‘that current research focuses on the ‘validity’ of the user and their way of interacting with the assessment instrument rather than purely the validity of the instrument’.7 Thus, a structure is needed to set up assessments for learning in GP training where there is effective engagement and reflection on registrars’ clinical exposure.21,22,34,35

The process of tracking and learning using PETALs needs a positive culture of feedback.36,37 In this process, registrars are active participants in feedback, not just receivers of information.22 In this humanist approach to learning, supervisors and MEs have roles as coaches22 and facilitators of meaningful discussions.21,37 Our findings suggest that ReCEnT’s perceived value and utility arise from how registrars and their educators interact with it. It is in the process of engagement and reflection that learning occurs. Interestingly, many supervisors commented that their registrars performed ‘as expected’, yet they did not use ReCEnT’s extensive feedback to extend registrars who were performing well.

Finally, how can a PETAL, such as ReCEnT, fit into a framework of programmatic assessment of GP training?10 First, ReCEnT is one among many instruments/methods used for learning in the programmatic assessment of GP training.7 For PETALs to be effective in this framework, they need to be valued for what they provide, rather than being seen as onerous38 or perfunctory.39 An advantage of ReCEnT as a longitudinal PETAL tool is that it provides multiple points across GP training for an assessment that can drive learning needs.8 Thus, registrars can use ReCEnT as a formative assessment of their continuity of experience, which would seem to fit well within a programmatic assessment framework.40 For example, a decline in a registrar’s in-consultation advice-seeking across successive training terms might be evidence of increasing confidence,4 which could be confirmed in discussion with educators.

The challenge remains to address disagreements or misunderstandings about assessment for learning and assessment of learning.41 Although we found that ReCEnT, as a formative assessment, assists in understanding registrars’ clinical experiences, confirming its educational utility as an assessment tool might need to go beyond this.13 Proponents of programmatic assessment argue that there needs to be a summative or low-stakes component to ensure each registrar demonstrates agency by reflecting on the feedback and discussing this with their educators, for optimal learning to occur.7 We know from our quantitative study that reflection and discussion do not always occur.14 In this study, interviewees varied markedly on this issue: some insisted on low-stakes ‘consequences’, whereas others argued that any summative component would interfere with registrars’ honest recording of their behaviour. Clearly, more work needs to be done to ensure that this PETAL tool is fit for purpose in a programmatic assessment framework. Donabedian’s Structure–Process–Outcomes model provides a basis for a safe and supportive approach to achieving this fit.10,28

Strengths and weaknesses of the study

This study was conducted across two Australian RTOs that use ReCEnT. These two RTOs were responsible for training 36% of all Australian registrars in general practice terms and have a demographic and geographic presence across the range of Australian GP vocational training.42 There might have been different perspectives in other RTOs.

Engaging time-poor GPs is always a challenge.43 Our strategy was to use multiple methods to recruit sufficient registrars, supervisors and MEs. Overall, we achieved a relatively even mix of registrars, supervisors and MEs; however, we acknowledge there might be a volunteer bias from participants with strong feelings on the topic.

Conclusion

Overall, this study has improved our understanding of how ReCEnT as a PETAL tool is used as a longitudinal reflective and educational tool for learning in general practice training. Although the findings confirm the value of ReCEnT reports in providing useful feedback on registrars’ clinical exposure and experiences, they also identify the greater potential of ReCEnT as a reflective tool where there is a culture of reflection leading to opportunities for meaningful discussions with supervisors and MEs. Further research could explore the educational impact of PETALs as tools for learning in programmatic assessment.

Supplementary material

Supplementary material is available online.

Data availability

The data that were used in this study cannot be publicly shared due to ethical and privacy concerns. Informed consent, in line with the approving ethics committee, only allows the use of de-identified extracts within research reporting and writing, to maintain the privacy of participants.

Conflicts of interest

Several authors on this paper are investigators on the ReCEnT project, and therefore, declare an interest in the project that gave rise to this study. Specifically, Parker Magin, Alison Fielding, Andrew Davey and Dominica Moad, are all involved in the conduct of the ReCEnT project, including the educational aspects. Mieke van Driel is an investigator on several concurrent research studies arising from ReCEnT data. Linda Klein, Jennifer Taylor, and Michael Bentley were employees of the participating RTOs (GP Synergy or General Practice Training Tasmania).

Declaration of funding

This study was supported by a Royal Australian College of General Practitioners (RACGP) Educational Research Grant (ERG2020-013).

Acknowledgements

We acknowledge and thank Amanda Tapley, Rachael Norris, Elizabeth Holliday, and Kristen Fitzgerald who participated in the broader ReCEnT study from which this qualitative paper arose.

References

1  Hays RB, Morgan S. Australian and overseas models of general practice training. Med J Aust 2011; 194(11): S63-S66.

2  Royal Australian College of General Practitioners. The Clinical Competencies for the CCE. East Melbourne, Vic.: Royal Australian College of General Practitioners; 2021. Available at https://www.racgp.org.au/education/registrars/fracgp-exams/clinical-competency-exam/the-clinical-competencies-for-the-cce/the-clinical-competencies-for-the-cce [Accessed 21 December 2022].

3  Magin P, Morgan S, Henderson K, et al. The Registrars’ Clinical Encounters in Training (ReCEnT) project: educational and research aspects of documenting general practice trainees’ clinical experience. Aust Fam Physician 2015; 44(9): 681-684.

4  Morgan S, Henderson K, Tapley A, et al. How we use patient encounter data for reflective learning in family medicine training. Med Teach 2015; 37(10): 897-900.

5  Wearne SM, Brown JB. General practice education: context and trends. In: Nestel D, Reedy G, McKenna L, Gough S, editors. Clinical Education for the Health Professions: Theory and Practice. Springer; 2020. pp. 1-20. 10.1007/978-981-13-6106-7_6-1

6  GPEx. Workplace-Based Assessment Framework for General Practice Training and Education. Adelaide, SA: GPEx; 2019. Available at https://www.racgp.org.au/FSDEDEV/media/documents/Education/SGR007-GPEx-Final-WBA-Framework.pdf [Accessed 8 November 2023].

7  Schuwirth LW, van der Vleuten CPM. How ‘testing’ has become ‘programmatic assessment for learning’. Health Prof Educ 2019; 5(3): 177-184.

8  van der Vleuten CPM, Schuwirth LWT, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach 2012; 34(3): 205-214.

9  Lockyer J, Carraccio C, Chan M-K, et al. Core principles of assessment in competency-based medical education. Med Teach 2017; 39(6): 609-616.

10  Schut S, Maggio LA, Heeneman S, et al. Where the rubber meets the road—An integrative review of programmatic assessment in health care professions education. Perspect Med Educ 2021; 10(1): 6-13.

11  Heeneman S, de Jong LH, Dawson LJ, et al. Ottawa 2020 consensus statement for programmatic assessment–1. Agreement on the principles. Med Teach 2021; 43(10): 1139-1148.

12  Roberts C, Khanna P, Bleasel J, et al. Student perspectives on programmatic assessment in a large medical programme: a critical realist analysis. Med Educ 2022; 56(9): 901-914.

13  Van Der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ 2005; 39(3): 309-317.

14  Klein L, Bentley M, Moad D, et al. Perceptions of the effectiveness of using patient encounter data as an education and reflection tool in general practice training. J Prim Health Care 2023; Online Early.

15  de Jong J, Visser M, Van Dijk N, et al. A systematic review of the relationship between patient mix and learning in work-based clinical settings. A BEME systematic review: BEME Guide No. 24. Med Teach 2013; 35(6): e1181-e1196.

16  Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012; (6): CD000259.

17  van Braak M, Visser M, Holtrop M, et al. What motivates general practitioners to change practice behaviour? A qualitative study of audit and feedback group sessions in Dutch general practice. BMJ Open 2019; 9: e025286.

18  Radloff A, Clarke L, Matthews D. Australian General Practice Training Program: National report on the 2019 National Registrar Survey. Melbourne: Australian Council for Educational Research; 2019. Available at https://www.health.gov.au/sites/default/files/documents/2020/04/agpt-program-national-report-on-the-2019-registrar-satisfaction-survey.pdf [Accessed 21 July 2023].

19  Britt H, Miller G. BEACH program update. Aust Fam Physician 2015; 44(6): 411-414. Available at https://www.racgp.org.au/getattachment/1197ed3f-7a67-48b7-a416-f3dd8fa52f69/BEACH-program-update.aspx

20  Sargeant JM, Mann KV, van der Vleuten CP, et al. Reflection: a link between receiving and using assessment feedback. Adv Health Sci Educ Theory Pract 2009; 14(3): 399-410.

21  Mann KV. Reflection’s role in learning: increasing engagement and deepening participation. Perspect Med Educ 2016; 5(5): 259-261.

22  Pelgrim EA, Kramer AW, Mokkink HG, et al. The process of feedback in workplace-based assessment: organisation, delivery, continuity. Med Educ 2012; 46(6): 604-612.

23  Davey A, Tapley A, van Driel M, et al. The registrar clinical encounters in training (ReCEnT) cohort study: updated protocol. BMC Prim Care 2022; 23: 328.

24  Crotty M. Foundations of social research: Meaning and perspective in the research process. 1st edn. London: Routledge; 1998. 10.4324/9781003115700

25  Braun V, Clarke V. One size fits all? What counts as quality practice in (reflexive) thematic analysis. Qual Res Psychol 2021; 18(3): 328-352.

26  Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007; 19(6): 349-357.

27  Rice PL, Ezzy D. Qualitative research methods: A health focus. South Melbourne, Australia: Oxford University Press; 1999.

28  Donabedian A. Evaluating the quality of medical care. Milbank Q 2005; 83(4): 691-729.

29  Metusela C, Cochrane N, Van Werven H, et al. Developing indicators and measures of high-quality for Australian general practice. Aust J Prim Health 2022; 28(3): 215-223.

30  Hays R, Sen Gupta T. Developing a general practice workforce for the future. Aust J Gen Pract 2018; 47(8): 502-505.

31  de Jong J, Visser MR, Mohrs J, et al. Opening the black box: the patient mix of GP trainees. Br J Gen Pract 2011; 61(591): e650-e657.

32  Oerlemans M, Dielissen P, Timmerman A, et al. Should we assess clinical performance in single patient encounters or consistent behaviors of clinical performance over a series of encounters? A qualitative exploration of narrative trainee profiles. Med Teach 2017; 39(3): 300-307.

33  Schellekens LH, Bok HGJ, de Jong LH, et al. A scoping review on the notions of Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). Stud Educ Eval 2021; 71: 101094.

34  Dijksterhuis MG, Schuwirth LW, Braat DD, et al. A qualitative study on trainees’ and supervisors’ perceptions of assessment for learning in postgraduate medical education. Med Teach 2013; 35(8): e1396-e1402.

35  Winkel AF, Yingling S, Jones A-A, et al. Reflection as a learning tool in graduate medical education: a systematic review. J Grad Med Educ 2017; 9(4): 430-439.

36  Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med 2016; 164(6): 435-441.

37  Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med 2015; 90(12): 1698-1706.

38  Curtis P, Taylor G, Riley R, et al. Written reflection in assessment and appraisal: GP and GP trainee views. Educ Prim Care 2017; 28(3): 141-149.

39  de la Croix A, Veen M. The reflective zombie: problematizing the conceptual framework of reflection in medical education. Perspect Med Educ 2018; 7(6): 394-400.

40  Torre DM, Schuwirth L, Van der Vleuten C. Theoretical considerations on programmatic assessment. Med Teach 2020; 42(2): 213-220.

41  Bok HG, Teunissen PW, Favier RP, et al. Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC Med Educ 2013; 13: 123.

42  Taylor R, Clarke L, Radloff A. Australian General Practice Training Program: National report on the 2021 National Registrar Survey. Melbourne: Australian Council for Educational Research; 2021. Available at https://www.health.gov.au/sites/default/files/documents/2021/12/agpt-program-national-report-on-the-2021-national-registrar-survey.docx [Accessed 21 July 2023].

43  McKinn S, Bonner C, Jansen J, et al. Recruiting general practitioners as participants for qualitative and experimental primary care studies in Australia. Aust J Prim Health 2015; 21(3): 354-359.