Journal of Primary Health Care
Journal of The Royal New Zealand College of General Practitioners
RESEARCH ARTICLE (Open Access)

Primary care clinicians should proactively take up latest AI-based technology: Yes

Chester Holt-Quick https://orcid.org/0000-0002-8350-919X 1 *

1 Kekeno Tech, Wellington, New Zealand.

* Correspondence to: chester@kekeno.tech

Journal of Primary Health Care 16(1) 105-107 https://doi.org/10.1071/HC24035
Submitted: 4 March 2024  Accepted: 4 March 2024  Published: 22 March 2024

© 2024 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of The Royal New Zealand College of General Practitioners. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

According to the definition provided by Oxford Languages via Google, ‘proactively’ means taking action to control a situation rather than just responding to it after it has happened, and ‘take up’ means to become interested or engaged in something.

‘Latest’ with reference to AI-based technology must refer to generative AI, underpinned by transformer-based large language models (LLMs), in which ground-breaking progress has been observed since late 2022, most notably with the release of ChatGPT.

So, little is required of the YES position in this moot. It is almost self-evident, without further argument, that primary care clinicians should proactively take up the latest AI technology: all that is really being asked is that clinicians do not take a hands-off approach and disengage from this technology – something that should be obvious. It is important to note that ‘take up’ does not mean implementing in clinical practice technology that is not ready for implementation; however, even having a sense of readiness requires engagement.

In addition to this rationale, the following arguments are presented:

  1. The potential benefits to clinical practice are compelling

  2. Proactive engagement is essential to managing risk

  3. Uptake is inevitable.

While evidence can help inform best practice, it needs to be placed in context. There may be no evidence available or applicable for a specific patient with his or her own set of conditions, capabilities, beliefs, expectations and social circumstances. There are areas of uncertainty, ethics and aspects of care for which there is no one right answer. General practice is an art as well as a science. Quality of care also lies with the nature of the clinical relationship, with communication and with truly informed decision-making. The BACK TO BACK section stimulates debate, with professionals presenting their opposing views regarding a clinical, ethical or political issue.

The potential benefits to clinical practice are compelling

The benefits of the latest AI-based technology are materialising across multiple fronts. Of the many, here are two: quality clinical decision support (CDS) systems and administrative assistants (AAs).

CDS systems, while not new, have been of limited value – essentially, they have not been able to encode clinical knowledge to an adequate standard. We are now witnessing a pivotal breakthrough with LLMs: the latest models rival experts in clinical knowledge (in test-taking contexts). In 2023, researchers at Google created Med-PaLM 2.1 This model demonstrated expert-level performance on the MedQA dataset of US Medical Licensing Examination-style questions, scoring greater than 85% accuracy, and also passed the MedMCQA dataset of Indian AIIMS and NEET medical examination questions, scoring 72.3%. The human evaluation framework used included factuality, comprehension, reasoning, possible harm, and bias.2 As remarkable as this absolute performance is the rate at which these models are improving against established benchmarks.
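To make the benchmark figures above concrete, here is a minimal sketch of how accuracy on MedQA-style multiple-choice questions is typically scored: the model selects one option per question, and the score is the fraction of selections matching the examination key. The `ask_model` function and the example items below are hypothetical placeholders, not part of Med-PaLM 2 or any published evaluation harness.

```python
# Minimal sketch of scoring a model on MedQA-style multiple-choice questions.
# `ask_model` is a hypothetical placeholder for whatever LLM is being evaluated;
# the example items are illustrative, not real examination content.

def ask_model(question: str, options: dict[str, str]) -> str:
    """Placeholder: a real harness would prompt an LLM and parse its chosen letter."""
    return "A"  # stub answer so the sketch runs end to end

def accuracy(items: list[dict]) -> float:
    """Fraction of questions where the model's chosen letter matches the answer key."""
    correct = sum(ask_model(it["question"], it["options"]) == it["answer"] for it in items)
    return correct / len(items)

if __name__ == "__main__":
    items = [
        {"question": "Illustrative stem 1", "options": {"A": "…", "B": "…"}, "answer": "A"},
        {"question": "Illustrative stem 2", "options": {"A": "…", "B": "…"}, "answer": "B"},
    ]
    print(f"Accuracy: {accuracy(items):.1%}")  # published models report >85% on MedQA
```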

Use of AAs stands to streamline administrative work. LLM-based tools such as Nabla (https://nabla.com) are already available in the marketplace for clinical use. Operating in both in-person and virtual consultations, this software uses speech-to-text together with LLMs to generate structured documentation while running passively in the background. While the claims of hours saved per day and reduced clinician fatigue come from the company, it is not hard to be persuaded by them. With other providers such as AWS releasing analogous tools (https://aws.amazon.com/healthscribe), it is clear these tools will become increasingly commonplace and part of routine use.
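As a rough illustration of how such an ambient documentation tool fits together – and explicitly not the implementation of Nabla, HealthScribe, or any other named product – the sketch below chains a speech-to-text step with an LLM summarisation step that returns a structured SOAP-style note. Both `transcribe_audio` and `generate_note` are hypothetical placeholders for whichever vendor services a practice actually uses, and the clinician would review and edit the output before it enters the record.

```python
# Rough sketch of an ambient clinical documentation pipeline: speech-to-text
# followed by an LLM summarisation step. Both service calls are hypothetical
# placeholders; this is not the implementation of any product named in the article.

from dataclasses import dataclass

@dataclass
class SoapNote:
    subjective: str
    objective: str
    assessment: str
    plan: str

def transcribe_audio(audio_path: str) -> str:
    """Placeholder for a speech-to-text service applied to the consultation recording."""
    return "Patient reports two weeks of productive cough..."  # stub transcript

def generate_note(transcript: str) -> SoapNote:
    """Placeholder for an LLM call that turns the transcript into a structured note.

    A real tool would send a prompt such as 'Summarise this primary care
    consultation transcript as a SOAP note: ...' and parse the structured response.
    """
    return SoapNote(
        subjective="Two weeks of productive cough, no fever reported (example only).",
        objective="Chest clear on auscultation (example only).",
        assessment="Likely post-viral cough (example only).",
        plan="Safety-netting advice; review if not settling (example only).",
    )

if __name__ == "__main__":
    transcript = transcribe_audio("consultation.wav")
    note = generate_note(transcript)
    print(note.plan)  # the clinician reviews and edits before anything enters the record
```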

In considering this technology and examples such as those above, we must reflect on our context of enormous and increasing volumes of health data. Processing all this data in clinical practice is already intractable, and yet its growing informational value is self-evident – take data from wearable devices, for example. The latest AI-powered systems are designed to use this data – organising and analysing it and surfacing relevant information to clinicians; arguably, such technology is the only way to fully harness it.

Proactive engagement is essential to managing risk

All technology comes with risk, and we learn to adopt approaches that manage that risk while gleaning the benefits. The risks and issues with generative AI are not trivial and include hallucinations, bias, and inequity, as well as broader risks such as vocational dislocation. These issues are an active focus for the developers of this technology. The area will require strong regulation, and governments should play a major role in ensuring that safety testing regimens are fit for purpose, which is beginning to occur. US President Joe Biden recently issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. It mandated that developers of the most powerful AI systems share data on matters such as safety with the US Government, ordered the development of standards for biological synthesis screening, and underscored the importance of tackling algorithmic discrimination.3,4

Clinicians who proactively take up this technology will become most familiar with its risks and guidelines and will be best positioned to provide leadership in this area. We need to prioritise validating this technology in real-world settings.5 Clinicians should be involved in pilot studies in which optimal workflow usage and risk management strategies are established.

Implicit in the use of this technology is having a human (clinician) in the loop (HITL). This is a paramount requirement; any distant future state in which this is not the case is outside the scope of this moot. Clinicians remain the ultimate decision makers when using CDS systems, and they retain final review and edit control over documentation generated by AAs.

Uptake is inevitable

The technology of intelligence passed through an inflection point in 2023.6,7 Reflecting on other relatively recent technological advances (computers, the internet, and its derivations – email, smartphones, social media) reminds us of the power and inevitable widespread uptake of new technology. With the technology of intelligence we will witness this same inevitability, and the impact will likely be even greater. Nuclear energy is sometimes offered as a counterexample to the argument that waves of technological advancement cannot be contained; however, this remains debatable.6

Use of generative AI such as ChatGPT has exploded over the past 12 months. General use cases, such as writing assistance both generally and in academia, are becoming widespread.8 Organisations including New Zealand universities and Te Whatu Ora (TWO) have now relaxed their initial blanket ‘don’t use’ policies – an understandable scramble when ChatGPT went live – and are now exploring and even encouraging the selective use of generative AI.9 TWO is planning to pilot the use of AI for clinical coding of hospital admissions.10

The public will increasingly use these tools (bypassing the ‘not a medical expert’ warnings), and the ‘Dr Google’-style information that patients present will become harder to navigate and dismiss. Given the high and increasing ability of these systems to harness medical knowledge, it will no longer hold water for clinicians to dismiss them as inaccurate or unreliable. Clinicians will be best empowered to navigate this front by proactively engaging with the technology – using it safely where available and appropriate, and becoming familiar with its issues and risks.

How then?

It should be clear that clinicians in primary care should proactively take up the latest AI-based technology. Perhaps the reader is not in disagreement but is faced with the question of ‘how’. I believe the following two aspects are important cornerstones: (1) bringing this area into one’s skill and knowledge set through Continuing Medical Education (CME) opportunities, and (2) taking leadership in implementation.

Clinicians should have the vocational opportunity to build knowledge and skill in this area and be empowered to make decisions around this technology. AI-based technology must form an essential part of CME for primary care clinicians, and this knowledge area will also have to be included in undergraduate medical curricula (now beginning to take early form in New Zealand medical schools). New medical journals are emerging in this domain, for example https://ai.nejm.org/. The RNZCGP should provide resources, support, and guidance.

Clinicians should be leading the implementation of this technology in clinical practice. Pilot and implementation research that validates this technology in real-world settings is essential.

Data availability

No data were used to generate results, other than referenced sources.

Conflicts of interest

The author declares no conflicts of interest.

Declaration of funding

No specific funding was received.

References

1. Singhal K, Tu T, Gottweis J, et al. Towards expert-level medical question answering with large language models. arXiv; 2023. arXiv:2305.09617v1.

2. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature 2023; 620(7972): 172-80.

3. Burki T. Crossing the frontier: the first global AI safety summit. Lancet Digital Health 2024; 6(2): e91-2.

4. Te Whatu Ora. Advice on the use of Large Language Models and Generative AI in Healthcare – Health New Zealand. 2023. Available at https://www.tewhatuora.govt.nz/our-health-system/digital-health/national-ai-and-algorithm-expert-advisory-group-naiaeag-te-whatu-ora-advice-on-the-use-of-large-language-models-and-generative-ai-in-healthcare/ [cited 4 March 2024].

5. Rainford S. What Is The Impact Of Artificial Intelligence On Healthcare? Forbes, 6 November 2023.

6. Suleyman M, Bhaskar M. The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma. Crown; 2023.

7. Thornhill J. Should the AI doctor see you now? Financial Times, 14 July 2023.

8. Prillaman M. Is ChatGPT making scientists hyper-productive? The highs and lows of using AI. Nature 2024; 627: 16-17.

9. University of Auckland. Advice for students on using Generative Artificial Intelligence in coursework. Available at https://www.auckland.ac.nz/en/students/forms-policies-and-guidelines/student-policies-and-guidelines/academic-integrity-copyright/advice-for-student-on-using-generative-ai.html [cited 4 March 2024].

10. McBeth R. Te Whatu Ora exploring AI for clinical coding. Health Informatics New Zealand; 2024.