Primary care clinicians should proactively take up the latest AI-based technology: No
Luke Bradford
The uptake of artificial intelligence (AI) across sectors and organisations worldwide is growing rapidly, with many reporting that AI has transformed their businesses for the better. The appeal of AI in healthcare, and the potential benefits to our workloads, no doubt play on our minds too.
But when it comes to AI in healthcare, a cautious approach to embracing it in our workplaces and practices is necessary to ensure patients’ health is not compromised and risks around consent and data sovereignty are mitigated. As clinicians, we work in a predominantly people-centric role through face-to-face consultations, working alongside a team to provide comprehensive and complex care to our communities.
We are also privy to sensitive and private information about our patients that is recorded in our practices’ patient management systems or portals. Making this information available to other relevant health professionals and national services that play a part in improving a patient’s health outcomes is a course of action generally accepted by patients and clinicians.
When AI developers could potentially access this sensitive information and build AI technology that uses it for diagnosis, data collection or analysis of results, what are the rules for informing patients, and how do we assure them that their health records will be kept private and not used for research or commercial purposes?
The COVID-19 pandemic brought an urgent need to change the way we work and how we engage with patients, while still being available to provide them with the care they need, when they need it. To do this we turned to technology and started providing the bulk of our consultations using telehealth services.
What we have seen since the end of the pandemic is patients returning en masse to practices for the personal, face-to-face care that they feel most comfortable with. Human interaction and relationships are core to the therapeutic process. Our interactions with, and knowledge of, our patients’ lives – not just their health but their overall wellbeing – are a big part of our job. It is hard to believe AI will ever match a clinician at face-to-face contact: picking up on non-verbal cues about how someone might be feeling, or noticing something that warrants further questioning. But the speed of development is exponential, and we would be naive to believe that existential risk to our profession does not exist in this space.
The main task AI is currently undertaking for us in general practice is writing medical notes. When we weigh up the benefits and the risks, it is imperative that we look to see if using this technology is in the best interests of our patients, our teams and our overall workloads now and in the future. Cutting down the paperwork that comes with writing medical notes and allowing clinicians to focus solely on the patient instead of taking notes throughout the consult is a positive. However, if the clinician is still checking the notes for accuracy, and the patient can view and request changes to their notes, is AI really saving us any time?
There are over 1000 general practices in Aotearoa. Unlike our secondary care health system, which mostly uses the same systems and processes under Te Whatu Ora, and where these have undergone rigorous review and comparison, it is very hard for an individual practice to properly assess the fitness, relevance and safety of an AI service offering. Could external agencies, such as the Privacy Commissioner, help practices navigate these challenges and provide guidance to mitigate these unknowns?
It is tempting to view AI as an economic panacea to lower manpower overheads. The books look very different across every practice in the country, and there are the pressures of workforce shortages, nurse pay parity and the financial viability of practices in a constrained funding system. However, the interaction with the broader practice team is a valued part of the social fabric for patients.1 Regardless of workforce impact, the use of AI does come with a financial cost, which could become prohibitive if reliance is absolute. The financial implications may also extend to how one AI software program is chosen over another, eg Nabla versus Dragon.
We are all working towards improved health equity – earlier prevention, better screening, diagnosis, treatment and management of conditions. Potentially, AI could open more cost-effective access for our hard-to-reach and priority populations, especially in areas where workforce shortages are more pronounced and appointments are harder to get. But like every good story, there are always two sides. The positives sound great in theory, but there are implications that need to be acknowledged.
With patients having the option of receiving health care, advice and treatment via AI without setting foot into a practice, several questions arise:
Who takes ownership for the overall care of a patient and becomes accountable for checking the accuracy of notes, test results, advice and any subsequent follow-up?
When an AI hallucination (an incorrect or misleading result) occurs, who is tasked with explaining this to the patient, and at what point does their care revert to a clinician-only model?
Currently, if a patient has concerns or complaints about their care, they can go to the Health and Disability Commissioner (HDC) or the Medical Council. What is the process for complaints about an AI-based model of care?
How will governance and regulations work – through a one-size-fits-all approach or a technology-specific approach?
How can patients and practice teams be reassured about data sovereignty – keeping patient records accessible to the relevant healthcare teams (and the patient) but protected from external threats, third party access, and hacking?
How will data be shared with other practices and technologies if a patient moves on but the new practice uses a different system?
We need to think about how we want our practices to run now and into the future. The style of healthcare we provide is often referred to as community-based care or family medicine, with GPs being family doctors. These all carry the undertone of our roles being people-centric. We are a liaison between patients, whānau, community and services. These relationships take time to build and would be hard to replicate through technology alone.
Before deciding on the level of AI technology you will use, and how you will measure its success, impact or risk, practice teams need to sit down together and discuss whether using these technologies more regularly will help or hinder the continuity of care, trust and whanaungatanga built over time with our patients and communities.
The College will monitor the development and increasing capability of AI in health care, and will develop a policy paper setting out the core pillars members can use as a guideline for adopting AI in their practices.