Australian Health Review
Journal of the Australian Healthcare & Hospitals Association
RESEARCH ARTICLE

What is needed to mainstream artificial intelligence in health care?

Ian A. Scott https://orcid.org/0000-0002-7596-0837 A B E , Ahmad Abdel-Hafez C , Michael Barras A D and Stephen Canaris C
Author Affiliations

A Princess Alexandra Hospital, Ipswich Road, Brisbane, Qld, Australia. Email: Michael.Barras@health.qld.gov.au

B School of Clinical Medicine, University of Queensland, 199 Ipswich Road, Brisbane, Qld, Australia.

C Division of Clinical Informatics, Metro South Hospital and Health Service, 199 Ipswich Road, Brisbane, Qld, Australia. Email: Ahmad.Abdel-Hafez@health.qld.gov.au; Stephen.Canaris@health.qld.gov.au

D School of Pharmacy, University of Queensland, Brisbane, Qld, Australia.

E Corresponding author. Email: ian.scott@health.qld.gov.au

Australian Health Review 45(5) 591-596 https://doi.org/10.1071/AH21034
Submitted: 2 February 2021 | Accepted: 27 April 2021 | Published: 24 June 2021

Abstract

Artificial intelligence (AI) has become a mainstream technology in many industries, but not yet in health care. Although basic research and commercial investment are burgeoning across various clinical disciplines, AI remains largely absent from routine use in most healthcare organisations. This is despite hundreds of AI applications having passed the proof-of-concept phase, and scores having received regulatory approval overseas. AI has considerable potential to optimise multiple care processes, maximise workforce capacity, reduce waste and costs, and improve patient outcomes. The current obstacles to wider AI adoption in health care, and the prerequisites for its successful development, evaluation and implementation, need to be defined.

Keywords: artificial intelligence, obstacles, strategies, operationalisation, roadmaps.

