Circulation 122,210 • Volume 35, No. 1 • February 2020

PRO-CON DEBATE – PRO: Artificial Intelligence (AI) in Health Care

Michael Buist, MBChB, MD, FRACP, FCICM

Related Article:

CON: Artificial Intelligence is Not a Magic Pill


This Pro-Con Debate took place at the 2019 Stoelting Conference entitled “Patient Deterioration: Early Recognition, Rapid Intervention, and the End of Failure to Rescue.” The following two authors have expertise in adopting artificial intelligence for managing patients who are deteriorating in the hospital setting.

Artificial intelligence (AI), or machine intelligence, has been defined as “intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans” and “…any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.”1

Wikipedia goes on to classify AI into three different types of systems1:

  1. Analytical
  2. Human-inspired
  3. Humanized artificial intelligence

AI was founded as an academic field in 1956. Over the ensuing six decades, it has evolved to develop systems capable of undertaking complex real-time tasks that would be unachievable by the unassisted human brain. Early adopters of AI include the military, which has used autonomous and semi-autonomous drones; finance, in which AI enables real-time fraud detection; and the automotive industry, in which AI facilitates collision avoidance.

The multitrillion-dollar health care industry has been slow to adopt information technology (IT) in general and AI in particular. This may be due in part to conflicting interests among the doctor-patient relationship, the documentation requirements of the health care profession, management and probatory requirements, and costly legacy IT systems.2 While skepticism towards IT and AI solutions is understandable, our continued struggles with preventable adverse events,2 poor adoption of evidence-based practice, and persistence in using non-beneficial, sometimes harmful practices3 argue for an earnest evaluation of the potential for AI to improve patient safety and outcomes.

The main argument for AI in health care is its potential to provide practitioners with better real-time solutions that improve patient outcomes. One of the most important applications is translating research findings into consistent, reliable evidence-based practice in the office and at the bedside. Admittedly, the development of “evidence” is fraught with inferential problems,4 but there are relatively uncontroversial evidence-based practices, such as avoidance of antibiotic prescribing for acute upper respiratory tract infections, where a large evidence-to-practice gap remains.5 AI has the potential to incorporate, in real time, all patient data and outcomes relevant to a given clinical question. Such an AI system could prompt or alert practitioners when they deviate from practice guidelines, and could conceivably update and inform clinical practice guidelines continually using real-time patient data.
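As a simple illustration of a guideline-deviation alert, consider the antibiotic example above. The following is a minimal Python sketch; the diagnosis labels, drug list, and rule are assumptions made for illustration, not an encoding of any real guideline or formulary.

```python
# A minimal sketch of a guideline-deviation alert for the antibiotic example
# above. The diagnosis labels, drug list, and rule are illustrative
# assumptions, not an encoding of any real guideline.

# Assumed guideline: antibiotics are not indicated for uncomplicated
# viral upper respiratory tract infection.
VIRAL_URTI_DIAGNOSES = {"acute viral URTI", "common cold"}
ANTIBIOTICS = {"amoxicillin", "azithromycin", "doxycycline"}

def guideline_alerts(diagnosis: str, new_prescription: str) -> list[str]:
    """Flag a new prescription that deviates from the assumed guideline."""
    if diagnosis in VIRAL_URTI_DIAGNOSES and new_prescription in ANTIBIOTICS:
        return [f"{new_prescription} prescribed for {diagnosis}: antibiotics "
                "are not recommended for uncomplicated viral URTI."]
    return []

if __name__ == "__main__":
    for alert in guideline_alerts("common cold", "amoxicillin"):
        print("ALERT:", alert)
```

In practice such a rule would draw its diagnoses and drug lists from the electronic record rather than hard-coded sets, and the guideline itself could be updated as the evidence changes.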

At its simplest, AI can be thought of as a decision rule: “what, if, then, and.” For example, the “what” could be the patient with urosepsis, the “if” is the prescription of gentamicin, the “then” is the patient’s renal function, and the “and” is the list of other prescribed medications. AI can then warn about drug interactions and provide precise, safe dosage information, which can be revised in real time as drug levels, other drug doses, and renal function change.6 This capability exists in most electronic prescribing systems (a minimal sketch of such a rule appears below).

This author developed an AI approach in response to the problem of Rapid Response Team (RRT) “afferent limb failure” (i.e., not calling for help despite activation criteria being met).7 We identified several staff cultural problems that contributed to this phenomenon.8 The solution required electronic entry of patients’ physiological observations, real-time comparison of those observations against the RRT activation criteria, and a series of automated alerts issued to predetermined clinical team members. This system facilitated individualization of activation criteria for each patient as well as customization of the mode and order in which clinical team members are alerted. With this innovative approach, clinical response under a National Health Service (NHS) Trust Early Warning Score (EWS) protocol improved from a baseline of 68% to 97%.8
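To make the “what, if, then, and” structure concrete, here is a minimal Python sketch of such a rule. The drug names, thresholds, and interaction list are illustrative assumptions, not clinical guidance or the logic of any actual prescribing system.

```python
# A minimal sketch of the "what, if, then, and" decision rule described
# above. All drug names, thresholds, and the interaction list are
# illustrative assumptions, not clinical guidance.

from dataclasses import dataclass, field

@dataclass
class Patient:
    diagnosis: str                                  # the "what": e.g., urosepsis
    creatinine_clearance_ml_min: float              # the "then": renal function
    medications: list[str] = field(default_factory=list)  # the "and"

# Assumed co-prescriptions that raise nephrotoxicity risk with gentamicin.
RISKY_COPRESCRIPTIONS = {"vancomycin", "furosemide"}

def check_gentamicin_order(patient: Patient) -> list[str]:
    """Return alerts triggered when gentamicin is prescribed (the "if")."""
    alerts = []
    if patient.creatinine_clearance_ml_min < 60:    # assumed cutoff
        alerts.append("Reduced renal function: adjust gentamicin dose or interval.")
    interacting = RISKY_COPRESCRIPTIONS & set(patient.medications)
    if interacting:
        alerts.append("Potential nephrotoxic interaction with: "
                      + ", ".join(sorted(interacting)) + ".")
    return alerts

if __name__ == "__main__":
    patient = Patient(diagnosis="urosepsis",
                      creatinine_clearance_ml_min=45,
                      medications=["vancomycin", "paracetamol"])
    for alert in check_gentamicin_order(patient):
        print("ALERT:", alert)
```

In a real prescribing system these checks would rerun whenever drug levels, doses, or renal function change, which is the “real-time” property argued for above.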
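The escalation logic of the alerting system described above might look like the following sketch. The activation criteria, responder roles, and escalation order are assumptions for illustration, not the actual Patientrack implementation.

```python
# A minimal sketch of bedside observation capture and automated escalation.
# The activation criteria, responder roles, and escalation order are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    heart_rate: int
    respiratory_rate: int
    systolic_bp: int

@dataclass
class ActivationCriteria:
    """Per-patient criteria, individualized at the bedside."""
    max_heart_rate: int = 130
    max_respiratory_rate: int = 30
    min_systolic_bp: int = 90

# Assumed, customizable order in which clinical team members are alerted.
ESCALATION_CHAIN = ["bedside nurse", "ward registrar", "rapid response team"]

def breaches(obs: Observation, criteria: ActivationCriteria) -> list[str]:
    """Compare an observation set against the patient's activation criteria."""
    found = []
    if obs.heart_rate > criteria.max_heart_rate:
        found.append(f"heart rate {obs.heart_rate} > {criteria.max_heart_rate}")
    if obs.respiratory_rate > criteria.max_respiratory_rate:
        found.append(f"respiratory rate {obs.respiratory_rate} > "
                     f"{criteria.max_respiratory_rate}")
    if obs.systolic_bp < criteria.min_systolic_bp:
        found.append(f"systolic BP {obs.systolic_bp} < {criteria.min_systolic_bp}")
    return found

def escalate(obs: Observation, criteria: ActivationCriteria) -> None:
    """On any breach, alert each responder in the predetermined order."""
    found = breaches(obs, criteria)
    if not found:
        return
    for responder in ESCALATION_CHAIN:
        print(f"ALERT {responder}: " + "; ".join(found))

if __name__ == "__main__":
    escalate(Observation(heart_rate=142, respiratory_rate=34, systolic_bp=85),
             ActivationCriteria())
```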

The argument for AI in health care is not about the potential for better reasoning and problem solving, knowledge representation, natural language processing, and social intelligence; it is about doing the things in health care that, for whatever reason, we do not do, in part because of the frailties of the human brain. A compelling example is provided by Ruth Lollgen in the New England Journal of Medicine, writing about her personal experience of intimate partner violence.9 Despite being a pediatric emergency physician, she presented on numerous occasions to emergency departments with injuries consistent with a nonaccidental aetiology. Yet the pattern of her injuries, at each presentation and over time, never prompted any clinical suggestion of nonaccidental injury. She laments that no one asked, “Do you feel safe at home?” Asking such important questions may improve safety for our patients and providers.

The complexities of health care, a rapidly growing body of research knowledge, an internet-savvy client and patient population, and, most importantly, the frailties of the human brain all argue for AI assistance in health care professionals’ day-to-day decision making about their patients. Health care professionals need to understand, and be involved in, the development of these machine-assisted decision devices so that they are built to the highest technical standards and focused on best-practice patient outcomes.

 

Dr. Buist is Professor of Health Services at the University of Tasmania, Tasmania, Australia.


He is the founder, a previous director, and the chief medical officer of Patientrack. The company was sold to Alcidion (ALC), a health ICT company listed on the Australian stock exchange. Dr. Buist is a substantial stockholder in ALC.


References

  1. Wikipedia. Artificial intelligence. https://en.wikipedia.org/wiki/Artificial_intelligence. Accessed October 29, 2019.
  2. Rudin RS, Bates DW, MacRae C. Accelerating innovation in Health IT. N Engl J Med. 2016;375:815–817.
  3. Buist M, Middleton S. Aetiology of hospital setting adverse events 1: limitations of the Swiss cheese model. Br J Hosp Med (Lond). 2016;7:C170–C174.
  4. Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82–86.
  5. Harris A, Hicks LA, Qaseem A, High Value Care Task Force of the American College of Physicians & Centers for Disease Control and Prevention. Appropriate antibiotic use for acute respiratory tract infection in adults: advice for high-value care from the American College of Physicians and the Centers for Disease Control and Prevention. Ann Intern Med. 2016;164:425–434.
  6. Qureshi I, Habayeb H, Grundy C. Improving the correct prescription and dosage of gentamicin. BMJ Open Quality. 2012;1:u134.w317. doi:10.1136/bmjquality.u134.w317. https://bmjopenquality.bmj.com/content/1/1/u134.w317. Accessed November 4, 2019.
  7. Marshall S, Shearer W, Buist M, et al. What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the Rapid Response System (RRS) in a multi-campus Australian metropolitan healthcare provider. BMJ Qual Saf. 2012;21:569–575.
  8. Jones S, Mullally M, Ingleby S, et al. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with a NHS Trust Early Warning Score (EWS) protocol. Crit Care Resusc. 2011;13:83–88.
  9. Lollgen RM. Visible injuries, unrecognised truth—the reality of intimate partner violence. N Engl J Med. 2019;381:1408–1409.