
Replacing Old-School Algorithms with New-School AI in Medicine
Artificial intelligence, clinical decision support and mobile apps are changing healthcare.
To understand the evolving role of artificial intelligence in healthcare, it helps to step back and look at earlier AI triumphs in related fields. In 1997, for instance, IBM surprised the world by demonstrating that its supercomputer Deep Blue could defeat the reigning world chess champion, Garry Kasparov. The computer accomplished this feat because its programmers had fed it millions of chess moves, and with its advanced processing power, Deep Blue was able to analyze 200 million positions a second. By 2018, AI had taken a major step forward: Google’s DeepMind demonstrated that its AlphaZero software could outplay the strongest existing chess programs, not with the brute strength of Deep Blue but with machine learning. AlphaZero was not programmed with millions of moves; it was taught only the basic rules of the game. By playing countless games against itself, the computer taught itself how to win.
There are similarities in the way AI has developed in healthcare. Most clinical decision support (CDS) systems on the market have relied on old-school algorithms to help physicians reach diagnostic and treatment decisions. These tools are static encyclopedias: they provide deep knowledge of a wide range of medical topics but offer only simple search capabilities. Several innovative thinkers and vendors, however, are pushing past these limitations to create interactive programs that will take us into the future. These systems make use of machine learning approaches such as neural networks, extreme gradient boosting and causal forest modeling.
What Separates the Old from the New?
Several of the more sophisticated CDS tools take advantage of neural networks. These software constructs were designed to mimic the functionality of the human brain, with its neurons, synapses, axons and dendrites. The human neural network is capable of taking in information from its surroundings, interpreting it and then responding with a series of “outputs,” or instructions. Similarly, artificial neural networks accept inputs, which they process through several layers of artificial neurons, or nodes. Eventually, these nodes generate an output signal that can be used to augment diagnostic and treatment decisions. The process is illustrated in Figures 1A and 1B.
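In code, the basic idea can be sketched in a few lines. The example below is a minimal illustration written for this article rather than any vendor’s production model; the layer sizes, weights and input values are arbitrary, and the sigmoid activation is just one common choice.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes any number into the range 0 to 1, a common activation function.
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized weights, for illustration only.
W1 = rng.normal(size=(4, 8))   # 4 input features feed 8 hidden nodes
W2 = rng.normal(size=(8, 1))   # 8 hidden nodes feed 1 output node

def forward(x):
    hidden = sigmoid(x @ W1)       # hidden layer: each node weighs and combines the inputs
    output = sigmoid(hidden @ W2)  # output layer: a single score between 0 and 1
    return output

x = np.array([0.2, 0.7, 0.1, 0.9])  # a made-up input vector
print(forward(x))                   # the network's output signal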
Neural networks have been used to improve the diagnosis of diabetic retinopathy and melanoma and to help identify patients at high risk of complications from sepsis, heart disease and other conditions. In the case of skin cancer, the algorithms are linked to a digital camera and are trained to distinguish between normal moles and malignant lesions by evaluating hundreds of thousands of images. As Figure 1A illustrates, the network begins by examining the millions of pixels that make up an individual skin photo, searching for distinguishing features. It does this by working through a series of layers, each containing nodes. In layer 1, the software may recognize differences between light and dark regions. In layer 2, it may detect differences in the edges of melanomas versus normal moles; cancers typically have irregular edges. Finally, the algorithm may recognize more complex features that separate malignant lesions from normal moles, which are typically round or oval in shape.
As the network analyzes all these features, it assigns them weights based on the strength of their association with previously diagnosed melanomas and non-melanomas, and then makes a final determination for each image (the output stage). During early attempts, the software makes numerous mistakes and mislabels images. Through a process called backpropagation, the program works backward from these mistakes, adjusting the weight of each signal pathway so that the algorithm gradually corrects its conclusions.
Figure 1. (A) A neural network designed to distinguish melanoma from a normal mole scans tens of thousands of images to teach itself how to recognize small differences between normal and abnormal skin growths. (B) During the process of differentiating normal from abnormal tissue, a neural network makes many mistakes. Backpropagation analyzes these mistakes to help the program readjust its algorithms and improve its accuracy. (Source: Cerrato, P., Halamka, J.)
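To make the weighting-and-correction cycle concrete, here is a minimal training loop, again a toy sketch invented for illustration rather than a real dermatology model. Synthetic feature vectors stand in for image pixels, the labels follow a made-up rule, and the network size and learning rate are arbitrary; on every pass the program compares its guesses with the true labels, and backpropagation nudges each weight to shrink the error.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in data: 200 "images", each reduced to 16 numeric features.
X = rng.normal(size=(200, 16))
# Made-up labeling rule, purely so the network has something to learn.
y = (X[:, :8].sum(axis=1) > X[:, 8:].sum(axis=1)).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(16, 8))  # input -> hidden layer weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden layer -> output weights
lr = 0.5                                  # learning rate

for epoch in range(2000):
    # Forward pass: run every example through the layers.
    hidden = sigmoid(X @ W1)
    y_hat = sigmoid(hidden @ W2)

    # Backpropagation: work backward from the errors and adjust every weight.
    error = (y_hat - y) / len(X)              # how wrong each prediction was
    grad_W2 = hidden.T @ error
    grad_hidden = (error @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

accuracy = ((y_hat > 0.5) == y).mean()
print(f"training accuracy after backpropagation: {accuracy:.2f}")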
Homing in on Colorectal Cancer
Not all machine-learning algorithms rely on neural networks. A colorectal cancer screening tool created by Medial EarlySign, for example, uses extreme gradient boosting (XGBoost) to fuel its predictive engine. XGBoost is a more advanced form of multiple additive regression trees that relies on several complex mathematical calculations and takes advantage of the distributed, multithreaded processing power available in today’s computing environment. The algorithm forms the backbone of a commercially available tool called ColonFlag.
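Medial EarlySign has not published ColonFlag’s internal design, so the sketch below is only a generic illustration of how a gradient-boosted screening model can be built with the open-source xgboost library. The feature names (age, sex and a few routine blood-count values) and the synthetic labels are placeholders invented for the example, not the product’s actual inputs.

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(42)

# Placeholder features for 5,000 synthetic patients.
n = 5000
X = np.column_stack([
    rng.integers(40, 90, n),   # age in years
    rng.integers(0, 2, n),     # sex coded 0/1
    rng.normal(14, 2, n),      # hemoglobin
    rng.normal(90, 8, n),      # mean corpuscular volume
    rng.normal(250, 60, n),    # platelet count
])
# Toy "high-risk" rule used only to generate labels for the demonstration.
y = ((X[:, 2] < 12) & (X[:, 0] > 60)).astype(int)

# A small ensemble of boosted decision trees, each one correcting the last.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Score a new (made-up) patient; a high score would flag them for follow-up.
new_patient = np.array([[72, 1, 11.2, 78.0, 310.0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted risk score: {risk:.2f}")

In practice, such a model would be trained on large sets of historical patient records and validated before its scores were allowed to influence care.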
One of the weaknesses of the Kaiser Permanente study that evaluated ColonFlag was its retrospective design. Looking back in time is never as reliable as a prospective analysis for detecting a true cause-and-effect relationship or for establishing a direct impact on clinical outcomes. Unfortunately, this is a shortcoming of many recent studies that support the role of machine learning in healthcare.
Stroke and Sepsis May Yield to AI Software
Stroke is a major cause of disability in the United States and the fifth leading cause of death.
Like strokes, sepsis can prove devastating to patients who are not promptly diagnosed, affecting more than 700,000 Americans annually and costing over $20 billion a year. The Duke Institute for Health Innovation has been developing a machine-learning program aimed at earlier detection of the condition.
The Duke initiative joins other innovators in the specialized area of medical informatics. Several traditional risk scoring systems are already in place in U.S. hospitals to help detect severe sepsis, including the Sequential Organ Failure Assessment (SOFA), the Systemic Inflammatory Response Syndrome (SIRS) criteria and the Modified Early Warning Score (MEWS). These are essentially hand-coded checklists, as the sketch below illustrates. But a machine-learning system evaluated in a clinical trial by Shimabukuro and colleagues shows how a model that learns its own thresholds from patient data can improve on this approach.
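For contrast with a learned model, here is roughly what one of those traditional, rule-based scores looks like in code. This is only a sketch using commonly cited SIRS thresholds, simplified (the full criteria also consider arterial carbon dioxide and immature white cell forms); it is not any particular hospital’s implementation.

from dataclasses import dataclass

@dataclass
class Vitals:
    temperature_c: float     # body temperature in degrees Celsius
    heart_rate: int          # beats per minute
    respiratory_rate: int    # breaths per minute
    wbc_thousands: float     # white blood cell count, thousands per microliter

def sirs_criteria_met(v: Vitals) -> int:
    # Count how many of the four (simplified) SIRS criteria are met.
    criteria = [
        v.temperature_c > 38.0 or v.temperature_c < 36.0,
        v.heart_rate > 90,
        v.respiratory_rate > 20,
        v.wbc_thousands > 12.0 or v.wbc_thousands < 4.0,
    ]
    return sum(criteria)

patient = Vitals(temperature_c=38.6, heart_rate=104, respiratory_rate=24, wbc_thousands=13.5)
count = sirs_criteria_met(patient)
if count >= 2:   # two or more positive criteria conventionally trigger a flag
    print(f"{count} SIRS criteria met: flag for possible sepsis")
else:
    print(f"{count} SIRS criteria met: below the conventional threshold")

A machine-learning system replaces fixed cutoffs like these with patterns learned from large numbers of patient records.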
The machine-learning program was incorporated into the hospital’s electronic health record (EHR) system, eliminating the need for clinicians to move outside the patient record to access a separate system, an obstacle that often impedes physicians and nurses. The vital signs and related lab results needed to conduct the assessment were readily obtained from the EHR, which in this trial was APeX from Epic. Shimabukuro and colleagues also note that, “Patients in the experimental group additionally received antibiotics an average of 2.76 hours earlier than patients in the control group and had blood cultures drawn an average of 2.79 hours earlier than patients in the control group.”
A Word of Caution
Although numerous research projects have shown that AI-enhanced systems hold great promise and will likely usher in a new era in patient care, these programs have their shortcomings. Besides the fact that many rely on retrospective analysis of patient records, there is also concern about the quality of the data sets being used to train these algorithms. If the data feeding these algorithms are not truly representative of the patient populations they are intended to serve, the positive results being published can be misleading.
In one case that stands out, a model based on neural networks was used to demonstrate that it could help detect pneumonia through its interpretation of chest X-rays. The project used pooled data from two large hospitals, but when the researchers tried to replicate the findings using data from a third hospital system, the attempt failed.
About the Authors
Paul Cerrato has more than 30 years of experience working in healthcare as a clinician, educator and medical editor. He has written extensively on clinical medicine, clinical decision support, electronic health records, protected health information security and practice management. He has served as editor of Information Week Healthcare, executive editor of Contemporary OB/GYN, senior editor of RN Magazine and contributing writer/editor for the Yale University School of Medicine, the American Academy of Pediatrics, Information Week, Medscape, Healthcare Finance News, IMedicalapps.com and Medpage Today. HIMSS has listed Mr. Cerrato as one of the most influential columnists in healthcare IT.
John D. Halamka, M.D., leads innovation for Beth Israel Lahey Health. Previously, he served for over 20 years as the chief information officer (CIO) at the Beth Israel Deaconess Healthcare System. He is chairman of the New England Healthcare Exchange Network (NEHEN) and a practicing emergency physician. He is also the International Healthcare Innovation professor at Harvard Medical School. As a Harvard professor, he has served the George W. Bush administration, the Obama administration and national governments throughout the world, planning their healthcare IT strategies. In his role at BIDMC, Dr. Halamka was responsible for all clinical, financial, administrative and academic information technology, serving 3,000 doctors, 12,000 employees, and 1 million patients.