A look at the complex mathematics shaping the future of health.
Artificial intelligence (AI) and machine learning are being used liberally in many areas of our lives. AI now helps courts to predict who is likely to commit a crime or become a repeat offender, helps employers determine the most appropriate candidate for vacant positions, controls the facial recognition security features on our smartphones, and even regulates the temperatures in our homes. In healthcare in particular, AI is exploding.
According to Forbes, the total public and private sector investment in healthcare AI is expected to reach $6.6 billion by 2021, and the top AI applications may result in annual savings of $150 billion by 2026. When you compare the predicted savings with how much is being invested, AI seems like a no-brainer.
Of course, the “brains” powering all the healthcare AI are the algorithms—rules created and input by humans that tell the machines what to do and how to learn. Yet trouble begins when we don’t have enough transparency into the algorithms behind the AI—when we don’t understand the rationale (and potential biases) behind the rules we are giving our machines.
AI and machine learning are cropping up in healthcare in countless ways, from billing to administrative work to drug development to predictive analytics. Healthcare providers can use AI to deliver routine pathology or radiology results more quickly, health insurers can better understand and respond to customers contacting call centers… and pharmaceutical companies can use AI to automate drug responses, business consulting firm PwC recently reported.
“Artificial intelligence’s transformative power is reverberating across many industries, but in one—healthcare—its impact promises to be truly life-changing. From hospital care to clinical research, drug development and insurance, AI applications are revolutionizing how the health sector works to reduce spending and improve patient outcomes,” according to Forbes.
In fact, AI is permeating healthcare so rapidly that in April 2019, the FDA issued its first-ever report that offers a framework for regulating AI in medicine, titled, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device.”
Then-FDA Commissioner Scott Gottlieb, M.D., acknowledged the challenge of algorithms in healthcare in the report, writing, “A new approach to these technologies would address the need for the algorithms to learn and adapt when used in the real world. It would be a more tailored fit than our existing regulatory paradigm for software as a medical device.”
Even as we have become more and more comfortable relying on machines to either augment or completely replace people when it comes to tasks from cooking our food to driving our cars to performing complex surgeries, the AI revolution has not been all smooth sailing. The fact remains that algorithms are built by humans, and humans are naturally prone to both bias and error. Our machines and our machine learning are not infallible because we are not infallible.
“With all of the new technology that is exploding at the moment, especially artificial intelligence and machine learning, there is this question about how much we’re trusting those machines and how much power we are giving over to them, and how much our human flaws around trust and power will come into play in the future,” according to a 2018 National Geographic article.
One area in which AI hasn’t quite lived up to expectation, for example, is in the criminal justice system. Algorithms are being used to derive risk assessment scores that help predict whether a criminal is likely to commit another offense. Judges then use these automated scores as factors in the severity or leniency of their sentencing. But as a 2019 article in MIT Technology Review explains, “Modern-day risk assessment tools are often driven by algorithms trained on historical crime data … So, if you feed it historical crime data, it will pick out the patterns associated with crime. But those patterns are statistical correlations—nowhere near the same as causations.”
The consequence? “Now populations that have historically been disproportionately targeted by law enforcement … are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.”
These same algorithm bias challenges also exist in healthcare. AI has huge potential to improve health organizations’ business operations and patient outcomes, yet algorithm biases can negatively impact both the business and the patients.
Even more than in most other industries, having incomplete or inaccurate information for AI to learn from in healthcare can mean not just bias creeping into the system, but true medical mistakes in which liability and accountability are called into question.
For example, if AI recommends a particular drug as part of a standard treatment plan and the patient dies from a drug allergy or interaction because the patient record on the back-end of the AI was inaccurate or incomplete, who is accountable? Who is liable? It’s still too early in the game for regulatory pathways to have been fully paved for these kinds of issues, but that absolutely does not mean it’s too early for us to start thinking about them.
One common area in which many healthcare organizations (including hospitals, health systems, and payers) are using algorithms is within their Master Data Management (MDM) solutions. Algorithms power MDM to help match and merge patient records from a variety of disparate sources. With patient data siloed in separate departments within organizations, MDM is essential for ensuring that healthcare organizations have a complete, accurate, shareable view of each patient or member to deliver coordinated, high-quality care. This helps prevent billing errors, duplicate testing, and mistakes such as overlooking medication allergies or comorbidities when making treatment decisions.
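To make the match-and-merge idea concrete, here is a minimal sketch of how weighted, rule-based patient matching might work. The fields, weights, and threshold below are illustrative assumptions for this article, not any MDM vendor's actual algorithm:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    source: str                      # originating department or system
    first_name: str
    last_name: str
    dob: str                         # YYYY-MM-DD
    ssn_last4: Optional[str] = None  # may be missing in some source systems

def match_score(a: PatientRecord, b: PatientRecord) -> float:
    """Each transparent rule contributes a weight toward the match score."""
    score = 0.0
    if a.last_name.lower() == b.last_name.lower():
        score += 0.3
    if a.first_name.lower() == b.first_name.lower():
        score += 0.2
    if a.dob == b.dob:
        score += 0.3
    if a.ssn_last4 and a.ssn_last4 == b.ssn_last4:
        score += 0.2
    return score

def is_match(a: PatientRecord, b: PatientRecord, threshold: float = 0.7) -> bool:
    """Records above the threshold are merged into one master record."""
    return match_score(a, b) >= threshold
```

Because every rule and weight is explicit, a data steward can inspect exactly why two records from, say, the lab system and the billing system were (or were not) merged.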
But without transparency into the algorithms that decide what constitutes a complete patient record, how can you be sure the data is accurate?
Many MDM providers operate like a “black box,” offering no transparency into the algorithms that determine the matching rules that create the final, single patient records. With no intelligence around how the matches were created, healthcare organizations cannot be sure that their patient records are accurate, or that they are suitable for the desired use case. In other words, depending on the intended use case for the patient record (meeting quality measures, diagnostics, billing, etc.), varying levels of completeness may or may not be acceptable, and may have very different consequences. Only by having transparency into those algorithms can the organization determine if the match is viable for the desired use case.
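One way a transparent MDM solution could surface that intelligence is an audit trail that shows which rules fired and compares the total against a use-case-specific threshold. The use cases, thresholds, and field weights here are hypothetical, chosen only to illustrate the point that a match acceptable for billing may not be acceptable for diagnostics:

```python
# Illustrative thresholds: a billing workflow may tolerate a looser
# match than a clinical diagnostics workflow. Values are assumptions.
USE_CASE_THRESHOLDS = {
    "billing": 0.6,
    "quality_measures": 0.7,
    "diagnostics": 0.9,
}

def explain_match(score_components: dict, use_case: str) -> str:
    """Return a human-readable audit of how a match decision was reached.

    score_components maps each matched field to the weight it contributed,
    e.g. {"last_name": 0.3, "dob": 0.3}.
    """
    total = sum(score_components.values())
    threshold = USE_CASE_THRESHOLDS[use_case]
    decision = "MATCH" if total >= threshold else "NO MATCH"
    lines = [f"{field}: +{weight:.2f}" for field, weight in score_components.items()]
    lines.append(f"total {total:.2f} vs {use_case} threshold {threshold:.2f} -> {decision}")
    return "\n".join(lines)
```

With this kind of record, the same pair of records can be accepted for billing yet flagged for review before being used in a diagnostic context, and the organization, not a black box, owns that decision.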
This is why it’s crucial for healthcare organizations to seek out an MDM solution that offers a user-driven, transparent approach to matching algorithms, so they can take accountability for the records that get matched, resulting in trust and confidence in both the business and healthcare decisions they make based on that data.
It may be helpful to compare a person’s understanding of the algorithms and details behind patient matches to the level of detail needed when betting on the Super Bowl. The average viewer could just as easily bet on one team or another, believing that each has a 50% chance of winning the game. Yet if you want to become more educated about the details that make up the actual probability of winning, you would need to take into consideration each team’s win/loss history, previous injuries of individual players, predicted weather on the day of the game, and a thousand other details that might not be obvious to the untrained eye. Suddenly, the odds are no longer 50/50.
In the same way, what are the odds that the details your MDM solution has used to make up a complete patient record are actually a match? And do you want to be in control of how that conclusion was reached?
With patient lives on the line, the quality of your data is not something to gamble on.