A conversation with Art Papier, MD, CEO and co-founder of VisualDx.
Artificial intelligence (AI) is making its way into more areas of healthcare. What are some of the pros and cons of using AI?
When used in discrete areas of healthcare, AI has shown great potential, particularly for imaging of all kinds. For example, remarkable progress is being made in using AI and machine learning (ML) to analyze images of complex medical conditions, such as the autonomous diagnosis of diabetic retinopathy. There are also many unproven claims about AI's ability to solve all of healthcare's issues, so amid the hype we have to be careful not to overpromise the value of AI, while also ensuring, first and foremost, that AI is safe.
Another essential component is that end users need to know what the AI is doing and how much they can trust it. Healthcare professionals can't become overly reliant on this technology and use it to replace the human element, as this can result in missed diagnoses and inappropriate care decisions. AI is not a panacea for all of medicine's woes; it is a powerful tool, and one tool of many.
What settings will benefit most from the use of AI?
VisualDx works in the field of clinical decision-making, and we're really excited about the benefits of ML on visible-light and radiological images, where the interpretation of images of all kinds is augmented by computers. The analysis of such images will play a crucial role in allowing radiologists, ophthalmologists, pathologists, dermatologists, and others to get accurate second opinions, make the right diagnosis, and improve outcomes for all patients. Patients also benefit directly from using AI-based symptom checkers and other patient-facing educational tools.
Does the use of AI benefit some populations more than others?
It shouldn't. However, there have been many discussions about, and instances of, AI being trained on one population and then used across many other, more diverse populations. In dermatology, you wouldn't want to train your machine learning algorithms on light skin and then claim that the technology works equally well for patients who have pigmented skin. The training set for AI and ML models has to match the population where the AI is being used. We have to be careful to ensure the work we're doing with AI is equitable and representative.
There has been some controversy involving AI due to reports that some algorithms perpetuate racial bias. What are the consequences of this in healthcare?
On the diagnostic side, the consequences of racially biased AI can be extreme. In dermatology in particular, this happens when the imagery used to train the AI is not representative of pigmented skin. For example, redness of the skin can be a sign of inflammation or serious infectious disease, but inflammation on brown skin doesn't appear red; it usually looks like a deep brown. AI needs to be trained to detect the colors of inflammation and serious infectious disease on all skin tones, or else we risk missing serious infectious diseases in people of color that could otherwise be diagnosed and treated.
What can AI developers do to ensure that AI systems do not contribute to systemic racial inequalities?
Train AI on data sets that are representative of the entire patient population. If the AI is being developed to support a population with a significant Asian, Hispanic, or Black community, for example, those algorithms must be fed data that matches the population’s racial breakdown in order to avoid the bias that is otherwise inevitable.
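One common way to make a training set's demographic breakdown match a target population is stratified sampling. Below is a minimal, hypothetical sketch (not VisualDx's actual pipeline): records are assumed to be dicts carrying a demographic label, and the function draws a sample whose group proportions match the target shares, raising an error when a group is underrepresented in the source data.

```python
import random
from collections import Counter

def stratified_sample(records, group_key, target_shares, n, seed=0):
    """Draw n records so each group's share matches target_shares.

    records: list of dicts; group_key: the field holding the group label;
    target_shares: {label: fraction} that should sum to 1.0.
    Raises ValueError if the source data cannot supply a group's quota.
    """
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    sample = []
    for label, share in target_shares.items():
        k = round(n * share)
        pool = by_group.get(label, [])
        if len(pool) < k:
            raise ValueError(
                f"not enough '{label}' records: need {k}, have {len(pool)}")
        sample.extend(rng.sample(pool, k))
    return sample

# Illustrative data: the source skews 80/20 toward light skin tones,
# while the target population is 60/40.
data = ([{"skin_tone": "light"} for _ in range(800)]
        + [{"skin_tone": "dark"} for _ in range(200)])
balanced = stratified_sample(data, "skin_tone",
                             {"light": 0.6, "dark": 0.4}, 100)
print(Counter(r["skin_tone"] for r in balanced))
```

The error on an undersized pool is deliberate: simply duplicating scarce records hides the gap, whereas failing loudly signals that more data from the underrepresented group must be collected.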
When making decisions on whether to incorporate the use of AI technology, what questions should health systems be asking?
Health systems need to understand the quality of the data that the AI is trained on. System developers need to instill a level of trust that they've taken all measures and precautions to ensure the training sets for AI and ML are large enough, high quality, equitable, and free of bias. Health systems can't rush into adopting AI; they first must be able to ensure the AI is explainable and safe. Decisions should not focus singularly on AI but should start by asking, 'How can we augment decision-making through medical information technology?' The answer could be a blend of AI and other clinical decision support tools. AI is only one component of many in a growing digital health technology ecosystem.