Building trust with the use of AI in healthcare

It’s going to take time for the public to understand the use of artificial intelligence in medicine. Dr. Dhruv Khullar of Weill Cornell Medicine outlines ways to foster trust.

When it comes to artificial intelligence in healthcare, Dr. Dhruv Khullar acknowledges that there are risks and threats that must be addressed.

But it’s a mistake to think that healthcare isn’t going to incorporate AI, he told Chief Healthcare Executive® in a recent interview.

“We need to start from the position of viewing this as an opportunity,” Khullar says. “And it is a tremendous opportunity to potentially improve the accuracy of diagnosis, to improve the efficiency of care, to help patients engage in self-care and improve health literacy, to improve, potentially, medical education.”

Khullar practices internal medicine at Weill Cornell Medicine in New York City, where he is an assistant professor of health policy and economics. He’s the director of The Physicians Foundation Center for the Study of Physician Practice and Leadership, and he also studies medical innovations, including artificial intelligence. He wrote about the use of AI in the discovery and development of new drugs in an article published by The New Yorker last week.

He envisions many ways AI will be part of healthcare and health education in the future.

“The goal, I think, that myself and many others are advocating for, is to think really in a nuanced way about how to make the most of that opportunity, and how to mitigate the downsides,” he says.

Accuracy and security

Many Americans still express apprehension when it comes to AI.

A KPMG survey of healthcare consumers found that a majority of Americans like the idea of using AI technologies to answer questions or schedule appointments. However, only a third of participants (33%) said they were optimistic about AI leading to more accurate diagnoses and more effective treatment. In 2023, most Americans said they would be uncomfortable if a doctor used AI in diagnosis and treatment, according to a Pew Research Center survey.

It’s going to take time to build public trust in AI in medicine, and Khullar says it will also take real work.

“We're not doing a great job with trust, generally, either in the medical establishment or in society,” he says. “And so we know that public trust in healthcare institutions and governmental institutions, in the private sector, all those areas, has been declining over the past few decades. So we really are starting from a tough position.”

In addition to educating the public, Khullar says it’s going to be important to help clinicians understand the potential benefits and limitations of AI tools.

“We make these types of risk-benefit calculus all the time in healthcare, and so that in a general way should be kind of the framework that we're using, is that no medical intervention is free of risk, really. But we use it in cases where we think that the benefits far outweigh the risks,” he says.

Khullar outlines key principles for building trust in AI in healthcare. He also co-authored an article on the subject, published in the Journal of General Internal Medicine in February.

He says it’s critical to ensure that AI models are accurate and that they follow the best available medical evidence. AI models also need explainability, he says.

“Not only do we want these things to be accurate, but we want them to be able to explain or at least understand, to some extent, why they're making the recommendations,” he says.

It’s also essential for AI models to be secure.

“These models are often trained on a lot of data, a lot of online data, and there is the potential to introduce errors into that purposefully or otherwise,” he says. “And so we want to make sure that the models that we're using, that they're secure, that they respect people's privacy, and that there needs to be a lot of work to ensure that we know that the models are in fact secure, if we're going to use them in a broad way in healthcare.”

Health equity concerns

Some healthcare leaders have emphasized concerns that AI technologies could perpetuate bias and discrimination in medicine. Researchers found that chatbots such as OpenAI’s ChatGPT and Google’s Bard offered health information reflecting racial bias, according to findings published last October in npj Digital Medicine.

In the use of AI, health equity concerns are paramount, Khullar says.

He notes that training data for AI models needs to be representative of the population the model will be used on, just as with other medical interventions.

“If you think about testing a new drug, or a new device, you want to make sure that the people that are going to receive that are in some way, have been represented in the trials,” Khullar says. Data used to train certain models hasn’t always been representative of the patient population, he says.

It’s also important to recognize that disparities persist in healthcare.

“Health care right now is not equitable,” Khullar says. “And so we still need a lot of work, AI aside, to make sure that we are caring for people in a way that is both safe, accurate, but also that it's equitable, that people who need extra help, that people who have been marginalized in the past, they receive just as high quality care as everyone else.”

‘Humans in the loop’

Khullar says he’s excited about AI’s potential to truly personalize medicine, to help improve diagnosis and treatment of certain conditions.

In the near future, he sees AI’s more immediate benefits in reducing the administrative burden on clinicians, automating tasks that siphon time, and happiness, from doctors and nurses.

But he doesn’t see scenarios where AI will replace physicians.

“We need always to have humans in the loop,” he says.

While AI can support a diagnosis, treatment decisions will ultimately have to be made by the doctor and patient.

“There will always be a very central role, I think, for physicians in this,” he says. “The technology that we use may become more powerful over time, but that doesn't mean it's going to displace the physician.”

Khullar says he doesn’t think physicians are at a place where they are “just taking the recommendation of the AI and, you know, automatically passing that along to the patient. There's a lot of work that's done in between.”

With many Americans unfamiliar or uncomfortable with AI, Khullar says physicians should consider talking with patients and letting them know when AI technology is being used to support a diagnosis, even if it’s a supplementary tool.

“In the short term, as AI is still the novelty that it is, in some ways, there is a case for more disclosure and more transparency around its use,” Khullar says.

