
Chatbots and bias in medicine
Dr. Marcus Schabacker of ECRI talks about the potential of chatbots to perpetuate bias and worsen disparities in the healthcare industry.
There are many concerns about the use of chatbots in the healthcare industry, says Dr. Marcus Schabacker.
Schabacker is the president and CEO of ECRI, a nonprofit organization focused on patient safety.
In an interview with Chief Healthcare Executive®, Schabacker pointed to the potential for popular chatbots to offer answers that are wrong or unsupported by medical evidence. He also raised the possibility that chatbot answers could reflect racial bias.
“If there's bias introduced somewhere in this process, then that bias will be exaggerated the more the chatbot is used,” he says.
Research of chatbot answers indicates racial bias
Researchers with the Stanford School of Medicine tested large language models to see if they were providing inaccurate and biased information, and the results were troubling.
“Every LLM model had instances of promoting race-based medicine/racist tropes or repeating unsubstantiated claims around race,” the authors wrote.
Schabacker says the potential for chatbots to amplify racial bias is a huge concern.
Those digital deficiencies reflect longstanding failings in medical research, he says.
Most clinical trials in the past were conducted with younger white men, who were relatively healthy.
Schabacker noted that, in the past, some minority groups, including Black and Native American communities, have mistrusted the medical establishment. Healthcare leaders have also acknowledged the need to design trials that include more members of underrepresented groups, and to make it easier for people in disadvantaged communities to participate in those trials.
When studies involve mostly young, white men, Schabacker says, “You have a bias in your clinical trial automatically. You have a bias in your testing.”
So if clinicians use chatbots for research on issues affecting women, children, or seniors, and the chatbot is drawing on data from a study limited to white men, “then automatically you have a bias,” he says.
“You're going to get a wrong answer,” Schabacker says. “And that bias, depending on how you use it … might be perpetuated through the use and it confirms your own bias.”
Schabacker recalls testing a chatbot to see if it could determine his age, and the answers were not correct.
“It was not gravely wrong. It was sort of in the ballpark,” he says.
“But in medicine, a ‘little bit off’ can mean the difference between life and death,” he continues. “So that's where I think we need to be really, really careful, because the bot is not developed to challenge us. The bot is there to support what it thinks we want to hear.”
Healthcare leaders have drawn more attention to the need to guard against bias as hospitals and health systems adopt AI tools such as chatbots.