
Chatbots rank as most dangerous health technology hazard
ECRI placed chatbots atop its annual list of hazards. Dr. Marcus Schabacker, ECRI’s CEO, talks with Chief Healthcare Executive about the problems with chatbots.
Each year, ECRI, a patient safety organization, releases a list of the 10 most dangerous health technology hazards. This year, the nonprofit group placed chatbots at the top of its list.
Dr. Marcus Schabacker, ECRI’s CEO, spoke with Chief Healthcare Executive about the risks of general-purpose chatbots in medicine.
“Those things weren't designed at all for medical purposes, so that, number one is a concern,” Schabacker says.
“I think they have been so normalized, they have been so common practice for day-to-day activities, and that we have a concern that they have sort of found their way into medical practice as well,” he says.
‘Use them appropriately’
ECRI has previously raised concerns about the use of AI in its annual ranking of hazards in the healthcare industry, including in its 2024 list.
Part of the problem with using chatbots to answer medical questions is that the responses can sound “quite convincing,” Schabacker says.
But he worries about their use because chatbots don’t hedge their answers or acknowledge when there is relatively scant data behind a response. They don’t say that an answer is hazy or that they aren’t quite sure.
“It's not based on prior investigations, clinical research,” he says.
Schabacker says general chatbots have the potential to help patients.
“We don't want to demonize chatbots for medical use,” he says. “Quite the opposite. We think that they could become very useful, but we need to use them correctly.”
But he says chatbots need to be viewed with caution, and with the understanding that guardrails are lacking.
“We need to use them appropriately, and we need to understand that they were never, ever developed for specific medical use, so they don't have any particular safeguards in place,” he says. “They don't have any particular design input criterias which need to be met. They have not been tested, and they have definitely not been developed by people who have a medical background.”
It’s not just patients using chatbots. Schabacker says he’s worried about the way some clinicians are using those AI-powered tools.
“As a clinician, if you use a general non-medical chatbot, you’ve just got to be very, very sensitive to that. You need to verify and validate that data, what you're getting out through another independent source,” he says.
Even comparing responses from other chatbots could be a useful step, he suggests. If three different chatbots offer similar responses, there’s still no guarantee those answers are correct, Schabacker says, but that approach, combined with additional research, could lead to better answers.
But if different chatbots give different answers, he says, “that's definitely an indication that you're on the wrong track.”
That loud uncle
Patients and clinicians should view chatbots as the vocal relative at holiday gatherings who proudly proclaims his views, whether or not they are based in fact.
Schabacker says chatbots are similar to “that uncle at the Thanksgiving table who's very opinionated about something and really very convincing and very kind of commanding in their opinions and statements.”
“And you're like, No, that's nonsense … Because it's just not verified. It's not substantiated by facts. It's just regurgitating somebody's opinion. And so that's how you should treat a chatbot when you use them for medical use,” he says.
He also compares using chatbots to asking for a recipe to prepare a meal. The chatbot may provide the correct recipe, but that doesn’t mean a user will deliver a dish with the skill of a trained chef.
“The chatbot cannot replace experience, contextual knowledge, cannot replace people who have been doing this for a long, long time,” Schabacker says.
“These things need to be put into context,” he adds. “And think about this. An intern who's going to have the same diagnostic data, did the same investigation, has the same medical notes, they are not going to come to the same conclusion as somebody who has been doing this for 20 years. So it takes more than just knowing some of the ingredients.”
Top 10 hazards
Here’s the full list of ECRI’s top 10 health technology hazards.
- The misuse of AI chatbots
- Lack of preparation for a ‘digital darkness’ event, such as loss of access to electronic health records and other key systems
- Substandard and falsified medical products
- Recall communication failures for home diabetes management technologies
- Misconnections of syringes or tubing to patient lines
- Underutilizing medication safety technologies in perioperative settings
- Inadequate device cleaning instructions
- Cybersecurity risks coming from legacy medical devices
- Health technology implementations that lead to unsafe clinical workflows
- Poor water quality used in instrument sterilization