January 22, 2026

Using AI to improve patient safety

By Ron Southwick

Healthcare leaders say artificial intelligence offers the potential to protect patients and avoid harmful events that can be prevented.

While more hospitals and health systems are using artificial intelligence to streamline administrative work, healthcare leaders are seeing the potential for AI to help protect patients.

Many hospitals are cautious about expanding AI into clinical uses, primarily out of concern for patient safety. But AI is already being used to help physicians in imaging and in spotting potential problems.

Leah Binder, president and CEO of The Leapfrog Group, an organization that focuses on hospital safety, sees the potential for AI to help doctors and hospitals protect patients and avoid mistakes.

“I don't see AI being deployed directly on patient safety as much as I'd like,” Binder tells Chief Healthcare Executive®.

More hospitals and health systems are implementing AI-powered documentation tools that record patient visits and provide summaries of the conversations. Physicians say those tools are helping them spend less time on documentation, and Binder says that’s a step toward helping both clinicians and patients.

“There's indirect ways that I think AI right now is going to have an impact. So for one thing, I think it's going to have an impact on clinician burnout,” she says.

“There's a lot of clerical work, let's put it that way, that's involved now in being a clinician, and sometimes tremendously burdensome, and it is something that I think is leading to a lot of burnout or just frustration and just difficulty in the work setting. And I think AI is going to be able to reduce that. It's already reducing a lot of that burden now,” Binder says.

Still, she says The Leapfrog Group is pushing for wider adoption of AI to help protect patients.

Tracking sentinel events

There’s growing attention on the role of AI in patient safety.

The National Academy of Medicine in December announced the creation of a steering group examining patient safety in the age of AI. The academy says the group will engage in a two-year effort beginning in the spring of 2026. The project comes a quarter of a century after the National Academies’ landmark report, “To Err is Human,” which found tens of thousands of Americans die every year due to medical errors in hospitals.

With its AI initiative, the academy says the working group “will examine how AI can be responsibly and effectively deployed to strengthen core safety practices, anticipate risk, empower clinicians and patients, and close longstanding gaps in performance across care settings.”

Binder is one of a host of healthcare leaders serving on the group. “I'm very excited that they're doing that,” she says.

Aside from easing documentation burdens on doctors and nurses, Binder says health systems could be using AI tools to track trends within hospitals.

Binder says she would like to see the wider use of AI tools “to track sentinel events and adverse events.

“AI has the capacity to look at all of those and come up with a summary that will lead to sort of root causes of problems that are happening, and very quickly and in real time even. And that is exciting,” Binder says.

Binder points to the use of AI to summarize where similar infections are occurring in different parts of the hospital and determine the source of the pathogen.

Still, a recent Sage Growth Partners survey of healthcare executives revealed conflicting views on incorporating AI into patient care.

Most executives surveyed (83%) say they think AI can improve clinical decision making. But only 13% say they have a clear strategy for integrating AI into clinical workflows.

Dan D’Orazio, CEO of Sage Growth Partners, told Chief Healthcare Executive® in an August 2025 interview that some healthcare leaders remain wary of the expanded use of AI in the clinical space, even as they acknowledge that medical errors are happening at the hands of clinicians.

“Humans make a lot of mistakes, yes, but maybe we feel like we can control the scale of the mistake at a unique level. Because if AI can make things better, AI can also make things bad happen faster,” he says.

But D’Orazio sees promise in AI tools helping doctors keep up with the latest research to make better decisions. “I think that's really positive, because there's just no way they can keep up with it,” he says.

Keeping humans in the loop

Some physicians who are reluctant to adopt AI for clinical uses are worried about potential unforeseen harm to patients, says Dr. Nele Jessel, chief medical officer of athenahealth.

“I think that is the big fear that clinicians have, that the AI will do something that will compromise patient safety,” Jessel says.

The majority of America’s doctors are using AI in some fashion. An athenahealth survey found more physicians are adopting AI tools and showing enthusiasm for them.

Two out of three doctors (66%) say they are using AI in their practice, according to an American Medical Association survey released last year. But that survey also found doctors still have some concerns, and some want more federal oversight of AI-enabled medical devices.

There are significant concerns about using AI technology improperly. ECRI, a patient safety organization, named the improper use of chatbots in healthcare as the top patient safety threat in its 2026 list of the most significant health technology hazards. ECRI released the list Wednesday.

But Jessel says AI tools may contribute to patient safety by helping doctors more quickly find the relevant information in patient records.

“It's no secret that the U.S. in particular has a very high medical error rate,” Jessel says. “So in some ways, AI may actually help that, because the AI doesn't get tired. So even after you've worked 10 hours in the OR and you're still seeing patients in your clinic, the AI may actually help you not make a mistake, by alerting you … ‘Hey, did you see this? Or no, this doesn't look right.’”

“So I think in a way, it is more likely that AI will help reduce patient errors, than increase patient errors,” Jessel says.

Still, she says humans need to be in the loop, and patient care decisions can’t be delegated solely to AI tools.

“Whichever AI technology we develop on the clinical side, it is paramount that there is a human between the technology and the patient who has the ultimate final sign-off and review,” Jessel says. “So I would never want to jump to having the AI make independent clinical decisions. I think we're a long way away from that. It can suggest things, but a human provider has to sign off on it.”
