Shadow AI: A growing risk for hospitals

Many organizations don’t have policies to prevent the use of AI without employer approval. Limor Kessem of IBM explains that it’s an area needing more attention.

Healthcare organizations are finding themselves vulnerable to cybersecurity incidents due to AI.

More specifically, organizations are running into problems because they lack guidelines or controls for how AI is used.

The healthcare industry sees the most expensive breaches of any sector, according to a report released last month by IBM. The average healthcare data breach cost $7.42 million.

The report also noted the rise of breaches tied to “shadow AI,” in which employees use AI tools that haven’t been approved by their organizations. Many organizations aren’t tracking those uses.

Limor Kessem, IBM Consulting’s global lead for cyber crisis management, told Chief Healthcare Executive® that organizations need to be paying more attention to shadow AI.

Organizations need to guard against people using “unvetted stuff in unvetted ways,” she says, adding that this is a problem in the healthcare sector.

“I think this shadow AI thing is a big deal, because people tend to do it without thinking, just wanting to speed up their work, wanting to get things done, and it just happened,” Kessem says.

“You just uploaded a bunch of company data, even if you just typed it in or pasted it,” she says. “It's gone and training an AI system somewhere, and you did not ask your employer, and your security team does not know about this. Nobody really realized what you just did. Those issues, those behaviors, have been leading to data breaches that are very sprawling.”

Such behavior is understandable, and doesn’t come from a nefarious place, Kessem acknowledges. But it can lead to trouble.

“People are trying to make their work more productive, trying to work faster and be more efficient,” she says. “Then without knowing and without having any guardrails in terms of what can we use, and how can we use it, it could become extremely risky.”

In IBM’s report, one in five organizations (20%) said they experienced a breach tied to shadow AI. And those breaches can be more expensive.

Organizations with high levels of shadow AI saw an additional $670,000 added to the cost of their breaches.

In addition, breaches linked to shadow AI had a 65% increase in the exposure of personal information and a 40% increase in exposure of intellectual property data.

“These seemingly, almost innocent actions somebody could take, can be very costly mistakes for the organization,” Kessem says.

So what should health systems do? Kessem says they need to let workers know which AI tools are approved for their use.

“A way to solve it is for companies to start assigning tools they allow or disallow, after doing a risk assessment accordingly,” Kessem says.

But organizations must also drive the message home with employees.

“Over-communicate it, every other day what we're allowed to use, we're not allowed to use,” she says. “Encourage employees to use the things we're allowed to use, and make sure that everybody knows.”
