Healthcare attorney Matt Fisher tells a cautionary tale about a health system that built its own EHR access monitoring tool.
Breaches aren’t just the work of malicious external hackers. Many of the breaches reported to HHS’s Office for Civil Rights (OCR) are actually mishaps or unauthorized access incidents by a healthcare organization’s own employees. In theory, that should make those easier to prevent, right?
The problem with that, as healthcare attorney Matt Fisher explained to Healthcare Analytics News™ this week, is that what you know can sometimes be a problem.
It's Not What You Know, It's What OCR Can Prove
I think that it’s important to learn some security lessons from actual everyday experiences.
A client’s internal IT department developed an auditing tool to automate reviewing what could potentially be inappropriate access to their EMR by employees: Are people accessing records of family members, or records of so-called “VIP patients”? Really, just going into records when they had no reason to be in there.
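The kind of rule-based access monitoring described here can be sketched in a few lines of Python. Everything below is illustrative, not the client's actual tool: the field names, the shared-surname heuristic for family members, the VIP watch list, and the department-mismatch rule are all assumptions. Note how broad the rules are; a crude first pass like this is exactly how a pilot ends up flagging a large share of all accesses.

```python
from dataclasses import dataclass

# Hypothetical access-log record; all field names are assumptions for illustration.
@dataclass
class AccessEvent:
    employee_id: str
    employee_last_name: str
    patient_id: str
    patient_last_name: str
    employee_department: str
    patient_department: str

# Assumed watch list of high-profile ("VIP") patient IDs.
VIP_PATIENTS = {"P-1001"}

def flag_reasons(event: AccessEvent) -> list[str]:
    """Return the red-flag rules this access event trips, if any."""
    reasons = []
    # Crude family-member heuristic: employee and patient share a surname.
    if event.employee_last_name.lower() == event.patient_last_name.lower():
        reasons.append("possible family member")
    # Access to a record on the VIP watch list.
    if event.patient_id in VIP_PATIENTS:
        reasons.append("VIP patient record")
    # No apparent treatment relationship: departments do not match.
    if event.employee_department != event.patient_department:
        reasons.append("no apparent treatment relationship")
    return reasons
```

A billing clerk named Smith opening oncology patient Smith's chart would trip all three rules at once, while legitimate cross-department accesses (consults, coverage, transfers) also trip the third rule, which is one plausible way a naive pilot produces a wall of red flags that then has to be narrowed down.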
The IT department developed that tool, started testing and piloting it live in the system, and all of a sudden they were getting a flood of positive results: accesses that looked like they could have been inappropriate.
Roughly 40% of accesses came back as red flags. That ended up getting reported to their compliance department, and when compliance learned about it, that’s when I got involved.
They were like, “What do we do with this?”
I was like, “Well, now you’ve got to do something: narrow it down and figure out what it actually is, what is and isn’t a problem. If you’re not acting on it, and a breach happens because of one of those, you’re going to get hammered.
If OCR sees all that and you had knowledge that this was going on and you didn’t do anything, that’s setting yourself up to get a fine. And a pretty hefty fine.
Narrow it down, figure it out, are these red flags actually appropriate, or is the system still trying to learn? Was it overly aggressive in what it was identifying?”
They started going down through that, narrowing it down, and figuring out if they actually wanted to use the tool at that point: Was it actually refined enough to be able to use all the time in a fully live environment?
What they learned was that you can’t just turn a tool on live in a system, because the results are going to demand some type of action or response.
The tools are very good, but before you start actually using them live in your system, plan ahead. Think about how you want to use the tool and what you’re going to do when you start getting results. That way, you’ll be ready to proactively look for problems while setting yourself up for success with security.