"My particular joy is in innovation, and trying to innovate before the problem occurs, before the pain is really felt, and having a solution ready for when it does hit," Suit says.
To provide perspective on the personalities, products, and mindsets at work at the intersection of healthcare and information technology, Healthcare Analytics News will feature a recurring series of CTO Q&As.
For the inaugural edition of this pursuit, the respondent was John Suit, Chief Technology Officer at Trivalent. Trivalent is a data security company that uses file-level data protection to ensure the safety of sensitive information, and has received Commercial Solutions for Classified (CSfC) certification for its efforts. The technology has been used in applications ranging from military to medicine, and is “the first and only NIAP-certified data at rest (DaR) protection solution allowing users to store and access sensitive and classified data, Top Secret and Secret, on mobile devices.”
In light of the constant risk of security breaches, particularly for healthcare systems, as evidenced by the recent WannaCry attack, we opened a dialogue with Suit.
First, we appreciate you talking to us. Can you speak of your own history in the industry?
I’ve been in information security since 1996. A lot of my work has been specifically around not just machine learning, but machine learning where you could act on it in real time. When I got the opportunity to talk to the guys that were doing this company, I was CTO of a company called Xceedium, and we had sold that company to CA Technologies. I had been on the advisory board of this company for about four and a half years, and they thought I might be able to contribute to building systems that aren’t checkbox systems, but are built to actually do the protecting, and do things like shredding data and the crypto so it was at a level where you could use it for things like top secret data.
My particular joy is in innovation, and trying to innovate before the problem occurs, before the pain is really felt, and having a solution ready for when it does hit. That’s my particular area of interest, and what I’ve been doing for the last 20 or so years.
On that level, what is the philosophy behind Trivalent’s technology, and how did it come together?
We had originally worked with the federal government to build a solution that would protect information, data at rest, completely transparently: I don’t want to impact the user, I just want it to work, I don’t want them to have to do anything special or put something in a vault, or do anything that they wouldn’t ordinarily do, but I still want them to have the maximum protection possible. It also had to work in a connected or disconnected mode, so I’d be able to have devices protected that weren’t necessarily connected to the internet. There also had to be no discernible latency, so whatever we’re doing to protect information, it can’t slow it down or make it harder for me to do my work.
The final requirement was it had to be CSfC certified, which stands for Commercial Solutions for Classified. It had to be able to run top secret data just by having our stuff installed on it. Those are some pretty high bars, and nothing out there existed that could meet all of those items. And at the end of the day it had to actually protect data, and do it for real.
That was what we spent our time doing, building, fielding, and providing: a system that I can have on a mobile device, on a Windows box or a laptop or a medical device that allows, whether it’s in use or resting, the same level of protection as other solutions would have had if they were completely shut off and the disk was encrypted.
Homing in further, give us as much of a primer as you can on how it works.
I’ll try not to get too commercial-y here. What we do is we take data, we encrypt the file, we shred it into little pieces and we distribute those shreds. If someone is going out of their way to steal your data, they’re not going to have access to it because it’s in bits and pieces, like a very elaborate puzzle where you can’t see the picture. It’s encrypted to such a level that it’s considered information-theoretically secure, which means if you had unlimited computing power, you still couldn’t break it. That puts us in a very small class of crypto algorithms.
Let’s say you’re using Word, and you type something up and hit “Save As.” In our system, what’s happening is that Save call is intercepted by a certified device driver that will then take what would have been that file and create a tombstone file, which looks like a real file and the operating system thinks it is, but it’s symbolic data. It takes the actual data, encrypts it, shreds it up into little pieces, and distributes those pieces. You can distribute those pieces all on the same file system, throughout an enterprise, in the cloud, whatever. If ransomware goes in and issues a delete, what will happen is it will delete the tombstone files, not the real data.
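The shred-and-tombstone flow Suit describes can be sketched in a few lines. This is only an illustration, not Trivalent's implementation: the function names and the `.shredN` suffix are hypothetical, and the XOR split below is simply the textbook information-theoretically secure scheme, where any subset of shreds short of the full set is indistinguishable from random noise.

```python
import secrets


def shred(data: bytes, n: int) -> list[bytes]:
    """Split data into n shreds: n-1 random pads, plus the XOR of the
    data with every pad. Missing even one shred, the rest reveal
    nothing about the plaintext."""
    shreds = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = bytearray(data)
    for s in shreds:
        for i, b in enumerate(s):
            last[i] ^= b
    shreds.append(bytes(last))
    return shreds


def reassemble(shreds: list[bytes]) -> bytes:
    """XOR all shreds back together to recover the original data."""
    out = bytearray(len(shreds[0]))
    for s in shreds:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)


def save_with_tombstone(path: str, data: bytes, n: int = 4) -> None:
    """Intercepted 'Save': write the shreds out (here, alongside the
    file; in practice dispersed across disk, enterprise, or cloud) and
    leave a tombstone at the original path holding only references."""
    refs = []
    for i, s in enumerate(shred(data, n)):
        ref = f"{path}.shred{i}"  # hypothetical storage location
        with open(ref, "wb") as f:
            f.write(s)
        refs.append(ref)
    with open(path, "w") as f:  # the file the OS (and malware) sees
        f.write("\n".join(refs))
```

Deleting `path` removes only the symbolic tombstone; every `.shredN` piece survives, which is the property that matters when ransomware issues a delete.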
If you have all this super-high-powered data protection, and encryption at such a level that you can run top secret information on a device with our software, we’ve got to spend a lot of time making sure that’s not a pain to use. If a medical professional needs to use a specific device running Windows in kiosk mode, I need to make sure they don’t have to do anything differently.
How extensively does Trivalent deal with healthcare companies, and what challenges are unique to healthcare IT?
Pretty extensively, and there are things that are unique to healthcare, just like any vertical market. Healthcare has compliance issues that you don’t have in any other market, and there’s a specific sensitivity because of HIPAA, and I believe people at this point are overly cautious, though I’m not saying unnecessarily so, about exposing PII data and health records. Where they might have had a set of doctors, for example, collaborating to find a cure for something, they may not feel they can have the data isolated in such a way that would be compliant.
That’s something we didn’t realize when we started to deploy our systems. You can actually put the data in a mode where people can collaborate, and you can isolate who specifically has access to it. If you had an enterprise network, you could assign who has access to directories and applications. In this case, what we’re doing is the same kind of thing, but we’re doing it for data, so we’re saying ‘these two users for the next two weeks can collaborate on this data type for this reason, and after that time’s up, I’m going to revoke your access to that data, no matter where it migrated to.’ Even if it migrated to some system on some researcher’s laptop, where normally you’d be freaking out because you can’t account for that, you can revoke their access to the data itself so that it’s never recoverable. Now I can enable people to actually collaborate, where before they were much more apprehensive because they weren’t sure how to stop the data from moving to a particular place, because the security was attached to the device. Now they can control it at the data level.
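The "two users, two weeks, this data type" idea can be made concrete with a minimal sketch. All names here (`DataGrant`, `can_access`) are hypothetical, and a real system would enforce this by withholding decryption keys wherever the data migrates, not by a Python check:

```python
from datetime import datetime, timedelta, timezone


class DataGrant:
    """A grant tying specific users to a data type for a time window.
    Because the policy travels with the data rather than the device,
    revoking the grant cuts off access even after the data has
    migrated to, say, a researcher's laptop."""

    def __init__(self, users, data_type: str, duration: timedelta):
        self.users = set(users)
        self.data_type = data_type
        self.expires = datetime.now(timezone.utc) + duration
        self.revoked = False

    def can_access(self, user: str, data_type: str) -> bool:
        # Right user, right data type, inside the window, not revoked.
        return (not self.revoked
                and user in self.users
                and data_type == self.data_type
                and datetime.now(timezone.utc) < self.expires)

    def revoke(self) -> None:
        """Invalidate the grant everywhere the data has gone."""
        self.revoked = True
```

For example, `DataGrant({"dr_a", "dr_b"}, "trial_data", timedelta(weeks=2))` lets exactly those two users collaborate on that data type until the window closes or the grant is revoked.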
Can we talk about the AI possibilities here?
I was talking to somebody from a company that does this in the UK: they have people who go out to patients who can’t make it into the medical facility, take down all their personal information, and have all the data residing on these devices. The devices typically operate in kiosk mode, and you can’t go outside of that. The simplest AI for something like this would be a system where any time it receives a command that is not available to the application in kiosk mode, it needs to either not do that thing or ask the user to re-authenticate to allow that activity. Specialty devices in kiosk mode don’t have a lot of business acting like the user, but in WannaCry, ransomware went through the task scheduler as the user. It would have been really nice if something had said ‘I didn’t see you drag your mouse to do any of this, and frankly none of those functions are in this application, but you’re still finding a way to delete a bunch of files. Are you sure you want to do this?’ Something like that would’ve been really cool: basically, an AI driver that determines when a user’s activity on the UI itself does not reflect what’s being done on behalf of the user.
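At its simplest, the driver Suit imagines is a policy check: compare what the system is being asked to do against what the kiosk application can legitimately do and what the user visibly did. A toy version, with all names hypothetical:

```python
# Commands the kiosk-mode application legitimately exposes (example set).
KIOSK_ALLOWED = {"open_record", "save_record", "print_summary"}


def guard(command: str, recent_ui_events: list, reauthenticate) -> bool:
    """Allow a command only if it belongs to the kiosk app AND there was
    matching user activity on the UI. Otherwise (e.g. a mass delete
    issued through the task scheduler, as WannaCry did), require the
    user to re-authenticate before it runs."""
    if command in KIOSK_ALLOWED and recent_ui_events:
        return True
    # No mouse drag, no keystroke, or a command outside the app:
    # fall back to an explicit "are you sure you want to do this?"
    return reauthenticate()
```

Here `guard("delete_files", [], lambda: False)` blocks a scheduler-driven delete, because no UI events accompanied it and the user declined to re-authenticate.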
You mentioned the WannaCry attack, which is obviously a big reason why we gave you a call. Do you think you could sum it up and pick it apart a little bit? What deficiencies do you think led to it taking hold?
Specifically, this one was a mass attack, ‘let’s send this thing out there either through phishing or through outdated security measures and see how much we can get.’ It really did well on systems running XP, which, as you know, is an old system, and in enterprises it’s extraordinarily old. In the device community, that stuff’s not often updated, and it’s a giant pain to get authorizations to update devices; with healthcare systems, the authorization chain you need to update systems running live data on them is pretty hard to navigate. There was some susceptibility there.
Summarizing from the analysis we’ve done, what it really does is it gets on a system, it mimics the privilege of a user, and it finds a way to get access to your files through the SMB protocol, creating a new service through the task scheduler. What they actually did was read your files, generate a new file, encrypt the new file, and then delete the old file. I think what a lot of people thought was that they were just encrypting your data, but that’s not what was happening there. That’s a specific nuance, and one that we paid close attention to. To just encrypt your files in place, you need a certain level of access and privilege, and it’s a bit different from what they did, which was creating a new file and deleting an old file.
Given that, they essentially are deleting your data and leaving encrypted remnants behind. That was of particular interest to us because the way our product works, it doesn’t actually utilize files in the way a normal operating system does.
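The nuance Suit is describing can be sketched directly: rather than encrypting the victim file in place, the malware reads it, writes a new encrypted file (WannaCry appended a `.WNCRY` extension), then deletes the original. This is an illustrative sketch, not WannaCry's code; `cipher` stands in for the attacker's actual encryption.

```python
import os


def wannacry_style_encrypt(path: str, cipher) -> str:
    """Read the victim file, write a *new* encrypted copy, then delete
    the original. Note that os.remove() is an ordinary delete, not an
    overwrite, so against a tombstone-based system it would remove only
    the symbolic file and leave the real shreds intact."""
    with open(path, "rb") as f:
        data = f.read()
    out = path + ".WNCRY"  # the encrypted remnant left behind
    with open(out, "wb") as f:
        f.write(cipher(data))
    os.remove(path)  # the original is deleted, not encrypted in place
    return out
```

The distinction matters for defenses: a data-at-rest product that intercepts saves and deletes sees this as a file creation plus an ordinary delete, not as an in-place modification of protected content.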
What was your reaction to the situation?
When we heard about this we were sort of freaking out a little bit, until we got to analyze what they were doing. We thought maybe they’d discovered a way to find nested shreds. We’re not arrogant enough to think no one could figure out what we’re doing, we know it would be extraordinarily difficult, and we weren’t worried about someone being able to read those files because that’s way, way out of the scope of what’s possible today, but we were worried that they’d be able to erase our data. Who knows how good these guys can get, and they get better every day, right? Something we had designed was that if a “Delete” command was issued, it would not delete your content, only your tombstone files. Those tombstone files can be restored or reconstituted.
This particular piece of ransomware was successful in that the proliferation was so high. The bar was a little low because it was exploiting things that Microsoft had already patched quite a bit, but there are so many legacy systems out there that it definitely proliferated widely. The question is going to be how useful the data that was encrypted actually was, but the broader issue is not having access to data at all and not knowing what the real impact will be. You can say “oh, those were old systems, we’ve heard this already, those are legacy systems, we keep those around for X/Y/Z…” but at some point there’s going to be a dependency on one of those systems that someone didn’t predict, and that data is not going to be available, which will be a real problem. It’s important to take cybersecurity seriously.
What lessons have been learned, or what lessons need to be learned, from this sort of attack?
If you don’t have a good system, you need to make sure you have some sort of data-at-rest solution. You need to make sure you can recover any data that’s lost by having it backed up, and not through a method that ransomware authors are getting smart about attacking. They’ll get better and better at attacking the backups themselves, so you’ve got to be very mindful about the solutions you’re looking at. This comes down to a lot of the usual stuff: the phishing emails, more education, better ways to handle this with AI… but until that’s available you’ve really got to educate people not to click on stuff. This is basic. Patch your systems.
To stop the kill chain, we’ve built our technology to work by default, but any time you can protect your systems so that you can recover from something like this, the better off you are.