Hamid Karimi has extensive knowledge of cybersecurity, and for the past 15 years his focus has been exclusively on the security space, covering diverse areas of cryptography, strong authentication, vulnerability management, malware threats, and cloud and network protection. He is the VP of Business Development at Beyond Security, a provider of automated security testing solutions, including vulnerability management, based in Cupertino, CA.
Imagine a hospital bed attached to devices and sensors that a malicious individual across the world can find and manipulate. This is a nightmare scenario. By some accounts, a typical intensive care bed has more than a dozen sensors, most of which are network-connected, and at least a few of them are connected to the internet.
These internet-connected devices are constantly probed for weaknesses, and nefarious actors are often a few steps ahead of regulations and security protocols. To understand the scope of the challenge, we need to recognize the nature of healthcare and the medical device industry. In general, healthcare is a conservative industry resistant to transformation. It is also an industry fraught with regulations and bureaucratic red tape, and the last thing any bureaucrat wants is to deal with a complex topic like security.
There are three distinct problems that can arise from security lapses in medical devices. First, patient safety: if a medical device cannot perform the service it was designed for, or is operated outside its designed range of operation, the patient is clearly in jeopardy. Second, a breach of privacy may expose confidential records to iniquitous actors with a wide range of aims. Incidentally, privacy is well covered by HIPAA regulation, but the regulators made pronouncements without addressing the root causes of the most common privacy failures. Third, data loss can ripple into major service interruptions when fallback mechanisms such as data redundancy are not in place. A likely scenario for a financially motivated antagonist is to deny providers access to their mission-critical data and demand payment for service restoration.
Recent ransomware cases have caused concern and panic in the medical field. While there is real cause for concern, ransomware attacks, whether screen lockers or file encryptors, are not usually targeted; they create collateral damage in their wake. This is not to say that a given hospital cannot become a target of ransomware, but rather that it is more likely to be an accidental victim of a broader scheme. The more worrying scenario ahead is service-oriented ransomware breaches.
What if the government and industry worked together to address these issues? That is a huge 'if,' to say the least. Until now, the response to these challenges has been weak and sporadic. The FDA began to consider the security of medical devices back in 2013 and has continuously updated its list of recommendations and requirements. NIST's approach is more actionable, but the collaboration between the two agencies is suboptimal.
Put simply, the biggest problem with the FDA's lukewarm recommendations is that they merely ask medical device manufacturers to incorporate logical security mechanisms in their products. The FDA also focuses on sharing vulnerability data, yet falls short of establishing more than a voluntary security framework. FDA regulators are not security experts, and to a degree they operate in a silo devoid of industry experience. A better approach is to require system-wide security at healthcare facilities coupled with embedded security in medical devices. The FDA could demand that medical device builders show proof of sound security design and practices as part of the Product Development Lifecycle (PDLC).
The European Union attempted to tackle the issue of medical device security years before the FDA. The EU's approach was to retrofit standard IT rules to the medical device industry while giving manufacturers a great degree of discretion in defining an acceptable risk threshold. As a result, we are dealing with a set of guidelines that lack cohesion and normalization.
On the other hand, the security industry has offered piecemeal solutions which are reflective of its own tribal, fragmented nature. There are vendors who offer boutique approaches like deception tools (using honeypots and decoy systems) to “fool” the attackers. Such approaches lack imagination because a sophisticated attacker can circumvent these defenses by simply targeting all vulnerable devices, rather than just the easiest.
What would a reasonable security taxonomy for the design and operation of medical devices look like? As far as medical device development is concerned, security must be an embedded element, not an afterthought. Gartner recommends making "application self-protection a new investment priority, ahead of perimeter and infrastructure protection." That means addressing the potential weaknesses of the firmware and software that run in medical devices.
Both static and dynamic code analysis offer systematic, deep security posture checks with a high probability of discovering exceptional operational failures. Intelligent fuzzing techniques can test even theoretically hardened devices and expose deficiencies. Once systems are deployed in the field, a robust operational model must take control and ensure reasonable network security. Not all medical devices need internet connectivity or direct user input; operators should make certain that each deployment case is security-justified. Like other mission-critical networks, medical systems must separate the control (management) plane from the data (operational) plane to avoid cross-contamination in the event of a security breach in either.
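To make the fuzzing idea concrete, here is a minimal sketch of mutation-based fuzzing. Everything here is illustrative: `parse_packet` is a hypothetical stand-in for a device firmware parser, deliberately seeded with an off-by-one bug of the kind fuzzing routinely exposes.

```python
import random

def parse_packet(data: bytes) -> dict:
    # Hypothetical stand-in for a firmware packet parser under test.
    if len(data) < 4:
        raise ValueError("packet too short")
    kind, length = data[0], data[1]
    if length > len(data) - 2:
        raise ValueError("declared length exceeds packet size")
    payload = data[2:2 + length]
    # Planted off-by-one bug: crashes when length == len(data) - 2.
    checksum = data[2 + length]
    return {"type": kind, "payload": payload, "checksum": checksum}

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Produce a test case by flipping, inserting, or deleting one byte.
    data = bytearray(seed)
    pos = rng.randrange(len(data))
    op = rng.choice(("flip", "insert", "delete"))
    if op == "flip":
        data[pos] ^= rng.randrange(1, 256)
    elif op == "insert":
        data.insert(pos, rng.randrange(256))
    elif len(data) > 1:
        del data[pos]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 1000) -> list[bytes]:
    # Collect inputs that trigger unexpected exceptions (not clean rejections).
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse_packet(case)
        except ValueError:
            pass  # graceful rejection of malformed input is fine
        except Exception:
            crashes.append(case)  # unexpected failure worth triaging
    return crashes
```

Starting from a single well-formed packet, random single-byte mutations are enough to drive the parser into the planted failure; real fuzzers add coverage feedback and smarter mutation strategies on top of this loop.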
Despite all these steps, security is not foolproof. A comprehensive risk-based approach, as an overall operational management philosophy, is paramount. Security policies must consider both mitigation and residual-risk measures. DevOps must tie risk-mitigation assertions into the risk assessment policy, making sure each procedure or control stated in the risk assessment has a relevant metric contained in the policy. Residual risks (what a mitigation can miss) should likewise be dovetailed into the deployment model as an exception metric. Moreover, vulnerability assessments must be sorted in descending order of mitigated-risk score and control priority to objectively determine which controls are most important to audit. As the final step, DevOps must also perform a gap analysis to provide protections and safeguards for the remediation of risks.
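The sorting step above can be sketched in a few lines. The control names, scores, and priority ranks here are illustrative, not drawn from any real assessment:

```python
# Hypothetical vulnerability-assessment findings; all fields are illustrative.
findings = [
    {"control": "audit logging",        "mitigated_risk": 4.7, "priority": 3},
    {"control": "firmware signing",     "mitigated_risk": 9.3, "priority": 1},
    {"control": "network segmentation", "mitigated_risk": 8.1, "priority": 2},
]

# Sort by descending mitigated-risk score, ties broken by priority rank,
# so the controls most worth auditing surface first.
audit_order = sorted(findings, key=lambda f: (-f["mitigated_risk"], f["priority"]))
print([f["control"] for f in audit_order])
# → ['firmware signing', 'network segmentation', 'audit logging']
```

The compound sort key matters: it keeps the ordering objective and repeatable, so two auditors working from the same assessment arrive at the same priority list.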
Whitelisting is another effective tool for preventing rogue applications from running in medical services networks. Whitelisting partially addresses failures in authentication and authorization tools, but it is not a cure-all. It is also worth mentioning the externalization of IT, which is closely linked to cloud computing. Without hardening medical devices and services, it would be unwise to embrace that model, given its pitfalls and uncertainties.
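A hash-based allowlist is the simplest form of the whitelisting idea: an application runs only if the digest of its binary matches an approved entry. A minimal sketch, where the literal bytes b"test" stand in for a vetted binary:

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of approved application binaries.
ALLOWED_DIGESTS = {
    # Digest of the literal bytes b"test", standing in for a vetted binary.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(binary: bytes) -> bool:
    # Permit execution only when the binary's digest is on the allowlist.
    return hashlib.sha256(binary).hexdigest() in ALLOWED_DIGESTS

print(is_allowed(b"test"))   # True: digest matches the allowlist
print(is_allowed(b"rogue"))  # False: unknown binary is blocked
```

Production whitelisting products layer code signing, update workflows, and kernel-level enforcement on top of this basic check, but the default-deny principle is the same.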
Let’s get the terminology right: any hack is a breach, but not every breach is a hack. While hacking is not uncommon, it normally requires sophisticated manipulation of security controls to gain access to critical data or assume unauthorized control of devices. A breach, on the other hand, is more descriptive of common security failures that we witness today.