The 17 requirements for secure connected medical devices

The use of connectivity in healthcare devices to collect and disseminate real-time data for faster, more accurate analysis or tailored treatment has undoubtedly created a significant opportunity for medical professionals to improve diagnoses and treatment, and for healthcare providers to reduce operating costs and enable remote monitoring.

However, these devices also bring significant risks if security is not managed properly. These include risks not only to sensitive patient data, but to the patients themselves.

Possibly the highest-profile security issue has been Medtronic’s implantable defibrillator, where flaws allowed hackers to interfere with the RF communications and to place malware on the device – thereby shutting down the device or delivering jolts to a patient (vulnerabilities were reported in both 2008 and 2018). More broadly, many medical devices have not adopted good security practices – for instance, using hard-coded passwords – creating the potential for mass hacks.

Added to this is the introduction of consumer technologies – such as fitness trackers – where sharing data with a social network is a value-add feature that also adds another level of risk. For example, last year’s data breach at Polar not only revealed users’ locations, but in some cases their names and addresses – including those of military personnel.

Analysts forecast significant growth (3.3× between 2019 and 2024) for the connected medical device market, which is predicted to be worth up to $63 billion within five years (source – Mordor Intelligence, 2019).

Yet in the US alone during Q4 2018, 45 million medical devices were recalled due to software / security issues. To put it another way, software and security issues accounted for more than a quarter (28.2%) of all US medical device recalls – and they have topped the list of recall causes for 11 consecutive quarters (source – Stericycle Recall Index, 2019).

The reason for this is simple: until recently, connectivity has not been at the heart of device makers’ strategies, and hospital IT departments have not needed to worry about such devices. As a result, security best practices are simply not well known across the industry – and that creates an easily exploitable situation for hackers.

When developing a device – or assessing the risks associated with using a device – OEMs, medical professionals and hospital IT departments should consider 17 criteria. These apply to all four major classes of equipment used in patient treatment and monitoring: fixed-location devices (eg an MRI scanner); portable on-site devices (eg a vital signs monitor); portable loaned systems (eg a blood pressure monitor); and patient-owned electronics (eg a hearing aid or fitness watch). While patient-owned devices will likely be outside the control of medical professionals and hospital IT departments, their presence on the network underlines the importance of applying the 17 criteria to protect local systems and patient data.

The 17 requirements of secure medical devices:

  • To protect patient privacy, tokenisation of patient identity should be used in data stores where feasible.
  • End-to-end encrypted data communications (typically TLS) should be used to preserve confidentiality for communications that cross the internet, although where possible patient data should not cross the internet at all.
  • Digital signatures should be used to preserve integrity. Highly sensitive data such as firmware updates should only be accepted from authenticated end-points.
  • Where possible, Denial of Service attacks should be mitigated by only accepting connection attempts from trusted network zones or specific IP addresses; where this is not possible, connection attempts should be rate limited.
  • Where data flows in both directions, the security context should be mutually authenticated and cryptographic mechanisms including encryption and signature verification should be bidirectional.
  • System integrators must check that devices using encryption support compatible cipher suites which are sufficiently strong for the lifetime of the product or device.
  • Where Denial of Service could impact patient care, the protocol should include confirmation of successful delivery of data and notifications, and a fallback process should be in place in the case of failed delivery.
  • To minimise the attack surface, unneeded platform services should be turned off.
  • Security controls should be enabled and only relaxed when there is a sufficiently low risk to do so.
  • For safety-critical devices, consideration should be given to the use of less complex operating systems.
  • Unsupported operating system versions should never be used in a boundaryless architecture; if they have to be used, they must be put in a protected network zone.
  • Vendors should be evaluated and should demonstrate their use of secure development practices.
  • Vendors should declare what security features they provide, and do so by complying with published criteria where available.
  • When integrating systems from multiple vendors, customers should perform their own penetration testing.
  • Intrusion Detection Systems should be considered, in order to reduce the time it takes to detect security breaches.
  • Vendors should publish a vulnerability disclosure policy, and should use relevant information-sharing bodies to help manage their response to security breaches.
  • Vendor fixes should be applied as soon as they become available.
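To make the first requirement – tokenisation of patient identity – more concrete, the sketch below derives a stable pseudonymous token from a patient identifier using a keyed hash (HMAC-SHA256). This is a minimal illustration rather than a prescribed scheme: the function name, key handling and identifier format are all assumptions, and in a real system the key would be held in a separate key store, never alongside the tokenised records.

```python
import hashlib
import hmac

def tokenize_patient_id(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonymous token from a patient identifier.

    The token can be stored and joined on in place of the real identity;
    linking it back to a patient requires the secret key, which is kept
    separately from the data store holding the tokenised records.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Illustrative key and identifier only – not a real key-management scheme.
key = b"demo-key-held-in-a-separate-key-store"
token = tokenize_patient_id("NHS-123-456-7890", key)

# The same input always maps to the same token, so records still link up...
assert token == tokenize_patient_id("NHS-123-456-7890", key)
# ...but a different key (or a different patient) yields an unrelated token.
assert token != tokenize_patient_id("NHS-123-456-7890", b"other-key")
```

Because the mapping is keyed, an attacker who obtains the data store alone cannot enumerate identifiers and rebuild the mapping, which is the property the tokenisation requirement is after.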
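Similarly, the rate-limiting fallback in the Denial of Service requirement can be sketched as a token bucket that caps the burst of connection attempts a device will accept. The parameters here (one attempt per second, burst of five) are assumed for illustration, not recommendations.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for inbound connection attempts."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a connection attempt may proceed, False to drop it."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(8)]
# In a rapid burst of 8 attempts, the first 5 (the burst capacity) pass
# and the remaining 3 are dropped until the bucket refills.
```

The same shape works whether the limiter sits in the device firmware or in a gateway in front of it; the key point from the requirement is that an attacker flooding the device with connection attempts cannot exhaust it faster than the refill rate allows.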

These 17 high-level requirements are an excerpt from an IoT Security Foundation white paper created to help medical professionals and IT departments assess the risks of IoT-enabled medical devices, and to help device developers create more secure devices.

The white paper can be downloaded here.