
Back in 2008 a wireless defibrillator was shown to be hackable. At this year's Black Hat conference a presenter showed exactly how to hack into a wireless insulin pump. Both demonstrated the potential for lethal hacking of wireless medical devices.

(Here's a shameless plug.) I and a couple of other authors put together a book chapter discussing our concerns about the security of various wireless medical devices, available at http://www.intechopen.com/articles/show/title/wireless-telemedicine-system-an-accurate-reliable-and-secure-real-time-health-care.

While no known lethal viruses exist at this point, the availability of manual hacks brings the concept of such viruses one step closer. Perhaps I'm a pessimist, but it seems to me that assassination attempts between countries or companies would have the potential for producing such a virus... or just dumb chance that another virus on wireless networks could trigger serious problems with such devices.

Do you have similar concerns or am I being overly concerned? Has anyone else seen or discovered lethal security issues with other wireless medical devices?

Savara
Chris K

4 Answers


Consider this from the insulin pump story:

in true hacker fashion, he has spent the last two years trying to hack it himself.

Given two years of dedicated effort, I suspect you could hack a whole lot of things. Does that mean they are inherently unsafe, or that we should take an alarmist approach? I don't think so. (By the way, I'm not saying your question is alarmist.)

Any medical device must undergo a risk analysis, and any risk analysis must take into account foreseeable misuse. In this day and age, network intrusion, viruses, and the like most assuredly fall under the category of foreseeable misuse, and will need to be mitigated.

I think the real question is how much mitigation is enough? That can only be answered in the context of a particular device.

  • Very good point. I helped set up a SCADA security workshop, where we found many devices quite easy to break into within an hour. That said, none of these were medical devices. –  Dec 12 '11 at 14:51

There are always going to be hackers who do these types of things simply because they can. The extent to which that pool overlaps with those who have malicious intent is the magnitude of the concern, I think.

Like anything else in medicine, there is a tradeoff between costs and benefits: if the convenience to the patient and the power to control dosing and other parameters outweigh the risks of an attack by a hacker, then we should continue to provide such devices. I'm not familiar with the design of such an insulin system, but the firmware should provide some sort of check to prevent a lethal dose from being delivered under any circumstances.

jonsca
  • I agree with the overlap point. As for firmware providing some sort of check, that is exactly where SCADA and control-system developers in many industries have failed. Here's an article on the insulin pump hack: http://www.extremetech.com/extreme/92054-black-hat-hacker-details-wireless-attack-on-insulin-pumps. I suspect the hacker is right that _The manufacturer ... decide between being cheap and quick to market, or secure_. –  Dec 09 '11 at 21:04

Recently, there has been growing interest from computer security researchers in the security of medical devices, and a literature has started to build on the subject.

For more research in this area, see also HealthSec 2010, HealthSec 2011, HealthSec 2012, the Medical Device Security Center, and the SHARPS project.

A short summary would be that researchers have found many security vulnerabilities in a wide variety of medical devices. It appears that the devices have not been built with security in mind, have not followed good secure development practices, and the medical certification process does not ensure that medical devices are secure.

However, that said, I think you have to keep the risks in perspective. The leading researchers in this field repeatedly emphasize the lifesaving value of these devices, and are careful to state that their medical benefits outweigh the security risks. One of them has publicly stated that if he needed one of these implantable devices, he'd get it, no question.

I think talking about remote assassination is... a movie-plot risk. Sure, in principle it may be technologically feasible, but it feels far-fetched and far from the most severe risk for the average everyday person. Instead, I think at present the greater risk is the potential for accidental or unintentional compromise of, or interference with, medical devices. With the growth in the use of software in medical devices, what if a software virus unintentionally ends up infecting medical equipment and interfering with its operation? That's the sort of thing that I think is worth greater attention.

In addition, I would caution you that medical issues are sensitive and personal for many people. I know that security researchers sometimes get enthusiastic talking about potential worst-case scenarios and awe-inspiring exploits, but there's a very real risk here: if security researchers scare the public into avoiding these devices, then the cure would be far worse than the disease. The security community could end up being indirectly responsible for avoidable deaths if it overstates or dramatizes the risks before the public. So, when talking to potential patients, I think it is our responsibility to be extremely careful and professional about how we discuss the issue.

D.W.

While the amount of real damage may remain small, the perceived threat is significant; the "scariness factor" makes securing these devices very important. Others have mentioned the harm caused by patients refusing a device on the basis of security concerns. There's also the damage to the reputation of the companies that manufacture these devices.

That said, securing implantable devices has its own set of challenges. These devices are designed to run continuously for years, and the longer the better. You always run the risk of infection whenever you replace (re-implant) a device because its battery has died. In addition, the devices themselves are costly. For this reason, you must balance the power you spend on security (encryption, virus checking?) against the power you spend on providing therapy.

Many of these devices must also be accessible and reprogrammable at a moment's notice by emergency room staff at any hospital (what if the patient comes in with Pacemaker Mediated Tachycardia?). If you implement a security scheme that is too costly in terms of staff time, a patient may die.

Any security scheme you pick must also stand the test of time. A device may take years to go through R&D, and another year or two to be certified in all the countries in which it is to be sold. After that, it may still be sold in some markets (like China) long after newer devices come onto the market. And after that point, it will remain in someone's body for years. This means your security scheme may have a lifetime of 15 to 20 years.

You must also deal with the security jungle of the hospital environment. Physicians come and go; they plug memory sticks into the programmer for your implantable device (e.g., a pacemaker programmer) so they can put ECGs into their electronic medical records. Even if you give them security updates, they may decide not to apply them. Many of these implantable-device programmers are actually computers running Windows Embedded or Windows CE.

None of these things diminishes the crucial need for better security in implanted medical devices, but the challenge is sometimes greater than it seems at face value.

watkipet