Millions of patients benefit from programmable, implantable medical devices (IMDs) that treat chronic ailments such as cardiac arrhythmia,6 diabetes, and Parkinson's disease with various combinations of electrical therapy and drug infusion. Modern IMDs rely on radio communication for diagnostic and therapeutic functions, allowing health-care providers to remotely monitor patients' vital signs via the Web and to give continuous rather than periodic care. However, the convergence of medicine with radio communication and Internet connectivity exposes these devices not only to safety and effectiveness risks, but also to security and privacy risks. IMD security risks have more direct consequences than the security risks of desktop computing. Moreover, IMDs contain sensitive information whose privacy risks are more difficult to mitigate than those of electronic health records or pharmacy databases. This column explains the impact of these risks on patient care, and makes recommendations for legislation, regulation, and technology to improve the security and privacy of IMDs.
The consequences of an insecure IMD can be fatal. However, it is fair to ask whether intentional IMD malfunctions represent a genuine threat. Unfortunately, there are people who cause patients harm. In 1982, someone deliberately laced Tylenol capsules with cyanide and placed the contaminated products on store shelves in the Chicago area. This unsolved crime led to seven confirmed deaths, a recall of an estimated 31 million bottles of Tylenol, and a rethinking of security for packaging medicine in a tamper-evident manner. Today, IMDs appear to offer a similar opportunity to other depraved people. While there are no reported incidents of deliberate interference, this could change at any time. The global reach of the Internet and the prevalence and intermingling of radio communications expose IMDs to historically open environments with difficult-to-control perimeters.3,4 For instance, vandals caused seizures in photosensitive individuals by posting flashing animations on a Web-based epilepsy support group.1
Knowing that such vandals will always exist, the next question is whether genuine security risks exist. What could possibly go wrong by allowing an IMD to communicate over great distances with radio and then mixing in Internet-based services? It does not require much sophistication to think of numerous ways to cause intentional malfunctions in an IMD. Few desktop computers have failures as consequential as those of an IMD. Intentional malfunctions can actually kill people, and are more difficult to prevent than accidental malfunctions. For instance, lifesaving therapies were silently modified and disabled via radio communication on an implantable defibrillator that had passed premarket approval by regulators.3 In my research lab, the same device was reprogrammed with an unauthenticated radio-based command to induce a shock that causes ventricular fibrillation (a fatal heart rhythm).
Manufacturers point out that IMDs have used radio communication for decades, and that they are not aware of any unreported security problems. Spam and viruses were also not prevalent on the Internet during its many-decade nascent period. Firewalls, encryption, and proprietary techniques did not stop the eventual onslaught. It would be foolish to assume IMDs are any more immune to malware. For instance, if malware were to cause an IMD to continuously wake from power-saving mode, the battery would wear out quickly. The malware creator need not be physically present, yet could expose a patient to the risks of unnecessary surgery that could lead to infection or death. Much as Macintosh users can take comfort in the fact that most current malware takes aim at the Windows platform, patients can take comfort in the fact that IMDs seldom rely on such widely targeted software, at least for now.
A second risk is violation of patient privacy. Today's IMDs contain detailed medical information and sensory data (including vital signs, patient name, date of birth, therapies, and medical diagnosis). Data can be read from an IMD by passively listening to radio communication. With newer IMDs providing nominal read ranges of several meters, eavesdropping will become easier. The privacy risks are similar to those of online medical records.
Improving IMD security and privacy requires a proper mix of technology and regulation.
Technological approaches to improving IMD security and privacy include judicious use of cryptography and limiting unnecessary exposure to would-be hackers. IMDs that rely on radio communication or have pathways to the Internet must resist a determined adversary.5 IMDs can last upward of 20 years, and doctors are unlikely to surgically replace an IMD just because a less-vulnerable one becomes available. Thus, technologists must think 20 to 25 years out. Cryptographic systems available today may not last 25 years.
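One way technologists plan for a 20-to-25-year device lifetime is cryptographic agility: tagging every protected message with an algorithm-suite identifier, so an implanted device can migrate to stronger algorithms without changing its message format. The sketch below is purely illustrative (the suite registry, IDs, and function names are my own assumptions, not any real device protocol), using HMAC from the Python standard library:

```python
import hashlib
import hmac

# Hypothetical registry of MAC suites, keyed by a one-byte suite ID
# carried in every message header. A deprecated suite can be removed
# from the verifier's registry without redesigning the wire format.
SUITES = {
    0x01: ("hmac-sha256", hashlib.sha256),
    0x02: ("hmac-sha3-256", hashlib.sha3_256),  # future upgrade path
}

def protect(suite_id: int, key: bytes, payload: bytes) -> bytes:
    """Prefix the suite ID so the receiver knows which algorithm
    authenticated this payload, even decades after deployment."""
    _, algo = SUITES[suite_id]
    tag = hmac.new(key, payload, algo).digest()  # 32-byte tag for both suites
    return bytes([suite_id]) + tag + payload

def unprotect(key: bytes, message: bytes) -> bytes:
    """Verify the MAC under whichever suite the header names."""
    suite_id, tag, payload = message[0], message[1:33], message[33:]
    name, algo = SUITES[suite_id]
    expected = hmac.new(key, payload, algo).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError(f"bad MAC under suite {name}")
    return payload
```

The design choice worth noting is that the algorithm identifier, not the algorithm itself, is baked into the long-lived format; that is what lets a 2009-era implant accept a 2025-era cipher suite.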
It is tempting to consider software updates as a remedy for maintaining the security of IMDs. But because software updates can lead to unexpected malfunctions with serious consequences, pacemaker and defibrillator patients must make an appointment with a health-care provider to receive firmware updates in a clinic. Thus, it could take too long to patch a security hole.
Beyond cryptography, several steps could reduce exposure to potential misuse. When and where should an IMD permit radio-based, remote re-programming of therapies (such as changing the magnitude of defibrillation shocks)? When and where should an IMD permit radio-based, remote collection of telemetry (for example, vital signs)? Well-designed cryptographic authentication and authorization make these two questions solvable. Does a pacemaker really need to accept requests for reprogramming and telemetry in all locations from street corners to subway stations? The answer is no. Limit unnecessary exposure.
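To make the authentication and authorization point concrete, the following is a minimal sketch of challenge-response command authentication, in which an IMD would accept a reprogramming command only if it carries a MAC bound to a fresh, device-issued nonce. All names, the key-provisioning step, and the command string are hypothetical illustrations, not a description of any real device:

```python
import hashlib
import hmac
import secrets

# Hypothetical symmetric key provisioned at implant time and shared
# only with authorized programmers (illustrative, not a real protocol).
DEVICE_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """IMD side: a fresh random nonce per session prevents replay
    of previously recorded commands."""
    return secrets.token_bytes(16)

def sign_command(key: bytes, challenge: bytes, command: bytes) -> bytes:
    """Programmer side: bind the command to this session's challenge."""
    return hmac.new(key, challenge + command, hashlib.sha256).digest()

def verify_command(key: bytes, challenge: bytes, command: bytes,
                   tag: bytes) -> bool:
    """IMD side: accept only commands with a valid MAC.
    compare_digest resists timing side channels."""
    expected = hmac.new(key, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Under such a scheme, an unauthenticated radio command, like the one used to induce ventricular fibrillation in the lab, would simply be ignored; the open problems are key distribution and emergency access, not the MAC itself.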
Premarket approval for life-sustaining IMDs should explicitly evaluate security and privacy, leveraging the body of knowledge from the secure systems and security metrics communities. Manufacturers have already deployed hundreds of thousands of IMDs without voluntarily including reasonable technology to prevent the unauthorized induction of a fatal heart rhythm. Thus, future regulation should provide incentives for improved security and privacy in IMDs.
Regulatory aspects of protecting privacy are more complicated, especially in the United States. Although the U.S. Food and Drug Administration has acknowledged deleterious effects of privacy violations on patient health,2 there is no ongoing process or explicit requirement that a manufacturer demonstrate adequate privacy protection. The FDA has no legal remit from Congress to directly regulate privacy (the FDA does not administer HIPAA privacy regulations).
My call to action consists of two parts legislation, one part regulation, and one part technology.
First, legislators should mandate stronger security during premarket approval of life-sustaining IMDs that rely on either radio communication or computer networking. Action at premarket approval is crucial because unnecessary surgical replacement directly exposes patients to risk of infection and death. Moreover, the threat models and risk retention chosen by the manufacturer should be made public so that health-care providers and patients can make informed decisions when selecting an IMD. Legislation should avoid mandating specific technical approaches, but instead should provide incentives and penalties for manufacturers to improve IMD security.
Second, legislators should give regulators the authority to require adequate privacy controls before allowing an IMD to reach the market. The FDA writes that privacy violations can affect patient health,2 and yet the FDA has no direct authority to regulate the privacy of medical devices. IMDs increasingly store large amounts of sensitive medical information, and fixing a privacy flaw after deployment is especially difficult on an IMD. Moreover, security and privacy are often intertwined: inadequate security can lead to inadequate privacy, and inadequate privacy can lead to inadequate security. Thus, device regulators have a unique vantage point for determining not only safety and effectiveness, but also security and privacy.
Third, regulators such as the FDA should draw upon industry, the health-care community, and academics to conduct a thorough and open review of security and privacy metrics for IMDs. Today's guidelines are so ambiguous that an implantable cardioverter defibrillator with no apparent authentication whatsoever has been implanted in hundreds of thousands of patients.3
Fourth, technologists should ensure that IMDs do not continue to repeat the mistakes of history by underestimating the adversary, using outdated threat models, and neglecting to use cryptographic controls.5 In addition, technologists should not dismiss the importance of usable security and human factors.
There is no doubt that IMDs save lives. Patients prescribed such devices are much safer with the device than without, but IMDs are no more immune to security and privacy risks than any other computing device. Yet the consequences for IMD patients can be fatal. Tragically, it took seven cyanide poisonings in the 1982 Chicago Tylenol poisoning case for the pharmaceutical industry to redesign the physical security of its product distribution to resist tampering by a determined adversary. The security and privacy problems of IMDs are obvious, and the consequences just as deadly. We'd better get it right today, because surgically replacing an insecure IMD is much more difficult than an automated Windows update.
1. Epilepsy Foundation. Epilepsy Foundation Takes Action Against Hackers. March 31, 2008; http://www.epilepsyfoundation.org/aboutus/pressroom/action_against_hackers.cfm
2. FDA Evaluation of Automatic Class III Designation VeriChip Health Information Microtransponder System, October 2004; http://www.sec.gov/Archives/edgar/data/924642/000106880004000587/ex99p2.txt
3. Halperin, D. et al. Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses. In Proceedings of the 29th Annual IEEE Symposium on Security and Privacy... May 2008.
5. Schneier, B. Security in the real world: How to evaluate security technology. Computer Security Journal 15, 4 (Apr. 1999); http://www.schneier.com/essay-031.html.
This work was supported by NSF grant CNS-0831244.
Figure. From left, Benjamin Ransford (University of Massachusetts), Daniel Halperin (University of Washington), Benessa Defend (University of Massachusetts), and Shane Clark (University of Massachusetts) worked to uncover security flaws in implantable medical devices.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.