Throughout the history of computing, a common assumption has been that microchips are generally secure; while software may be infected with malware or nefarious backdoors, hardware can mostly be trusted. As Milos Prvulovic, a professor at Georgia Institute of Technology’s School of Computer Science, puts it: “Most people, even among security researchers, have not questioned the integrity of hardware. We have all assumed that the hardware we use works exactly as specified and that it reads all instructions correctly.”
Although researchers and security experts have been concerned about the possibility of a Trojan Horse or other type of hardware attack, the danger has remained in the theoretical realm.
Until now.
In May 2016, a team of researchers at the University of Michigan that included Todd Austin and Matthew Hicks presented a paper showing exactly how to sabotage a microchip. The team purposely built a backdoor into a chip and documented the method in a paper at the IEEE Symposium on Security and Privacy (where it captured the conference’s Best Paper award). The security flaw could allow a nation-state or other nefarious entity to steal data. “The vulnerability creates concern because it’s a method that could actually be used to do harm,” says Austin, a professor and director of the university’s Center for Future Architectures Research.
The discovery has sent a shock wave through the computing field. “This is the most demonically clever computer security attack I’ve seen in years. … It’s an attack that can be performed by someone who has access to the microchip fabrication facility, and it lets them insert a nearly undetectable backdoor into the chips themselves,” wrote Yonatan Zunger, head of infrastructure for the Google Assistant. And while the theoretical concept of embedding malware in hardware is not particularly new, the project “demonstrates just how feasible and devastating this method can be,” says Abhi Shelat, associate professor of computer science at Northwestern University.
Risky Chips
Although it is incredibly difficult to spot security flaws in software, finding them in hardware can be exponentially more complex. Austin refers to the challenge as finding the proverbial needle in a haystack. The reason is fairly simple, even if the technique he and Hicks used is not. Security researchers have historically focused virtually all their attention on the digital level of abstraction. “Defense tools rely on finding ones and zeros to identify malicious code,” says Hicks, a lecturer at the University of Michigan. However, “An attack doesn’t have to play by the digital rules—and there are currently no tools for detecting such an attack.”
As a result, Austin and Hicks focused their attention on the analog domain. “We began to explore this space because there are an infinite number of values between zero and one,” Hicks explains. Although it is entirely possible for security researchers to detect malicious hardware using an inspection-based technique—if it is large enough relative to the circuit to view, or there is some visible effect on the power, performance, or temperature of the chip—their technique sidesteps such inspection. It also sneaks around a key protection: functional verification, which essentially checks that the behavior of the chip matches its specification before the design is sent to a foundry for fabrication. Using functional verification, “It’s possible to check for reliability problems and other types of errors,” Austin says.
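A toy sketch can convey why functional verification offers no protection here, under the assumption that verification compares only the chip’s digital input/output behavior against a golden reference model (the 32-bit adder and all names below are hypothetical, for illustration only):

```python
# Toy illustration: functional verification compares a chip's digital
# behavior against a "golden" specification over many test inputs.
import random

def spec_add(a: int, b: int) -> int:
    """The specification: a 32-bit adder."""
    return (a + b) & 0xFFFFFFFF

def fabricated_add(a: int, b: int) -> int:
    """The fabricated chip's digital behavior. An A2-style implant adds
    no digital logic, so the input/output behavior is identical to the
    spec; the malicious state lives only in analog charge."""
    return (a + b) & 0xFFFFFFFF

def functional_verification(trials: int = 100_000) -> bool:
    for _ in range(trials):
        a, b = random.getrandbits(32), random.getrandbits(32)
        if fabricated_add(a, b) != spec_add(a, b):
            return False   # digital mismatch -> flaw detected
    return True            # every digital test passes

print(functional_verification())  # True: the implant is invisible here
```

Because the implant’s state is analog charge rather than a gate in the netlist, every digital test the verifier runs can pass.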
Their method? After the design phase is complete and the microchip is ready to be fabricated, the saboteur drops a single engineered component into the overall structure. Since today’s microprocessors contain as many as a billion cells, this single cell is essentially indistinguishable from the rest of the components, even though it is secretly designed to act as a capacitor—temporarily storing electrical charge—rather than handling regular functions. Then, each time a malicious script from a website or application issues an obscure command, the capacitor siphons off a tiny electric charge and stores it in its wires, without affecting the chip’s power or performance characteristics.
Once the stored charge crosses a predetermined threshold (typically after thousands or tens of thousands of trigger events), the capacitor flips on a logic function that grabs control of the operating system. “The system avoids the triggers that provide a clue something is wrong,” Austin explains. What is more, “It’s highly unlikely that defenders or anyone testing the system will accidentally stumble onto the attack method.” Adds Hicks, “Detection would require a piece of logic that specifically looks for an arcane and extremely rare sequence of instructions. This essentially renders the detection processes useless.”
One thing the researchers homed in on during the project was the design of the counter-based trigger. A simple way to engineer the attack would have been to increment a counter every time a certain set of criteria was met, such as the computer being turned on or off, and to store the value in flip-flops. However, this requires digital circuits, along with accompanying logic that exposes the attack to functional testing, visual inspection, or side-channel analysis. Instead, operating in the analog domain, the capacitor continually adds charge, increasing its voltage as if it were filling a bucket. Because the voltage stays between zero and one, it is invisible as a digital value. When it finally hits the one level, the trigger fires. And since the secret value is an analog voltage on the capacitor, it remains stealthy.
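A minimal software model of that bucket-filling behavior, with invented parameters (the real implant is a tiny analog circuit, not code), shows why occasional legitimate occurrences of the trigger instruction never fire the trap, while a malicious burst of back-to-back occurrences does:

```python
# Leaky-integrator model of an analog, charge-based trigger
# (illustrative parameters only; not the actual A2 circuit values).
CHARGE_PER_EVENT = 0.02   # fraction of full charge added per trigger event
LEAK_PER_CYCLE = 0.001    # fraction of stored charge lost each idle cycle
THRESHOLD = 1.0           # voltage level read as a digital "one"

def trigger_fires(event_gap_cycles: int, events: int) -> bool:
    """Simulate `events` trigger instructions spaced `event_gap_cycles`
    apart; return True if the stored charge ever crosses THRESHOLD."""
    decay = (1.0 - LEAK_PER_CYCLE) ** event_gap_cycles
    charge = 0.0
    for _ in range(events):
        charge += CHARGE_PER_EVENT   # the capacitor sips more charge
        if charge >= THRESHOLD:
            return True              # trigger flips; attacker takes over
        charge *= decay              # charge leaks away between events
    return False

# Sporadic, innocent occurrences: charge leaks away faster than it builds.
print(trigger_fires(event_gap_cycles=10_000, events=1_000_000))  # False
# A malicious script issuing the instruction back-to-back: fires quickly.
print(trigger_fires(event_gap_cycles=1, events=100))             # True
```

The leakage is what makes the trigger both reliable and stealthy: only a deliberately rapid sequence of the rare instruction can outrun it.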
The researchers tested the system under a wide range of environmental conditions—including temperatures ranging from −13 degrees to 212 degrees Fahrenheit—and the process worked consistently. “The behavior only exists in the analog domain. So, if you try to analyze the environment with digital tools, the analog behavior disappears; it no longer exists. This makes it appear that the activity doesn’t exist at all,” Austin explains.
Adds Hicks: “The attack method uses the oldest trick in computer security. If you want to go undetected, then get below the things that detect you.”
Deep Insecurities
At this point, it appears nobody has used this approach in the wild; as far as anyone knows, Austin and Hicks were the first to drop below the digital layer of hardware to create an attack method. Nevertheless, the risks are very real. Today, a relatively small number of chip fabrication facilities exist worldwide, and no one can rule out the possibility that a worker at one of them could use this method to plant spyware or other malicious code. Says Shelat, “Although only a handful of organizations are able to fabricate ASICs today, the reality is that they are now used for handling critical tasks and infrastructure.”
To be sure, Shelat says there is a real-world risk. “There is evidence of Tailored Access Operations (TAO)-style attacks mounted by sophisticated organizations,” he says. Using this method, “Physical hardware that has been ordered by the victim is intercepted and implanted with Trojan hardware that allows remote access. A natural extension of such attacks would be to manufacture a batch of chips with custom backdoor access, and then inject these into a supply chain that is incorporated into a target population. This would allow an organization to wreak havoc on critical systems while making it nearly impossible to isolate the flaw.”
What is particularly frightening about this method, Prvulovic says, is that it takes full control of a computing device and is undetectable until activity reaches a certain threshold. At this point, “anything and everything is potentially compromised.” Moreover, there is no known antidote for the threat, though Austin and Hicks suggest some possible defenses in their paper.
The upside, Prvulovic adds, is that chip fabrication does not take place overnight; in fact, in many cases, it takes years to design a chip. And while it is possible that someone could add a component in a shorter time frame, “This isn’t something that is likely to appear any time soon; though if it did, we almost certainly wouldn’t know about it. There’s also risk for a semiconductor company; if this is detected, your company is most likely out of business.”
Not surprisingly, the research team’s efforts have been greeted with both praise and disdain, though most in the computing field have come to acknowledge the value of exposing vulnerabilities and support the project. “Overall, this is a very positive thing,” Prvulovic says. “It isn’t something that requires an Einstein-level genius to figure out; it’s something that, if you think about it and work on it, you might eventually stumble onto. That’s what makes it so dangerous, and that’s why it’s good that this is now out in the open.”
Shelat adds that he and others in the field are genuinely impressed by the methods Austin and Hicks used. “Their attack is clever because it uses both digital and analog techniques to implement a privilege escalation attack.” In fact, Shelat is now involved in research aimed at developing verifiable hardware for a limited class of circuits. The end-game is to develop “advanced cryptographic protocols in order to design a chip that can prove in real time that it has performed the correct computation.” However, he admits the gap between theory and reality remains formidable, and many of the brightest minds in computing have focused on this concept for decades.
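To convey the flavor of that idea, consider a classic and much simpler example of a verifier probabilistically checking an untrusted computation: Freivalds’ algorithm, which validates a claimed matrix product far more cheaply than recomputing it. (This is not the protocol Shelat’s group is developing; it is only a minimal illustration of “prove you computed correctly” checking.)

```python
# Freivalds' algorithm: probabilistically verify a claimed product
# C = A x B in O(n^2) time per round, rather than recomputing A x B
# in O(n^3). A toy stand-in for verifiable-hardware protocols: the
# verifier checks an untrusted chip's answer without redoing the work.
import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, rounds: int = 20) -> bool:
    """Accept iff C == A*B, with error probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) against C*r: two cheap matrix-vector products.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False   # caught a wrong (possibly malicious) answer
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # correct product
bad = [[19, 22], [43, 51]]    # one corrupted entry
print(freivalds(A, B, good))  # True
print(freivalds(A, B, bad))   # False (with overwhelming probability)
```

The asymmetry is the point: the verifier does far less work than the computation it checks, which is what would let a host system distrust a fabricated chip in real time.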
Austin and Hicks say they have already briefed members of the U.S. Department of Defense and various branches of the U.S. military, as well as chip manufacturers and others about the attack method and how it could be used. They also have given some of the 100 chips they fabricated to government officials and industry executives.
Says Hicks, “The key to addressing these risks is to not stick our heads in the sand, but rather encourage research on analog circuits and the risks associated with them when they are part of digital systems.”
Adds Austin, “There are people who were very upset about this research, but if we all stick our heads in the sand together, the threat will not go away.”
Further Reading

Yang, K., Hicks, M., Dong, Q., Austin, T., and Sylvester, D.
A2: Analog Malicious Hardware, 2016 IEEE Symposium on Security and Privacy (S&P), May 2016. http://static1.1.sqspcdn.com/static/f/543048/26931843/1464016046717/A2_SP_2016.pdf?token=QXoVmAnDwRuiL84oo13X0iH6cXI%3D
Wahby, R.S., Howald, M., Garg, S., Shelat, A., and Walfish, M.
Verifiable ASICs, 2016 IEEE Symposium on Security and Privacy (S&P), May 2016. https://eprint.iacr.org/2015/1243.pdf
Sugawara, T., Suzuki, D., Fujii, R., Tawa, S., Hori, R., Shiozaki, M., and Fujino, T.
Reversing Stealthy Dopant-Level Circuits, International Conference on Cryptographic Hardware and Embedded Systems (CHES), Springer-Verlag, 2014, pp. 112–126.
Hicks, M., Sturton, C., King, S.T., and Smith, J.M.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs, Proceedings of the 20th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), ACM, Istanbul, Turkey, 2015, pp. 517–529.
Forte, D., Bao, C., and Srivastava, A.
Temperature Tracking: An Innovative Run-time Approach for Hardware Trojan Detection, International Conference on Computer-Aided Design (ICCAD), IEEE, 2013, pp. 532–539.