Since the invention of the integrated circuit, the complexity of the devices and the cost of the facilities used to build them have increased dramatically. The first fabrication facility with which I was associated was built at Xerox PARC in the mid-1970s at a cost of approximately $15M ($75M today). Today, a modern fab costs approximately $15B. This cost is justified by the fact that today’s chips are far more complex than their predecessors: the number of layers has grown to over 100, and tolerances are approaching atomic dimensions.
The high cost of a fab means that to be cost-effective, it must be fully loaded. This has led to "silicon foundries," which build chips for a variety of "fabless" semiconductor companies based on a set of physical design libraries supplied by the foundry. Carver Mead and Lynn Conway initially proposed this concept in their seminal 1980 text "Introduction to VLSI Systems," but the Taiwan Semiconductor Manufacturing Company (TSMC), founded in 1987, turned what had been an academic exercise into an industrial norm. Today, a few large foundries throughout the world dominate this business.
Over the last two decades, integrated circuit design has split into two specialties: (1) architectural and logical design and device layout, done by a design house; and (2) mask generation and device fabrication, done by a foundry. To ensure the foundry has done its job correctly, the design house relies on extensive testing to verify that devices meet their specifications.
The following paper assumes the foundry (or another party involved in the lower levels of fabrication) is malicious and can modify the design it receives to produce a device that can later be subverted. The attack employs a very small Trojan circuit inserted into an otherwise correct design. The Trojan lies dormant until the chip is deployed, when it can be triggered by an external software attack; once triggered, it subverts the chip’s normal function on the attacker’s behalf. In the A2 implementation, the trigger is used to elevate the privilege of a user-mode program. The authors argue that the simplicity of the Trojan and its use of analog circuitry make it difficult to detect, even with enhanced levels of testing. They go to considerable lengths to verify their approach, including extensive simulation and fabrication of a processor in a modern silicon process. On the fabricated hardware, the Trojan operated as expected.
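To make the mechanism concrete, here is a minimal sketch, in C, of what the software side of such a trigger might look like. It is purely illustrative and not taken from the paper: it assumes the Trojan watches one rarely toggled internal wire and fires only after enough transitions have accumulated in its small analog trigger circuit; the data pattern, the iteration count, and the function name fire_trigger are all hypothetical.

    #include <stdint.h>

    /* Volatile sink keeps the compiler from optimizing the toggle loop away. */
    static volatile uint32_t sink;

    /* Hypothetical user-mode trigger: repeatedly compute an unusual data
     * pattern so that a specific, rarely toggled wire switches on every
     * iteration.  After enough transitions, the Trojan's analog trigger
     * charges past its threshold and flips a privilege bit. */
    void fire_trigger(void)
    {
        uint32_t x = 0xDEADBEEFu;
        for (int i = 0; i < 100000; i++) {
            x = (x << 3) ^ (x >> 5) ^ 0xA5A5A5A5u;
            sink = x;   /* the volatile store forces the activity onto real hardware */
        }
    }

Because the trigger is ordinary unprivileged code and the counting is done in a handful of analog components rather than digital logic, nothing in the netlist or in conventional functional tests looks out of place, which is why the authors consider detection so difficult.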
Is this realistic? Certainly no foundry wants to compromise its business model by being identified as untrustworthy.
As I was preparing this Technical Perspective, the Dyn/Mirai DDoS attack occurred. The attack used a large number of IoT devices (DVRs and webcams) as a botnet directed at a major DNS provider. This is roughly the scenario the authors of the following paper describe, although the Mirai attack exploited the lack of security in the devices’ software rather than adding hardware. Reports indicate the bot devices were easily compromised: they shipped with default passwords that could not be changed and were not designed to be updated in the field. While the security of IoT devices will surely improve, the authors argue that the introduction of small Trojans by untrusted fabrication facilities will remain a problem for which technical solutions appear elusive.
As technologists, technical solutions to problems are what we do best. In the case of the attack proposed by the authors, a technical defense seems problematic. We do, however, have examples from other fields that might be promising. The A2 Trojan assumes an untrusted fabrication facility. While it might not be possible to do all future fabrication in trusted facilities, using a third party trusted by both the fab and its customers to monitor the fab’s behavior seems plausible; the third party’s job would be to certify that the fab is behaving properly. Trusted third parties are widely used in areas ranging from financial contracts to nuclear treaty compliance. "Trust but verify" was used during the Cold War to describe this relationship.
The authors have considerable experience with attacks on digital logic and do a good job of explaining previous work in the area. The paper is well worth reading carefully, as it covers an area that will likely become much more important in an increasingly technology-dependent world.