
Trustworthy Hardware from Untrusted Components

This defense-in-depth approach uses static analysis and runtime mechanisms to detect and silence hardware backdoors.
  1. Introduction
  2. Key Insights
  3. Adversarial Model
  4. Defense In Depth
  5. Static Analysis
  6. Silencing Triggers
  7. Payload Detection
  8. Related Research Chronology
  9. Open Problems
  10. Conclusion
  11. Acknowledgments
  12. References
  13. Authors
  14. Footnotes
  15. Figures
  16. Sidebar: Three Promising but Infeasible Approaches

Hardware is the root of trust in computing systems, because all software runs on it. But is the hardware trustworthy? How can we ensure it has not been corrupted? Can we design it so it is not easily corrupted? Many factors conspire to make hardware more susceptible to malicious alterations and less trustworthy than in the past, including increased use of third-party intellectual property components in system-on-chip designs, the global scope of the chip-design process, increased design complexity and integration, and design teams with relatively few designers responsible for each subcomponent. There are unconfirmed reports of compromised hardware17,21 leading to undesirable economic consequences.4 A nontechnical solution is to design and manufacture hardware locally in a trusted facility with trusted personnel. However, this approach is not viable in the long term, as it is neither efficient nor guaranteed to be secure. This article instead reviews a series of measures for increasing the trustworthiness of hardware.

Key Insights

  • The approaches described here address a fundamental concern in hardware supply-chain security: Compromised hardware means security cannot be guaranteed.
  • Mitigations for digital hardware backdoors include both design-time (static) and runtime techniques.
  • Developers of mitigation techniques must still address several open problems, including formal verification, unified approaches to design, and foundry security.

To understand how hardware can be compromised, we need to understand how hardware is designed (see Figure 1). The first few steps are similar to software design and construction, beginning with the specification of design requirements. The hardware is then designed to meet operational requirements and coded in a hardware description language (HDL) (such as Verilog), either by designers working for the company designing the chip or with code purchased as intellectual property (IP) (such as for a USB controller) from third-party vendors around the world. The next step differs slightly from software. Hardware undergoes much more rigorous validation than most software, as hardware bugs, unlike their software counterparts, are often far more expensive to fix after deployment. To minimize the risk of bugs, reputable hardware companies often employ validation teams that are much larger than the design team; they work either in tandem with the designers or after the fact in the case of third-party IP components. The design, with all its components, is then processed using computer-aided design (CAD) tools from commercial companies that convert the high-level code into gates and wires. The result is a functional design that could be reviewed for security but in practice is simply sent off to a foundry for manufacture; reviews are encumbered by the complexity of the design and the pressure of time-to-market constraints. We refer to everything up to compilation with CAD tools as the front end of the process and to the physical design and manufacturing at the foundry as the back end.

Thousands of engineers may have access to hardware during its creation and are often spread across organizations and continents. Each hardware production step must be considered a possible point of attack. Designers (either insiders or third-party providers) might be malicious. Validation engineers might seek to undermine the process. CAD tools that could be applied for design synthesis prior to manufacture might be malicious. Moreover, a malicious foundry could compromise security during the back-end manufacturing process. The root of trust is the front-end design phase; without a “golden design” to send off to the foundry, hardware designers, as well as end users, have no basis on which to secure foundries.

We have worked on methods to produce golden designs from untrusted components since 2008, with the goal of securing the front end of the hardware design process, forming a trustworthy root on which to build further security. This article explores the advances we have made toward mitigating the threat and describes remaining problems. The philosophy behind our solution is that hardware attacks fundamentally need hardware-based defenses. It is not possible for software alone to create security solutions that cannot be undermined by the underlying hardware. Since these defenses are to be implemented in hardware, they must obey typical hardware constraints (such as low power, low area footprint, low design and verification complexity, and low performance impact). We show how digital designs can be hardened against an array of known and possible attacks, including but not limited to untrusted insiders and a global conspiracy of attackers, through a defense-in-depth approach. The result of applying our methods is hardware design that can be trusted, even though the resources, people, and components used to produce it are untrusted.

Adversarial Model

Organizations that aim to earn a profit or even just stay in business are unlikely to sabotage their own designs or sell damaged products intentionally. More likely is one or more “bad apples” in an organization (or external forces) attempting to subvert the design. We thus consider four major sources of insecurity concerning the design process: any third-party vendor from which IP is acquired; any local or insider designers participating in the design process; the validation and verification team; and malicious CAD tools or libraries.

All four are relevant, and we consider the possibility that one or all of them could be malicious. The most obvious threats come from third-party vendors, because modern designs can contain tens or hundreds of distinct IP components, many of which may be sourced from small groups of designers. These third-party components may meet functional specifications yet do more than they are supposed to. Another significant threat comes from rogue insider designers (such as disgruntled employees and implants from a spy agency). We also consider the possibility of a larger conspiracy of malicious designers, including members of the validation and verification team within an organization and third-party IP vendors outside it.

As in many security situations, hardware security can be viewed as a game between attacker and defender in which whoever plays the last hand usually wins. We therefore assume a small number of designers (likely only one) are trustworthy and can correctly apply our defensive techniques, and that untrusted personnel cannot modify the design after those techniques have been applied. To keep the defense effort small, we aim to produce simple, low-overhead designs.

Backdoor models. We focus our efforts on digital backdoors, or backdoors targeting digital designs and using well-defined digital signals to turn on and perform malicious actions.a The signal that activates a backdoor is called the "trigger," and the resulting action is the "payload." Digitally triggered backdoors, like digital implementations generally, give a measure of certainty to the attacker and are preferable from an attacker's perspective to analog backdoors. To keep a backdoor from turning on during design validation, the attacker must use a very obscure trigger. On the other hand, a triggerless backdoor that is "always on" would be too obvious to ignore; even if one member of the validation team is malicious, others on the team would notice explicitly bad outputs. Using an obscure trigger also ensures the design can be compromised only by the design saboteur. Finally, to be stealthy, backdoors must be small, well hidden, and ordinary-looking compared to the rest of the design.

Digital backdoors are simply digital finite state machines and hence can change state in only two ways: over time or with input data. There are thus three types of digital triggers: timing triggers, or "ticking time bombs," that induce bad actions after a certain amount of time; data triggers, or "single-shot cheat codes," supplied as input data; and hybrid data/time triggers, or "sequence cheat codes," delivered as a series of data inputs over multiple clock cycles. Time triggers are convenient because they can be implemented without external inputs; for instance, to trigger a backdoor after approximately 20 minutes of wall-clock operation in a device that runs at 1GHz, the attacker can simply implement a 40-bit down counter. Such triggers could be valuable for attacking air-gapped networks. Cheat codes, on the other hand, require the ability to supply inputs but give the attacker exclusive control of the sabotaged device through a strong trigger (such as a 128-bit data value).
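As a quick sanity check on those numbers (a sketch in Python; the 1GHz clock is the one assumed in the example above):

```python
cycles = 2 ** 40                 # full range of a 40-bit down counter
clock_hz = 1e9                   # the 1GHz device from the example above
print(cycles / clock_hz / 60)    # ~18.3 minutes, i.e., roughly 20 minutes
```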

Digital payloads can be of two types: “emitters” or “corrupters.” Emitter payloads perform extra actions or send out extra information, beyond what is specified. Corrupters change existing messages to alter data that plays a specific role. Emitter payloads are easier for an attacker to implement, as corrupter payloads are likely to cause other aspects of the system to fail in unexpected ways without careful engineering.

Finally, as in Figure 2, a digital backdoor takes the logical form of a good circuit and a malicious circuit in the design. The outputs of both feed into a circuit semantically equivalent to a multiplexer that selects the output of the malicious circuit when the triggering circuit is activated (see the sidebar "Three Promising but Infeasible Approaches").

Defense In Depth

Given the nature of threats against hardware design, we consider "defense in depth" the appropriate and necessary course of action for practical, real-world security. The philosophy is simple: Assume any security method can be defeated, but only at great cost and with great difficulty. Then use multiple, independent security methods to make that cost and difficulty as daunting as possible. As an example, if two independent systems each detect an attack with 90% probability, the chance both miss it is 0.1 × 0.1 = 1%, so there is a 99% chance at least one will detect the attack.

Here, we present three distinct and independent security systems for protecting hardware designs. They act "in series," with later methods coming into play only if an earlier one has failed. At a high level, our secure hardware flow works like this: The first method (which is static) checks that the design being used is backdoor free. If it is somehow subverted, the second system (which operates at runtime) alters inputs to hardware circuits within a chip to ensure no backdoor can turn on. If this, too, fails, the final system (also a runtime system) uses on-chip monitoring to detect that a backdoor has turned on, at which point the backdoor can be disabled or, in the worst case, the system shut down. So even if several of our axioms are violated unexpectedly, the worst attack that can be perpetrated is denial of service.

Static Analysis

The first of the three stages of our defense-in-depth approach to hardware security is static analysis, which occurs during design and code entry, before the hardware is manufactured. The design being analyzed can be developed internally by an organization's own design team or acquired from a third party. For this first line of defense, we developed the first algorithm for performing static analysis to certify designs as backdoor free and built a corresponding tool called Functional Analysis for Nearly unused Circuit Identification, or FANCI.25

Key insight. Recall from the earlier section on backdoor models that a backdoor is activated when rare inputs called triggers are processed by the hardware. Since these inputs are rare, the trigger-processing circuit rarely influences the output of the hardware circuit; it switches the output of the circuit from the good subcircuit to the malicious subcircuit only when a trigger is received. If security evaluators can identify subportions of a hardware circuit that rarely influence output, they can narrow down the set of subcircuits that are potentially malicious, stealthy circuits.

Boolean functional analysis helps security evaluators identify subcircuits that rarely influence the outputs. The idea is to quantitatively measure the degree of influence one wire in a circuit has on other wires using a new metric called “control value.” The control value of an input wire w1 on a wire w2 quantifies how much the truth table representing the computation of w2 is influenced by the column corresponding to w1. FANCI detects stealthy subcircuits by finding wires with anomalous, or low, control values compared to other wires in the same design.

FANCI. The algorithm to compute the control value of w1 on w2 is presented here as Algorithm 1. The control value is a fraction between zero and one quantifying what portion of the rows in the truth table for w2 is directly influenced by w1. In step 3 of the algorithm, we do not actually construct the exponentially large truth table; instead, we construct the corresponding Boolean function. Because truth tables grow exponentially, FANCI scales by approximating control values from a constant-size subset of the truth table's rows.

As an example, suppose we have a wire w2 that depends on an input wire w1, and let w2 have n other dependencies. From the set of 2^n possible values for those n wires, we choose a constant number of samples, say, 10,000. For each of these 10,000 cases, we toggle w1 to zero and then to one and see whether the value of w2 changes. If w2 changes m times, then the approximate control value of w1 on w2 is m ÷ 10,000. When we have computed all control values for a given wire (an output of some intermediate circuit), we have a vector of floating-point values we can combine to make a judgment about stealth. We find that simple aggregating metrics (such as the arithmetic mean and median) are effective for identifying stealthy wires; other metrics may prove interesting in the future. The complete algorithm used by FANCI is summarized in Algorithm 2.
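The following minimal Python sketch illustrates this sampling procedure; it is not the FANCI implementation itself, and the toy circuit and 10,000-sample budget are our own illustrative assumptions:

```python
import random

def approx_control_value(f, n_inputs, w1, samples=10_000):
    """Estimate the control value of input index w1 on the output of f.

    Rather than enumerating all 2**(n_inputs - 1) truth-table rows,
    sample a constant number of rows, toggle w1, and count how often
    the output flips -- the approximation that lets FANCI scale.
    """
    flips = 0
    for _ in range(samples):
        row = [random.randint(0, 1) for _ in range(n_inputs)]
        row[w1] = 0
        out0 = f(row)
        row[w1] = 1
        out1 = f(row)
        flips += out0 != out1
    return flips / samples

# Toy stealthy circuit: output = data XOR (16 trigger bits == 0xBEEF).
# Each trigger bit flips the output only when the other 15 bits match,
# so its control value is ~2**-15 -- anomalously low vs. ordinary logic.
def stealthy(bits):
    data, trig = bits[0], bits[1:]
    t = sum(b << i for i, b in enumerate(trig))
    return data ^ (t == 0xBEEF)

print(approx_control_value(stealthy, 17, w1=0))   # data wire: ~1.0
print(approx_control_value(stealthy, 17, w1=5))   # trigger bit: ~0.0
```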

Evaluation. To evaluate FANCI's effectiveness, we use benchmarks from the TrustHub suite, a popular benchmark suite for work on hardware backdoors,19 and evaluate multiple stealth metrics built atop the core notion of control values. The most important result is that we did not encounter any false negatives. For each benchmark and each of the heuristics, we discovered at least one suspicious wire from each backdoor, enough for us to manually identify the functionality of the hidden backdoors. We also observed that different metrics can highlight different parts of a backdoor: the mean and median tend to highlight backdoor payload wires, while trivialityb more often highlights backdoor trigger wires.

Figure 3 includes the false-positive results for the 18 TrustHub benchmarks we analyzed. In addition to achieving low false-positive rates, we see surprisingly good scalability: as designs get larger, FANCI yields lower false-positive rates, because designs tend to grow through wider data paths rather than fundamentally more complex logic, and when this is the case FANCI's job gets easier. Beyond the TrustHub benchmarks, we also ran FANCI on a benign academic out-of-order microprocessor core.6 We observed that the runtime was approximately linear in the number of wires in the design, and all experiments completed within three days with an unoptimized serial implementation.25

We also performed a red team/blue team experiment in which several teams from the U.S. and Europe tried to defeat FANCI. FANCI performed well, catching all stealthy attacks and even a few non-stealthy (frequently- or always-on) attacks.14 While FANCI is not normally expected to detect frequently-on backdoors, even these sometimes use somewhat stealthy or abnormal logic for trigger recognition.

Benefits. FANCI is independent of inputs that might be supplied during validation for testing. Even with high-quality test suites, most tests fail to exercise all internal circuits, as input coverage and code coverage are not equivalent to internal state or value coverage. Such limited coverage provides attackers an obvious way to hide their backdoors. By operating statically, FANCI is able to observe the behavior of every gate in the design. Due to the way FANCI works, the algorithm is guaranteed to flag any stealthy combinational logic that bypasses design validation. FANCI also works against sequential (state-machine-based) backdoors, as state-based backdoors generally require combinational logic to recognize the trigger state, and FANCI is (in our experience) able to catch that combinational logic.

Silencing Triggers

If an attacker evades functional analysis through FANCI, then a backdoor exists within the design and will be fabricated. Our next defensive strategy is to include security circuits within the design itself that check security properties at runtime. We add these circuits after the design is completed and validated, so even a malicious insider on the design or validation teams cannot compromise them.

Key insight. A stealthy hardware backdoor always contains a trigger, the unique signal that turns the module from benign mode to malicious mode and enables the backdoor payload. The idea behind trigger obfuscation is to scramble signals at the input interfaces of hardware modules to prevent backdoor triggers from going off (see Figure 4). Since payloads depend on triggers, if the triggers do not arrive, the malicious payloads cannot be delivered. We discuss in the following sections how the three types of triggers—ticking time bomb, single-shot, and sequence cheat codes—can be prevented from turning on through this approach.

Power resets. Power resets protect untrusted units against ticking time bombs. A ticking time bomb, as mentioned earlier, goes off after a certain amount of time, say, 2^40 clock cycles of operation; an attacker can implement it with a 40-bit down counter. The time bomb will not go off during design validation because validation is carried out over much shorter time scales, due to the slowness of validation testing and time-to-market constraints. Validation suites typically run in the kHz range, compared to production hardware that runs in the GHz range, so validation tests run for a small number of cycles, say, 10 million per test. Given these parameters and constraints, a malicious engineer on the design team with insider information on the duration of the validation tests can easily make the backdoor trigger fire well after the validation period.

Our solution to mute this trigger is to prevent circuits from knowing that a certain amount of time has passed since start up. We ensure this by frequently powering each unit off and on (or resetting it), causing all microarchitectural data to be lost. The frequency of power resets is determined by the validation epoch. We know and trust the circuit under test to be free of timing backdoors for the duration of the validation epoch (otherwise it would not pass validation), so resetting power before the end of each validation epoch ensures all state in the unit is wiped clean and the unit retains no "memory" of previous computations; for instance, if the attacker is using a counter to trigger the backdoor, the counter is reset to its initial value and never gets a chance to reach the trigger value.

If the defense constantly resets power to the unit, how can the computation make forward progress? Many common hardware modules behave "transactionally," meaning the module applies a function to its inputs and does not need to maintain long-term state. However, some hardware requires continuity of state across reset epochs. To ensure forward progress, we identify all architectural state necessary for forward progress and store it outside the unit before the reset. In practice, the resets can be realized with "power gating" circuits, a technique for limiting power consumption in which units are powered down during periods of inactivity.

With this approach, how can a designer be assured the attacker does not exfiltrate the time counter (or some proxy that encodes the passage of time) through the state necessary for forward progress? Such deliberate exfiltration of time is not possible due to our axiom that the unit does not produce incorrect results for the duration of the validation period; if time information were illegally exfiltrated during validation, just one good validation engineer on the entire team would be sufficient to notice and detect the violation. A final move by the attacker could be to store the trigger in nonvolatile state. This is a clever move but somewhat impractical, as the design process for creating nonvolatile hardware differs so much from that for regular hardware that it would set off many alarms at various stages of the design process. Storing the trigger in nonvolatile state thus poses little risk to most hardware units, except those that already require nonvolatile memory by design (such as trusted platform modules). Solutions to this problem are covered by Waksman and Sethumadhavan.24
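The following scaled-down simulation sketches why resets silence the counter while preserving forward progress; the epoch and trigger constants are stand-ins we chose for illustration, not the real 2^40-cycle bomb or multimillion-cycle validation epoch:

```python
TRIGGER = 5_000      # stand-in for the attacker's counter limit
EPOCH = 1_000        # stand-in for the validated epoch length

class Unit:
    def __init__(self, arch_state=0):
        self.arch_state = arch_state   # preserved across resets
        self.ticks = 0                 # attacker's hidden counter

    def step(self):
        self.arch_state += 1           # the unit's real work
        self.ticks += 1
        assert self.ticks < TRIGGER, "backdoor fired!"

unit = Unit()
for _ in range(20):                    # 20 epochs of useful work
    for _ in range(EPOCH):
        unit.step()
    unit = Unit(unit.arch_state)       # power reset: hidden state wiped,
                                       # architectural state restored
print(unit.arch_state)                 # 20000 -- forward progress intact
```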


Single-shot cheat code. The second category of backdoor trigger is the single-shot cheat code that is delivered as a data value through one of the hardware input interfaces. Our solution here is to scramble the inputs supplied to a hardware module. Scrambling the inputs silences backdoors because the attacker cannot anticipate the trigger. However, hardware designers must figure out how to perform computation in the unit without unscrambling the input value.

Our solution draws on domain-specific applications of homomorphic encryption, or homomorphic obfuscation, since there is no need for cryptographically strong encryption in this setting. The following paragraphs briefly introduce homomorphic functions and explain our solution with an example.

An operation f is considered homomorphic with respect to another operation g if f(g(x), g(y)) = g(f(x, y)). One example is f as multiplication and g as the squaring function, as in

f(g(x), g(y)) = x^2 · y^2 = (x · y)^2 = g(f(x, y))

If the functionality required of a hardware module is to compute the square of a value, the security engineer can obfuscate the input x to that unit by multiplying it by a pseudo-random value y. The squaring unit then computes the value (xy)^2, and to decrypt, the security engineer has only to divide by the constant y^2 to get back x^2. Because the engineer permuted the input space in a homomorphic (structure-preserving) way, the usefulness of the squaring module is not undermined. More generally, if the obfuscation function is homomorphic over the computational function, then the computation can be done on the data while it is encrypted, allowing the inputs to be permuted to silence triggers while still performing useful computation. This is one specific application of homomorphic functions, which have been used more broadly in software settings and in the theory literature.7 If engineers can create low-overhead homomorphic equivalents of typical functional units, protection against hardware backdoors can be improved.
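A minimal sketch of the scheme, with function names of our own invention (a real unit would operate on fixed-width values, which we ignore here):

```python
import random

def untrusted_square(v):
    # Possibly backdoored third-party unit; it sees only scrambled inputs.
    return v * v

def trusted_square(x):
    y = random.getrandbits(64) | 1    # fresh pseudo-random mask, nonzero
    scrambled = x * y                 # obfuscate before the untrusted unit
    return untrusted_square(scrambled) // (y * y)   # (xy)^2 / y^2 = x^2

assert trusted_square(12345) == 12345 ** 2
```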

Also worth noting is that in many hardware units the work performed is an identity operation (moving or storing data without changing it, as in routers and caches), in which case essentially any invertible scrambling can be used for obfuscation. The engineer can then use a bitwise XOR with a changing random number (a one-time pad), a technique that adds only one gate to the path delay.
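For such identity datapaths, the one-time-pad scramble reduces to the following sketch, where untrusted_buffer stands in for any unit that moves data unchanged:

```python
import secrets

def untrusted_buffer(word):
    return word                  # identity datapath (router/cache)

pad = secrets.randbits(64)       # fresh one-time pad per transfer
word = 0xDEADBEEF
out = untrusted_buffer(word ^ pad) ^ pad   # scramble in, descramble out
assert out == word               # the unit never saw the raw word
```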

Sequence cheat code. The third and final type of backdoor trigger is the sequence cheat code, which we protect against through a method called "sequence breaking." Here, the trigger is received as several pieces of data over multiple cycles. Our solution is to benignly reorder sequences of inputs. If the attacker cannot predetermine the order in which transactions are supplied to the hardware module, the attacker cannot trigger a backdoor with any significant probability. The main constraint is ensuring that changing the order of transactions does not change the functionality of the hardware unit.

As a realistic example of sequence breaking, consider a memory controller with a backdoor triggered by a sequence of 50 loads and stores that must arrive in a pre-chosen order. By changing the order of the 50 loads, we make it unlikely the correct trigger will ever be supplied. The intuition for why the attack can be made to fail with effectively 100% probability is that the trigger ends up being one pseudo-random choice out of the set of all possible permutations of memory instructions, a factorially large set.

It may not be possible to reorder input transactions to a unit in all cases; for example, a malicious user may be able to contrive a sequence of inputs to a memory controller that is not amenable to reordering without violating correctness requirements (such as consistency or coherence). For such cases, we propose adding random dummy events; inserting non-binding loads into the memory controller, for example, is sufficient to break any predetermined trigger order.
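A sketch of sequence breaking follows; the transaction format and mem_controller callback are hypothetical, and real reordering must of course respect the coherence and consistency constraints noted above:

```python
import math
import random

def issue(transactions, mem_controller, n_dummies=8):
    """Benignly reorder transactions and pad with non-binding dummies."""
    padded = list(transactions)
    for _ in range(n_dummies):
        padded.append(("load", random.randrange(2**32), "dummy"))
    random.shuffle(padded)       # break any attacker-chosen ordering
    for t in padded:
        mem_controller(t)

# A pre-chosen 50-transaction trigger survives shuffling with
# probability about 1/50!:
print(f"1/50! = {1 / math.factorial(50):.2e}")   # ~3.29e-65
```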

Evaluation. To evaluate the overheads of our trigger-silencing techniques, we prototyped a microcontroller22 that included all our security features. By applying the appropriate solution to each component in the design, we were able to protect every module except the adder, which we protected instead with two-way redundancy. The silicon-area overhead of the proposed solution was less than 8%. Figure 5 outlines the architecture of the microcontroller with our silencing modifications.

Security guarantees. The security of cheat-code-based techniques is probabilistic but effectively offers near-perfect coverage; the probability the attack fails scales as roughly 1 − 1/2^n, where n is the width of the trigger signal. In practice, n is large; most units that cannot be exhaustively validated to check for backdoors have 64-bit or larger interfaces, so the probability of a successful attack is negligible. One way the proposed solutions can fail is if the random number generator used for the scrambling remaps a trigger to itself, which is extremely unlikely.

The strength of many of our techniques depends on having a trusted source of random numbers. However, our approach does not need true cryptographically strong random-number generators and can work even with weakened pseudo-random number generators.5 This is due mainly to how the silencing-triggers approach uses the random numbers generated in hardware. One way for an attacker to defeat the scrambling schemes is to learn the constants used for pseudo-random number generation by choosing plaintext inputs; for instance, forcing the input to zero can reveal the sequence of outputs produced by the random number generator when the scrambling function is a simple XOR. But to generate the trigger despite scrambling, the malicious hardware unit needs to reverse engineer the random-number generator within the malicious unit itself. This requirement imposes severe constraints on the attacker and can severely reduce the backdoor's stealth. We built this attack, showing the reverse-engineering circuit for pseudo-random number generators can be quite large (22x silicon-area overhead); such overheads alone can make the attack obvious to hardware designers and validators alike.

Payload Detection

If a backdoor is somehow designed to bypass both FANCI and silencing triggers, and manages to turn itself on at a maliciously intended time, then we need to be able to detect the attack payload and either recover or fail gracefully. To make this happen, we use a dynamic on-chip monitoring system that performs backdoor payload detection by continuously checking invariants.

Key insight. Our technique for malicious payload detection is based on two key observations. First, in hardware systems, events between hardware units are largely deterministic, and there exist invariants that are violated only when backdoors achieve their payloads. Invariants between hardware units may be simple checks (such as: the number of instructions executed by the functional units should not be larger than the number of instructions processed by the instruction decoder). Second, if we, as security engineers, make the weak assumption that the majority of modules are not malicious, we can build a secure invariant-monitoring infrastructure even if we do not know which modules are malicious. For our initial defensive implementations, we assume exactly one module (or one design team) is malicious, but there are natural extensions to n malicious modules for n > 1.23

The first system we built is called TRUSTNET. The basic idea (see Figure 6) is to have self-monitoring triangles formed out of any set of three hardware modules. In a typical hardware design, any module is connected to several others; for example, in a microprocessor pipeline, a decoder is connected to a fetch and an execution unit, among others. There is usually some unit that sees data before it (in this case the fetch unit) and some unit that sees data after it (in this case the execution unit). We call them the “predictor” and the “reactor,” respectively. By adding a verification step between the predictor and the reactor, we can confirm the untrusted (monitored) unit is doing reasonable work; for example, the fetch unit and the execution unit can communicate to ensure the correct number of instructions is being decoded. This simple invariant (instructions in = instructions out) prevents all decoder-based code-injection attacks.
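The triangle can be sketched in a few lines of Python (the names are ours; real monitors are a handful of gates comparing one-bit event tokens each cycle):

```python
# Sketch of a TRUSTNET-style triangle: the fetch unit (predictor) and
# execution unit (reactor) each report events; the tiny monitor alarms
# if the decoder (the monitored unit) injects or drops instructions,
# enforcing "instructions in == instructions out."
class Monitor:
    def __init__(self):
        self.predicted = 0
        self.reacted = 0

    def predictor_event(self):    # fetch: "I sent one instruction"
        self.predicted += 1

    def reactor_event(self):      # execute: "I received one instruction"
        self.reacted += 1

    def check(self):
        if self.predicted != self.reacted:
            raise RuntimeError("TRUSTNET alarm: decoder miscounted")

m = Monitor()
for _ in range(4):
    m.predictor_event()
    m.reactor_event()
m.check()   # passes; a code-injecting decoder would trip the alarm
```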

We also designed an extension to TRUSTNET called DATAWATCH. Whereas TRUSTNET measures simple invariants that can be checked by counting (such as numbers of instructions or events), DATAWATCH performs simple data-integrity checks; for example, it can check that memory events have the correct types or that the correct opcode types are being supplied to pipeline units. DATAWATCH is inherently more heavyweight than TRUSTNET, since TRUSTNET uses one-bit signals while DATAWATCH uses multi-bit signals. Even so, DATAWATCH remains efficient for modern hardware designs.
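A DATAWATCH-style check extends the same idea to a few bits of metadata; the opcode-class field here is an assumed example:

```python
# Compare a few bits of metadata across an interface instead of a
# one-bit count.
def datawatch_check(decoded, executed):
    for sent, received in zip(decoded, executed):
        if sent["opclass"] != received["opclass"]:
            raise RuntimeError("DATAWATCH alarm: opcode class mismatch")

datawatch_check(
    [{"opclass": "load"}, {"opclass": "alu"}],
    [{"opclass": "load"}, {"opclass": "alu"}],
)   # passes; a corrupter payload rewriting opcodes would trip it
```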

There are several techniques for handling alarms from TRUSTNET and DATAWATCH. One response could be to shut down the entire system, turning a confidentiality or integrity attack into an availability attack. In highly secure domains, this response may be desirable to guarantee no exfiltration of sensitive data. If continuous operation is desired, then some form of N-way redundancy is an option. If designs from multiple vendors are available, then when an alarm sounds, computation can be migrated onto a machine from a different vendor and resumed from the last known good state; for instance, if the alarm sounded on an Intel x86 chip, the computation could be moved to an AMD x86 chip and resumed. The proposed technique can also be used for forensic examination: when an alarm goes off, the data in flight in the hardware units can be flagged as a cheat code and logged for later analysis.

Evaluation. We demonstrated our payload-detection approach on portions of the Sun Microsystems OpenSPARC T2 microarchitecture, using the HDL implementation to systematically determine the number of on-chip units that can be covered by TRUSTNET and DATAWATCH. To measure this attack surface, we observe that a hardware unit is vulnerable to backdoors only insofar as its interfaces are threatened; the processing inside the unit can be checked for correctness simply by checking its inputs and outputs. The interfaces thus present the points of vulnerability, and the efficacy of our solution is determined by whether these interfaces are protected from attacks through TRUSTNET and DATAWATCH. Figure 7 outlines the units on a chip that can be covered partially or fully through the invariant specification technique.c


Security guarantees. Consider a monitor set up to watch some unit X. Four actors are in play: the predictor of X (P); the reactor to X (R); X itself; and the monitor of X, which we call M. Under a lone-wolf-attacker model, the attacker gets to corrupt one (and only one) of these actors. To compromise confidentiality or integrity, the attacker would have to corrupt X itself; in that case, P, R, and M are all untampered, and the attack can be detected. If the attacker instead tampers with P or R, the two would disagree, and the attack would again be detected. The attacker can choose to attack M, but since monitors consist of only one or a few logic gates (such as an XOR gate), they can be formally checked not to contain backdoors. However, the proposed solution will not work against a conspiracy among designers of multiple units or when a single malicious designer designs multiple units responsible for monitoring each other; organizational security is necessary to prevent such attacks.

Related Research Chronology

The earliest discussion of the dangers of hardware backdoors was by Molho et al.13 in 1970. The earliest solution we could find was described in thesis work by M.K. Joseph11 at the University of California, Los Angeles, in 1988, suggesting N-version redundancy. Defenses for hardware-based attacks began in earnest around 2005, following a report by the Defense Science Board warning of potential counterfeit chips.21 In 2008, King et al.12 demonstrated modern processors are vulnerable to relatively simple hardware backdoors. Subsequent work in 2010 taxonomized the types and properties of possible backdoors, focusing primarily on digital hardware.20,23

The first solution for integrating untrusted third-party IP was presented at the 28th IEEE Symposium on Security and Privacy, where Huffmire et al.10 proposed isolation mechanisms and communication whitelists to prevent software IP cores with different levels of security from accessing each other. This research was performed in the context of reconfigurable hardware, or FPGAs. The first work on design-level backdoors in contemporary hardware (ASICs) was described in two papers presented at the 31st IEEE Symposium on Security and Privacy in 2010. One, by Waksman and Sethumadhavan,23 described a hardware-only solution for detecting payloads and is described in this article. Another, by Hicks et al.,8 proposed a hybrid hardware-software solution, replacing hardware circuits suspected to contain backdoors with circuits that report exceptions. If and when a program reaches these exception circuits—indicating a program may be trying to exercise malicious hardware—program control is transferred to a software emulator to emulate the offending instruction. However, the software emulator runs on the same hardware as the original program that caused the exception, thus introducing a chance of deadlocks and potentially opening up a new attack surface due to emulation. Some interesting counterattacks were performed in follow-up work by Sturton et al.18 in 2012. Researchers have also designed hardware trojans through competitions15 and developed tools to identify regions to insert trojans.16

In parallel with the work on front-end backdoors, a significant amount of work has been done on foundry-side security, addressing the problem of how hardware manufacturing can be secured if a trustworthy design model is provided. The first such solution was devised by IBM researchers Agrawal et al.2 in 2007. The idea was to characterize the expected power draw of the chip using a "golden" model, then use anomalous power readings to identify malicious chips. This line of work generated a significant amount of follow-on effort to improve the efficacy of the technique (such as extending it to smaller technology nodes, where background noise is significant) and to discover alternate "fingerprints" (such as circuit path delays and thermal outputs).1,3,9,26


Since security is a full-system property, both the front end and back end of the design process must be secured. As such, most of these techniques complement the work covered here.

Open Problems

Hardware-design security is a relatively new area of study, with many important open problems and challenges, which we organize into four categories:

Better homomorphic functions. Homomorphic functions present an interesting approach for securing hardware designs. By encrypting data while it is being operated on, they prevent foundries and physical design tools from compromising hardware. They can also enhance resilience against front-end design attacks, as described in the section on silencing triggers. At the same time, homomorphic functions tend to be expensive, especially when designed to be more generally or broadly applicable. Coming up with homomorphic implementations of additional functional units and processor pipelines would be valuable.

Formal models, proofs, verification. There is no formal model for describing hardware backdoors today and thus no rigorous description of security guarantees provided by various security techniques or their applicability and limitations. Developing such models poses significant challenges for hardware designers because it requires description of not only complex hardware but also the abilities of agents (such as validation engineers) involved in the design process. Finding the right, meaningful abstractions to describe security techniques would be extremely valuable. Further, many techniques proposed for hardware security are likely composable; proofs to this effect would likewise be valuable.

Foundry security. Given the design-level defenses presented here, the notion of a golden design becomes an achievable goal. Such a completely trusted design is assumed as the basis for most work in foundry-level security. Foundry-level security is vital, because a malicious foundry can ignore everything in the design and produce a malicious chip. However, the foundry-level and design-level security literatures are today largely disjoint. A unified approach, covering both foundry and design security, to create a cohesive notion of hardware security would be valuable.

Analog, nondeterministic, ephemeral backdoors. As defenders, we constantly think about how we would create new backdoors to bypass our own techniques. The defenses discussed here assume digital hardware defined in an HDL. However, one can imagine attacks against analog circuits, ephemeral attacks that depend on physical parameters (such as environmental temperature), and nondeterministic triggers. Understanding the feasibility of such attacks, and defending against them, would require new ways of thinking about hardware backdoors and would be valuable in advancing the field of hardware security.

Conclusion

Any system needs a degree of trust. Security without trust is impossible, and hardware today is trusted without good reason. As the cliché goes, security is an arms race. Attackers will exploit hardware (if they are not doing so already) to break security. Strong solutions are needed to build trustworthy hardware from untrusted components.

Here, we have presented a full view of the modern hardware supply chain, explaining how it is vulnerable to attacks. Taking into account all the agents and processes involved in producing a hardware design, we have proposed a defense-in-depth approach for making hardware trustworthy while minimizing the trust in the agents and organizations that supply components.

Using all our proposed methods in concert allows for well-defined notions of trust and can guarantee protection against large and critical classes of hardware-oriented attacks. For computer security to advance, all layers of security must be built atop a trustworthy foundation, starting with methods for strengthening hardware designs against attacks.

Acknowledgments

This work is supported by Defense Advanced Research Projects Agency grant FA 87501020253 through the Clean Slate Security program and National Science Foundation CCF/SaTC Programs grant 1054844 and a fellowship from the Alfred P. Sloan Foundation. Opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. government or commercial entities. Simha Sethumadhavan has a significant financial interest in Chip Scan LLC, New York. We also wish to thank the reviewers for their valuable feedback.

Figures

F1 Figure 1. Front end and back end of the modern supply chain for digital hardware design.

F2 Figure 2. Taxonomy and illustration of digital backdoors.23

F3 Figure 3. False-positive rates for the four different metrics and for TrustHub benchmarks; the RS232 benchmark, the smallest, has about 8% false positives, and the others have much lower rates (less than 1%).

F4 Figure 4. Overview of our three methods for trigger obfuscation.24

F5 Figure 5. Microarchitectural overview of the TμC1 microcontroller.22

F6 Figure 6. Overview of the TRUSTNET and DATAWATCH monitoring schemes.23

F7 Figure 7. Units covered by TRUSTNET and DATAWATCH on an OpenSPARC microprocessor (approximate illustration).23

UF1 Algorithm 1. Compute control value.

UF2 Algorithm 2. How FANCI flags suspicious wires in a design.

References

    1. Abdel-Hamid, A.T., Tahar, S., and Aboulhamid, E.M. IP watermarking techniques: Survey and comparison. In Proceedings of the IEEE International Workshop on System-on-Chip for Real-Time Applications (Calgary, Alberta, Canada, June 30–July 2). IEEE Press, 2003.

    2. Agrawal, D., Baktir, S., Karakoyunlu, D., Rohatgi, P., and Sunar, B. Trojan detection using IC Fingerprinting. In Proceedings of the IEEE Symposium on Security and Privacy (Oakland, CA, May 20–23). IEEE Press, 2007, 296–310.

    3. Banga, M., Chandrasekar, M., Fang, L., and Hsiao, M.S. Guided test generation for isolation and detection of embedded trojans in ICs. In Proceedings of the 18th ACM Great Lakes Symposium on VLSI (Pittsburgh, PA, May 20–22). ACM Press, New York, 2008, 363–366.

    4. BBC News. U.S. Government restricts China PCs (May 19, 2006); http://news.bbc.co.uk/2/hi/4997288.stm

    5. Becker, G.T., Regazzoni, F., Paar, C., and Burleson, W.P. Stealthy dopant-level hardware trojans. In Proceedings of the 15th International Conference on Cryptographic Hardware and Embedded Systems. Springer-Verlag, Berlin, Heidelberg, Germany, 2013, 197–214.

    6. Choudhary, N., Wadhavkar, S., Shah, T., Mayukh, H., Gandhi, J., Dwiel, B., Navada, S., Najaf-Abadi, H., and Rotenberg, E. FabScalar: Composing synthesizable RTL designs of arbitrary cores within a canonical superscalar template. In Proceedings of the 38th Annual International Symposium on Computer Architecture (San Jose, CA, June 4–8). ACM Press, New York, 2011, 11–22.

    7. Gentry, C. Computing arbitrary functions of encrypted data. Commun. ACM 53, 3 (Mar. 2010), 97–105.

    8. Hicks, M., King, S.T., Martin, M.M.K., and Smith, J.M. Overcoming an untrusted computing base: Detecting and removing malicious hardware automatically. In Proceedings of the 31st IEEE Symposium on Security and Privacy (Oakland, CA, May 16–19). IEEE Computer Society Press, 2010, 159–172.

    9. Hu, K., Nowroz, A.N., Reda, S., and Koushanfar, F. High-sensitivity hardware trojan detection using multimodal characterization. In Proceedings of Design Automation & Test in Europe (Grenoble, France). IEEE Press, 2013, 1271–1276.

    10. Huffmire, T., Brotherton, B., Wang, G., Sherwood, T., Kastner, R., Levin, T., Nguyen, T., and Irvine, C. Moats and drawbridges: An isolation primitive for reconfigurable hardware-based systems. In Proceedings of the IEEE Symposium on Security and Privacy (Oakland, CA, May 20–23). IEEE Press, 2007, 281–295.

    11. Joseph, M.K. Architectural Issues in Fault-Tolerant, Secure Computing Systems. Ph.D. Thesis, University of California, Los Angeles, 1988; http://ftp.cs.ucla.edu/tech-report/198_-reports/880047.pdf

    12. King, S.T., Tucek, J., Cozzie, A., Grier, C., Jiang, W., and Zhou, Y. Designing and implementing malicious hardware. In Proceedings of the First Usenix Workshop on Large-Scale Exploits and Emergent Threats (San Francisco, CA, Apr. 15). USENIX Association, Berkeley, CA, 2008, 5:1–5:8.

    13. Molho, L.M. Hardware aspects of secure computing. In Proceedings of the Spring Joint Computer Conference (Atlantic City, NJ, May 5–7). ACM Press, New York, 1970, 135–141.

    14. NYU Cyber Security Awareness Week. The 2013 Embedded Systems Challenge; https://csaw.engineering.nyu.edu/

    15. Rajendran, J., Jyothi, V., and Karri, R. Blue team-red team approach to hardware trust assessment. In Proceedings of the IEEE 29th International Conference on Computer Design (Amherst, MA, Oct. 9–12). IEEE, 2011, 285–288.

    16. Salmani, H. and Tehranipoor, M. Analyzing circuit vulnerability to hardware trojan insertion at the behavioral level. In Proceedings of the IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (New York, Oct. 2–4). IEEE Press, 2013, 190–195.

    17. Simonite, T. NSA's own hardware backdoors may still be a problem from hell. MIT Technology Review (Oct. 2013); http://www.technologyreview.com/news/519661/nsas-own-hardware-backdoors-may-still-be-a-problem-from-hell/

    18. Sturton, C., Hicks, M., Wagner, D., and King, S.T. Defeating UCI: Building stealthy and malicious hardware. In Proceedings of the 2011 IEEE Symposium on Security and Privacy (Oakland, CA, May 22–25). IEEE Computer Society, 2011, 64–77.

    19. Tehranipoor, M., Karri, R., Koushanfar, F., and Potkonjak, M. TrustHub; https://www.trust-hub.org/

    20. Tehranipoor, M. and Koushanfar, F. A survey of hardware trojan taxonomy and detection. IEEE Design Test of Computers 27, 1 (Jan.–Feb. 2010), 10–25.

    21. U.S. Department of Defense. High Performance Microchip Supply (Feb. 2005); http://www.acq.osd.mil/dsb/reports/ADA435563.pdf

    22. Waksman, A., Eum, J., and Sethumadhavan, S. Practical, lightweight secure inclusion of third-party intellectual property. IEEE Design and Test Magazine 30, 2 (2013), 8–16.

    23. Waksman, A. and Sethumadhavan, S. Tamper-evident microprocessors. In Proceedings of the 31st IEEE Symposium on Security and Privacy (Oakland, CA, May 16–19). IEEE Press, 2010, 173–188.

    24. Waksman, A. and Sethumadhavan, S. Silencing hardware backdoors. In Proceedings of the 2011 IEEE Symposium on Security and Privacy (Oakland, CA, May 22–25). IEEE Press, 2011, 49–63.

    25. Waksman, A., Suozzo, M., and Sethumadhavan, S. FANCI: Identification of stealthy malicious logic using Boolean functional analysis. In Proceedings of the 20th ACM Conference on Computer and Communications Security (Berlin, Germany, Nov. 4–8). ACM Press, New York, 2013, 697–708.

    26. Wei, S., Li, K., Koushanfar, F., and Potkonjak, M. Provably complete hardware trojan detection using test point insertion. In Proceedings of the International Conference on Computer-Aided Design (San Jose, CA, Nov. 5–8). ACM Press, New York, 2012, 569–576.

Footnotes

    a. We define backdoor as an undisclosed feature that compromises the confidentiality, integrity, and/or availability of the expected design and/or implementation.

    b. Triviality is an alternate aggregation metric where the score is the fraction of the time the output is fixed (either zero or one).

    c. This is a conservative estimate, based on only those interfaces we could easily cover; further effort or insider knowledge would likely lead to better coverage.
