Review articles
# Hopes, Fears, and Software Obfuscation

Computer programs are arguably the most complex objects ever constructed by humans. Even understanding a 10-line program (such as the one depicted in Figure 1) can be extremely difficult. The complexity of programs has been the bane (as well as the boon) of the software industry, and taming it has been the objective of many efforts in industry and academia. Given this, it is not surprising that both theoreticians and practitioners have been trying to "harness this complexity for good" and use it to protect sensitive information and computation. In its most general form this is known as *software obfuscation*, and it is the topic of this article.

In a certain sense, any cryptographic tool such as encryption or authentication can be thought of as harnessing complexity for security, but with software obfuscation people have been aiming for something far more ambitious: a way to transform *arbitrary* programs into an "inscrutable" or *obfuscated* form. By this we do not mean that reverse engineering the program should be cumbersome, but rather that it should be *infeasible*, in the same way that recovering the plaintext of a secure encryption cannot be performed using any reasonable amount of resources.

Obfuscation, if possible, could be the cryptographer's "master tool"—as we will see, it can yield essentially any crypto application one can think of. Therefore, it is not surprising that both practitioners and theoreticians have been trying to achieve it for a long time. However, over the years many practical attempts at obfuscation (such as the DVD Content Scrambling System) have been broken (for example, Green^{18} and Jacob et al.^{19}). Indeed, in 2001, in joint work with Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang,^{3} we proved that achieving such a secure transformation is *impossible.* So, why isn't this the shortest survey in *Communications* history?

The key issue is what it means to be "secure." In our 2001 work, we proved impossibility of a notion of security for such "obfuscating compilers," which we termed *virtual black-box security* and will describe more formally next. While virtual black-box is arguably the most natural notion of security for obfuscators, it is not the only one. Indeed, in the same work^{3} we suggested a weaker notion called *indistinguishability obfuscation*, or IO for short. We had no idea whether IO could be achieved, nor whether it would be useful for many of the intended applications. But in 2013 Garg et al.^{13} used some recent cryptographic advances to give a *candidate construction* of obfuscators satisfying the IO definition. Moreover, their work, and many followup works, have shown this weaker notion is actually extremely useful, and can recover many (though not all) of the applications of virtual black-box obfuscators. These applications include some longstanding cryptographic goals that before Garg et al.'s work seemed far out of reach, and so the cryptographic community is justifiably excited about these new developments, with many papers and several publicly funded projects devoted to exploring obfuscation and its applications.

What is an indistinguishability obfuscator? How is it useful? And what do I mean by a "candidate construction"? Read the rest of this article to find out.

Obfuscation research is in an "embryonic stage" in the sense that so far we only have theoretical proofs of concept that are extremely far from practical efficiency. Even with the breakneck pace of research on this topic, it may take years, if not decades, until such obfuscators can be deployed at scale, and as we will see, beyond the daunting practical issues there are some fundamental theoretical challenges we must address as well. Thus, while eventually one might hope to obfuscate large multipart programs, in this article we focus on the task of obfuscating a single function, mapping an input to an output without any side effects (though there is recent research on obfuscating more general computational models). Similarly, given that the overhead in translating a program from one (Turing complete) programming language to another pales in comparison to the current inefficiencies, for the purposes of this article we can imagine that all programs are represented in some simple canonical way.

An *obfuscating compiler* is an algorithm that takes as input a program *P* and produces a functionally equivalent program *P*′. So far, this does not rule out the compiler that simply outputs its input unchanged, but we will want the program *P*′ to be "inscrutable" or "obfuscated." Defining this requirement formally takes some care, and as we will see, there is more than one way to do so. Our guiding principle is that the output program *P*′ should not reveal any more information about the input *P* than is necessarily revealed by the fact that the two programs are functionally equivalent. To take two extreme examples, if you wrote a program *P* that outputs your credit card number when given the number 0 as input, then no matter how obfuscated *P*′ is, it will still reveal the same information. In contrast, if the program *P* contained the credit card number as a comment, with no effect on its functionality, then the obfuscated version *P*′ should not reveal it. Of course, we want an obfuscating compiler to do much more than strip all comments. Specifically, we say a compiler transforming *P* to *P*′ is *virtual black-box secure* if for any attacker *A* that learns some piece of information *x* from *P*′, *x* could have been learned by simply treating *P*′ (or *P*) as a *black box* and querying it on various inputs and observing the outputs. More formally, the definition requires that for every (polynomial-time) algorithm *A*, there is another polynomial-time algorithm *S* (known as a *simulator* in crypto parlance) such that the random variables *A*(*P*′) and *S ^{P}* are computationally indistinguishable, where *A*(*P*′) denotes the output of *A* given the code of the obfuscated program *P*′, while *S ^{P}* denotes the output of *S* when given only black-box (input/output) access to the function computed by *P*.
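The two extremes above can be made concrete with a minimal (hypothetical) Python sketch, where `SECRET` stands in for the credit card number: the first program's functionality itself reveals the secret, so black-box queries suffice and no obfuscator can hide it; in the second, the secret lives only in a comment and never influences the output, so a virtual black-box secure obfuscation must erase it.

```python
SECRET = "4111-1111-1111-1111"  # hypothetical credit card number

def p1(x: int):
    # The functionality reveals the secret: a black-box attacker
    # simply queries input 0, so obfuscation cannot help here.
    return SECRET if x == 0 else None

def p2(x: int):
    # SECRET = "4111-1111-1111-1111" appears only in this comment;
    # the input/output behavior is independent of it, so an obfuscated
    # version of p2 should reveal nothing about the secret.
    return None

# Black-box "attack" on p1: recover the secret by querying, not by
# reading any code.
print(p1(0))
```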

Let us imagine that we had a practically efficient obfuscating compiler that satisfied this notion of virtual black-box security. What would we use it for? The most obvious application is software protection—publishing an obfuscated program *P*′ would be the equivalent of providing a physical "black box" that can be executed but not opened up and understood. But there are many other applications. For example, we could use an obfuscator to design a "selective decryption scheme"^{a}—a program that would contain inside it a decryption key *d* but would only decrypt messages satisfying very particular criteria. For example, suppose that all my email was encrypted, and I had an urgent project that needed attention while I was on vacation. I could write a program *P*, such as the one in Figure 2, that given an encrypted email message as input, uses my secret decryption key to decrypt it, checks if it is related to this project, and if so outputs the plaintext message. Then, I could give my colleague an obfuscated version *P*′ of *P*, without fearing she could reverse engineer the program, learn my secret key, and decrypt my other email messages as well.
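Here is a sketch of the shape such a program might take, as a hypothetical stand-in for the one in Figure 2. The "cipher" is a deliberately insecure toy (single-byte XOR) used only so the example is self-contained; the `[Project X]` tag and the key value are illustrative assumptions, not part of the original article.

```python
SECRET_KEY = 42  # hypothetical embedded secret decryption key

def toy_encrypt(msg: str, key: int) -> bytes:
    # Toy, insecure "encryption": XOR each byte with the key.
    return bytes(b ^ key for b in msg.encode())

def toy_decrypt(ct: bytes, key: int) -> str:
    return bytes(b ^ key for b in ct).decode()

def selective_decrypt(ciphertext: bytes):
    """The program P to be obfuscated: it decrypts with the embedded
    key but releases the plaintext only for project-related messages."""
    plaintext = toy_decrypt(ciphertext, SECRET_KEY)
    if plaintext.startswith("[Project X]"):
        return plaintext
    return None

print(selective_decrypt(toy_encrypt("[Project X] status?", SECRET_KEY)))
print(selective_decrypt(toy_encrypt("lunch tomorrow?", SECRET_KEY)))
```

Handing a colleague an obfuscated version of `selective_decrypt` would let her read project email without ever exposing `SECRET_KEY`; with the plain source above, of course, the key is sitting right there.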

There are many more applications for obfuscation. The example of functional encryption could be vastly generalized. In fact, *almost any cryptographic primitive you can think of* can be fairly directly derived from obfuscation, starting from basic primitives such as public key encryption and digital signatures to fancier notions such as multiparty secure computation, fully homomorphic encryption, zero knowledge proofs, and their many variants. There are also applications to obfuscation that a priori seem to have nothing to do with cryptography; for example, one can use it to design autonomous agents that would participate on your behalf in digital transactions such as electronic auctions, or to publish patches for software vulnerabilities without worrying that attackers could learn the vulnerabilities by reverse engineering the patch.

So, virtual black-box obfuscation is wonderful, but does it exist? This is what we set out to find out in 2001, and as already mentioned, our answer was negative. Specifically, we showed the existence of *inherently unobfuscatable functions*: a program *P* whose source code can be recovered from *any* functionally equivalent program *P*′, though curiously it *cannot* be efficiently recovered using only black-box access to *P*.

In the intervening years, cryptography has seen many advances, in particular achieving constructions of some of the cryptographic primitives that were envisioned as potential applications of obfuscation, most notably *fully homomorphic encryption*^{14} (see the accompanying sidebar). In particular, in 2012 Garg, Gentry, and Halevi^{12} put forward a candidate construction for an object they called "cryptographic multilinear maps," which in this article I will somewhat loosely refer to as a "homomorphic quasi-encryption" scheme. Using this object, Garg et al.^{13} showed a candidate construction of a general-purpose *indistinguishability obfuscator.*^{b}


An *indistinguishability obfuscator (IO)* homes in on one property of virtual black-box obfuscators. Suppose *P* and *Q* are two functionally equivalent programs. It is not difficult to verify that virtual black-box security implies an attacker should not be able to tell apart the obfuscation *P*′ of *P* from the obfuscation *Q*′ of *Q.* Indistinguishability obfuscation requires only this property to hold.

Indistinguishability obfuscators were first defined in our original paper,^{3} where we noted this notion is weak enough to avoid our impossibility result, but we did not know whether or not it could be achieved. Indeed, a priori, one might think that indistinguishability obfuscators capture the "worst of both worlds." On one hand, while the relaxation to IO security does allow us to avoid the impossibility result, such obfuscators still seem incredibly difficult to construct. For example, assuming Goldbach's Conjecture is correct, the IO property implies that the obfuscation of the `Goldbach(n)` subroutine of the program in Figure 1 should be indistinguishable from the obfuscation of the function that outputs `True` on every even *n* > 2; designing a compiler that would guarantee this seems highly non-trivial. On the other hand, it is not immediately clear that IO is useful for concrete applications. For example, if we consider the "selective decryption" example mentioned previously, it is unclear that the IO guarantee means that obfuscating the program *P* that selectively decrypts particular messages would protect my secret key. After all, to show it does, it seems we would need to show there is a functionally equivalent program *P*′ that does not leak the key (and hence by the IO property, since *P* and *P*′ must have indistinguishable obfuscations, the obfuscation of *P* would protect the key as well). But if we knew of such a *P*′, why did we not use it in the first place?
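For concreteness, such a `Goldbach(n)` subroutine might look like the following sketch (an illustration in the spirit of Figure 1, not a copy of it): it returns `True` iff the even number *n* > 2 is a sum of two primes, and if Goldbach's Conjecture holds it is therefore functionally equivalent, on its domain, to the constant-`True` function.

```python
def is_prime(k: int) -> bool:
    # Trial division; fine for small n.
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def goldbach(n: int) -> bool:
    """Return True iff the even number n > 2 is a sum of two primes.

    Under Goldbach's Conjecture this always returns True, which is
    what makes it a nice stress test for IO: its obfuscation should be
    indistinguishable from that of `lambda n: True`.
    """
    return any(is_prime(p) and is_prime(n - p)
               for p in range(2, n // 2 + 1))

print(goldbach(4), goldbach(100), goldbach(1000))
```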

It turns out both these intuitions are (probably) wrong, and that in some sense IO may capture the "best of both worlds." First, as I mentioned, despite the fact that it seems so elusive, Garg et al.^{13} did manage to give a candidate construction of indistinguishability obfuscators. Second, the same work^{13} managed to show that IO is also *useful*, by deriving functional encryption from it, albeit in a less direct manner. This pattern has repeated itself several times since, with paper after paper showing that many (though not all) of the desirable applications of virtual black-box obfuscation can be obtained (with more work) via IO. Thus, indistinguishability obfuscation is emerging as a kind of "master tool" or "hub" of cryptography, from which a great many of our other tools can be derived (see Figure 3).

Now all that is left is to find out: how do we construct this wonderful object, and what is this caveat of "candidate construction" I keep mentioning?

I will start by describing the construction and later turn to discussing the caveat. Unfortunately, the construction is rather complex. This is both in the colloquial sense of being complicated to describe (not to mention implement) and in the computational complexity sense of requiring very large (though still polynomial) space and time resources. Indeed, this complexity is the main reason these constructions are still, at the moment, theoretical "proofs of concept" rather than practical compilers. The only implementation of obfuscators I know of at the time of this writing is by Apon et al.,^{1} and their obfuscation blows up a circuit of 16 OR gates to 31GB. (The main source of inefficiency arises from the constructions of "homomorphic quasi-encryption schemes" described here.) That said, making these schemes more efficient is the object of an intensive research effort, and I am sure we will see many improvements in the coming years. It is a testament to the excitement of this field that in the short time since the first candidate construction of IO, there are already far more works than I can mention that use IO for exciting applications, study its security or efficiency, consider different notions of obfuscation, and more.


While I will not be able to describe the actual construction, I do hope to give some sense of the components that go into it, and of the rather subtle questions that arise in exploring its security.

In 2009, Craig Gentry rocked the world of cryptography by presenting a construction of a *fully homomorphic encryption scheme.* What is this object? Recall that a traditional encryption scheme is composed of two functions: the encryption operation Enc—mapping the secret plaintexts into the "scrambled" ciphertexts—and the decryption operation Dec that performs the inverse of Enc (and requires the secret decryption key to compute). A fully homomorphic encryption supports two additional operations ⊕, ⊗ which correspond to "adding" and "multiplying" ciphertexts. Specifically, they satisfy the following equations for every *a, b* ∈ {0, 1}:

Dec(Enc(*a*) ⊕ Enc(*b*)) = *a* + *b* (mod 2)   (1)

Dec(Enc(*a*) ⊗ Enc(*b*)) = *a* · *b* (mod 2)   (2)

Since every Boolean function can be built from addition and multiplication modulo 2 (that is, XOR and AND), these two operations allow us to compute from the encryptions Enc(*a*_{1}), ..., Enc(*a _{n}*) the value Enc(*F*(*a*_{1}, ..., *a _{n}*)) for every efficiently computable function *F*.

Fully homomorphic encryption was first envisioned in 1978 by Rivest et al.,^{23} but they gave no constructions, and for many years it was not at all clear whether the existence of such operations is compatible with security, until Gentry^{14} came up with the first plausibly secure construction. Rivest et al.^{23} were motivated by client-server applications (now known as "cloud computing"). Indeed, one can see that such an encryption scheme could be very useful in this setting, where, for example, a client could send to the server an encryption Enc(*a*) = Enc(*a*_{1}) ... Enc(*a _{n}*) of its private data *a*, let the server use the homomorphic operations to compute Enc(*F*(*a*)) for a function *F* of interest, and then decrypt the returned result, all without the server ever learning anything about *a*.
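The algebra behind the ⊕ and ⊗ equations can be illustrated with a deliberately *insecure* toy scheme, a teaching device only and in no way Gentry's construction: encrypt a bit *a* as *a* + 2*r* for a random *r*, so that ordinary integer addition and multiplication of ciphertexts decrypt (by reducing modulo 2) to XOR and AND of the plaintexts.

```python
import random

def enc(a: int) -> int:
    # Toy "encryption" of a bit: mask it with a random even number.
    # NOT secure; it only illustrates the homomorphic algebra.
    return a + 2 * random.randint(1, 10 ** 6)

def dec(c: int) -> int:
    # "Decryption" is reduction modulo 2.
    return c % 2

a, b = 1, 0
ca, cb = enc(a), enc(b)
assert dec(ca + cb) == (a + b) % 2   # ciphertext addition = XOR
assert dec(ca * cb) == (a * b) % 2   # ciphertext multiplication = AND
print("homomorphic identities hold")
```

Since (a + 2r) + (b + 2s) and (a + 2r)(b + 2s) are congruent to a + b and a·b modulo 2, both identities hold for every choice of the random masks.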

The astute reader might notice that fully homomorphic encryption is an immediate consequence of (virtual black-box) obfuscation combined with any plain-old encryption. Indeed, if secure obfuscation existed then we could implement ⊕ and ⊗ by obfuscating their trivial programs (1) and (2). One might hope that would also work in the other direction—perhaps we could implement obfuscation using a fully homomorphic encryption. Indeed, let *F* be the "program interpreter" function that takes as input a description of the program *P* and a string *a* and maps them to the output *P*(*a*). Perhaps we could obfuscate the program *P* by publishing an encryption *P*′ = Enc(*P*) of the description of *P* via a fully homomorphic encryption. The hope would be we could use *P*′ to evaluate *P* on input *a* by encrypting *a* and then invoking *F* on the encrypted values Enc(*P*) and Enc(*a*) using the homomorphic operations. However, a moment's thought shows if we do that, we would not get the value *P(a)* but rather the *encryption* of this value. Thus *P*′ is not really a functionally equivalent form of *P*, as (unless one knows the decryption key) access to *P*′ does not allow us to compute *P* on chosen inputs. Indeed, while fully homomorphic encryption does play a part in the known constructions of obfuscation, they involve many other components as well.

In some sense, the problem with using a fully homomorphic encryption scheme is it is "too secure." While we can perform various operations on ciphertexts, without knowledge of the secret key we do not get any information at all about the plaintexts, while obfuscation is all about the "controlled release" of particular information on the secret code of the program *P.* Therefore, the object we need to construct is what I call a "fully homomorphic quasi-encryption" which is a riff on an encryption scheme that is in some sense less secure but more versatile than a standard fully homomorphic encryption.^{c}

**Homomorphic quasi-encryption.** A "fully homomorphic quasi-encryption scheme" has the same Enc, Dec, ⊗ and ⊕ operations as a standard fully homomorphic encryption scheme, but also an additional operation ≟ satisfying that Enc(*a*) ≟ Enc(*b*) is `true` if *a* = *b* and `false` otherwise. Moreover, instead of {0, 1}, the plaintexts will be in {0, 1, ..., *p* – 1} for some very large prime *p* (of a few thousand digits or so), and the ⊕ and ⊗ operations are done modulo *p* instead of modulo 2. Note that a quasi-encryption scheme is less secure than standard encryption, in the sense that the ≟ operation allows an attacker to perform tasks, such as discovering that two ciphertexts correspond to the same plaintext, that are infeasible against a secure standard encryption. Indeed, the notion of security for a quasi-encryption scheme is rather subtle and is very much still a work in progress.^{d} A necessary condition is that one should not be able to recover *a* from Enc(*a*), but this is in no way sufficient. For now, let us say the quasi-encryption scheme is secure if an attacker cannot learn anything about the plaintexts beyond what could be learned by combining the ⊗, ⊕ and ≟ operations.^{e}

One example of a "partially homomorphic quasi-encryption" scheme is *modular exponentiation.* That is, given some numbers *g, q* such that *g ^{p}* = 1 (mod *q*), we can encode a plaintext *a* ∈ {0, ..., *p* – 1} as *g ^{a}* (mod *q*). This encoding supports the ⊕ operation, since multiplying *g ^{a}* and *g ^{b}* modulo *q* yields *g ^{a+b}* (with the exponent reduced modulo *p*), and it supports the ≟ operation, since we can simply test whether two encodings are equal as numbers. (Note that for this scheme the Dec operation, that is, recovering *a* from *g ^{a}* mod *q*, amounts to computing a discrete logarithm and so is not efficiently computable.^{f}) However, this scheme does not support the ⊗ operation, which is why it is only "partially" homomorphic.
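As a toy illustration of this partially homomorphic quasi-encryption (with hypothetical, insecurely small parameters; real schemes use numbers thousands of digits long), take *q* = 607 and *p* = 101, with *g* = 122 an element of order *p* modulo *q*:

```python
p, q, g = 101, 607, 122   # toy parameters: g has order p modulo q

def enc(a: int) -> int:
    # Encode a in {0, ..., p-1} as g^a (mod q); inverting this is the
    # discrete-logarithm problem.
    return pow(g, a, q)

def plus(ca: int, cb: int) -> int:
    # The "⊕" operation: multiplying encodings adds exponents mod p.
    return (ca * cb) % q

def equal(ca: int, cb: int) -> bool:
    # The "≟" operation: encodings match iff the plaintexts match.
    return ca == cb

assert pow(g, p, q) == 1                      # g really has order p
assert equal(plus(enc(20), enc(30)), enc(50))  # 20 ⊕ 30 = 50
assert not equal(enc(20), enc(30))
print("quasi-encryption identities hold")
```

There is no way to multiply the hidden exponents using only these operations, which is exactly the gap a *fully* homomorphic quasi-encryption (a cryptographic multilinear map) must close.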

**From quasi-encryption to obfuscation.** As mentioned, the construction of obfuscation from a fully homomorphic quasi-encryption is rather complicated, but I will attempt to outline some of the ideas behind it. I will not even try to argue about the *security* of the obfuscation construction, but rather simply give some hints of how one might use the quasi-encryption scheme to represent a program *P* in a form that at least intuitively seems very hard to understand. At a high level, the obfuscation of a program *P* consists simply of the quasi-encryptions (which we will call *encodings*) of *N* numbers *a*_{1}, ..., *a _{N}.* To make this a valid representation, we must supply a way to compute, for every input *x*, the output *P*(*x*) from these encodings. This is done by mapping each input *x* to a formula, built from the ⊕, ⊗ and ≟ operations applied to the encodings, whose value equals *P*(*x*).

How do we construct this magical mapping of inputs to formulas? We cannot present it fully here, but can describe some of the tools it uses. One component is the naive approach described earlier of constructing an obfuscation scheme from a fully homomorphic encryption. As we noted, this approach does not work because it only allows us to compute the output of the program in encrypted form, but it does essentially reduce the task of obfuscating an *arbitrary* function to the task of obfuscating the *decryption function* of the concrete encryption scheme. The crucial property for us is this decryption function can be computed via a *logarithmic depth circuit.* This allows us to use some of the insights obtained in the study of logarithmic depth circuits (which had been developed toward obtaining circuit lower bounds, without envisioning any cryptographic or other practical applications whatsoever). In particular, Barrington^{4} proved the following beautiful but seemingly completely abstract result in 1986 (see also Figure 4):

THEOREM 1. *If F*: {0, 1}^{n} → {0, 1} *is a function computable by a log-depth circuit, then there exists a sequence of m* = poly(*n*) 5 × 5 *matrices A*_{1}, ..., *A _{m}* *with entries in* {0, 1} *and a mapping x* → π*_{x} from* {0, 1}^{n} *to orderings of* {1, ..., *m*} *such that for every x* ∈ {0, 1}^{n}, *F*(*x*) = (*A*_{π_{x}(1)} · *A*_{π_{x}(2)} ⋯ *A*_{π_{x}(m)})_{1,1}.

*(That is, F*(*x*) *is equal to the top left element of the product of matrices according to the order π _{x}.)*
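The flavor of Theorem 1 can be conveyed with a toy evaluator. The sketch below is a hypothetical illustration using the common "two matrices per layer, selected by an input bit" formulation of matrix branching programs, a mild variant of the ordering-based statement above, and the tiny AND program is hand-made rather than produced by Barrington's actual construction.

```python
def matmul(A, B):
    # Multiply two square 0/1 matrices over the integers.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def eval_bp(layers, x):
    """Evaluate a matrix branching program on input bits x.

    Each layer is (input_index, A0, A1): pick A1 if that input bit is
    set, else A0. The output is the top-left entry of the product,
    mirroring the (···)_{1,1} expression in Theorem 1."""
    prod = [[int(r == c) for c in range(5)] for r in range(5)]  # identity
    for i, A0, A1 in layers:
        prod = matmul(prod, A1 if x[i] else A0)
    return prod[0][0]

I5 = [[int(r == c) for c in range(5)] for r in range(5)]
D = [[int(r == c and r != 0) for c in range(5)] for r in range(5)]  # zeroes top-left

# Toy two-layer program computing AND(x[0], x[1]): any 0 bit inserts D,
# killing the top-left entry of the product.
and_bp = [(0, D, I5), (1, D, I5)]
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, eval_bp(and_bp, x))
```

Since every entry of the product is a polynomial in the matrix entries, the whole evaluation can in principle be carried out under the ⊕ and ⊗ operations of a quasi-encryption scheme, with ≟ used to read off the final bit.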

This already suggests the following method for obfuscation: if we want to obfuscate the decryption function *F*, then we construct the corresponding matrices *A*_{1}, ..., *A _{m}*, encode all of their entries using the quasi-encryption scheme, and publish the encodings. Given an input *x*, anyone can use the ⊕ and ⊗ operations to compute an encoding of the top-left entry of the product of the matrices in the order π_{x}, and then use the ≟ operation to test whether it encodes 1, thus learning *F*(*x*). Alas, this simple scheme is insecure: an attacker could, for example, use the homomorphic operations to multiply the matrices in orders that do not correspond to any input *x*, potentially learning more about *F* than its input/output behavior reveals.

Indeed, the actual obfuscation scheme of Garg et al.^{13} is more complicated and uses additional randomization tricks (as well as a more refined variant of quasi-encryption schemes called *graded encoding*) to protect against such attacks. Using these tricks, we were able to show in work with Garg et al.^{2} (also see Brakerski and Rothblum^{8}) that it is not possible to use the ⊕, ⊗ and ≟ operations to break the obfuscation. This still does not rule out the possibility of an attacker using the raw bits of the encoding (which is in fact what is exploited in the impossibility result of Barak et al.^{3}), but it is a promising sign.

So far I have avoided all discussion of the *security* of homomorphic quasi-encryption schemes and obfuscation, and indeed their status is significantly subtler than that of other cryptographic primitives such as encryption and digital signatures. To understand these issues, it is worthwhile to take a step back and look at the question of security for cryptographic schemes in general. The history of cryptography is littered with the figurative corpses of cryptosystems believed secure and then broken, and sometimes with the actual corpses of people (such as Mary, Queen of Scots) that have placed their faith in these cryptosystems. But something changed in modern times. In 1976 Diffie and Hellman^{11} proposed the notion of *public key cryptography* and gave their famous Diffie–Hellman key exchange protocol, which was followed soon after by the RSA cryptosystem.^{24} In contrast to cryptosystems such as Enigma, the description of these systems is simple and completely public. Moreover, by being *public key systems*, they give more information to potential attackers, and since they are widely deployed on a scale more massive than ever before, the incentives to break them are much higher. Indeed, it seems reasonable to estimate that the amount of manpower and computer cycles invested every year in cryptanalysis of these schemes dwarfs all the cryptanalytic efforts in pre-1970 human history. And yet (to our knowledge) they remain unbroken.

How can this be? I believe the answer lies in a fundamental shift from "security through obscurity" to "security through simplicity." To understand this, consider the question of how the relatively young and unknown Diffie and Hellman (and later Rivest, Shamir, and Adleman) could convince the world they had constructed a secure public key cryptosystem, an object so paradoxical that most people would have guessed it could not exist (and indeed a concept so radical that Merkle's first suggestion of it was rejected as an undergraduate project in a coding theory course). The traditional approach toward establishing something like that was "security through obscurity"—keep all details of the cryptosystem secret and have many people try to cryptanalyze it in-house, in the hope that any weakness would be discovered by them before it is discovered by your adversaries. But this approach was of course not available to Diffie and Hellman, working by themselves without many resources, and publishing in the open literature.

Of course the best way would have been to prove a mathematical theorem that breaking their system would necessarily take a huge number of operations. Thanks to the works of Church, Turing, and Gödel, we now know that this statement can in fact be phrased as a precise mathematical assertion. However, this assertion would in particular imply that **P ≠ NP** and hence proving it seems way beyond our current capabilities. Instead, what Diffie and Hellman did (aided by Ralph Merkle and John Gill) was to turn to "security by simplicity"—base their cryptosystem on a simple and well-studied mathematical problem, such as inverting modular exponentiation or factoring integers, that has been investigated by mathematicians for ages for reasons having nothing to do with cryptography. More importantly, it is plausible to conjecture there simply *does not exist* an efficient algorithm to solve these clean well-studied problems, rather than it being the case that such an algorithm has not been found yet due to the problem's cumbersomeness and obscurity. Later papers, such as the pioneering works of Goldwasser and Micali,^{17} turned this into a standard paradigm and ushered in the age of *modern cryptography*, whereby we use precise *definitions* of security for our very intricate and versatile cryptosystems and then *reduce* the assertion that they satisfy these definitions into the conjectured hardness of a handful of very simple and well-known mathematical problems.

I wish I could say the new obfuscation schemes are in fact secure assuming integer factoring, computing discrete logarithms, or another well-studied problem (such as the LWE problem mentioned in the sidebar) is computationally intractable. Unfortunately, nothing like that is known. At the moment, our only argument for the security of the homomorphic quasi-encryption and indistinguishability obfuscation constructions is that (as of this writing) we do not know how to break them. Since so many potential crypto applications rely on these schemes, one could worry that we are entering (to use a somewhat inaccurate and overloaded term) a new age of "post-modern cryptography," where we still have precise definitions of security, but need to assume an ever-growing family of conjectures to prove that our schemes satisfy those definitions. Indeed, following the initial works of Garg et al.^{12,13} there have been several attacks on their schemes showing limitations on the security notions they satisfy (for example, see Coron et al.^{9} and Coron^{10}), and it is not inconceivable that by the time this article appears they will have been broken completely.

While this suggests the possibility that all the edifices built on obfuscation and quasi-encryption could crumble like a house of cards, the ideas behind these constructions seem too beautiful and profound for that to be the case. Once cryptographers have tasted the "promised land" of the great many applications enabled by IO, there is every hope that (as they have so many times before) they will rise to this challenge and manage to construct indistinguishability obfuscators and quasi-encryption based on a single well-studied conjecture, thus placing these objects firmly within the paradigm of modern cryptography. Indeed, this is the focus of an intensive research effort. More than that, one could hope that by following the path these constructions lead us, and "going boldly where no man has gone before," we cryptographers will get new and fundamental insights into what it is that separates the easy computational problems from the hard ones.

Thanks to Dan Boneh, Craig Gentry, Omer Paneth, Amit Sahai, Brent Waters and the anonymous *Communications* reviewers for helpful comments on previous versions of this article. Thanks to Oded Regev for providing me with the figure for the "Learning with Errors" problem.

1. Apon, D., Huang, Y., Katz, J., Malozemoff, A.J. Implementing cryptographic program obfuscation. Cryptology ePrint Archive, Report 2014/779, 2014. http://eprint.iacr.org/.

2. Barak, B., Garg, S., Kalai, Y.T., Paneth, O., Sahai, A. Protecting obfuscation against algebraic attacks. In *EUROCRYPT*, 2014, pp. 221–238.

3. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S.P., Yang, K. On the (im)possibility of obfuscating programs. *J. ACM 59*, 2 (2012), 6. Preliminary version in CRYPTO 2001.

4. Barrington, D.A.M. Bounded-width polynomial-size branching programs recognize exactly those languages in NC^{1}. *J. Comput. Syst. Sci. 38*, 1 (1989), 150–164. Preliminary version in STOC 1986.

5. Boneh, D., Sahai, A., Waters, B. Functional encryption: A new vision for public-key cryptography. *Commun. ACM 55*, 11 (2012), 56–64.

6. Boneh, D., Silverberg, A. Applications of multilinear forms to cryptography. *Contemp. Math. 324*, 1 (2003), 71–90. Preliminary version posted on eprint on 2002, see https://eprint.iacr.org/2002/080.

7. Brakerski, Z., Rothblum, G.N. Obfuscating conjunctions. In *CRYPTO*, 2013, 416–434.

8. Brakerski, Z., Rothblum, G.N. Virtual black-box obfuscation for all circuits via generic graded encoding. In *TCC*, 2014, 1–25.

9. Coron, J., Gentry, C., Halevi, S., Lepoint, T., Maji, H.K., Miles, E., Raykova, M., Sahai, A., Tibouchi, M. Zeroizing without low-level zeroes: New MMAP attacks and their limitations. In *Proceedings of the Advances in Cryptology – CRYPTO 2015 – 35th Annual Cryptology Conference, Santa Barbara, CA, USA, August 16–20, 2015, Part I*, 2015, 247–266.

10. Coron, J.-S. Cryptanalysis of GGH15 multilinear maps. Cryptology ePrint Archive, Report 2015/1037, 2015. http://eprint.iacr.org/.

11. Diffie, W., Hellman, M.E. New directions in cryptography. *IEEE Trans. Inform. Theory 22*, 6 (1976), 644–654.

12. Garg, S., Gentry, C., Halevi, S. Candidate multilinear maps from ideal lattices. In *EUROCRYPT*, 2013. See also Cryptology ePrint Archive, Report 2012/610.

13. Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B. Candidate indistinguishability obfuscation and functional encryption for all circuits. In *FOCS*, 2013, 40–49.

14. Gentry, C. Fully homomorphic encryption using ideal lattices. In *STOC*, 2009, 169–178.

15. Gentry, C. Computing on the edge of Chaos: Structure and randomness in encrypted computation. *Proceedings of the 2014 International Congress of Mathematicians (ICM)*, 2014. Also available online at http://eprint.iacr.org/2014/610.

16. Gentry, C., Sahai, A., Waters, B. Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based. In *CRYPTO*, 2013, 75–92.

17. Goldwasser, S., Micali, S. Probabilistic encryption. *J. Comput. Syst. Sci. 28*, 2 (1984), 270–299. Preliminary version in STOC 1982.

18. Green, M. Cryptographic obfuscation and 'unhackable' software, 2014. Blog post. Available at: http://blog.cryptographyengineering.com/2014/02/cryptographic-obfuscation-and.html.

19. Jacob, M., Boneh, D., Felten, E. Attacking an obfuscated cipher by injecting faults. In *Digital Rights Management.* Springer, 2003, 16–31.

20. Joux, A. A one round protocol for tripartite Diffie-Hellman. *J. Cryptol. 17*, 4 (2004), 263–276. Preliminary version in ANTS 2000.

21. Popa, R.A., Redfield, C.M.S., Zeldovich, N., Balakrishnan, H. CryptDB: Processing queries on an encrypted database. *Commun. ACM 55*, 9 (2012), 103–111.

22. Regev, O. On lattices, learning with errors, random linear codes, and cryptography. *J. ACM 55*, 6 (2009). Preliminary version in STOC 2005.

23. Rivest, R.L., Adleman, L., Dertouzos, M.L. On data banks and privacy homomorphisms. *Found. Secure Comput. 4*, 11 (1978), 169–180.

24. Rivest, R.L., Shamir, A., Adleman, L.M. A method for obtaining digital signatures and public-key cryptosystems. *Commun. ACM 21*, 2 (1978), 120–126.

a. The technical name for this notion is *functional encryption.*^{5}

b. Even prior to the works by Garg et al.^{12, 13} there were papers achieving virtual black-box obfuscation for very restricted families of functions. In particular, independently of Garg et al.^{13}, Brakerski and Rothblum^{7} used Garg et al.'s^{12} construction to obtain virtual black-box obfuscation for functions that can be represented as *conjunctions* of input variables or their negations.

c. This is a non-standard name used for this exposition; the technical name is a *cryptographic multilinear map* or a *graded encoding scheme.*

d. In fact, the same paper Garg et al.^{12} proposing the first candidate construction for such a quasi-encryption scheme also gave an attack showing their scheme does *not* satisfy some natural security definitions, and followup works extended this attack to other settings. Finding a candidate construction meeting a clean security definition that suffices for indistinguishability obfuscation is an important open problem.

e. As an aside, a similar notion to quasi-encryption, without homomorphism, is known as *deterministic encryption* and is used for tasks such as performing SQL queries on encrypted data bases (for example, see Popa et al.^{21}).

f. The Dec operation is not efficiently computable, but this turns out not to be crucial for many of the applications.

Figure 1. The following Python program prints `"Hello world!"` if and only if Goldbach's conjecture is false.

Figure 2. A "selective decryption" program. Black-box access to this program enables a user to decrypt only messages that match some particular pattern, and hence obfuscating such a program can be used to obtain a functional encryption scheme.

Figure 3. In the two years since the candidate construction, indistinguishability obfuscation is already emerging as a "hub" for cryptography, implying (when combined with one-way functions) a great many other cryptographic primitives.

Figure 4. We use Barrington's Theorem to encode a program *F* computable by a logarithmic depth circuit into a sequence of *m* matrices *A*_{1}, ..., *A _{m}* and publish the quasi-encryptions of these matrices' entries. Every input *x* corresponds to an ordering π_{x} of the matrices, and *F*(*x*) can be computed from the encodings by using the homomorphic operations to multiply the matrices in this order and testing the top-left entry of the product.

Figure. Watch the author discuss his work in this exclusive *Communications* video. http://cacm.acm.org/videos/hopes-fears-and-software-obfuscation

The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.

