Encryption is a method for users to securely share data over an insecure network or storage server. Before the advent of public-key cryptography, a widely held view was that for two users to communicate data confidentially they would have to first establish a mutually held secret key *k*. While acceptable, perhaps, for some small or tight-knit organizations, such a solution is clearly infeasible for larger networks (such as today’s Internet). More than 30 years ago, Diffie and Hellman^{11,12} introduced the concept of public-key cryptography, where two parties securely communicate with each other without having a prior mutual secret, radically challenging the conventional wisdom of the time.

### Key Insights

- Unlike traditional encryption, where decryption is all or nothing, in a functional encryption system decryption keys may reveal only partial information about the plaintext; for example, decrypting an encrypted image with a cropping key will reveal a cropped version of the image and nothing else.
- Many advances in public-key encryption over the past decade can be viewed as special cases of functional encryption.
- Current functional encryption systems are quite expressive, yet much work remains in expanding the set of functionalities supported by existing constructions.

Today, public-key encryption is invaluable, ubiquitous in securing Web communication (such as HTTPS and SSH), voice traffic, and storage systems. However, within the technical community, there is an ingrained view that:

- Access to the encrypted data is “all or nothing”; one either decrypts the entire plaintext or learns nothing about the plaintext (other than a bound on its length); and
- Encryption is a method to encode data so a single secret key can decrypt that data.

However, for many applications, this notion of public-key encryption is insufficient; for example, the encryptor may want to encrypt data so anyone satisfying a certain policy can then decrypt it. Consider encrypting a message to a company so the only users who can decrypt it are employees in, say, the accounting or sales departments whose office is in the company’s main building. Realizing this application using existing public-key encryption raises several questions:

- How do we discover the public keys of all individuals who satisfy this policy?;
- What if someone joins the system or receives certain credentials well after the data is encrypted and stored?;
- What if we want to give someone a partial view of the plaintext depending on their credentials?; and
- Should a given user even be allowed to learn the identities of all individuals who have certain credentials?

**Functional encryption**. It is time to adopt a new broad vision of encryption that takes into account such concerns. To this end, we advocate the concept of “functional encryption,” where a decryption key enables a user to learn a specific function of the encrypted data and nothing else. Briefly, in a functional-encryption system, a trusted authority holds a master secret key known only to the authority. When the authority is given the description of some function *f* as input, it uses its master secret key to generate a derived secret key sk[*f*] associated with *f*. Now anyone holding sk[*f*] can compute *f*(*x*) from an encryption of any *x*. In symbols, if *E*(pk, *x*) is an encryption of *x*, then decryption accomplishes

*D*(sk[*f*], *E*(pk, *x*)) = *f*(*x*)

Note it is *f*(*x*) that is made available to the secret key holder, even though *x* was the value that was encrypted. A functional-encryption system can support a variety of functions this way. Intuitively, the security of the functional-encryption system should guarantee the secret-key holder can learn nothing else about *x* beyond *f*(*x*). We thus envision functional encryption as analogous to secure computation^{18,33} but with the critical difference that functional encryption is completely non-interactive once a recipient obtains the recipient’s own secret key.
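To make the four-algorithm interface concrete, here is a minimal Python sketch of the functional-encryption API. It is a toy model only: `Authority`, `FunctionalKey`, and the XOR-based `encrypt` are illustrative stand-ins with no cryptographic security, and in this toy the “public key” doubles as the master secret, which a real scheme would never allow.

```python
import os

class Authority:
    """Trusted authority holding the master secret key mk (toy model)."""
    def setup(self):
        self.mk = os.urandom(16)
        # Toy shortcut: we hand out mk as the "public key" so encrypt()
        # below works; a real scheme keeps mk strictly secret.
        return self.mk

    def keygen(self, f):
        # Derive sk[f]: a capability that reveals only f(x).
        return FunctionalKey(f, self.mk)

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(pk, x):
    """E(pk, x): stand-in 'encryption' of plaintext bytes x."""
    return xor_bytes(x, pk)

class FunctionalKey:
    """sk[f]: decrypting with it yields f(x), never x itself."""
    def __init__(self, f, mk):
        self._f, self._mk = f, mk
    def decrypt(self, c):
        x = xor_bytes(c, self._mk)  # recovered internally only
        return self._f(x)           # the caller sees just f(x)

authority = Authority()
pk = authority.setup()
sk_len = authority.keygen(len)        # f(x) = |x|
c = encrypt(pk, b"attack at dawn")
print(sk_len.decrypt(c))              # 14, and nothing more about x
```

The point of the sketch is the shape of the interface: `decrypt` on sk[*f*] computes *f*(*x*) from *E*(pk, *x*) without ever exposing *x* to the key holder.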

Consider what is possible if functional encryption would be realized for a broad set of functions:

*Spam filtering on encrypted mail*. A user wishes to leverage a partially trusted proxy to filter out all encrypted email messages identified as spam according to the user’s criteria. The user wants to achieve the seemingly conflicting goals of hiding the message’s contents from the proxy while allowing the proxy to determine if the message is spam according to some arbitrary criteria. The user can achieve these goals by setting up a functional-encryption system, then giving the proxy a key sk[*f*] where *f* is the user-specified program that outputs 1 if the plaintext is spam and 0 otherwise. The proxy can use sk[*f*] to test if an encrypted message is spam without learning anything more about the plaintext (see the figure here).
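As a concrete illustration, the user-specified program *f* might be a simple keyword predicate. The word list and the name `f_spam` below are hypothetical; a real filter would embed the user's full criteria.

```python
# Hypothetical spam predicate the user would embed in sk[f].
# Given sk[f], the proxy learns only this single bit per message.
SPAM_WORDS = (b"lottery", b"free money", b"act now")

def f_spam(plaintext: bytes) -> int:
    """Outputs 1 if the plaintext looks like spam, 0 otherwise."""
    body = plaintext.lower()
    return int(any(w in body for w in SPAM_WORDS))

print(f_spam(b"You won the LOTTERY!"))   # 1
print(f_spam(b"Meeting moved to 3pm"))   # 0
```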

One can naturally consider generalizations of this idea; for instance, the proxy might selectively send important email messages (as deemed by the function *f*) to the user’s mobile device. Taking things further, we can imagine the destination of a packet is encrypted, and the secret key sk[*f*] allows a router to learn the next hop and nothing more.

*Expressive access control*. In large organizations a user will often think of sharing data according to some access policy. In addition to our corporate example, this might also occur in other domains (such as health care, insurance companies, government institutions, and universities). Bridging the gap between how a user thinks of sharing data and discovering the public keys of all other users who match or will match such sharing can be difficult and is subject to the problems outlined earlier; for example, a system might try to encrypt data separately to the public key of every user matching a certain policy. However, as also noted, this user-specific encryption requires identification of each user, as well as the overhead of encrypting to each one individually. Moreover, this user-specific encryption does not cover users who do not meet the criteria today but will in the future.

Using functional encryption a user can directly express how the user (or organization) wishes to share the data in the encryption process. In particular, the user can encrypt *x* = (*P*, *m*) where *m* is the data the user wishes to share, and *P* is the access policy that describes how the user wants to share it. A recipient’s secret key sk[*f*] will then check whether the recipient’s credentials or attributes match the policy and reveal *m* only in this case. Corresponding to the example of an accounting or sales department with an office in the company’s main building, *P* could embed the policy (“ACCOUNTING” OR “SALES”) AND “MAIN BUILDING.” A recipient’s function *f* would embed the attributes of the particular user and check if they satisfy the formula and, if so, return *m*.
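The policy check inside *f* can be sketched as follows. This models the policy *P* as a Python predicate over attribute sets and is a sketch of the functionality only, not of any particular ABE scheme.

```python
def satisfies(attrs: set) -> bool:
    """P = ("ACCOUNTING" OR "SALES") AND "MAIN BUILDING"."""
    return (("ACCOUNTING" in attrs or "SALES" in attrs)
            and "MAIN BUILDING" in attrs)

def make_f(recipient_attrs: set):
    """Build the recipient's function f for x = (P, m)."""
    def f(x):
        P, m = x
        return m if P(recipient_attrs) else None  # reveal m or nothing
    return f

f = make_f({"SALES", "MAIN BUILDING"})
print(f((satisfies, "Q3 forecast")))    # 'Q3 forecast'
f2 = make_f({"SALES"})                  # not in the main building
print(f2((satisfies, "Q3 forecast")))   # None
```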

*Mining large datasets*. Data mining is used in medical research, social networks, network security, and financial fraud detection. Administrators often want to give users the ability to mine datasets for certain types of queries but not let them learn anything else. Consider a medical researcher who wants to test if there is a link between a genotype and a type of cancer in a particular ethnic group. If the administrator has data consisting of patient gene sequences and medical history, the administrator would like to give the researcher the ability to test for this linkage, without revealing the details of all patients’ medical conditions.

Note that in practice, an administrator typically does not know the queries that will be of interest until well after the data is created and stored. Functional encryption provides an elegant solution. When data is created it is simply encrypted in a functional-encryption system. Later, a user requests to be allowed to learn a certain query or function *f* of the data. If data access is authorized, the user is given sk[*f*] and can apply this key to (attempt to) decrypt existing or future encrypted data. Thus, in a functional-encryption system supporting a class of functions **F**, a user could be given the ability to compute any function from this class on the dataset.
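For instance, the researcher's key might embed a counting query like the following sketch; the record fields `genotype` and `cancer` are hypothetical names chosen for illustration.

```python
def f_link(records) -> int:
    """Reveals only how many patients carry genotype 'G' AND the cancer.

    With sk[f_link], this one count is all the researcher ever learns
    about the encrypted records (field names are hypothetical).
    """
    return sum(1 for r in records
               if r["genotype"] == "G" and r["cancer"])

patients = [
    {"genotype": "G", "cancer": True},
    {"genotype": "A", "cancer": True},
    {"genotype": "G", "cancer": False},
]
print(f_link(patients))   # 1
```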

These three examples of functionality motivate the research agenda we put forward here—to create functional-encryption systems supporting the richest possible families of functions and understand what limitations are inherent for functional-encryption systems.

### Functional Encryption

Recall that public-key encryption systems (such as RSA and El-Gamal) consist of three algorithms:

*Setup*. Outputs a secret key denoted sk and a public key pk; anyone can encrypt messages using pk, but only the secret-key holder is able to decrypt;

*Encryption E*. Takes a public key pk and a message as input and outputs a ciphertext; and

*Decryption D*. Takes a secret key sk and a ciphertext as input and outputs a message.

A functional-encryption system includes the same three algorithms but also includes a fourth algorithm called KeyGen. Here, the secret key output by the Setup algorithm is called the master key, denoted by mk. The KeyGen algorithm takes as input mk and the description of some function *f*. It outputs a key that is specific to the function *f* and denoted sk[*f*]. More precisely, if *c* is the result of encrypting data *x* with public key pk, then

*D*(sk[*f*], *c*) = *f*(*x*)

We emphasize that sk[*f*] does not fully decrypt *c*, outputting only a function *f* of the full decryption. To fully decrypt *c* an authorized user can use a secret key sk[*g*], where *g* is the identity function, namely *g*(*x*) = *x* for all *x*.

*Figure. A filtering server can use the user’s functional secret key to test if an encrypted email message is spam without learning anything else about the plaintext.*

Informally, security of a functional-encryption system means an attacker with a set of secret keys sk[*f*_{1}],…,sk[*f*_{t}] can learn nothing about the decryption of some ciphertext *c* other than what is revealed by the keys at the attacker’s disposal.

To illustrate the power of functional encryption, the following section covers how it naturally captures many advanced encryption concepts in cryptography. First, it should be clear that traditional public-key encryption is a very special case of functional encryption, where the only supported function is the identity function; the decryptor learns either the complete decryption or nothing at all.

**Identity-based encryption**. A more advanced public-key concept called “identity-based encryption,” or IBE, is an encryption system where any string can serve as a public key; a user’s email address, a date, an IP address, a location, or even the numbers 1, 2, and 3 are all potential public keys. IBE public keys are often called “identities” and denoted by id. To obtain the secret key for a particular identity the user communicates with an authority holding a master key. The authority verifies the user is authorized to receive the requested secret key, and, if so, it generates the secret key using its master key. IBE was proposed by Shamir^{29} in 1984, and the first implementations of IBE were proposed by Boneh and Franklin^{6} and Cocks^{10} in 2001; notable constructions include Agrawal et al.,^{1} Boneh and Boyen,^{4} Gentry,^{15} Gentry et al.,^{16} Waters,^{30} and Waters.^{31}

Using the terminology of functional encryption the IBE problem can be recast as an equality-testing functionality. Let pk and mk be the output of the functional-encryption setup algorithm. To encrypt a message *m* to identity id the encryptor calls the encryption algorithm as

*c* = *E*(pk, (id, *m*))

and obtains a ciphertext *c*. Note the data being encrypted is the pair (id, *m*).^{a} A recipient with identity id* obtains a secret key for id* by asking the authority for a secret key sk[*f*_{id*}] where the function *f*_{id*} is defined^{b} as

*f*_{id*}(id, *m*) = *m* if id = id*, and ⊥ otherwise

The authority generates sk[*f*_{id*}] using its functional master key mk. Using this secret key the user can decrypt messages intended for identity id* but learns nothing about messages encrypted for other identities.
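The equality-test function can be written directly; this sketch captures only the functionality (the key-generation machinery of a real IBE scheme is of course far more involved).

```python
def make_f_ibe(id_star: str):
    """f_{id*}(id, m) = m if id == id*, and nothing otherwise."""
    def f(x):
        ident, m = x
        return m if ident == id_star else None
    return f

f_bob = make_f_ibe("bob@example.com")
print(f_bob(("bob@example.com", "hi Bob")))      # 'hi Bob'
print(f_bob(("alice@example.com", "hi Alice")))  # None
```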

Recall that IBE systems reduce reliance on certificate directories needed for traditional public-key encryption; to encrypt to identity id, the encryptor needs only the global public key pk and the recipient’s identity id. General functional encryption systems have the same property: they require no online certificate directory. An encryptor needs only the global public key pk and the payload *x* to be encrypted and no other information about the intended recipient(s).

**Attribute-based encryption**. Another encryption concept called “attribute-based encryption,” or ABE, lets the encryptor specify more abstractly who is authorized to decrypt a specific ciphertext. ABE was proposed by Sahai and Waters^{28} and later refined by Goyal et al.^{19} into two different formulations: key-policy ABE and ciphertext-policy ABE.


In the ciphertext-policy system the encryptor specifies a policy φ on recipient attributes that determines who can decrypt the ciphertext; for example, the encryptor can encrypt messages to anyone who is a

(“U.S. citizen” AND “over 30”) OR “male”

which is a Boolean formula φ on three variables. To encrypt a message *m* with decryption policy φ the encryptor calls

*c* = *E*(pk, (φ, *m*))

and obtains a ciphertext *c*.

Now, consider a recipient who wants to decrypt the ciphertext. The recipient has a number of attributes, say, “U.S. citizen” and “over 30.”

Let *n* be the total number of attributes, and we represent the set of user attributes as a Boolean vector of length *n*; the vector is 1 at positions that correspond to attributes that are true and 0 everywhere else. With this setup each user has an attribute vector **u** in {0, 1}^{n}.
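Mapping a user's attribute set to the Boolean vector **u** is straightforward; the attribute universe below is illustrative only.

```python
# Illustrative attribute universe of size n = 3.
UNIVERSE = ["U.S. citizen", "over 30", "male"]

def attr_vector(user_attrs: set) -> list:
    """Return u in {0,1}^n: 1 at positions where the user holds the attribute."""
    return [1 if a in user_attrs else 0 for a in UNIVERSE]

print(attr_vector({"U.S. citizen", "over 30"}))   # [1, 1, 0]
```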

A recipient with attribute vector **u** obtains a secret key for this attribute vector by asking the authority for a secret key sk[*f*_{**u**}] where the function *f*_{**u**} is defined as

*f*_{**u**}(φ, *m*) = *m* if φ(**u**) = 1, and ⊥ otherwise

The authority generates sk[*f*_{**u**}] using its functional master key mk. Using this secret key the user can decrypt ciphertexts where the user’s attributes satisfy the decryption policy but learns nothing about the decryption of other ciphertexts.
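In code, the function bundled into the recipient's key can be sketched as follows; policies are modeled as Python predicates on **u**, and a real scheme would of course enforce this check cryptographically rather than in plain code.

```python
def make_f_u(u: list):
    """f_u(phi, m) = m if phi(u) = 1, and nothing otherwise."""
    def f(x):
        phi, m = x
        return m if phi(u) else None
    return f

# Policy: position 0 = "U.S. citizen", position 1 = "over 30".
phi = lambda u: u[0] == 1 and u[1] == 1

f_u = make_f_u([1, 1, 0])
print(f_u((phi, "secret")))                   # 'secret'
print(make_f_u([1, 0, 1])((phi, "secret")))   # None
```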

A related concept called “key-policy attribute-based encryption” places the access policy φ in the key and the vector **u** ∈ {0, 1}^{n} in the ciphertext. The secret key sk[*f*_{φ}] decrypts all encryptions *E*(pk, (**u**, *m*)) for which φ(**u**) = 1.

### Security

Here we turn to constructing functional-encryption systems but first explain what it means for a functional system to be secure. The full definition is a bit technical, and we give only high-level intuition; for more, see Boneh et al.^{7}

Roughly speaking, a functional-encryption system is secure if an attacker with a set of secret keys sk[*f*_{1}],…,sk[*f*_{t}] can learn nothing about the decryption of some ciphertext *c* other than what is revealed by the keys at the attacker’s disposal. If *c* is the encryption of some data *x*, then the attacker can use the attacker’s secret keys to learn *f*_{1}(*x*),…,*f*_{t}(*x*). However, the attacker must be unable to learn anything else about *x*; for example, if the attacker has secret keys that reveal the first three bits of *x*, then clearly the attacker can learn these bits given an encryption of *x* but would be unable to learn anything about the remaining bits of *x*.

To give a bit more detail about security requirements, let *A* be a polynomial-time adversary that takes as input three things: the public key pk, a set of secret keys sk[*f*_{1}],…,sk[*f*_{t}] for functions *f*_{1},…,*f*_{t} of its choice, and a ciphertext *c* = *E*(pk, *x*). This *A* might output some information about the decryption of *c* (such as the least significant bit of *x*). We say the system is secure if for every such *A* there is another polynomial-time algorithm *B*, called a simulator, that, given pk and *f*_{1}(*x*),…,*f*_{t}(*x*) but not given *c*, is able to output the same information about *x* that *A* outputs. Since *B* never got to see *c*, it must have deduced the information about *x* strictly from *f*_{1}(*x*),…,*f*_{t}(*x*). Since *A* and *B* output the same information about *x*, the existence of *B* means the only information *A* can learn about *x* from the ciphertext *c* is information it can learn from *f*_{1}(*x*),…,*f*_{t}(*x*); it cannot learn anything else about *x*. Hence, *A* can learn from *c* whatever is revealed by the secret keys at its disposal but nothing else.^{c}

*Challenge: Preventing collusion attacks*. Attacks on functional encryption using multiple functional secret keys are called collusion attacks, and preventing them is the main obstacle to constructing secure functional systems. To illustrate the problem consider again the functionality described earlier and suppose the encryptor wishes to encrypt a message *m* to this policy

“U.S. citizen” AND “over 30”

A simple implementation is to associate a public key pk_{1} with the attribute “U.S. citizen” and a public key pk_{2} with the attribute “over 30” and double-encrypt the message *m* as

*c* = *E*(pk_{1}, *E*(pk_{2}, *m*))

where here *E*(·,·) is a regular public-key-encryption algorithm. To decrypt *c* the recipient—call her Alice—must possess both secret keys corresponding to pk_{1} and pk_{2}, implementing the conjunction policy specified by the encryptor.

Now, suppose another user, Bob, has attributes “U.S. citizen” and “male” where the attribute “male” is associated with a public key pk_{3}. He would be given the secret keys corresponding to pk_{1} and pk_{3}, letting him decrypt messages encrypted for some policies (such as (“U.S. citizen” and “male”)). In addition, suppose Alice has the attribute “over 30” and is then given only the secret key corresponding to pk_{2}. Thus, she cannot decrypt the message associated with the original policy on her own.

The problem is Alice and Bob can collude to combine their secret keys and create new secret keys neither one should have; for example, Alice and Bob working together can decrypt ciphertexts intended for policy “over 30” and “male,” even though neither can decrypt the ciphertext by themselves. In this example, collusion enabled Alice and Bob to decrypt a ciphertext to which neither should have access.
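The attack can be seen in a few lines with a toy cipher; XOR stands in for real public-key encryption here, since the point is only the key-pooling logic, not the cipher itself.

```python
import os

def E(k, m):
    """Toy cipher standing in for public-key encryption (insecure)."""
    return bytes(a ^ b for a, b in zip(m, k))

D = E  # XOR is its own inverse

# One independent key per attribute (the naive design).
k_citizen, k_over30, k_male = (os.urandom(32) for _ in range(3))

# Ciphertext for policy "over 30" AND "male".
m = b"for over-30 males only"
c = E(k_over30, E(k_male, m.ljust(32)))

# Alice holds only k_over30; Bob holds k_citizen and k_male.
# Neither can decrypt alone, but pooling their keys succeeds:
recovered = D(k_male, D(k_over30, c)).rstrip()
assert recovered == m   # collusion defeats the intended policy
```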

*Secure constructions*. Secure constructions for complex functionalities must prevent collusion attacks. Collusion attacks are prevented by binding together all secret keys for a set of attributes, so mixing the keys given to distinct users does not help. As a visual metaphor, one can imagine that all the keys given to Alice are colored blue, while all the keys given to Bob are colored red. Decryption succeeds only when the decryptor uses a set of keys of the same color. The colors ensure Alice and Bob cannot combine their keys to decrypt ciphertexts they should not be able to decrypt.

In practical terms, the colors are implemented through randomization values. All the keys given to Alice are blinded by the same random value, while all the keys given to Bob are blinded by a different random value. Decryption with keys blinded by the same randomizer produces the correct decrypted message. Decryption with keys blinded by different randomizers results in a random value unrelated to the correct decryption.
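A minimal arithmetic sketch of this blinding idea, under simplifying assumptions: a master secret `s` is split into two per-attribute shares, and each user's shares are offset by a fresh per-user randomizer `r` (the "color"). Same-user shares recombine to `s`; cross-user mixtures do not, except with negligible probability.

```python
import secrets

P = 2**61 - 1            # toy prime modulus
s = 123456789            # master secret, split as s = s1 + s2 (mod P)
s1 = 111
s2 = (s - s1) % P

def issue_user_keys():
    """Blind both shares with one per-user randomizer (the 'color')."""
    r = secrets.randbelow(P)
    return ((s1 + r) % P, (s2 - r) % P)

alice = issue_user_keys()
bob = issue_user_keys()

assert (alice[0] + alice[1]) % P == s      # same color: recombines to s
mixed = (alice[0] + bob[1]) % P            # mixed colors: s + r_A - r_B
print(mixed == s)                          # False (w.h.p.)
```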

### State of the Art

The state of the art in functional encryption can be understood by considering what information about the plaintext *x* is exposed by the ciphertext to all participants. We refer to this information as the result of the “empty functionality” denoted *f*_{ε}(·); for example, it is inherent in any encryption scheme that the empty functionality exposes some information about *x* (such as a bound on the size of the plaintext). When the exact plaintext length is leaked by the ciphertext we write *f*_{ε}(*x*) = |*x*| to indicate that anyone can learn the plaintext length from the ciphertext.

**Public index: ABE**. In general, we can consider the problem of functional encryption where the data to be encrypted is decomposed into two parts *x* = (ind, *m*), where ind denotes a public index the encryptor does not mind revealing to all participants in the system. That is, we define the empty functionality as *f*_{ε}(ind, *m*) = (ind, |*m*|).

Now consider the specific case of ABE, where the access policy φ is the public index. In this setting, where the access policy does not require protection, we have fairly broad and efficient constructions of secure ABE schemes; secure ABE schemes exist that support any access policy φ that can be expressed as a Boolean formula over the attributes (as in the earlier examples).^{3,19,22,23,25,26,32} Going beyond policies expressible as Boolean formulas remains a vexing open problem for researchers, with the ultimate goal of supporting policies expressible as arbitrary Boolean circuits or Turing machines.

**Non-public index**. A more challenging setting arises where we insist the empty functionality reveals as little as possible, namely *f*_{ε}(*x*) = |*x*|. Here, our current understanding of functional encryption is extremely limited. The state of the art is limited to the inner-product functionality over prime fields.^{2,21,22,25} Because this functionality is somewhat technical (and before we describe it more formally), we briefly discuss some applications: First, consider the question of searching on encrypted data, where the data is encrypted based on a public key and stored on a public server.^{5} The security challenge in this setting is to hide the specific nature of the search query from the public server while still allowing the public server to send back only data entries that match the search query. The inner-product functionality we describe in the following paragraphs allows a user to perform such a search based on disjunction queries and more generally searches defined by CNF and DNF formulae or by checking whether a univariate search polynomial evaluates to zero on a particular input value.

The functionality we consider is defined over a prime field F_{p} where *p* is a large prime chosen randomly during the setup of the functional-encryption scheme. Messages and keys will correspond to vectors of a fixed arbitrary dimension *n* over F_{p}. Let us denote the message by the vector **v** and the vector underlying a secret key by **u**. We then have

*f*_{**u**}(**v**) = 1 if ⟨**u**, **v**⟩ = 0, and 0 otherwise

where ⟨**u**, **v**⟩ = ∑_{i} *u*_{i}*v*_{i} denotes the inner product over F_{p}.

To see how this functionality can be applied, consider again the example of disjunction queries: Suppose a ciphertext is meant to encrypt a single keyword we hash down to a value *a* in our finite field F_{p}. Then to encrypt this value *a*, the system actually encrypts the vector **v** = (1, *a*, *a*^{2},…,*a*^{n−1}). Now, suppose we have to create a key corresponding to a disjunction query “*a*_{1} OR *a*_{2} OR *a*_{3}”. We do this by first considering the polynomial *p*(*x*) = (*x* − *a*_{1})(*x* − *a*_{2})(*x* − *a*_{3}), writing it out in standard form as *p*(*x*) = *c*_{0} + *c*_{1}*x* + *c*_{2}*x*^{2} + *c*_{3}*x*^{3}, where the *c*_{i} are the appropriate coefficients. We then issue a key for the vector **u** = (*c*_{0}, *c*_{1}, *c*_{2}, *c*_{3}, 0,…,0). Glancing at the functionality, we see our key will indeed match the ciphertext for value *a* if and only if *p*(*a*) = 0; that is, if the value *a* is a root of our polynomial *p*(*x*), which was designed to have roots only at the three values *a*_{1}, *a*_{2}, *a*_{3} in our desired disjunction. Other special cases of inner products, including conjunctions and range-testing functionalities, were considered in Boneh and Waters.^{8}
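This encoding is easy to check numerically. The sketch below builds **v** and **u** as described and verifies that the inner product vanishes exactly on the disjunction's keywords; the small prime and the dimension `n = 8` are for illustration only.

```python
P = 2**31 - 1   # toy prime field F_p

def poly_from_roots(roots):
    """Coefficients (lowest degree first) of prod_i (x - a_i) mod P."""
    coeffs = [1]
    for r in roots:
        coeffs = [((coeffs[i - 1] if i > 0 else 0)
                   - r * (coeffs[i] if i < len(coeffs) else 0)) % P
                  for i in range(len(coeffs) + 1)]
    return coeffs

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

n = 8
roots = [5, 17, 42]            # the disjunction "a1 OR a2 OR a3"
u = poly_from_roots(roots)
u += [0] * (n - len(u))        # pad key vector to dimension n

def v_for(a):
    """Ciphertext vector v = (1, a, a^2, ..., a^{n-1}) for keyword a."""
    return [pow(a, i, P) for i in range(n)]

print(inner(u, v_for(17)))   # 0 -> keyword matches the disjunction
print(inner(u, v_for(18)))   # nonzero -> no match
```

Since ⟨**u**, **v**⟩ = *p*(*a*) by construction, the inner product is zero exactly when *a* is one of the three roots.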

Unfortunately, the exact cryptographic mechanisms behind the results of Katz et al.,^{21} Lewko et al.,^{22} and Okamoto and Takashima^{25} are too technically involved to describe here; we encourage interested readers to consult these sources for further technical detail.

**Current limitations**. Current functional-encryption schemes, especially in non-public index settings, are limited. From a technical standpoint, current techniques for building functional-encryption schemes are all based on elliptic-curve groups equipped with efficiently computable bilinear pairings that map into the multiplicative structure of a finite field. At a very high level of design abstraction a pairing operation allows for a single multiplication between the exponents of two “source” group elements. However, the result of a pairing operation lies in a “target” group, in which the pairing operation cannot be repeated.

The reason we can handle inner products of two vectors is that this operation requires only one parallel multiplication step, which is all that bilinear maps provide. A tantalizing question is whether techniques from lattices, which have been so useful in the context of fully homomorphic encryption,^{14} can help achieve greater functionality for functional encryption.

**Efficiency**. The efficiency of functional-encryption systems varies significantly with the specific cryptographic construction. However, we can offer an approximate sense of the efficiency of ABE where the ciphertext is associated with an access policy φ expressed as a Boolean formula over attributes. In current systems, the size of the ciphertext scales with the size of the Boolean formula φ; for example, in Waters,^{32} a ciphertext consists of two group elements for every leaf node of φ, and encryption takes three exponentiations for every leaf node. Decryption requires two of the aforementioned pairing operations for each attribute used in satisfying the formula. While it is difficult to predict how future functional-encryption systems might evolve, developers can expect the number of public-key operations required to scale with the complexity of the functionality.

### Functional Encryption vs. Fully Homomorphic Encryption

Fully homomorphic encryption (FHE) is arguably the most impressive development in cryptography over the past few years, enabling one to compute on ciphertexts in the following sense: Given a public key pk, encryptions of messages *x*_{1},…,*x*_{t} under pk, and the description of a function *f* as input, any user can construct an encryption of the message *f*(*x*_{1},…,*x*_{t}); see Gentry^{13} for a detailed discussion. A more restricted version of FHE, called univariate FHE, allows any user to construct an encryption of