
Communications of the ACM

Research Highlights

Technical Perspective: What Does Provable Security Mean for Cryptographic Schemes?

[Image: subway turnstiles. Credit: NYC Subway Guide]

In the early days of cryptography, cryptographic schemes were considered secure until an attack broke them. But such an attack may take years to be discovered, by which time it may be too late. In the 1980s, a new paradigm emerged with the notion of provable security, built on three important steps: the security model, the computational assumptions, and the proof by reduction.

The most important step is the formalization of a security model, which states precisely what security should mean, or alternatively, which attacks one wants to withstand. For public-key encryption, as studied in the following paper, one could simply require one-wayness, which asks that, given a ciphertext but not the decryption key, it be difficult to recover the plaintext. But this is a very weak security notion. It was later strengthened into semantic security, which can be seen as a computational variant of perfect privacy: no adversary can recover even one bit of information about the plaintext, again given only the ciphertext and not the decryption key.

Thus, the security model first makes precise the goal an adversary could try to achieve, and that we want to prevent; this is usually modeled as a security game between the adversary and a challenger. But the model also lists the capabilities of the adversary: what kind of information can the adversary legitimately obtain? In this setting, the encryption key is public, so the adversary can encrypt any plaintext of its choice; hence the so-called chosen-plaintext attacks. In real-life applications, more information may be available: one can send ciphertexts and, from the receiver's answers, learn whether a ciphertext is valid, detect whether it contains a particular plaintext, or ultimately even obtain the plaintext. The last scenario is called a chosen-ciphertext attack: the adversary can ask for the decryption of any ciphertext of its choice in pursuit of its goal. It is of course precluded from asking for the decryption of the challenge ciphertext, as that would trivially break one-wayness or semantic security, but this is the only limitation. A security model thus defines the goals and the means of an adversary.
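The security game described above can be sketched as code. The following is a minimal, illustrative IND-CPA (indistinguishability under chosen-plaintext attack) game; the interfaces `keygen`, `encrypt`, `choose`, and `guess` are hypothetical stand-ins, not from the paper. Instantiating it with a deliberately broken "identity" scheme shows an adversary that wins every run, so that scheme cannot be semantically secure.

```python
import secrets

def ind_cpa_game(keygen, encrypt, choose, guess):
    """One run of the IND-CPA game; returns True iff the adversary wins."""
    pk, _sk = keygen()
    m0, m1 = choose(pk)            # adversary picks two candidate plaintexts
    b = secrets.randbits(1)        # challenger's hidden bit
    c = encrypt(pk, (m0, m1)[b])   # challenge ciphertext
    return guess(pk, c) == b       # adversary wins iff it recovers b

# A deliberately insecure "scheme": encryption is the identity function.
keygen = lambda: (None, None)
encrypt = lambda pk, m: m
choose = lambda pk: (0, 1)
guess = lambda pk, c: c            # the ciphertext reveals the bit directly

# This adversary wins every run; a secure scheme must keep every efficient
# adversary's winning probability negligibly close to 1/2.
assert all(ind_cpa_game(keygen, encrypt, choose, guess) for _ in range(100))
```

For a chosen-ciphertext variant, the game would additionally hand the adversary a decryption oracle, refusing only the challenge ciphertext itself.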

For a given cryptographic scheme, as perfect security is almost never achievable, one relies on a computational assumption. Some longstanding problems are famous, such as integer factoring, the discrete logarithm problem, or lattice-based problems. Some of them have quite simple statements and have been scrutinized for many years. Their intractability is thus widely accepted, and a new efficient algorithm to solve them would be of great importance from the algorithmic point of view. Computational assumptions are well-defined problems that are widely believed to be hard.
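As a concrete illustration of such an assumption, consider the discrete logarithm problem in a toy group. The parameters below are hypothetical and deliberately tiny so the code runs; in a real group, with an order of roughly 256 bits, the exhaustive search sketched here is utterly hopeless, while the forward direction remains fast.

```python
# Toy discrete-logarithm instance (illustrative parameters only).
p, g = 467, 2      # small prime modulus and a generator mod p
x = 229            # the secret exponent
y = pow(g, x, p)   # computing y = g^x mod p is fast (square-and-multiply)

def dlog_bruteforce(g, y, p):
    """Generic attack: exhaustive search over exponents, linear in the
    group order -- infeasible for cryptographic-size groups."""
    t = 1
    for k in range(p - 1):
        if t == y:
            return k
        t = (t * g) % p
    return None

assert dlog_bruteforce(g, y, p) == x
```

The asymmetry between the one-line forward computation and the loop needed to invert it is exactly what a computational assumption captures.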

Finally, one can state and prove the security claim: under the intractability of the hard problem, the cryptographic scheme achieves the security notion. To prove such a claim, one assumes the existence of an efficient adversary against the security notion and uses it, as a subroutine, in an algorithm that efficiently solves the underlying hard problem. The latter cannot exist; hence, by contradiction, neither can the former. This is a proof by reduction, in the sense of complexity theory: one shows that a problem (the long-standing hard problem) reduces to another problem (breaking the security notion), so the latter is at least as difficult as the former.

The ElGamal public-key encryption scheme, as explored in the paper, is well known to have secure instantiations in appropriate groups: against chosen-plaintext attacks, it achieves semantic security under the Decisional Diffie-Hellman assumption. One then usually calls it a secure public-key encryption scheme. But if the adversary has more information than just the public key, security is no longer guaranteed. In real life, concrete implementations of provably secure schemes may be insecure because of incorrect behaviors: the randomness does not contain enough entropy, or more information leaks, through legitimate accesses or through side-channel attacks that exploit power consumption, execution time, or shared cache memory, among others.
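A sketch of textbook ElGamal makes the gap between chosen-plaintext and chosen-ciphertext security concrete. The parameters below are hypothetical toy values (a real instantiation uses a standardized group with a prime order of roughly 256 bits); the last lines demonstrate the well-known multiplicative malleability of ElGamal ciphertexts, which a chosen-ciphertext adversary can exploit even though the scheme is semantically secure under chosen-plaintext attacks.

```python
import secrets

# Toy ElGamal in the order-q subgroup of Z_p^* (illustrative sizes only).
p = 467            # safe prime: p = 2q + 1
q = (p - 1) // 2   # q = 233, prime
g = 4              # 4 = 2^2 is a quadratic residue, so it generates the
                   # subgroup of order q

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret key x
    return pow(g, x, p), x                    # public key h = g^x

def encrypt(h, m):                            # m must lie in the subgroup
    r = secrets.randbelow(q - 1) + 1          # fresh randomness per message
    return pow(g, r, p), (m * pow(h, r, p)) % p

def decrypt(x, c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x mod p

h, x = keygen()
m = pow(g, 5, p)                              # a subgroup element as message
c1, c2 = encrypt(h, m)
assert decrypt(x, (c1, c2)) == m              # correctness

# Malleability: without the secret key, anyone can turn a ciphertext for m
# into a valid ciphertext for m * t -- harmless in the chosen-plaintext
# game, fatal once decryption queries are allowed.
t = pow(g, 7, p)
assert decrypt(x, (c1, (c2 * t) % p)) == (m * t) % p
```

This homomorphic property is precisely the kind of extra capability that a security model restricted to chosen-plaintext attacks never lets the adversary exercise.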

The authors perfectly illustrate that provable security is not a certification for all applications or all implementations: the ElGamal public-key encryption scheme in OpenPGP is vulnerable to several kinds of attacks, for multiple reasons. First, the standard that describes ElGamal is open to several different and incompatible interpretations, leading to cross-configuration attacks. Second, in a scenario stronger than the chosen-plaintext setting, a side-channel attack can recover the decryption key.

Provable security guarantees that an adversary is unlikely to be able to break a specific security notion, provided the scheme is correctly implemented and used in appropriate situations. It is thus of primary importance to identify the security notion required by the specific application (avoiding excessive security, for efficiency reasons) and to have a correct implementation that prevents any unwanted leakage. Standardization bodies should also make all requirements more precise, to avoid ambiguities that are likely to introduce weaknesses.



David Pointcheval is a senior researcher at CNRS, head of the crypto team at INRIA, and chair of the ENS computer science department, Paris, France.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.


