To view the accompanying paper, visit doi.acm.org/10.1145/2902313
For thousands of years, cryptography was synonymous with "secret writing," whereby two parties used some shared secret key or method to communicate confidential information. But beginning in the 1970s, there was an explosion of new cryptographic concepts and applications. These include public key cryptography (confidential communication between two parties over an open channel), secure function evaluation (jointly computing a function of private inputs, such as the results of an election or auction, without revealing them), zero-knowledge proofs (proving a statement is true without revealing anything else), fully homomorphic encryption (computing on encrypted data), and more.
These notions seem so paradoxical that it is amazing these cryptography pioneers even imagined they could be achieved!a Based on their writing, it seems at least part of these inventors' thought process involved the following mental experiment: it is not too difficult to convince yourself these wonderful objects can in fact be achieved if we had a black box computing some function, where every party could use the box to compute an output from an input but could not understand its internal workings. In the words of James Ellis,b "we can regard our [encryption function] as a look-up table containing one value of output for each possible input value." The hope was that it would be possible to simulate such a box with an ordinary program, using what Diffie and Hellman called a "one-way compiler, which takes an easily understood program in a high-level language and translates it into an incomprehensible program in some machine language."c
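Ellis's look-up table view can be made concrete with a small sketch. The toy cipher and all names below are purely illustrative assumptions, not any real construction: the point is only that a table of input/output pairs computes the same function as the code while exposing none of its internal structure.

```python
# Illustrative sketch of the "black box as a look-up table" idea.
# toy_cipher and its hard-coded key are hypothetical examples.

def toy_cipher(x: int) -> int:
    """A toy keyed function on 4-bit inputs; the key is the
    'internal working' we would like to hide."""
    key = 0b1011
    return ((x ^ key) * 7) % 16

# The black box as a table: one output for each possible input.
# A user can evaluate the function from the table, but the table
# reveals no algorithm -- only input/output pairs.
lookup_table = [toy_cipher(x) for x in range(16)]

# The table and the code compute the same function.
assert all(lookup_table[x] == toy_cipher(x) for x in range(16))
```

Of course, for realistic input sizes such a table is astronomically large, which is why one would instead want a compact but equally incomprehensible program, the goal of Diffie and Hellman's hoped-for "one-way compiler."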