Opinion
Letters to the Editor

Election Auditing and Verifiability


Overall, the Inside Risks Viewpoint "The Risks of Self-Auditing Systems" by Rebecca T. Mercuri and Peter G. Neumann (June 2016) was excellent, and we applaud its call for auditing systems by independent entities to ensure correctness and trustworthiness. However, with respect to voting, it said, "Some research has been devoted to end-to-end cryptographic verification that would allow voters to demonstrate their choices were correctly recorded and accurately counted. However, this concept (as with Internet voting) enables possibilities of vote buying and selling." This statement is incorrect.

While Internet voting (like any remote-voting method) is indeed vulnerable to vote buying and selling, end-to-end verifiable voting is not. Poll-site-based end-to-end verifiable voting systems use cryptographic methods to ensure voters can verify their own votes are correctly recorded and tallied while (paradoxically) not enabling them to demonstrate how they voted to anyone else.

Mercuri and Neumann also said, "[end-to-end verifiability] raises serious questions of the correctness of the cryptographic algorithms and their implementation." This sentence is potentially misleading, as it suggests confidence in the correctness of the election outcome requires confidence in the correctness of the implementation of the cryptographic algorithms. But end-to-end verifiable voting systems are designed to be "fail-safe": if the cryptographic algorithms in the voting system are implemented incorrectly, the audit will fail. A poor crypto implementation in the voting system cannot cause an audit to approve an incorrect election outcome.
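To make these two properties concrete, here is a minimal toy sketch, in Python, of the "recorded-as-cast" check against a public bulletin board. The hash commitment, ballot identifiers, and function names are our own illustrative assumptions, not any deployed protocol; in particular, a bare hash receipt like this one is not receipt-free, and real end-to-end verifiable systems add homomorphic tallying or mix-nets with zero-knowledge proofs to keep ballots secret while proving the count.

```python
import hashlib
import secrets

bulletin_board: dict[str, str] = {}  # ballot id -> published commitment

def commit(ballot_id: str, vote: str, nonce: str) -> str:
    """Hash commitment standing in for the real cryptography."""
    return hashlib.sha256(f"{ballot_id}|{vote}|{nonce}".encode()).hexdigest()

def cast(ballot_id: str, vote: str) -> str:
    """Record a vote; the returned nonce is the voter's private receipt."""
    nonce = secrets.token_hex(16)
    bulletin_board[ballot_id] = commit(ballot_id, vote, nonce)
    return nonce

def voter_check(ballot_id: str, vote: str, nonce: str) -> bool:
    """Recorded-as-cast: the voter recomputes the commitment and checks
    that it appears on the public board. A buggy or dishonest recording
    step makes this check fail; it cannot quietly pass."""
    return bulletin_board.get(ballot_id) == commit(ballot_id, vote, nonce)

receipt = cast("ballot-42", "alice")
assert voter_check("ballot-42", "alice", receipt)    # honest recording passes
assert not voter_check("ballot-42", "bob", receipt)  # a changed vote no longer matches
```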

Finally, we note that end-to-end verifiable election methods are a special case of "verifiable computation," whereby a program can produce not only a correct result but also a "proof" that it is the correct result for the given inputs. Of course, the inputs need to be agreed upon before such a proof makes sense. Such methods may thus be useful not only for election audits but elsewhere.
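A small classical instance of this idea is a certifying algorithm: along with its answer, the program emits a witness that an independent checker can verify cheaply, without rerunning the computation. The sketch below (our illustration, in Python) uses the extended Euclidean algorithm, whose Bézout coefficients certify a gcd.

```python
def certified_gcd(x: int, y: int) -> tuple[int, int, int]:
    """Extended Euclid: returns (g, a, b) with a*x + b*y == g.
    The coefficients (a, b) are a certificate for the result g."""
    old_r, r = x, y
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    return old_r, old_a, old_b

def check_certificate(x: int, y: int, g: int, a: int, b: int) -> bool:
    """Verify the claim cheaply: g divides both inputs (a common
    divisor), and a*x + b*y == g (so every common divisor divides g,
    making g the greatest)."""
    return g > 0 and x % g == 0 and y % g == 0 and a * x + b * y == g

g, a, b = certified_gcd(12_345, 6_789)
assert check_certificate(12_345, 6_789, g, a, b)
```

The same pattern scales up: an end-to-end verifiable election publishes, in effect, a certificate that the announced tally is the correct result for the cast ballots.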

Joseph Kiniry, Portland, OR, and Ronald L. Rivest, Cambridge, MA


Authors Respond:

We cannot fully elucidate here the flaws in each of the many proposed cryptographically verifiable voting subsystems. Their complexity, and that of the surrounding system environments, undemocratically shifts confirmation of correct implementation to a scant few intellectually elite citizens, if it can even be accomplished within an election cycle. Moreover, all of these methods have vulnerabilities similar to those of the Volkswagen emissions system; that is, stealth code can be triggered situationally, appearing correct externally while internally shifting vote tallies in favor of certain candidates over others. We have previously discussed the incompleteness of cryptographic solutions embedded in untrustworthy infrastructures, potentially enabling ballot contents to be manipulated or detected via vote-selling tags (such as write-in candidates or other triggers). The mathematics of close elections also requires that a very high percentage of ballots (over 95%) be independently checked against the digital record, which is not likely to occur, leaving the results unverified.
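To illustrate the sampling arithmetic behind that last point, here is a brief sketch (ours, with hypothetical figures) of the hypergeometric calculation: in a race close enough that a single altered record could flip the outcome, detecting that record with 95% confidence requires checking 95% of the ballots.

```python
def detection_probability(total: int, tampered: int, checked: int) -> float:
    """Chance that a uniform random sample of `checked` ballots contains
    at least one of `tampered` altered records (hypergeometric)."""
    if checked > total - tampered:
        return 1.0  # the sample is too large to avoid every altered record
    p_miss = 1.0
    for j in range(tampered):
        p_miss *= (total - checked - j) / (total - j)
    return 1.0 - p_miss

N = 100_000  # hypothetical ballot count
for tampered in (1, 10):
    for fraction in (0.50, 0.95):
        checked = int(N * fraction)
        p = detection_probability(N, tampered, checked)
        print(f"{tampered} altered, {fraction:.0%} checked: P(detect) = {p:.3f}")
```

With one altered record, checking 50% of ballots gives only a coin-flip chance of detection; 95% confidence demands a 95% check.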

Rebecca T. Mercuri, Hamilton, NJ, and Peter G. Neumann, Menlo Park, CA


Unintended Consequences of Trusting AIs

Toby Walsh’s Viewpoint "Turing’s Red Flag" (July 2016) raised very good points about the safety of increasingly human-like AI and proposed a commonsense law to anticipate potential risks. It is wise to discuss such protections before the technology itself is perfected. Too often the law trails the technology, as with the Digital Millennium Copyright Act, which responded, perhaps a decade late, to illegal file sharing.

Walsh primarily addressed the potential threat of autonomous systems being mistaken for humans, but what about the reverse? Humans could gain an unfair or even a dangerous advantage by impersonating an AI. For instance, in a world where autonomous vehicles are allowed smaller following distances and prompt extra caution from nearby human drivers, a human could install an "I am autonomous" identity device in order to tailgate and weave through traffic with impunity, having won unearned trust from other drivers and vehicles.

A similar situation could arise with the advent of bots that act as intermediaries between humans and online services, including, say, banks. As bots become more trusted, a human-in-the-middle attack could compromise everyone’s private data.

At perhaps the outer reaches of techno-legal tension, we could even imagine the advent of identity theft where the individual is an AI, lovingly brought to life by a Google or an Amazon, and the thief to be punished is a human impersonator. Is this the route through which AIs might someday become legal persons? In a world where the U.S. Supreme Court has already extended constitutional free speech rights to corporations, this scenario seems quite plausible.

Mark Grossman, Palo Alto, CA


Author Responds:

Grossman makes a valid point. Just as we do not want bots to be intentionally or unintentionally mistaken for humans, as I suggested in my Viewpoint, we also do not want the reverse. The autonomous-only lane on the highway should not have humans in it pretending to be, say, the equivalent of more-capable autonomous drivers.

Toby Walsh, Berlin, Germany


More to Asimov’s First Law

In his Viewpoint (July 2016), Toby Walsh argued for some sort of preliminary indication in cases in which a human is interacting with a robot. I suggest he check Isaac Asimov’s classic science fiction novels The Caves of Steel (1953) and The Naked Sun (1957) for an earlier treatment of the topic. In the latter work especially, R. (Robot) Daneel Olivaw deliberately hides his/its nature while investigating a murder. Asimov also included interesting discussion of the limitations inherent in the first of his "Three Laws of Robotics," whereby "A robot may not injure a human being or, through inaction, allow a human being to come to harm," as it assumes the robot is aware such action/inaction would itself be harmful.

Joe Saur, Yorktown, VA


Author Responds:

Yes, science fiction offers many stories that support the call for a Turing Red Flag law whereby autonomous systems are required to identify themselves. I mentioned the movie Blade Runner, which is, of course, based on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? Isaac Asimov’s oeuvre also contains many examples. We should all listen to these warnings.

Toby Walsh, Berlin, Germany


To Design Educational Languages, First Know What You Want to Teach

We were pleased by the attention R. Benjamin Shapiro and Matthew Ahrens's Viewpoint "Beyond Blocks: Syntax and Semantics" (May 2016) gave to the potential educational value of tools that translate block syntax to text. However, though they did not identify any published studies that have evaluated possible benefits from such tools, several recent studies have indeed been done. Moreover, smooth transitions from blocks to text syntax have been a feature of research enhancements to existing languages (such as Tiled Grace by Homer and Noble) and of novel languages in successful products (such as the educational coding game Code Kingdoms). Researchers typically publish evaluations of their systems; we ourselves have evaluated the educational outcomes of Code Kingdoms.

But what specific skills and concepts are computer science educators actually teaching with such systems? To find out, we must focus on evaluating those skills and concepts, rather than on task performance or productivity measures with little relevance to educational objectives. We developed our own DrawBridge system1 to support not only understanding of syntax through the transition from blocks to (JavaScript) text syntax but also transitions from direct-manipulation drawing to geometric notation and from code to live Web deployment. Attention to educational assessment of benefits can also help guide and evaluate the design of continuing work, as in Shapiro and Ahrens's own work. Educators and system designers should thus recognize the importance of notational expertise (understanding the nature and function of concrete syntax) along with the more popular but abstract concerns of computational thinking. An important step toward improving the design of educational systems is to better understand what computer science educators are actually trying to teach.
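To make the blocks-to-text idea concrete, here is a minimal sketch in Python of the kind of translation such systems perform. The nested-list block representation and the toy drawing commands are our own illustrative assumptions, not DrawBridge's or Code Kingdoms' actual formats.

```python
def block_to_js(block, indent=0):
    """Render a toy block tree as JavaScript source text."""
    pad = "  " * indent
    kind = block[0]
    if kind == "repeat":
        _, times, body = block
        inner = "\n".join(block_to_js(b, indent + 1) for b in body)
        return (f"{pad}for (let i = 0; i < {times}; i++) {{\n"
                f"{inner}\n{pad}}}")
    if kind == "call":
        _, name, *args = block
        return f"{pad}{name}({', '.join(map(str, args))});"
    raise ValueError(f"unknown block kind: {kind}")

# A "repeat 4" block containing two command blocks (draws a square).
program = ["repeat", 4, [["call", "forward", 100], ["call", "turn", 90]]]
print(block_to_js(program))
```

Showing the learner the familiar loop block rendered as JavaScript for-loop syntax is exactly the bridge from blocks to concrete text syntax that such systems aim to provide.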

Alistair Stead and Alan Blackwell, Cambridge, U.K.


Still Want to Know Who Is the Human

Commenting on Moshe Y. Vardi’s Editor’s Letter "Would Turing Have Passed the Turing Test?" (Sept. 2014), Huma Shah and Kevin Warwick's letter to the editor "Human or Machine?" (Apr. 2015) included part of a conversation between a judge (J19) and candidates (E20 and E24) of the now-famous Turing Test experiment. Readers were asked to decide whether E20 or E24 is the computer, an appropriate and indeed challenging question. Unfortunately, I could not find a resolution in Communications or elsewhere. Would it be possible to get the correct answer from Shah and Warwick? I would like to include it in a quiz in a theory-of-computing course.

Sven Kosub, Konstanz, Germany


Authors Respond:

In the 2014 experiment, all judges were informed there was indeed one human and one machine in each simultaneous-comparison test. For Judge J19, the result of the parallel interrogation of hidden entities E24 and E20 was the correct identification of the left interlocutor, E20 (UltraHal from Zabaware), as a machine, awarding it 52/100 for conversational ability. J19 was unable to determine the nature of the right interlocutor, E24, which was actually a human male, and reported "unsure."

Huma Shah, London, U.K., and Kevin Warwick, Reading, U.K.

