Opinion
Letters to the Editor

Toward an Equation that Anticipates AI Risks

In his "inside risks" Viewpoint "The Real Risks of Artificial Intelligence" (Oct. 2017), David Lorge Parnas wrote that "artificial intelligence" remains undefined while highlighting his concern that AI could yet render humans superfluous and aid authoritarian regimes looking to centralize their hold on political power. He also said AI could yet produce untrustworthy potentially dangerous devices and systems.

Among the psychological factors driving human fear are financial or medical dependence on others, the expectation of physical or mental pain, unintentionally hurting others (such as by causing a car crash), acting irresponsibly (such as by forgetting an infant in a car on a hot day), and simple embarrassment over inappropriate social behavior. Many of us fear losing our privacy and jobs, thoughtlessly insulting colleagues, being overly controlled by governments and corporations, suffering injustice, or being victimized by violence, especially if avoidable. It is our darkest fears that actually protect us the most. Could AI intensify such fears to levels beyond what we already know?

History records numerous instances of humans delivering slavery, humiliation, and genocide through even the simplest of technologies. Consider that swords, rope, and horses have allowed a handful of leaders to control vast populations. Pirates armed with guns who target passengers on airplanes or cruise ships are another threat to life and property. Other non-computational technologies that might, at first glance, seem too unsophisticated to be harmful include poison gas (as used by the Nazis for mass murder) and knives (as used to commandeer commercial airplanes in the 9/11 terror attacks). Even the simplest devices can be the riskiest, representing a much greater threat than any undefined super AI.

Measuring the magnitude of risk for a new device, or for an entire category of technology, is not straightforward, and it becomes even more difficult in light of AI’s incomplete scientific definition, which might even be self-serving. Whether human-like, self-aware, self-motivated machines are more harmful than the tools our Paleolithic ancestors used long ago is an open question. Regardless of how difficult AI’s potential is to measure, computer scientists would benefit from developing a new equation to estimate that risk before the technology becomes widespread and embedded in the Internet of Things. Like Drake’s Equation,1 created by astrophysicist Frank Drake to estimate the number of potentially communicative extraterrestrial civilizations in the Milky Way galaxy, an equation that quantitatively defines the boundaries of machine intelligence and its potential risk2 would stimulate further scientific debate around AI and help define, scientifically, AI’s benefits and risks.
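The letter proposes no specific formula. As a purely illustrative sketch, and only that, an AI-risk analogue of Drake’s Equation might multiply a handful of estimated factors; the factor names and values below are hypothetical, not taken from the letter.

    # Hypothetical, Drake-style product of estimated factors for AI risk.
    # Every name and number here is invented for illustration.
    n_systems = 1e9           # AI-enabled devices deployed (e.g., across the IoT)
    f_autonomous = 0.05       # fraction operating with little or no human oversight
    f_safety_critical = 0.02  # fraction controlling safety-critical functions
    p_failure = 0.001         # estimated probability of harmful failure per device per year
    harm_per_failure = 1.0    # average harm per failure, in some agreed unit

    expected_annual_harm = (n_systems * f_autonomous * f_safety_critical
                            * p_failure * harm_per_failure)
    print(expected_annual_harm)  # 1000.0 under these made-up inputs

As with Drake’s Equation, the value of such a formula would lie less in the number it produces than in forcing each factor to be estimated and debated explicitly.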

Uri Kartoun, Cambridge MA, USA

Author Responds

Although I mentioned that others, including famous people, had expressed the fear that AI could "render humans superfluous," I do not share their view. As I explained, my concern is that programs designed by AI methods, rather than based on solid mathematical models, will be untrustworthy. I also said the term "artificial intelligence" has not been properly defined. Without a definition, no formula can reliably predict the risk of using it.

David Lorge Parnas, Ottawa, Ontario, Canada

Final Knowledge with Certainty Is Unobtainable

Martin E. Hellman’s Turing Lecture "Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic" (Dec. 2017) did not say the crypto wars of the 1970s have returned, threatening to overturn Hellman et al.’s own victory over mandatory government access to information in communication devices. Nor did it say that the common understanding of mathematician Kurt Gödel’s results has been revised by mathematicians and logicians because it was based on first-order logic, which is being replaced by higher-order logic in computer science, with knock-on effects.

Return of the crypto wars and revision of the common understanding of Gödel’s results illustrate that final knowledge with certainty is unobtainable in computer science, as it is in all other fields, and that further extensions, reinterpretations, and revisions are always possible through a process I would call "progressive knowing" that is never finished and never certain.3

The crypto wars have resumed through a current proposal from government security contractors that aims to provide government access to all Internet of Things devices in a way only the government could use to exfiltrate information. A public key would be required in each new device sold in the U.S., such that when a packet arrives that decrypts using that public key, the decrypted packet becomes the "bootloader" for a virtual machine that takes over the device, even while it is in use. The corresponding private keys could be protected against a single point of failure by splitting them into multiple pieces and storing each piece in a different secure government facility. Government access could, over time, be enforced by requiring that all new devices sold in the U.S. interactively verify they can be accessed by the government before being allowed to connect to the U.S. public Internet. A device from another country would be allowed to connect domestically only after arrangements were made over the Internet with a foreign security service.
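The proposal, as described here, does not specify how such keys would be split. A minimal sketch of one simple n-of-n splitting scheme (every piece is required to reconstruct the key) XORs the key with random shares; the function names below are hypothetical.

    import secrets

    def split_key(key: bytes, n: int) -> list:
        """Split a key into n pieces; all n pieces are needed to recover it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def recombine(shares: list) -> bytes:
        """XOR all pieces together to recover the original key."""
        key = bytes(len(shares[0]))
        for s in shares:
            key = bytes(a ^ b for a, b in zip(key, s))
        return key

    # Example: split a 32-byte private key across three separate facilities.
    private_key = secrets.token_bytes(32)
    pieces = split_key(private_key, 3)
    assert recombine(pieces) == private_key

Threshold schemes such as Shamir's secret sharing would allow recovery even if some pieces were lost, but the all-or-nothing XOR form is enough to show how no single facility by itself holds a usable key.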

Government access might be used only pursuant to a court order. But nothing in the physical arrangements of the proposal for mandatory government access would prevent government surveillance. Such access was also a principal objection to the original, technically defective government proposal that Hellman et al. confronted in the 1970s. By correcting those technical defects, the new proposal threatens to overturn the victory in the earlier crypto wars.

Meanwhile, Gödel’s results were based on first-order logic, but every moderately powerful first-order theory is inconsistent. Consequently, computer science is changing to use higher-order logic. However, logicians have shown there are proofs of theorems in higher-order logic that cannot be expressed through text alone, thus overturning a long-held, nominally established philosophical dogma about mathematical theories: that all theorems of a theory can be computationally generated by starting with axioms and mechanically applying rules of inference.3 "Inexpressibility" means it is mathematically provable that it will be forever necessary for computer science to invent new notations for mathematical proofs.
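For illustration only, the dogma described here, that a theory's theorems can be generated mechanically from its axioms, can be sketched as a toy forward-chaining loop over made-up propositional rules; it is not a rendering of Hewitt's argument.

    # Toy illustration of mechanically generating theorems from axioms.
    # The axioms and inference rules below are invented propositional examples.
    axioms = {"A", "B"}
    rules = [({"A", "B"}, "C"),   # from A and B, infer C
             ({"C"}, "D")]        # from C, infer D

    theorems = set(axioms)
    changed = True
    while changed:                # apply rules until no new theorems appear
        changed = False
        for premises, conclusion in rules:
            if premises <= theorems and conclusion not in theorems:
                theorems.add(conclusion)
                changed = True

    print(sorted(theorems))       # ['A', 'B', 'C', 'D']

The claim in the letter is that for higher-order logic no such purely textual, mechanical enumeration can capture every proof.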

Carl Hewitt, Palo Alto, CA, USA

Author Responds

Hewitt is correct that the crypto wars have continued, but the victory I mentioned still holds: establishing that independent researchers could publish papers free from government interference. His comments on Gödel’s results go beyond my mathematical knowledge but do not affect the main point I made about logic being just one way of knowing about the world, and an incomplete one at that.

Martin E. Hellman, Stanford, CA, USA
