
Communications of the ACM

Letters to the editor

Consider Indirect Threats of AI, Too



Alan Bundy's Viewpoint "Smart Machines Are Not a Threat to Humanity" (Feb. 2017) was too limited in light of the recent accomplishments of artificial intelligence (AI). Reducing the entire field of AI to four "successful AI systems"—DeepBlue, Tartan Racing, Watson, and AlphaGo—does not give the full picture of the impact of AI on humanity. Recent advances in pattern recognition, due mainly to deep learning, have achieved benchmarks in computer vision and speech recognition comparable to human performance [2]; consider that AI technologies power surveillance systems, as well as Apple's Siri and Amazon's Echo personal assistants. Looking at such AI algorithms, one can imagine artificial general intelligence becoming possible throughout our communication networks, computer interfaces, and tens of millions of Internet of Things devices in the near future. Toward this end, DeepMind Technologies Ltd. (acquired by Google in 2014) created a game-playing program combining deep learning and reinforcement learning that sees the board, as well as moves the pieces on the board [1]. Recent advances in generative adversarial learning will reduce reliance on labeled data (and the humans who do the labeling), moving toward machine-learning software capable of self-improvement.

That four well-known AI applications are narrowly focused by design does not mean smart machines are not a threat to humanity. This is a false premise. Smart machines are a threat to humanity in indirect ways. Intelligence runs deep.

Myriam Abramson, Arlington, VA


Author Responds:

Abramson misses my point. AI systems are not just narrowly focused by design, because we have yet to accomplish artificial general intelligence, a goal that still looks distant. The four examples I included are the ones most often cited to illustrate AI progress. Her reaction illustrates my main point—that it is all too easy to erroneously extrapolate from spectacular success in a narrow area to general success elsewhere. That leads to real danger to humans, but not to humanity.

Alan Bundy, Edinburgh, Scotland


Gustafson's Law Contradicts Theory Results

The article "Exponential Laws of Computing Growth" by Peter J. Denning and Ted G. Lewis (Jan. 2017) cited Gustafson's Law (from John L. Gustafson's "Reevaluating Amdahl's Law," May 1988) to refute Amdahl's Law. Unfortunately, Gustafson's Law itself contradicts established theoretical results.

Both Amdahl and Gustafson claimed to quantify the speedup t1/tN achievable by N processors, where N > 1, t1 is the time required to solve a computational problem by using one processor, and tN is the time required to solve the same problem using N processors. Gustafson, in an attempt to interpret experimental results, said

t1/tN = s + N(1 − s)

where s, 0 < s < 1, is the proportion of time spent by a single processor on serial parts of the program. Gustafson claimed this equation should replace Amdahl's Law as the general speedup rule for parallel and distributed computing.
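As an aside (not part of either letter), the divergence between the two formulas is easy to see numerically. The sketch below compares them for an illustrative serial fraction of s = 0.05; the function names and the chosen values are assumptions for illustration only:

```python
def gustafson_speedup(s: float, n_proc: int) -> float:
    """Scaled speedup s + N(1 - s) predicted by Gustafson's Law,
    where s is the serial fraction measured on one processor."""
    return s + n_proc * (1 - s)

def amdahl_speedup(s: float, n_proc: int) -> float:
    """Speedup 1 / (s + (1 - s)/N) predicted by Amdahl's Law
    for the same serial fraction s."""
    return 1.0 / (s + (1 - s) / n_proc)

# With a 5% serial fraction the predictions diverge as N grows:
# Gustafson's grows almost linearly in N, while Amdahl's saturates
# below the ceiling 1/s = 20.
for n in (10, 100, 1000):
    print(n, gustafson_speedup(0.05, n), amdahl_speedup(0.05, n))
```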

The sequential running time for finding the maximum of n integers is t1(n) ≤ cn, accounting for n − 1 comparisons and, in the worst case, n − 1 assignments, where c is a positive constant. Based on Gustafson's equation, the time to find the maximum of n integers by using N processors would be

tN(n) = t1(n)/(s + N(1 − s)) ≤ cn/(s + N(1 − s))

which is bounded by a constant for any N proportional to n, and approaches 0 for any fixed n if N approaches infinity.
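Dévai's two limiting claims can be checked numerically. The sketch below plugs illustrative values into the formula tN = t1/(s + N(1 − s)) with t1 = cn; the constants c = 1 and s = 0.5 are arbitrary assumptions for illustration:

```python
def predicted_parallel_time(n: int, n_proc: int, s: float, c: float) -> float:
    """Time to find the max of n integers *if* Gustafson's formula held:
    t_N = t_1 / (s + N(1 - s)), with sequential time t_1 = c * n."""
    return (c * n) / (s + n_proc * (1 - s))

# N proportional to n: the predicted time approaches the constant c / (1 - s).
t_small = predicted_parallel_time(10**3, 10**3, s=0.5, c=1.0)
t_large = predicted_parallel_time(10**6, 10**6, s=0.5, c=1.0)

# Fixed n with N growing without bound: the predicted time tends to 0,
# contradicting the known lower bounds cited below.
t_vanishing = predicted_parallel_time(100, 10**9, s=0.5, c=1.0)
```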

However, in 1982 Cook and Dwork [3] provided an Ω(log n) lower bound for finding the maximum of n integers allowing infinitely many processors of any parallel random-access machine (PRAM) without simultaneous writes.

In 1985 Fich et al. [4] proved an Ω(log log n) lower bound for the same problem under the priority model, where the processor with the highest priority is allowed to write in case of a write conflict. The priority model is the strongest PRAM model allowing simultaneous writes.

Nevertheless, Denning and Lewis were right that Amdahl's Law is flawed. Contrary to Amdahl's assumption, it has already been demonstrated (though without reference to Amdahl) that, in theory, no inherently sequential computations exist. Even though sequential computations (such as sequential concurrent objects and Lamport's bakery algorithm) may appear in concurrent systems, they have negligible effect on speedup if the growth rate of the parallel fraction is higher than that of the sequential fraction.

Ferenc (Frank) Dévai, London, U.K.


Author Responds:

Gustafson's Law gives bounds on data-parallel processing, whereby the same operation(s) is applied in parallel to different data elements. The standard serial algorithm for finding the max of n elements thus cannot be parallelized, and Amdahl's Law says no speedup. However, the max can be computed in parallel by applying the compare operation to pairs of numbers in log(n) rounds, yielding a speedup of n/log(n) ≤ n(1 − p), consistent with Gustafson's Law. Our point was not that all algorithms are parallelizable through data parallelism but, rather, that data parallelism currently contributes to the exponential rise in computing power because so many cloud-based operations fall into the data-parallel paradigm.
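The pairwise log(n)-round scheme Denning and Lewis describe can be sketched as a sequential simulation of the parallel rounds; the function name and the counting of rounds are illustrative assumptions, not code from either article:

```python
import math

def tournament_max(values):
    """Tournament-style max: each round compares disjoint pairs (steps
    that could run in parallel), halving the number of candidates, so
    ceil(log2 n) rounds suffice for n nonempty input values."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        vals = [max(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

# 1000 candidates are reduced to one in ceil(log2 1000) = 10 rounds.
best, rounds = tournament_max(range(1000))
```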

Peter J. Denning and Ted G. Lewis, Monterey, CA


Embed the 'Social' in Application Software, Too

In their article "Industrial Scale Agile—From Craft to Engineering" (Dec. 2016), Ivar Jacobson et al. described how software development should be moved from "craft to engineering," but their proposed method completely ignored what I would generally call the "social embedding" of application software. As software is increasingly integrated into our daily lives, surrounding us in smart homes, smart cities, smart medical devices, and self-driving cars, socio-technical concerns arise not only from privacy and security but also from legal constraints, user diversity, ergonomics, trust, inclusion, and psychological considerations. This is not just a matter of conventional requirements engineering. Many such considerations relate to abstract legal, social, ethical, and cultural norms and thus demand a difficult translation from the abstract normative level to concrete technical artifacts in the software. A crucial point is that such concerns of social embedding could lead to conflicting software design requirements and thus should be addressed in a systematic, integrated development process. The results of such a transformation determine the acceptance and acceptability of application software. From this perspective, Jacobson et al. remain in the "old world" of software engineering, without proper attention to the changing role of application software for users like you and me.

Kurt Geihs, Kassel, Germany


Authors Respond:

Geihs is absolutely right that Essence, the new standard we explored in our article, is not directly concerned with what he calls "social embedding," but that was intentional. Essence represents a common ground for software engineering in general, and its developers have been very conservative in what they have included. However, Essence is designed to be extended by any known specific set of practices, whether human-centric, techno-centric, or user-centric. Since Essence is small yet practical, no one familiar with it would be surprised if it could also serve as a platform for the kind of practices Geihs is looking for.

Ivar Jacobson, Ian Spence, and Ed Seidewitz, Alexandria, VA


References

1. Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518 (Feb. 26, 2015), 529–533.

2. Xiong, W. et al. Achieving human parity in conversational speech recognition. arXiv (Feb. 17, 2017); https://arxiv.org/abs/1610.05256

3. Cook, S.A. and Dwork, C. Bounds on the time for parallel RAM's to compute simple functions. In Proceedings of the 14th Annual ACM Symposium on Theory of Computing (San Francisco, CA, May 5–7). ACM Press, New York, 1982, 231–233.

4. Fich, F.E., Meyer auf der Heide, F., Ragde, P., and Wigderson, A. One, two, three … infinity: Lower bounds for parallel computation. In Proceedings of the 17th Annual ACM Symposium on Theory of Computing (Providence, RI, May 6–8). ACM Press, New York, 1985, 48–58.


Footnotes

Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or less, and send to letters@cacm.acm.org.


©2017 ACM  0001-0782/17/04

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.



 
