The Viewpoints "Smart Machines Are Not a Threat to Humanity" by Alan Bundy and "AI Dangers: Imagined and Real" by Devdatt Dubhashi and Shalom Lappin (both Feb. 2017) argued against the possibility of a near-term singularity wherein super-intelligent AIs exceed human capabilities and control. Both relied heavily on the lack of direct relevance of Moore's Law, noting that raw computing power does not by itself lead to human-like intelligence. Bundy also emphasized the difference between a computer's efficiency in working an algorithm to solve a narrow, well-defined problem and human-like generalized problem-solving ability. Dubhashi and Lappin noted that incremental progress in machine learning, or better knowledge of a biological brain's wiring, does not automatically lead to the "unanticipated spurts" of progress that characterize scientific breakthroughs.
These points are valid, but a more accurate characterization of the situation is that computer science may well be just one conceptual breakthrough away from being able to build an artificial general intelligence. The considerable progress already made in computing power, sensors, robotics, algorithms, and knowledge about biological systems will be brought to bear quickly once the architecture of "human-like" general intelligence is articulated. Will that be tomorrow or in 10 years? No one knows. But unless there is something about the architecture of human intelligence that is ultimately inaccessible to science, that architecture will be discovered. Study of the consequences is not premature.
Martin Smith, McLean, VA
"Can We Trust Autonomous Weapons?" as Keith Kirkpatrick asked at the top of his news story (Dec. 2016). Autonomous weapons already exist on the battlefield (we call them land mines and IEDs), and, despite the 1997 Ottawa Mine Ban Treaty, we see no decrease in their use. Moreover, the decision as to whether to use them is unlikely to be left to those who adhere to the ACM Code of Ethics. The Washington Naval Treaty of 1922 was concluded between nation-states—entities that could be dealt with in historically recognized ways, including sanctions, demarches, and wars. An international treaty between these same entities regarding autonomous weapons would have no effect on groups like ISIS, Al-Qaida, Hezbollah, the Taliban, or Boko Haram. Let us not be naïve ... They have access to the technology, knowledge, and materials to create autonomous weapons, along with the willingness to use them. When they do, the civilized nations of the world will have to decide whether to respond in kind—defensive systems with sub-second response times—or permit their armed forces to be out-classed on the battlefield. I suspect the decision will seem obvious to them at the time.
Joseph M. Saur, Virginia Beach, VA
It was rather jarring to read in the same issue (Dec. 2016) a column "Making a Positive Impact: Updating the ACM Code of Ethics" by Bo Brinkman et al. on revamping the Code and a news article "Can We Trust Autonomous Weapons?" by Keith Kirkpatrick on autonomous weapons. Such weapons are, of course, enabled entirely by software that is presumably written by at least some ACM members. How does the Code's "Do no harm" ideal align with building devices whose sole reason for existing is to inflict harm? It seems that unless this disconnect is resolved the Code is aspirational at best and in reality a generally ignored shelf-filling placeholder.
Jack Ganssle, Reisterstown, MD
Robin K. Hill raised an interesting point in her blog post "Fiction as Model Theory" (Dec. 2016) that fictional characters and worlds need to follow certain rules—rules that can be formalized and verified for consistency. Fiction in general, and science fiction in particular, has always been of considerable interest to scholarly researchers. What was notable in Hill's post was her suggestion of using formalism in rather unconventional domains—domains not traditionally identified with computation-related methods.
I have personally taken a similar path and, together with my colleagues, discovered the utility of formalizing ideas from unconventional domains. These range from modeling complex living environments in self-organizing arrays of motion sensors to identifying unexpected emergent patterns in the spread of disease in large-scale human populations or even in cousin marriages.1 Likewise, I have found that formal specification can prove useful in terms of representing community-identified cognitive development of scholarly researchers measured as a function of their citation indices.2
Could a longer work of fiction, say, a novel or novella, benefit from such treatment? After all, well-written novels often invent their own internally consistent landscapes. They also often involve a rather complex interplay of characters, multiple plotlines, backstories, and conflicts. Scholarly researchers have even identified social networks of fictional characters influencing major events in these make-believe worlds. It is indeed the interplay of characters in conflict that makes for a potential page-turner or, at least, a novel worth reading.
While fiction authors have developed their own instruments, ranging from Randy Ingermanson's so-called "snowflake method" to Shawn Coyne's "story grid" for editors, what is of particular interest to me is the recurrence of self-similar patterns in well-written fiction. Snowflakes consist of fractals, and Coyne has identified similar patterns in well-written novels repeating in sub-scenes he calls "beats" and in scenes, scene sequences, and even the Aristotelian three-act structure; that is, same pattern, different scales. The "story grid" method performs a quantitative dissection of fiction, allowing editors to help create generally engaging fiction.
Fractals, or mathematical sets repeating at multiple scales, appear frequently in nature. Examples range from Romanesco broccoli to river basins and ferns. Prominent examples from fractal-related scholarly work include the Mandelbrot set, the Sierpinski carpet, the Koch snowflake, Julia sets, strange attractors, and the unified mass central triangle. We can thus infer that well-written works of fiction might be better modeled through a combination of formal specification and fractals. Formalism could thus be useful even for people associated with the novel-publishing industry.
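To make the fractal notion concrete (this sketch is the editor's illustration, not part of Niazi's letter): the Mandelbrot set mentioned above is generated by a one-line recurrence, iterating z → z² + c from z = 0; a point c belongs to the set if the iterates never escape beyond magnitude 2. A minimal escape-time test in Python:

```python
def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from z = 0. Return the step at which |z|
    first exceeds 2 (guaranteed eventual escape), or max_iter if the
    orbit never escapes. Points that never escape lie in the Mandelbrot set."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# c = 0 stays at 0 forever (in the set); c = 1 escapes after a few steps.
print(mandelbrot_escape(0j))      # 100 (never escapes)
print(mandelbrot_escape(1 + 0j))  # 2
```

Zooming in on the boundary of the set produced by this recurrence reveals miniature copies of the whole, the same pattern at different scales that Niazi's analogy to well-structured fiction draws on.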
Muaz A. Niazi, Islamabad, Pakistan
Although Marina Krakovsky's news article "Bringing Holography to Light" (Oct. 2016) was timely (the visual interface will indeed dominate the future), the photo in the article's Figure 1, captioned "Learning medicine in three dimensions with Microsoft's HoloLens," was completely at odds with what Krakovsky said in the article's opening sentence. The Microsoft HoloLens is not even designed to produce a holographic image. On the contrary, it is just a see-through stereoscopic head-mounted display with two diffractive mirrors, prefabricated diffractive reflection lenses manufactured either by diamond turning or by optical holography. There is neither holographic processing nor holographic image reconstruction. In the HoloLens, a stereoscopic image pair is projected before the user's eyes through the diffractive mirrors. There is a marked difference between a stereoscopic 3D image and a holographic image: a holographic image can reproduce true 3D perspectives, whereas a stereoscopic 3D image cannot.
Debesh Choudhury, Kolkata, West Bengal, India
Adi Livnat's and Christos Papadimitriou's review article "Sex as an Algorithm" (Nov. 2016) was fascinating but mistitled. It discussed the benefits of conjugality. George C. Williams, in Sex and Evolution, distinguished the more general concept of conjugality from (eu)sexuality, in which the number of conjugal strains in the species equals the number of individuals participating in conjugation, namely two, in all conjugal species on this planet. This seems an important distinction, and I suggest the cover of Communications was misleading. In my own book Albatross I emphasized this and other distinctions, aiming to avoid nonsensical talk, as in that arising from "the gostak distims the doshes" in The Meaning of Meaning by C.K. Ogden and I.A. Richards.
Livnat's and Papadimitriou's acknowledgment that they did not cover heterozygosity was revealing. I rather suspect heterozygosity is a prerequisite for sexuality proper; certainly many sexual species are haploid in the gametic generation and diploid in the others.
Some of the mathematics on the binarity of conjugation might be interesting. What are the chances that on some other world there may have evolved life with a triple helix, ternary conjugation, and so trisexuality?
John A. Wills, Oakland, CA
Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to firstname.lastname@example.org.
©2017 ACM 0001-0782/17/03
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from email@example.com or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.