I am a big science fiction fan, and robots have played a major role in some of my favorite speculative universes. The prototypical robot story came in the form of a play by Karel Čapek called "R.U.R.," which stood for "Rossum's Universal Robots." Written in the 1920s, it envisaged android-like robots that were sentient and were created to serve humans. "Robot" came from the Czech word "robota," which means "work," and in particular forced or drudge labor. Needless to say, the story does not come out well for the humans. In a more benign and very complex scenario, Isaac Asimov created a universe in which robots with "positronic" brains serve humans and are barred by the Three Laws of Robotics from harming them:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
A "zeroth" law emerges later:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
In most formulations, robots have the ability to manipulate and affect the real world. Examples include robots that assemble cars (or at least parts of them). Less sophisticated robots might be devices that fill cans with food or bottles with liquid and then seal them. The most primitive examples might not even be considered robots in normal parlance. One example is the temperature control in a home heating system, which relies on a piece of bimetal material that expands differentially, closing or opening a circuit depending on the ambient temperature.
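Even that primitive thermostat embodies the sense-decide-actuate loop that characterizes a robot. A minimal sketch in Python makes the analogy concrete; the function name, setpoint, and hysteresis band here are illustrative assumptions, not a description of any real device:

```python
# Illustrative sketch: the bimetal thermostat reduced to a
# sense-decide-actuate loop. All names and values are hypothetical.

def thermostat_step(temp, heater_on, setpoint=20.0, band=0.5):
    """Decide whether the heater circuit should be closed (on) or open (off).

    The +/- band provides hysteresis, mimicking the bimetal strip:
    the circuit does not chatter on and off at exactly the setpoint.
    """
    if temp < setpoint - band:
        return True          # too cold: close the circuit
    if temp > setpoint + band:
        return False         # too warm: open the circuit
    return heater_on         # inside the band: keep the current state

# A short simulated run: temperature drifts down, the heater kicks in,
# then the room warms past the band and the heater shuts off.
state = False
for t in [21.0, 20.4, 19.4, 19.8, 20.6]:
    state = thermostat_step(t, state)
    print(t, state)
```

The hysteresis band is the interesting design choice: without it, a temperature hovering at the setpoint would toggle the circuit rapidly, which is exactly what the slow mechanical response of a bimetal strip prevents.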
I would like to posit, however, that the notion of robot could usefully be expanded to include programs that perform functions, ingest input, and produce output with a perceptible effect. A weak example along these lines might be simulations, in which the real world remains unaffected. A more compelling example might be high-frequency stock trading systems, whose actions have very real consequences in the financial sector. While nothing physical happens, real-world accounts are affected and, in some cases, serious consequences emerge if the programs go out of control, leading to rapid market excursions. Some market meltdowns have been attributed to large numbers of high-frequency trading programs all reacting in similar ways to the same inputs, driving the stock market rapidly up or down.
Following this line of reasoning, one might conclude that we should treat as robots any programs that can have real-world, if not physical, effect. I am not quite sure where I am heading with this except to suggest that those of us who live in and participate in the creation of software-based "universes" might wisely give thought to the potential impact our software might have on the real world. Establishing a sense of professional responsibility in the computing community might lead to increased safety and reliability of software products and services. This is not to suggest that today's programmers are somehow irresponsible, but I suspect we are not uniformly cognizant of the side effects of our dependence on software products and services, a dependence that seems to increase daily.
A common theme I hear in many conversations is concern for the fragility or brittleness of our networked and software-driven world. We rely deeply on software-based infrastructure, and when it fails to function there can be serious side effects. Like most infrastructure, we tend not to think about it at all until it does not work or is not available. Most of us do not lie awake worried that the power will go out (but we do rely on some people who do worry about these things). When the power does go out, we suddenly become aware of the finiteness of battery power or the huge role that electricity plays in our daily lives. Mobile phones went out during Hurricane Sandy because the cell towers and base stations ran out of power, either because of battery failure or because the back-up generators could not be supplied with fuel or could not run because they were underwater.
I believe it would be a contribution to our society to encourage deeper thinking about what we in the computing world produce, the tools we use to produce it, the resilience and reliability these products exhibit, and the risks they may introduce. For decades now, Peter Neumann has labored in this space, documenting and researching the nature of risk and how it manifests in the software world. We would all do well to follow his lead and to consider whether the three (or four) laws of robotics might motivate our own aspirations as creators in the endless universe of software and communications.
Vinton G. Cerf, ACM PRESIDENT
©2013 ACM 0001-0782/13/01
The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.
This is a very important concern. An existing type of artificial agent that has major impact on our world is the corporation (for-profit, not-for-profit, governments, churches, unions, etc.). When an agent has a very different form, and operates at a very different scale of time and action, it can be difficult even to recognize its existence. I discuss this in a recent paper: http://web.eecs.umich.edu/~kuipers/research/pubs/Kuipers-ci-12.html.
Benjamin Kuipers, "An existing, ecologically-successful genus of collectively intelligent artificial creatures," Collective Intelligence (CI-2012).
Abstract: People sometimes worry about the Singularity [Vinge, 1995; Kurzweil, 2005], or about the world being taken over by artificially intelligent robots. I believe the risks of these are very small. However, few people recognize that we already share our world with artificial creatures that participate as intelligent agents in our society: corporations. Our planet is inhabited by two distinct kinds of intelligent beings --- individual humans and corporate entities --- whose natures and interests are intimately linked. To co-exist well, we need to find ways to define the rights and responsibilities of both individual humans and corporate entities, and to find ways to ensure that corporate entities behave as responsible members of society.
The last decade saw a surge of papers about the use of formal methods for dependability in architecture, and I think this is where we are heading. Assigning security, reliability, and dependability properties to the architectures we create (even in a very lightweight, cost-effective form) seems like the way to go.
In fact, the word "robot" comes from the Czech word "robota" (noun), which means "work" and in particular "serf labour." In a note for the Oxford English Dictionary, Karel Čapek described the origin of the word: it was suggested by his brother Josef.