
Computing in Pervasive Cyberspace

Freed from the temporal constraints of hardware, software could be the ultimate cyberorganism—a mind taking a body as needed to fulfill a particular function or mission.

The idea of software as a program representing a sequence of instructions on a von Neumann architecture is no longer tenable. In an article in Communications almost two decades ago [1], I advocated building software systems by composing concurrent objects, or actors. Actors reflect several key characteristics. For example, they are distributed, autonomous objects that interact by sending each other messages. They have unique names that are not tied to their current location, thus facilitating mobility. And new actors may be dynamically created, thus allowing new services to be added to a system. With the growth of P2P computing, Web services, networks of embedded computers, and multicore architectures, programming using the actor model is inevitable; witness the increasing use of programming languages (such as Erlang, E, SALSA, Scala, and Ptolemy) and the various implementations of software agents.
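
The three characteristics above can be illustrated with a minimal sketch in Python. This is a hypothetical toy runtime, not any of the systems named in the text: each actor has a private mailbox, a unique name that is not tied to any location, and any code may dynamically spawn new actors.

```python
import queue
import threading
import uuid

class ActorSystem:
    """Toy registry mapping location-independent actor names to mailboxes."""
    def __init__(self):
        self._mailboxes = {}

    def spawn(self, behavior):
        """Dynamically create an actor running `behavior`; return its unique name."""
        name = str(uuid.uuid4())            # name identifies the actor, not a location
        mailbox = queue.Queue()
        self._mailboxes[name] = mailbox

        def loop():
            while True:
                msg = mailbox.get()         # process one message at a time
                if msg is StopIteration:    # sentinel shuts the actor down
                    return
                behavior(self, name, msg)

        threading.Thread(target=loop, daemon=True).start()
        return name

    def send(self, name, msg):
        """Asynchronous send: enqueue the message and return immediately."""
        self._mailboxes[name].put(msg)

    def open_mailbox(self, name):
        """Register a mailbox so non-actor code can receive replies."""
        mailbox = queue.Queue()
        self._mailboxes[name] = mailbox
        return mailbox

def make_counter():
    """Behavior closure holding the actor's private, encapsulated state."""
    count = 0
    def counter(system, self_name, msg):
        nonlocal count
        op, reply_to = msg
        if op == "inc":
            count += 1
        system.send(reply_to, count)        # reply by name, never by shared memory
    return counter

if __name__ == "__main__":
    system = ActorSystem()
    counter = system.spawn(make_counter())
    inbox = system.open_mailbox("main")
    system.send(counter, ("inc", "main"))
    system.send(counter, ("inc", "main"))
    print(inbox.get(), inbox.get())
```

Because each actor processes its mailbox sequentially while sends never block, concurrency arises from the composition of many such actors rather than from shared state, which is the essence of the model described above.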

The notion of a sequential algorithm has a venerable tradition. The ninth-century Persian rationalist philosopher and mathematician Al-Khwarizmi is credited with introducing the concept of a sequence of instructions to compute a function (later termed an algorithm in his honor). But it wasn’t until the mid-20th century that the discipline of computing took root, inspired by Alan Turing’s representation of programs as data and by the Church-Turing thesis that anything effectively computable can be carried out on a Turing machine. CACM has covered the development of computer science almost since its inception.

Computing has been morphing ever since. Initially developed as flexible calculators for scientific problems, computers have successively become arbiters and recorders of business transactions, reasoning engines carrying out symbolic computations, laboratories for running simulations, and vehicles for social networking and entertainment. At the same time, their speed has increased more than 10-million-fold, and they have been interconnected through ever-higher bandwidths. Amazingly, our concept of a program as an implementation of a sequential algorithm has remained the same. While the shift to actor-oriented computing is overdue, we need to start thinking of software systems beyond the composition of actors.

Actors residing on a single node, even if mobile, cannot address the inevitable challenges that will characterize the future of computing. Computers are increasingly ubiquitous and embedded in all sorts of devices, from refrigerators and thermostats to automobiles and wheelchairs. The next logical step is for these embedded computers to be networked, not just in relatively localized sensor networks but through their connectivity to more powerful computers (base stations) to be globally networked. Such networking will result in a cyberspace that parallels physical space and is as pervasive as physical objects in the real world.

Pervasive cyberspace will consist of computers with heterogeneous architectures, a variety of sensing and actuation capabilities, and differing security and privacy profiles; many of these computers will turn mechanical things (such as chairs, desks, windows, and walls) into smart devices. The availability of services and resources on these computers will be dynamic.

Explicit programming of such an evolving system is not feasible; elements in the pervasive cyberspace will continue to be added and removed, and architectures and policies will keep changing. Moreover, user requirements will continue to evolve. In such an environment, programmers cannot possibly know where all the resources are or will be or how and when they may be accessed. Thus software in a complex environment cannot consist of preprogrammed sequences of actions.

It is not that programming as such will be obsolete; certain local behaviors of these computers will continue to be programmed, albeit more-or-less automatically from high-level user-friendly interfaces and domain-specific libraries, written in inherently parallel actor languages that enable the use of multicore processors. However, to facilitate more complex and interesting services, software must be able to endure and adapt. Thus, the concept of an actor as an ephemeral process tied to a single computer at a time is not sufficient to execute such services.

Endurance and adaptation are common traits in natural systems; an organism with a relatively "long" life cycle depends on its ability to adapt through feedback mechanisms provided by the environment. Natural selection uses such feedback to change the programming of organisms.

Organisms with mobility and more complex functions (like animals in nature) use environmental feedback to learn complex new tasks. Software must also learn to purposefully navigate its environment. As such, it must be more like an animal: mobile and able to forage for resources, learn from the environment, evolve, and provide services. One can imagine cyberorganisms [2] roaming the pervasive cyberspace, gaining "wealth" by providing services, negotiating with computers to sense the environment, and computing, actuating, and learning (self-modifying) from the results of their actions.

This is not to argue that software will be entirely analogous to biological hardware, even if we were to create biological computers, as we eventually will. An innate difference between organisms and software will persist: software cannot directly sense, compute, or affect its environment; rather, it depends on the availability of computer nodes and networks. It is like a Platonic ideal of mind with a life independent of a physical body, albeit one that may move from one body to another, adapting itself or modifying how the body it controls operates.

However, this metaphor also suggests the possibility of a different sort of resilience: a cyberorganism need not be bounded in space and time like its biological counterparts. Rather, it may freely split and join other organisms, its parts may be distributed, and the parts may be destroyed and possibly regenerate.

In pervasive cyberspace, no central authority exercises complete control over the actions of cyberorganisms. This does not mean that control mechanisms will be entirely absent. We can foresee a pervasive cyberspace with many autonomous monitoring agents and mechanisms that limit access. Some mechanisms will affect systems as a whole, others the interaction among nominally independent systems. Some of these monitoring-and-control mechanisms will affect macro-level properties of systems, much as a central bank influences the emergent properties of a national economy by controlling key interest rates.

We are at the threshold of an entirely new science of computing, one that will be inspired by the biological metaphor, not by the notion of algorithm. Reporting this scientific evolution, the next 50 years of CACM should make even more exciting reading than the previous 50 years.


References

1. Agha, G. Concurrent object-oriented programming. Commun. ACM 33, 9 (Sept. 1990), 125–141.

2. Jamali, N., Thati, P., and Agha, G. An actor-based architecture for customizing and controlling agent ensembles. IEEE Intelligent Systems 14, 2 (Apr. 1999), 38–44.
