Opinion

Toward a Network Architecture that Does Everything

In the same way light propagates through a medium, analogous wave-particle principles could help model communications through the future Internet architecture.

Here’s a new way, liberally adapted from physics, to define the future network paradigm: use the notion of wave-particle duality to view a network carrying swarms of coded content as the dual of one carrying packets. The wave model maps all the way down the metaphor food chain to the analog level but should be seen mainly as an analogy that works like this: First, new sources of content introduce material at multiple places in the network (including through sensor, video, and audio input), representing the start of a new wave of network traffic. The content then spreads by matching agent and user subscriptions/interests to content descriptions at rendezvous points throughout the network. The analogy is also likely to go wrong in interesting ways. I hope we’ll be able to use both its successes and its failures to inspire a new unified network architecture, to be shared in future issues of Communications.
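
To make the rendezvous step concrete, here is a toy sketch in Python. It is purely my own illustration; the Rendezvous class, its subscribe and publish methods, and the tag-matching rule are hypothetical, not part of any deployed system:

```python
class Rendezvous:
    def __init__(self):
        self.interests = {}  # subscriber name -> set of interest tags

    def subscribe(self, subscriber, tags):
        # Register (or extend) a subscriber's interests.
        self.interests.setdefault(subscriber, set()).update(tags)

    def publish(self, description, payload):
        # A new wave starts here: deliver the payload to every subscriber
        # whose interests overlap the published content description.
        matches = [s for s, tags in self.interests.items()
                   if tags & set(description)]
        return {s: payload for s in matches}


r = Rendezvous()
r.subscribe("alice", {"sensor", "temperature"})
r.subscribe("bob", {"video"})
print(r.publish({"sensor", "humidity"}, b"reading-42"))
# -> {'alice': b'reading-42'}
```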

One temptation computer scientists are known to indulge is to spin grand unified theories, possibly due to an innate inferiority complex when looking over the fence at physics (or perhaps because some of us started life as physicists). Whatever the reason, it shows up in networking as a desire to unify all communications under a single all-inclusive, all-welcoming design, system, or architecture. In telecommunications networks, historically successful for much longer than the Internet, the paradigm is circuit switching. Meanwhile, broadcast networks (first radio, later TV) have been around for the past century. Multiplexed, isolated circuits still dominate; for example, more than 2.5 billion cell phones in the world today operate this way, despite the growth and promise of voice over IP on the Internet. Two side effects of this design choice are that calls are billable and the use of resources is quantifiable.

What about the future Internet? Many research programs have been proposed, including the Future Internet Design project in the U.S. and the Future Internet Research and Experimentation project in the European Union, as well as similar efforts in China, Japan, and Korea; it’s more than talk. Here, I propose that we take this opportunity to think more deeply about the fundamentals of communications systems in a variety of disruptive ways to try to escape the intellectual rut we may otherwise get stuck in.


The Internet is built on the packet-switching paradigm, famously devised independently by Paul Baran and Donald Davies in the 1960s, replacing the metaphor of electrical circuits and pipes with the idea of statistical multiplexing. Thanks to statistical multiplexing, resource sharing is more flexible than the fixed partitioning used in previous networks, and thanks to buffers, bursty sources of data can take advantage of idle lines in the network, leading to potential efficiency gains. A debate raged in the late 1970s and early 1980s between those favoring the "virtual circuit" and those favoring the "datagram" model of how to build a packet-switching-based communications system. It turns out that the idea of a "flow" in the Internet is not very different from a virtual circuit; indeed, with so many stateful middle boxes (such as network address translators, firewalls, proxies, and multiprotocol label switching, or MPLS, switches/routers), one can now say that the whole debate is moot.
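
The multiplexing argument fits in a few lines of Python. This is my own toy model, not anything from the original debate, and the parameters (ten sources, a 0.2 burst probability) are arbitrary; the point is simply that bursty sources rarely peak together:

```python
# Toy illustration of the statistical multiplexing gain: ten on/off sources
# rarely burst at the same time, so a shared link can be provisioned for the
# observed aggregate peak rather than for the sum of the individual peaks
# that fixed per-source circuits would require.

import random

random.seed(1)
N, T, P_ON = 10, 10_000, 0.2   # sources, time steps, probability a source bursts

aggregate_peak = 0
for _ in range(T):
    active = sum(1 for _ in range(N) if random.random() < P_ON)
    aggregate_peak = max(aggregate_peak, active)

print("capacity needed with fixed per-source circuits:", N)
print("capacity needed on a shared, multiplexed link: ", aggregate_peak)
```

Buffers absorb the rare moments when more sources burst than the shared link can serve, which is where the efficiency gain of packet switching comes from.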


However, the future needs of networks will be different from the simple circuit or packet system that has dominated for the past century, as society shifts from its historical patterns of work and entertainment. The vast majority of network content is pull/interest-based; economics [1] argues for making the "long tail" of content available at low transaction/network costs to large numbers of small groups of only the most interested recipients.

The decrease in the need for synchronization in remote activities for video, audio, and static content argues that networks, including the Internet, be optimized for nonconcurrent use. On the other hand, people want low latency, which argues for nearby or local copies of all content. Thus, we might talk about "asynchronization of multicast" and commercialization of peer-to-peer communication and content sharing. Rich-value content creators would love to eliminate any intermediaries while also pushing storage and its costs to edge users.

Technology push also plays a role in Internet-based communications. Software has changed since the days of layered system design; today, we sustain reliable software built from well-contained components assembled with wrappers designed to enforce behavior. How is this relevant to a future Internet architecture? For one thing, that architecture could be more diversified, with less commonality in the nodes (hosts and routers) than we have had for the past 20 years of PCs, servers, and routers all coming from the same technology stable.

This also fits my wave-particle model of how technology is changing within networks and protocols. Recent papers, including [2], have proposed replacing the layered protocol stack with a graph or even a heap (pile) of soft protocol components. However, we can also look at the network topology itself and see that the role of nodes is changing within it. Perhaps all nodes are the equivalent of middle boxes, revisiting the old Internet idea that any component with more than one network interface can be a router. We see it in our end-user devices—in my case the Macintosh iBook I typed this essay on and the Windows smart phone in my pocket, each with at least three built-in radios and a wired network interface. When we interconnect these devices, the network communication topology is far more dynamic than any public network has been in the past.
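
As a rough illustration of what a "heap" rather than a stack of protocol components might look like, here is a sketch in Python; the role names and the dictionary layout are my own invention, loosely inspired by the role-based architecture of [2]:

```python
# A packet carries an unordered bag of role-specific headers, and each node
# applies only the roles it implements, rather than peeling a fixed stack in
# a fixed order. All role names here are invented for illustration.

packet = {
    "payload": b"hello",
    "roles": {                       # a pile of role headers, not a stack
        "forward":  {"dest": "node-b"},
        "cache":    {"ttl": 60},
        "compress": {"codec": "gzip"},
    },
}

def process(node_roles, pkt):
    # Apply whichever roles this node understands; ignore the rest.
    for role, handler in node_roles.items():
        header = pkt["roles"].get(role)
        if header is not None:
            handler(header, pkt)

def do_forward(header, pkt):
    print("forwarding toward", header["dest"])

def do_cache(header, pkt):
    print("caching payload for", header["ttl"], "seconds")

# A middle box that forwards and caches but has no compress role.
process({"forward": do_forward, "cache": do_cache}, packet)
```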

Many of the increasingly heterogeneous "links" connecting devices are also not well characterized as "pipes"; indeed, the capacity of a volume of space containing a number of mobile devices is simply not known; some physical bounds are known, but the equivalent of a Shannon limit in information-theory terms is not. This lack of information argues that network architects need a temporal graph model of the network. It also suggests that today’s architectures cannot accommodate a resource model expressed as a temporal graph. Simply labeling the edges of the graph with static weights representing capacity or delay does not capture the right information.
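
A temporal graph is easy to sketch, even if using one well is not. Here is a minimal Python illustration of my own in which an edge's capacity is a function of time rather than a fixed label:

```python
# Edge properties are functions of time rather than static weights.
# The class and method names are illustrative only.

from typing import Callable, Dict, Tuple

class TemporalGraph:
    def __init__(self):
        # (u, v) -> function mapping time t to available capacity (0 = no link)
        self.edges: Dict[Tuple[str, str], Callable[[float], float]] = {}

    def add_edge(self, u: str, v: str, capacity_at: Callable[[float], float]):
        self.edges[(u, v)] = capacity_at

    def capacity(self, u: str, v: str, t: float) -> float:
        f = self.edges.get((u, v))
        return f(t) if f else 0.0

g = TemporalGraph()
# A wireless contact that exists only while two devices are near each other.
g.add_edge("phone", "laptop", lambda t: 54.0 if 10 <= t < 20 else 0.0)
print(g.capacity("phone", "laptop", 5.0))    # 0.0  -- no contact yet
print(g.capacity("phone", "laptop", 15.0))   # 54.0 -- contact window open
```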

One more piece of technology is critical to a grand unified network architecture that would maintain the wave-particle duality analogy: network coding. Practical network coding in wireless and wired networks promises to remove much of the complexity in resource management. Network coding merges packets transmitted along common subpaths (in the simplest form, by XOR-ing them together), and it can be combined with redundancy for reliability.
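
The simplest XOR case fits in a few lines; this is the textbook two-way relay example, sketched in Python purely for illustration:

```python
# A relay broadcasts A XOR B once instead of forwarding A and B separately,
# and each endpoint recovers the other's packet using the copy it already holds.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_from_alice = b"HELLO BOB "
pkt_from_bob   = b"HI ALICE!!"              # padded to the same length

coded = xor_bytes(pkt_from_alice, pkt_from_bob)   # one broadcast by the relay

# Alice still holds her own packet, so she can decode Bob's, and vice versa.
assert xor_bytes(coded, pkt_from_alice) == pkt_from_bob
assert xor_bytes(coded, pkt_from_bob) == pkt_from_alice
print("two packets delivered with a single coded transmission")
```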

So how might the wave-particle duality idea be applied to a network? The Internet is already dominated by swarms of network-coded content, no longer flowing from place to place but emanating like ripples on a pond from multiple points, then becoming available for local consumption. Neal Stephenson predicted this with remarkable prescience in his novel The Diamond Age [3]. Publication of new content is the start of a new wave. The content spreads through the automated matching of subscriptions/interests to content descriptions. Content is coded as it moves through the nodes in the network. A snapshot of the "packets" (on an edge or stored in a node) at any given point in the graph would show that each contains a coded multiplex of multiple sources of data. Packets, flow-level descriptions, conventional capacity assignments, and end-to-end and hop-by-hop protocols would therefore all be a poor fit throughout such an architecture. The data is in some sense a shifting interference pattern that emerges from the mixing and merging of all sources.
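
To illustrate the snapshot claim, here is a small Python example of my own showing a buffer that holds only mixtures of three sources, from which the originals can nonetheless be recovered:

```python
# What a node buffers is a GF(2) mixture of several sources' blocks, revealing
# nothing on its own but decodable once enough independent combinations arrive.
# Purely illustrative; practical schemes use larger fields and generations.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

A, B, C = b"wave-A...", b"wave-B...", b"wave-C..."   # equal-length source blocks

# Three coded "packets" seen somewhere in the network, each mixing sources.
p1 = xor(A, B)           # coefficient vector (1, 1, 0)
p2 = xor(B, C)           # coefficient vector (0, 1, 1)
p3 = xor(xor(A, B), C)   # coefficient vector (1, 1, 1)

# No single packet yields any source, but the three are linearly independent,
# so a receiver can solve for all of them by back-substitution over GF(2).
C_rec = xor(p3, p1)      # (1,1,1) + (1,1,0) = (0,0,1)
B_rec = xor(p2, C_rec)   # (0,1,1) + (0,0,1) = (0,1,0)
A_rec = xor(p1, B_rec)   # (1,1,0) + (0,1,0) = (1,0,0)
assert (A_rec, B_rec, C_rec) == (A, B, C)
print("recovered all three sources from coded mixtures alone")
```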

Have we also unintentionally thrown out the legacy system with the new paradigm? What about person-to-person voice calls and their 21st-century equivalent, real-time gaming? If we could push the idea of swarms or waves down into the network architecture, how would the architecture implement circuit-on-a-wave and IP-on-a-wave?

Network architects could do this the same way (inefficiently) they implement VoIP: as a circuit on top of IP. One is at liberty to run multiple legacy networks, supporting one-to-one flows through separate communications systems, especially since those networks are available already. On the other hand, how would such flows be supported on the wave itself? Perhaps through some minimalist publication-and-subscription system.
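
One way such a minimalist publication-and-subscription system might carry a legacy call is sketched below in Python; the channel-naming scheme and the two-channel construction are my own assumptions, not an existing protocol:

```python
# "Circuit-on-a-wave" as a pair of narrowly scoped publish/subscribe channels,
# one per direction of the call. Names and functions are illustrative only.

from collections import defaultdict

channels = defaultdict(list)        # channel id -> list of subscriber queues

def subscribe(channel: str) -> list:
    queue: list = []
    channels[channel].append(queue)
    return queue

def publish(channel: str, data: bytes) -> None:
    for queue in channels[channel]:
        queue.append(data)

# Alice and Bob each subscribe to the channel the other publishes on.
alice_rx = subscribe("call-42/bob-to-alice")
bob_rx   = subscribe("call-42/alice-to-bob")

publish("call-42/alice-to-bob", b"hello?")
publish("call-42/bob-to-alice", b"hi, loud and clear")
print(bob_rx, alice_rx)             # [b'hello?'] [b'hi, loud and clear']
```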

Other ways to understand this design concept are circulating in the research community. One is the data-oriented paradigm, in which information is indexed by keys and retrieved by subscription. Protocols are declarative. All nodes are caches of content, indexes, and buffers, and all nodes forward information while caching, in the style of mobile ad hoc, delay-tolerant, and peer-to-peer systems; the data-oriented paradigm unifies these communication methods.
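
A toy version of key-indexed, cache-and-forward retrieval, written in Python as my own illustration of the paradigm rather than any particular system's design:

```python
# Content is indexed by a key (here a hash of the bytes), and every node is
# both a cache and a forwarder. Real designs add routing, replacement policy,
# and security; this sketch omits all of that.

import hashlib

class Node:
    def __init__(self):
        self.store = {}                 # key -> content (the node as a cache)
        self.neighbors = []             # other Node objects to forward to

    def put(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self.store[key] = content
        return key

    def get(self, key: str, ttl: int = 3):
        if key in self.store:
            return self.store[key]
        if ttl == 0:
            return None
        for n in self.neighbors:        # forward the request, cache the answer
            content = n.get(key, ttl - 1)
            if content is not None:
                self.store[key] = content
                return content
        return None

origin, relay, edge = Node(), Node(), Node()
relay.neighbors, edge.neighbors = [origin], [relay]
key = origin.put(b"a long-tail video chunk")
print(edge.get(key) is not None)        # True; the relay now holds a copy too
```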

No network architect interested in developing a grand unified network architecture would be concerned with micromanaging fine-grained resources. For such an architect, efficiency is measured at the global level. Traditional activities might run maddeningly inefficiently, but the bulk of content (video, audio, and sensor data) would be handled efficiently through multi-path, coded delivery, with good isolation and protection properties arising through the statistics of scaling rather than by virtue of local resource reservation.

So, unlike traditional network architectures, the wave-particle duality model I’ve described here pursues a different primary goal: the notion of a wave is optimized for resilience through massive scale, not for local efficiency. Moreover, it supports group communication and mobility naturally, since the rendezvous in the network between publication and consumption is dynamic rather than achieved through the coordination of end-points in the classical end-to-end approach.

The details of the wave model are likely to keep researchers busy for the next 20 years. My aim here is to get them to think outside the end-to-end communications box in order to solve the related problems, if they are indeed the right problems, or to propose a better problem statement to begin with.

One might ask many questions about a future wave-particle network architecture, including: What is the role of intermediate and end-user nodes? How do they differ? Where would be the best locus for a rendezvous between publication and consumption? Would each rendezvous depend on the popularity of content and its distance from the publisher, subscriber, or mid-point? What codes should be used? How can we build optical hardware to achieve software re-coding? What role might interference in radio networks play in the wave-particle network model? How can we achieve privacy for individual users and their communications in a network that mixes data packets?

A future Internet based on this wave-particle duality would be more resilient to failure, noise, and attack than the current architecture, in which the ends and intermediate nodes on a path are sitting ducks for attacks, whether deliberate or accidental. How might its architects quantify the performance of such a system? Do they need a new set of modeling tools, replacing graph theory and queuing systems, to describe it? Finally, if network control is indeed a distributed system, can the idea of peer-to-peer be used as a control plane?

I encourage you not to take my wave-particle duality analogy too seriously, especially since I am suspicious of any grand unified network model myself. But I do encourage you to use the idea to disrupt your own thinking about traditional ideas. In the end, perhaps, we will together discover that many traditional ideas in networking are fine as is, but all are still worth checking from this new perspective.

References

    1. Anderson, C. The Long Tail: How Endless Choice Is Creating Unlimited Demand. Random House, New York, 2006.

    2. Braden, R., Faber, T., and Handley, M. From protocol stack to protocol heap: Role-based architecture. ACM SIGCOMM Computer Communication Review 33, 1 (Jan. 2003), 17–22.

    3. Stephenson, N. The Diamond Age: Or, a Young Lady's Illustrated Primer. Bantam Spectra Books, New York, 1995.
