
How We Lost the Internet

Have technical limitations of the Internet architecture contributed to the rise of doom scrolling on social media?


Expectations of and (dis)satisfaction with the Internet have changed a lot over the 40-odd years since TCP/IP was first applied to the implementation of shared public networks, as has the very meaning of the term “the Internet.” Originally it named a “network of networks” that created a global, interoperable overlay on top of local area infrastructure. The Internet was uniquely capable of passing datagrams from any participating endpoint to any other in the wide area. When this model achieved explosive growth and acceptance in the nascent computer networking community, it became the foundation of a utopian ideal of democratically collaborating end users of distributed applications, termed the Open Data Network (ODN).

As the technology matured, however, what has emerged is a complex Information and Communication Technology (ICT) environment in which TCP/IP is the primary endpoint-facing component. For that reason, it is still called “the Internet.” This ICT environment includes elements implemented in machine rooms connected by private networks based on many technologies, some of them quite exotic. It includes cloud data/computation centers and content delivery network points of presence. It is, in some cases, connected by trucks that carry 100 PB of data stored on SSDs. It may process data using massive clusters of GPU-enabled processors. To differentiate this new, richer, and more general environment from the original Internet architecture, it might more appropriately be called “Internet++.”

Another way in which the current ICT environment differs from the vision of the ODN is that many of the largest and most powerful application providers rely on business practices that are widely considered overly aggressive. They employ strategies that encourage engagement and tools that monitor (or “surveil”) end users, and they monetize all the data they gather, either by using it themselves or by selling it to others. The ills attributed to these business practices vary widely. While some concerns may be overblown, in other cases the dangers are very real. There is a general sense that individuals are being stripped of control over their own lives and identities by shadowy, unregulated corporate actors.

This change from an ODN that would serve the common good to an Internet++ whose largest service providers seem predatory is often attributed to the greed or malicious intent of those who develop and operate those services. Our recent paper argues that another important factor driving this change was a mismatch between the capabilities of TCP/IP as a universal communication technology and the requirements of the most economically important category of services reaching a mass audience.

The issue is that the only universally deployed service of the classical Internet architecture is loosely synchronous unicast datagram delivery, meaning that both sender and receiver must participate actively throughout an interval of time. In contrast, all the early mass media applications, starting with FTP, the Web, and streaming of stored media, were purely asynchronous and point-to-multipoint in nature, with a single source file being delivered to many receivers at a time of their choosing. Complex modern media and service distribution applications still have a significant asynchronous point-to-multipoint component, although it may be combined with synchronous point-to-point elements such as remote telepresence.
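
To make the distinction concrete, here is a minimal sketch, in Python and using only the standard socket library, of the Internet’s native service model: a unicast datagram arrives only if the receiver is actively listening when the sender transmits. The host, port, and payload are illustrative placeholders.

    # Loosely synchronous unicast datagram delivery (UDP): the receiver must
    # be running at the moment the sender transmits, or the datagram is lost.
    # Nothing in the network stores it for a receiver that shows up later.
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 9999   # illustrative placeholders

    def receiver():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.bind((HOST, PORT))
            data, addr = sock.recvfrom(4096)
            print(f"received {data!r} from {addr}")

    def sender(payload: bytes):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, (HOST, PORT))

    if __name__ == "__main__":
        t = threading.Thread(target=receiver)
        t.start()
        time.sleep(0.2)              # sender and receiver must overlap in time
        sender(b"hello")
        t.join()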

Figure 1: The Internet protocol stack models only communication. Modern applications require distributed storage and processing. This results in the necessary growth of resources which are not constrained by the “thin waist” design of the Internet’s spanning layer. Adapted from Exposed Buffer Architecture.

The reasoning in the paper draws on the idea, expressed by Messerschmitt and Szyperski, that the common services layer of the Internet, known as the Internet Protocol Suite, represents a thin waist (or “spanning layer”) in the communication protocol stack (see Figure 1). This communication “stovepipe” is only one of the three silos required to implement distributed ICT applications; the other two are storage and computation. Thus, applications cannot rely solely on the “stovepiped communication spanning layer” provided by the Internet. Instead, they must augment it with other resources. Ultimately, the solutions that have prevailed (content delivery networks and the cloud) work by building private infrastructure to augment the Internet’s thin waist.
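
The need for storage in particular can be illustrated with a toy store-and-forward relay, sketched below in Python under assumed, simplified semantics (an in-memory object store standing in for the machine rooms of a CDN or cloud). Asynchronous point-to-multipoint delivery works only because some element of the infrastructure holds the data between the time the source publishes it and the times the receivers ask for it; IP datagram delivery alone provides no such element.

    # Toy store-and-forward relay: the publisher and its many receivers need
    # not be online at the same time, but only because the relay contributes
    # storage that the IP layer itself does not offer.
    class Relay:
        def __init__(self):
            self._store = {}                      # object name -> payload

        def publish(self, name: str, payload: bytes):
            self._store[name] = payload           # source uploads once

        def fetch(self, name: str) -> bytes:
            return self._store[name]              # receivers download later

    relay = Relay()
    relay.publish("video.mp4", b"...bytes of a stored media object...")

    # Any number of receivers, each at a time of its choosing:
    for viewer in ("alice", "bob", "carol"):
        print(viewer, "got", len(relay.fetch("video.mp4")), "bytes")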

Services that rely on such costly and complex infrastructure must pay the fees required by their operators. Many applications do so by charging hefty end user fees for services that might otherwise have been provided at little or no cost as part of a broader business strategy. An infrastructure with greater deployment scalability might not have imposed such high communication costs.

The more problematic outcome arose when the early idea that Internet services such as Web search could be viable without charging any end user fees led to massive investment in companies providing those services. The notion was that capturing market share would eventually, somehow, translate into profits. As it turned out, the most effective way to achieve profitability was through surveillance and monetization of end user behavior. Exploitation of end user surveillance data, combined with targeted marketing, proved not only viable but hugely profitable. The rise of social media added a new twist: rather than relying on organic search queries, end users could be encouraged to scroll compulsively by aggressive engagement-maximizing algorithms. End users naively opened the door and invited such vampiric services into the unregulated environment of their online lives.

This analysis suggests two questions: Could it have been otherwise? Are there technical responses that could help alleviate the current situation? The answer to the first question is unknowable, but our paper discusses unsuccessful efforts made over multiple decades to extend the Internet’s thin waist with additional resources and services. These sought to achieve deployment scalability by using distributed storage and processing in restrained ways. The second question is more salient, because of the widespread belief that the thin waist (or spanning layer) of the Internet can no longer be extended, modified, or replaced. This leaves only two possibilities: either 1) implement additional services as overlays added to the Internet stack above its spanning layer; or 2) define a spanning layer that is broader than the Internet communication stovepipe, one that includes the local resources (storage, processing, and local area communication) which are offered by the layer below the Internet and used to implement it.

Our paper argues that overlay solutions, including the current efforts to define an Extensible Internet, are unlikely to exhibit the degree of deployment scalability required to achieve universal service. It also describes another approach, known as Exposed Buffer Architecture, which would define a standard for interoperable “underlay” services, creating a spanning layer capable of supporting a variety of ICT utilities and services using a highly generic model of storage, processing, and local communication.
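
To suggest the flavor of such an underlay, the sketch below models a single “depot” node exposing raw buffers to higher layers. The operation names (allocate, store, load, operate) and their semantics are illustrative assumptions for this post, not the published Exposed Buffer Architecture interface; the point is only that caches, CDNs, and queues could be composed from shared, generic primitives rather than from private infrastructure.

    # Hypothetical exposed-buffer underlay node: higher layers see only
    # bounded buffers, data movement, and simple local operations on them.
    import uuid

    class Depot:
        def __init__(self):
            self._buffers = {}                    # handle -> bytearray

        def allocate(self, size: int) -> str:
            handle = str(uuid.uuid4())
            self._buffers[handle] = bytearray(size)
            return handle

        def store(self, handle: str, offset: int, data: bytes):
            self._buffers[handle][offset:offset + len(data)] = data

        def load(self, handle: str, offset: int, length: int) -> bytes:
            return bytes(self._buffers[handle][offset:offset + length])

        def operate(self, handle: str, fn):
            # Apply a local computation (e.g., a checksum) to buffer contents.
            return fn(bytes(self._buffers[handle]))

    depot = Depot()
    h = depot.allocate(16)
    depot.store(h, 0, b"hello underlay")
    print(depot.load(h, 0, 14), depot.operate(h, lambda b: sum(b) % 256))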

Our analysis lays some of the responsibility for the emergence of disturbing Internet++ business practices at the feet of the very limitations of the Internet architecture that have been key to its widespread deployment and universal adoption. This suggestion has been viewed as heretical. Exposed Buffer Architecture suggests that greater generality in a lower ICT spanning layer could be achieved while preserving deployment scalability. Seeking to create a spanning layer below the level of the Internet Protocol Suite, one that includes storage and processing, has likewise been viewed as heretical.

Having thus barred the door from the inside, our ICT community voluntarily offers the public as nourishment to the parasitic service providers of Internet++. Resistance, apparently, is futile.

Micah D. Beck, University of Tennessee

Micah D. Beck (mbeck@utk.edu) is an associate professor at the Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA.

Terry R. Moore, University of Tennessee

Terry R. Moore (tmoore@icl.utk.edu) is an associate director (retired) at Innovative Computing Laboratory, University of Tennessee, Knoxville, TN, USA.
