BLOG@CACM
Artificial Intelligence and Machine Learning

What Lessons Can We Learn from the Internet for AI/ML Evolution?

The principles that made the Internet so successful can guide us in building the next wave of AI systems. 

Having lived through the Internet's evolution over the past three decades, and now watching growing fragmentation in AI/ML, we wrote this post on what lessons the Internet's evolution holds for the evolution of AI/ML. We want to explore how the principles that made the Internet so successful can guide us in building the next wave of AI systems.

1. Simplicity and Ubiquity over Complexity

In the 1990s, when the Internet was taking shape, there were multiple competing models, including ISO's OSI (seven-layer) stack. However, TCP/IP succeeded because it was lightweight, pragmatic, and universally interoperable and deployable.

In AI, we are again faced with competing approaches and fragmented stacks. The lesson here is clear: we need common, interoperable frameworks and APIs that everyone can adopt and extend, a "lowest common denominator" that lets agents, models, and tools talk across ecosystems. A minimal sketch of such an interface follows.
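As a thought experiment, here is a minimal sketch in Python of what such a lowest-common-denominator interface might look like. The `Completable` protocol and the class names are hypothetical, invented here for illustration; no such standard exists today.

```python
from typing import Protocol


class Completable(Protocol):
    """A hypothetical lowest-common-denominator interface: anything
    (model, agent, or tool) that can turn a prompt into a response."""

    def complete(self, prompt: str) -> str:
        ...


class EchoTool:
    """A trivial 'tool' that satisfies the shared interface."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def run_anywhere(component: Completable, prompt: str) -> str:
    # Callers depend only on the shared interface, never on a
    # vendor-specific SDK, so components interoperate across ecosystems.
    return component.complete(prompt)


print(run_anywhere(EchoTool(), "hello"))
```

Anything that satisfies the shared interface can be swapped in, which is exactly the property that made TCP/IP's narrow waist so powerful.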

2. Layered Abstractions

The Internet has a layered architecture in which each layer has a clear role, from the physical layer (Layer 1) and the MAC/LLC layer (Layer 2) to IP routing (Layer 3) and on up to the applications. Each layer innovated independently but interoperated with the others through stable interfaces. We need similar layered abstractions for AI/ML, for example: a data/model layer (for training and inference), an agent/reasoning layer (for decision making and coordination), and an application/intent layer (where humans and AI agents interact). If we can create these abstraction layers and separate the concerns properly, we can avoid building monolithic AI stacks and instead foster innovation at each layer, just as the Internet did. A sketch of such a separation appears below.
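To make the separation of concerns concrete, here is an illustrative Python sketch of the three layers named above, with stable interfaces between them. The layer classes and their methods are hypothetical, not a proposed standard.

```python
class ModelLayer:
    """Data/model layer: training and inference, nothing else."""

    def infer(self, features: list[float]) -> float:
        # Stand-in for a real model: score is the mean of the features.
        return sum(features) / len(features)


class AgentLayer:
    """Agent/reasoning layer: decisions and coordination, built only
    on the model layer's stable interface."""

    def __init__(self, model: ModelLayer) -> None:
        self.model = model

    def decide(self, features: list[float]) -> str:
        return "act" if self.model.infer(features) > 0.5 else "wait"


class IntentLayer:
    """Application/intent layer: where humans express goals."""

    def __init__(self, agent: AgentLayer) -> None:
        self.agent = agent

    def handle(self, request: str, features: list[float]) -> str:
        return f"{request!r} -> {self.agent.decide(features)}"


# Any layer can be swapped out without touching the others,
# as long as the interfaces between them stay stable.
app = IntentLayer(AgentLayer(ModelLayer()))
print(app.handle("should we ship?", [0.9, 0.8, 0.7]))
```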

3. End-to-End Principle

One of the defining principles of the Internet was to keep the core simple and push intelligence to the edge. The network simply delivered packets on a best-effort basis, without dictating or controlling applications; reliability and application logic lived in the host computers at the edge. That principle enabled the explosion of the Web, streaming, and countless other services. In AI, a similar principle should be considered. Instead of centralizing everything in "one foundational model," we should empower distributed agents and edge intelligence. Core infrastructure (such as model hosting or the interconnect fabric) should stay simple and robust, enabling diverse use cases on top, as the sketch below suggests.
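One way to picture this in code is a core that only relays opaque payloads while all interpretation lives in edge agents. The `Core` and `EdgeAgent` classes below are purely illustrative stand-ins, a sketch under the assumption that the core never inspects what it carries.

```python
class Core:
    """A deliberately simple core: it queues and delivers opaque
    payloads, never inspecting or transforming them."""

    def __init__(self) -> None:
        self.queues: dict[str, list[bytes]] = {}

    def send(self, dest: str, payload: bytes) -> None:
        self.queues.setdefault(dest, []).append(payload)

    def receive(self, dest: str) -> bytes | None:
        q = self.queues.get(dest, [])
        return q.pop(0) if q else None


class EdgeAgent:
    """All intelligence (encoding, validation, reasoning) lives here,
    at the edge, not in the core."""

    def __init__(self, name: str, core: Core) -> None:
        self.name, self.core = name, core

    def ask(self, peer: str, question: str) -> None:
        self.core.send(peer, question.encode())

    def answer(self) -> str | None:
        payload = self.core.receive(self.name)
        return payload.decode().upper() if payload else None


core = Core()
a, b = EdgeAgent("a", core), EdgeAgent("b", core)
a.ask("b", "status?")
print(b.answer())  # -> "STATUS?"
```

Because the core never interprets payloads, new edge behaviors can be deployed without touching the core at all, the same property that let the Web and streaming flourish on an unchanged IP layer.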

4. Standards and Interoperability

In the Internet world, the IETF's open, collaborative approach to standards through the RFC process led to interoperability across vendors, governments, and researchers. This openness drove innovation and global adoption. As a result, we can access information using any device on the planet, from any location, at any time.

Currently, AI lacks an RFC-style standards body for interoperability (especially for multi-agent protocols, agent-to-agent coordination frameworks, and model-to-model communication standards, including APIs), among other things. We need the equivalent of the IETF for AI to prevent silos and fragmentation, just as we addressed those issues in the Internet.
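To illustrate what such a standard might pin down, here is a hypothetical minimal agent-to-agent message envelope sketched in Python. Every field name here is invented for illustration; defining the real equivalent would be precisely the job of an "IETF for AI."

```python
import json
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class AgentMessage:
    """A hypothetical interoperable agent-to-agent envelope, analogous
    in spirit to an RFC-defined header: stable, minimal, versioned."""

    sender: str
    recipient: str
    intent: str                      # e.g. "request", "inform", "delegate"
    body: dict
    version: str = "0.1"
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_wire(self) -> str:
        # JSON keeps the format vendor-neutral and self-describing.
        return json.dumps(asdict(self))


msg = AgentMessage("planner", "executor", "delegate", {"task": "summarize"})
print(msg.to_wire())
```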

5. Resilience and Fault Tolerance

The Internet was designed to survive failures such as packet loss, congestion, and outages. AI systems, on the other hand, often fail unpredictably, due to hallucinations, brittle reasoning, or collapse under adversarial inputs. We need an AI equivalent of TCP's retransmissions and congestion control, with backup agents, graceful degradation, and self-healing protocols that ensure reliability at scale; one possible shape of such a mechanism is sketched below.
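Here is a minimal Python sketch of that idea: retries with exponential backoff, failover to a backup model, and graceful degradation when everything fails. The model callables are stand-ins, and the policy shown is an assumption for illustration, not an established protocol.

```python
import time
from typing import Callable


def resilient_call(
    models: list[Callable[[str], str]],   # primary first, then backups
    prompt: str,
    retries: int = 2,
    backoff_s: float = 0.5,
) -> str:
    """Try each model with retries and exponential backoff; degrade
    gracefully instead of failing hard -- a rough analogue of TCP's
    retransmission behavior, applied to AI inference."""
    for model in models:
        for attempt in range(retries + 1):
            try:
                return model(prompt)
            except Exception:
                time.sleep(backoff_s * (2 ** attempt))  # back off, retry
    return "Service degraded: no model available; please retry later."


# Stand-in models for illustration only.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("primary overloaded")


def backup(prompt: str) -> str:
    return f"backup answer to: {prompt}"


print(resilient_call([flaky_primary, backup], "summarize this",
                     retries=1, backoff_s=0.1))
```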

6. Scalability and Incremental Deployment

The Internet allowed incremental growth; in other words, new networks could join without re-architecting the whole system. NAT, IPv6 coexistence with IPv4, and header compression were all added without forklift upgrades. AI systems must allow similarly incremental upgrades: new models, accelerators, or compression techniques must plug into existing systems without retraining the entire system. The registry sketch below shows one way this could work.
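As an illustration, here is a simple registry pattern in Python under which a new model version joins a running system without touching the old one. The registry, decorator, and routing policy are all hypothetical design choices, not an existing framework.

```python
from typing import Callable

# A registry lets new models join incrementally, much as new networks
# joined the Internet without a re-architecture of the whole system.
MODEL_REGISTRY: dict[str, Callable[[str], str]] = {}


def register(name: str) -> Callable:
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        MODEL_REGISTRY[name] = fn
        return fn
    return wrap


@register("v1")
def legacy_model(prompt: str) -> str:
    return f"v1: {prompt}"


# Later, a new model is added without retraining or redeploying v1.
@register("v2")
def new_model(prompt: str) -> str:
    return f"v2 (better): {prompt}"


def route(prompt: str, prefer: str = "v2") -> str:
    # Fall back to any available model if the preferred one is absent.
    model = MODEL_REGISTRY.get(prefer) or next(iter(MODEL_REGISTRY.values()))
    return model(prompt)


print(route("hello"))
```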

7. Governance and Neutrality

One of the most important lessons from the Internet is that no single company or government owns or controls the TCP/IP stack. It is this neutral governance that created global trust and adoption. Institutions such as ICANN and the regional Internet registries (RIRs) played a key role by managing domain names and IP address assignments in an open and transparent way, ensuring that resources were allocated fairly across geographies. This kind of neutral stewardship allowed the Internet to remain interoperable and borderless. Today's AI landscape, by contrast, is controlled by a handful of big-tech companies. To scale AI responsibly, we will need similar global governance structures: an "IETF for AI," complemented by neutral registries that manage shared resources such as model identifiers, agent IDs, and coordination protocols, among others. Without such mechanisms in place, we risk fragmentation, lack of trust, and uneven adoption across regions.

Guiding Principles

The Internet worked well because TCP/IP gave us simplicity, abstractions, openness, resilience, and neutrality. Those same principles can guide AI as it moves from today's experimental silos to interoperable, global-scale AI systems.

Mallik Tatipamula is Chief Technology Officer at Ericsson Silicon Valley. His career spans Nortel, Motorola, Cisco, Juniper, F5 Networks, and Ericsson. A Fellow of the Royal Society (FRS) and four other national academies, he is passionate about mentoring future engineers and advancing digital inclusion worldwide.

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president 2012-2014.
