Opinion: Cerf's Up

Thoughts on AI Interoperability


By Vinton G. Cerf, Google Vice President and Chief Internet Evangelist

A chance meeting with Jake Taylor, the National Institute of Standards and Technology’s (NIST) new lead for the AI Safety Institute Consortium,a led me to wonder about the interoperability of machine learning (ML) and large language model (LLM) systems. I am persuaded that these powerful technologies will be widely used, and we will likely want, or even need, them to interwork. Looking at today’s LLMs, one is struck by their glib ability to generate text (among other modalities).

Some of these systems have been outfitted with special-purpose application programming interfaces (APIs). For example, if the LLM discovers a need to respond to a mathematical computation, it might use a specialized interface to deliver the problem to MATLABb to be processed and return a result. Similarly, if there is a need to control a device in response to a request, such as “tune to channel 7,” an Internet of Things (IoT) interface might be used. Of course, the LLM would need to know about these interfaces and recognize when they might need to be activated. Many control interfaces are today being equipped with oral or text interfaces so that natural language might be used between the user and the control or functional subsystem—assuming there is sufficient precision of expression. The ambiguities of natural language might result in unexpected outcomes, leading me to think about more precise kinds of interfaces.
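The dispatch step described above can be sketched in a few lines. This is a hedged illustration, not any vendor's actual API: the intent names, handler functions, and the `route` helper are all invented here, with toy stand-ins for the external MATLAB and IoT interfaces the column mentions.

```python
# Hypothetical sketch: a runtime recognizing when an LLM's request should be
# routed to a special-purpose interface rather than answered in free text.
# All names (route, HANDLERS, the handlers themselves) are illustrative.

def solve_math(expr: str) -> str:
    # Stand-in for delivering the problem to an external solver such as MATLAB.
    return str(eval(expr, {"__builtins__": {}}))  # toy arithmetic only

def iot_command(cmd: str) -> str:
    # Stand-in for an Internet of Things control interface ("tune to channel 7").
    return f"device acknowledged: {cmd}"

HANDLERS = {
    "math": solve_math,
    "iot": iot_command,
}

def route(intent: str, payload: str) -> str:
    """Dispatch a recognized intent to its special-purpose interface."""
    handler = HANDLERS.get(intent)
    if handler is None:
        return "no specialized interface; answer with the model itself"
    return handler(payload)

print(route("math", "7 * 6"))             # prints 42
print(route("iot", "tune to channel 7"))  # prints device acknowledged: ...
```

The point of the sketch is the recognition problem the column raises: the LLM must know these interfaces exist and decide, with enough precision, when a request belongs to one of them.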

There are other kinds of interoperability worth thinking about. There is the concept of federated learning, in which multiple ML systems independently ingest training content—resulting in a multi-layer neural network in which the neurons take on weights. When these ML systems are essentially identical in structure, one can imagine collecting the state information of each replica and then forming a system that is a computed combination of the weight values of each separate system. A success with this method might allow learning to take place in a distributed fashion and a combined system to be formed without having to move all the training data to a single location. Since training data can be voluminous, the tactic, if it worked, might avoid costly or even impossible transfer of all training data to a single location.
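The weight-combination step described above can be made concrete with a minimal sketch. It assumes the simplest possible combination rule, an element-wise average of the replicas' weights (the idea behind federated averaging); the flat-list representation of a network's weights is a simplification for illustration.

```python
# Minimal sketch of the combination step in federated learning: identical
# replicas train on their own local data, and a combined system is formed
# by averaging their weight values -- the training data itself never moves.

def federated_average(replicas):
    """Element-wise mean of per-replica weight vectors (flat lists here)."""
    n = len(replicas)
    return [sum(ws) / n for ws in zip(*replicas)]

# Three replicas of the same (tiny) network, each trained at a different site.
replica_weights = [
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
]

print(federated_average(replica_weights))  # prints [3.0, 4.0]
```

Real systems refine this in many ways (weighting by local dataset size, secure aggregation so no site reveals its raw weights), but the core transfer is exactly this: state information moves, training data does not.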

A more ambitious notion might involve cooperative interaction among ML systems (not only LLMs). The question in my mind is whether some kind of symbolic or technical representation might be needed to assure precision in the exchange of information between independently operating ML systems. This makes me think, at least superficially, about the role of Internet standards which allow computers on various distinct networks to reliably exchange data. The enabling mechanisms at the lower layers just assure reliable delivery of digital payloads, which are interpreted at higher layers of protocol. Is there a role for information exchange among ML systems and at different layers of implementation?

Given how richly powerful these systems are, it seems natural to wonder whether semantic and syntactic exchange standards might be useful. They would almost certainly have to be extensible, given the early state of today’s AI and ML systems. Purpose-built and trained ML systems typically take in some kind of digital input and perform a computation that produces output. The outputs may simply be displayed, or they might be delivered to a control system. At Google, such a system was trained to control pumps and valves in a data-center cooling system, leading to a 40% reduction in the cost of electrical power for cooling.c

While the core transport protocols of the Internet are binary in character, one could imagine a more text-oriented exchange protocol for inter-ML systems that might be adequate and more easily debugged by human readers. Provision for the transfer of binary-coded information would likely be a wise addition. This line of reasoning leaves many dangling participles. As usual, my non-expert status in this space prompts me to invite comments from more qualified readers who may have better ideas than those above.
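To make the text-oriented idea tangible, here is one hedged sketch of what such an exchange message might look like: human-readable for debugging, extensible so receivers can ignore fields they do not understand, with a provision for carrying binary-coded information as text. The field names and version scheme are invented for illustration; this is not a proposed standard.

```python
# A sketch of a text-oriented inter-ML exchange message, built on JSON so
# that a human reader can inspect the wire format directly. All field names
# ("version", "sender", "intent", "body", "binary") are assumptions.
import base64
import json

def make_message(sender, intent, body, binary=None):
    """Build a debuggable, text-first message between ML systems."""
    msg = {
        "version": "0.1",   # extensibility: receivers ignore unknown fields
        "sender": sender,
        "intent": intent,   # e.g., "query", "result", "capability-offer"
        "body": body,       # natural-language or structured text payload
    }
    if binary is not None:
        # Binary-coded information travels as base64 text, keeping the
        # whole exchange readable end to end.
        msg["binary"] = base64.b64encode(binary).decode("ascii")
    return json.dumps(msg, indent=2)

wire = make_message("ml-system-A", "query",
                    "What is the predicted cooling load for zone 3?")
print(wire)  # readable on the wire, unlike binary transport protocols

received = json.loads(wire)  # the receiving system parses it back
```

The trade-off mirrors the Internet's own layering: the text encoding costs some efficiency at the lower layers but makes the higher-layer semantics inspectable, which is where the hard standardization questions would live.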
