The recent paper “Complexity and the Global Governance of AI,” by researchers from New America, Princeton University, Arizona State University, and other institutions, applies “complexity thinking” to AI policy. Reading the section entitled “Develop multi-layered AI accountability based on transparency, incentives, and regulations,” I noticed that there was a call for some standards that would be “voluntarily adopted”:
“The next layer would be ‘soft’ regulations, such as voluntary standards and public pressure and other measures that can affect the incentives of AI developers, such as avoiding reputational damage.”
The creation of voluntary standards may seem easier than imposing regulations. But devising voluntary standards presents its own challenges, different from those that arise when standards can be imposed on developers. The AI community should take note.
In the case of both voluntary standards and regulations, the decision-making body is presumably convinced that the value of having a standard outweighs the opportunity costs to the community as a whole of excluding alternatives. Decision makers cannot know the future, so their evaluation of the durability of this trade-off is a guess. Regulations can be reevaluated and changed, but until that happens the community may well be stuck with outdated rules.
In the case of voluntary standards, the entire developer population (and sometimes the user population) is part of the decision-making process. And a voluntary standard whose value ceases to outweigh the cost of excluding alternatives is much less likely to see continued support. The cost of ignoring a voluntary standard tends to be much lower than the penalty for breaking a regulation. A voluntary standard must therefore be designed not only to meet the known needs of the current community, but to meet the unknown needs of the future. As stated in the paper:
“Uncertainty is not a reason to dismiss traditional regulatory approaches, but it does increase the challenge, in part because humans tend to overestimate the short-term impacts of technologies and underestimate their longer-term effects.” [emphasis added]
This problem arises in many situations where voluntary standards are needed or desired. When operating systems and wide area networks were being widely deployed in the late 20th century, there were no governmental standards in those fields. The designers of two important infrastructures, the UNIX operating system and the Internet Protocol Suite, created system specifications that have gained near-universal and still-growing acceptance in their respective fields. Together the POSIX OS standard (derived from UNIX) and the Internet have captured a huge portion of the market, being adopted by the erstwhile advocates of many competing designs.
In a 2019 paper entitled “On the Hourglass Model” in Communications, I argued that the designs of UNIX and the Internet both adhere to a rule that can be formalized as a design principle: Minimal Sufficiency. This principle says that in order for a set of rules to be as widely applicable and durable as possible in unknown environments and in the face of changing assumptions, it should be as logically weak as possible while still being sufficient to attain its necessary goals.
In that paper, I proved a formal result: the Hourglass Theorem. An implication of this theorem is that logically strengthening common rules beyond the minimum required to achieve mutually agreed upon goals can impair voluntary acceptance. It tells us that the logically stronger a system of common rules, the fewer possible environments there are that will support it. While regulations may hope to change the world, voluntary standards must take it as it is.
The implications of the Hourglass Theorem can be illustrated using the hourglass visualization in Figure 1. In each frame, the set of rules adopted by the community is represented by the middle layer, the implications of these rules are represented by the “upper bell,” and the environments that can support the implementation of those rules are represented by the “lower bell.” Informally, the Hourglass Theorem teaches that the logically stronger the middle layer (meaning the more guarantees that it makes), the larger the upper bell and the smaller the lower bell tend to be. The converse is also true.
To apply the principle of Minimal Sufficiency, a set of necessary goals is identified which must be included within the upper bell. The Hourglass Theorem then tells us that choosing the logically weakest set of rules which meets this sufficiency criterion will result in the largest possible lower bell. Intuitively, an hourglass with a large lower bell is one that can be voluntarily adopted in the greatest variety of environments.
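The relationship between the strength of the middle layer and the sizes of the two bells can be sketched in a toy model. This is my own illustrative construction, not a formalization from the paper: the capability names are hypothetical, a rule set is modeled as a set of guarantees it makes, an environment supports the rules if it can implement every guarantee, and an application is supported if the rules provide every guarantee it needs.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

# Hypothetical capabilities a rule set might guarantee.
CAPABILITIES = {"ordering", "reliability", "security", "timing"}

ENVIRONMENTS = powerset(CAPABILITIES)  # candidate lower-bell members
APPLICATIONS = powerset(CAPABILITIES)  # candidate upper-bell members

def lower_bell(rules):
    # Environments that can implement every guarantee the rules make.
    return [e for e in ENVIRONMENTS if rules <= e]

def upper_bell(rules):
    # Applications whose required guarantees the rules provide.
    return [a for a in APPLICATIONS if a <= rules]

weak = frozenset({"reliability"})
strong = frozenset({"reliability", "ordering", "security"})

for rules in (weak, strong):
    print(sorted(rules),
          "upper bell:", len(upper_bell(rules)),
          "lower bell:", len(lower_bell(rules)))
```

In this sketch the weak rule set supports fewer applications (upper bell of 2) but runs in more environments (lower bell of 8), while the strong rule set inverts the trade-off (8 and 2). Minimal Sufficiency then amounts to choosing the weakest rule set whose upper bell still covers the necessary goals.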
Figure 1: The Hourglass Theorem explains that the weakest set of rules that meets specified goals will tend to maximize the number of different environments that will support those rules.
Credit: Micah D. Beck
The discussion of AI governance in the paper “Complexity and the Global Governance of AI” uses the terms “weak” and “strong” in a way that implies a preference for strong rules:
“Policymakers should focus on strengthening all the layers of the accountability system. And they should avoid passing measures that weaken layers.”
The principle of Minimal Sufficiency suggests that adopting rules which are logically stronger than necessary may negatively impact their voluntary adoption and durability. This applies to AI regulations just as it does to the specification of Networking and Operating System APIs and protocols.
In terms of policy, the Minimal Sufficiency principle implies that it may be better to restrict voluntary standards to the rules required to meet a policy goal, rather than also including rules intended to achieve consequences that are merely desirable to those creating the standard. Those engaged in AI governance ignore this principle at their peril.
Of course, adhering to the principle of Minimal Sufficiency is difficult if the goals of a policy are not clearly defined, and there can be genuine disagreement over what is necessary vs. what is desirable. However, if rules are created in the absence of such guidelines, and restraint is not applied in fashioning the common rules, then the result may not be widely accepted and may not be durable over time. My colleague Terry Moore pointed out this relevant quote:
“Now the only way to discover the principles upon which anything ought to be constructed is to consider what is to be done with the constructed thing after it is constructed.”
— Collected Papers of Charles Sanders Peirce, volume 7, paragraph 220
Micah D. Beck (mbeck@utk.edu) is an associate professor at the Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA.