Opinion: Point/Counterpoint

Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation

Permissionless innovation should be the governing policy for AI technologies.

Everyone wants to be safe. But paradoxically, the policies we implement to guarantee our safety sometimes end up making us much worse off than if we had done nothing at all. Counterintuitive as it may be, this is the well-established calculus of risk analysis.

When we consider the future of AI and the public policies that will shape its evolution, it is vital to keep that insight in mind. While AI-enabled technologies can pose some risks that should be taken seriously, it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.

When considering policy for AI and related emerging technologies such as robotics and big data, policymakers face two general options regarding how best to respond to new technological developments: They can either choose to preemptively set limits or bans on new technologies if they believe the risks to society are simply too great to tolerate—an approach known as the “precautionary principle”—or they can decide to allow innovation to proceed mostly unhampered and intervene only in a post hoc or restitutionary manner, which we call “permissionless innovation.”

We believe artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.

The Risk of Avoiding All Risks

Human psychology is such that the precautionary principle often seems appealing at first. We, as a species, are risk averse. People can quite easily conjure a parade of horrible hypothetical situations they believe new technologies will usher into society. Yet imagined best-case scenarios are not as readily apparent to our risk-averse psyches.

This can ironically render us less safe. “If the highest aim of a captain were to preserve his ship, he would keep it in port forever,” Saint Thomas Aquinas once wrote. Of course, captains aim higher and take risks in braving the high seas precisely because progress and prosperity—for both them and society at large—depend upon it.

The same holds true for new innovations of all kinds. When we fail to consider the upsides a new development could bring, we can end up doing more harm to ourselves than if we had allowed the new technique to proceed unburdened.

Consider drug regulation. On its face, it seems logical for a pharmaceutical regulator like the Food and Drug Administration (FDA) to maintain exacting regulations against new medicines until they can be proven almost entirely safe. After all, the risk of dangerous side effects harming or even killing people in the long term is a formidable one indeed.

But what about the people who could be saved by an experimental treatment that is unjustly delayed or prohibited? We don’t see these people or their plights as readily, but they are just as real. Because their suffering or death stays under the radar, the tragic effects of this kind of error go unaccounted for. This was the unfortunate outcome of the FDA’s delayed approval of the drug misoprostol to treat gastric ulcers in the early 1980s; the agency’s dithering ended up costing up to 15,000 lives.^a

Humans are already well aware of the first-order risks of new technologies like AI applications. We fear errors in the opposite direction: that policymakers and the public will underrate the improvements AI can bring, and will allow fears of worst-case scenarios to justify policies that ensure best-case scenarios never come about.

What’s at Stake

After centuries of speculation in both the academy and science fiction, AI is finally shaping our lives in important ways. While we are still far away from the kind of “strong,” self-directing AI first anticipated by Mary Shelley’s Frankenstein almost 200 years ago, narrower applications of machine learning and big data techniques are already integrated into the world around us in subtle but important ways.

Many are unaware of just how prevalent AI techniques already are.^b They quietly help to connect us more efficiently with the information most valuable to us, whether that information relates to healthcare, consumer products and services, or just reconnecting with an old friend.

For example, neural networks can help doctors diagnose medical ailments^c and recommend treatments, thereby saving money on testing and office visits and potentially improving the likelihood of recovery and remission. Yelp^d has developed a machine learning program that translates user-submitted restaurant photos into searchable data on a restaurant’s cuisine and atmosphere. And the rise of AI-powered “virtual personal assistants”^e on social media platforms will help us better keep track of our relationships and obligations with little thought required on our part.

Yet even these marginal improvements in efficiency and matching will yield great dividends in our economy and our personal lives. Analysts^f project savings and economic growth exceeding hundreds of billions or even trillions of dollars over the coming decade, thanks to improvements in manufacturing, transit, and health. The ease and convenience of tailored artificial assistance will likewise improve our overall quality of life and leave us more time for the things that really matter to us.

The U.S. in particular has been a leader in AI development, boasting the world’s most innovative research facilities in academia and industry. But that could soon change. Global challengers Russia and China^g recognize the importance of shaping AI technologies and have poured substantial support and funding into boosting their national industries. If the U.S. falls behind, global innovation arbitrage^h will kick in, and technologists and companies will flock to countries where such creativity is treated more hospitably.


How can the U.S. stay ahead? Part of the reason the U.S. has been so successful with AI deployment is its relatively permissive policy regime. The U.S. houses some of the most successful technology companies in the world thanks to the federal government’s explicit embrace of permissionless innovation in the 1990s.^i Other countries, particularly in Europe,^j that pursued a more precautionary approach ended up hemorrhaging talent to more open environments.

To date, there is no central regulatory authority tasked with reviewing and approving each new instance of AI development in the U.S. Rather, regulators at disparate agencies apply existing rules in accordance with their established authorities: the FDA oversees health-related applications of AI, the Securities and Exchange Commission (SEC) monitors automated trading, and the National Highway Traffic Safety Administration (NHTSA) oversees autonomous vehicles. While imperfect, this approach has the benefit of limiting regulations to a narrow domain.

Case Study: Autonomous Cars

Autonomous transport presents perhaps the most salient example of how AI will fundamentally change our future. Of course, driverless cars and commercial drones also generate some of the greatest anxieties regarding safety and control. As such, they provide a good example of the tensions between onerous regulation and a more permissive policy environment.

Americans in general are wary of driverless cars. According to an October 2017 Pew Research Center survey, more than half of Americans say they would outright refuse to ride in one. Why? Many fear they cannot trust the software undergirding such technologies and believe the cars will be dangerous. Respondents to the Pew poll also doubted that driverless cars will do much for road safety: 30% said they believe road deaths would increase, and another 31% said they would probably remain about the same.

Yet our current human-operated system produces the equivalent of a massacre on the roads each year. In 2016, the U.S. recorded its highest number of road fatalities in a decade: 40,000 needless deaths. Put another way, roughly 100 people were killed by human drivers each day. Autonomous vehicles, on the other hand, could reduce traffic fatalities by up to 90%.^k

Delaying driverless car technologies out of regulatory anxiety would therefore mean tens of thousands of needless deaths each year. A Mercatus Center model^l suggests a regulatory delay of 5% could yield an additional 15,500 needless fatalities, while a delay of 25% would mean 112,400 needless deaths. The difference between regulatory humility and regulatory dithering could literally be the difference between life and death for many.
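
To make the arithmetic behind these figures concrete, here is a minimal back-of-envelope sketch in Python using only the numbers cited above. It is an illustration of the scale involved, not a reconstruction of the Mercatus Center model; the 90% reduction is the article’s upper bound, and the delay horizons are assumptions chosen for the example.

```python
# Back-of-envelope arithmetic using the article's own figures.
# Illustration only -- not the Mercatus Center model.

ANNUAL_ROAD_DEATHS = 40_000   # U.S. road fatalities in 2016 (article's figure)
AV_FATALITY_REDUCTION = 0.90  # article's upper-bound reduction at full autonomy

deaths_per_day = ANNUAL_ROAD_DEATHS / 365                          # ~110/day, "roughly 100" above
lives_saved_per_year = ANNUAL_ROAD_DEATHS * AV_FATALITY_REDUCTION  # 36,000/year

print(f"Deaths per day under human drivers: {deaths_per_day:.0f}")
print(f"Potential lives saved per year at full autonomy: {lives_saved_per_year:,.0f}")

# Each year of delayed deployment forgoes that year's fatality reduction,
# so the human cost of delay grows linearly with its length.
for delay_years in (1, 2, 5):  # hypothetical delay horizons
    excess_deaths = delay_years * lives_saved_per_year
    print(f"A {delay_years}-year delay forgoes roughly {excess_deaths:,.0f} lives saved")
```

Varying the assumed reduction shows how sensitive the delay cost is to the safety gain: even at half the assumed effect, a few years of delay forgoes tens of thousands of lives.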

A Better Path Forward: Humility and Restraint

This illustration should not be construed as a call to “do nothing.” Rather, it is meant to paint a picture of the real potential cost of bad policy. Instead of rushing to regulate in an attempt to formalize safety into law, we should first pause and consider the risks of avoiding all risks.

In our recent research paper, “Artificial Intelligence and Public Policy,”^m co-authored with Raymond Russell, we outline a path forward for policymakers to embrace permissionless innovation for AI technologies. In general, we recommend regulators:

  • Articulate and defend permissionless innovation as the general policy default.
  • Identify and remove barriers to entry and innovation.
  • Protect freedom of speech and expression.
  • Retain and expand immunities for intermediaries from liability associated with third-party uses.
  • Rely on existing legal solutions and the common law to solve problems.
  • Wait for insurance markets and competitive responses to develop.
  • Push for industry self-regulation and best practices.
  • Promote education and empowerment solutions and be patient as social norms evolve to solve challenges.
  • Adopt targeted, limited, legal measures for truly hard problems.
  • Evaluate and reevaluate policy decisions to ensure they pass a strict benefit-cost analysis.

Of course, these recommendations must be tailored to the kind of application under consideration. Social media and content aggregation services already enjoy liability protection under Section 230 of the Communications Decency Act of 1996, for example, but the question of liability for the developers of autonomous vehicle software remains unsettled.

In that regard, we should not forget the important role the courts and common law will play in disciplining bad actors. If algorithms are faulty and create serious errors or “bias,” powerful remedies already exist in the form of product defects law, torts, contract law, property law, and class-action lawsuits.


Meanwhile, at the federal level, the Federal Trade Commission already possesses a wide range of consumer protection powers through its broad authority to police “unfair or deceptive acts or practices.” Similarly, at the state level, consumer protection offices and state attorneys general address unfair practices and continue to advance their own privacy and data security policies, some of which are more stringent than federal law.

So, we can dispense with the idea that AI is not regulated. Regulatory advocates and concerned policymakers might still be able to identify particular AI applications that present true and immediate threats to society (such as “killer robots” or other existential threats) and that require more serious consideration and potential control. Government use of profiling software for law enforcement falls into this category, due to its capacity to violate established civil liberties.

But we should realize the vast majority of AI applications do not fit into this bucket; for most AI applications, the promised benefits far outweigh the imagined danger, which can so seductively inflame our anxieties and lead to full-blown technopanics.

The more sensible tone and policy disposition for AI was nicely articulated by The One Hundred Year Study on Artificial Intelligence,^n a Stanford University-led project that brought together 17 leading experts to compile a comprehensive report on AI issues. “Misunderstanding about what AI is and is not, especially against a background of scare-mongering, could fuel opposition to technologies that could benefit everyone. This would be a tragic mistake,” they argued. “Regulation that stifles innovation, or relocates it to other jurisdictions, would be similarly counterproductive.”

That is precisely the sort of humility and patience that should guide our public policies toward AI going forward. As our machines get smarter, it is vital for us to make our policies smarter still.

Figure. Watch the authors discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/point-counterpoint-on-ai-regulation
