BLOG@CACM
Artificial Intelligence and Machine Learning

Two Models of AI Oversight — and How Things Could Go Deeply Wrong

Posted by Gary Marcus

The Senate hearing that I participated in a few weeks ago was, in many ways, the highlight of my career. I was thrilled by what I saw of the Senate that day: genuine interest, and genuine humility. Senators acknowledged that they had been too slow to figure out what to do about social media, that mistakes were made then, and that there was now a sense of urgency. I am profoundly grateful to Senator Blumenthal’s office for allowing me to participate, and tremendously heartened that there was far more bipartisan consensus around regulation than I had anticipated. Things have moved in a positive direction since then.

But we haven’t landed the plane yet.

§

Just a few weeks earlier, I had been writing in this Substack and in the Economist (with Anka Reuel) about the need for an international agency for AI. To my great surprise, OpenAI CEO Sam Altman told me before the proceedings began that he was supportive of the idea. Taken off guard, I shot back, “Terrific, you should tell the Senate,” never expecting that he would. To my amazement, he did, interjecting, after I raised the notion of global AI governance, that he “wanted to echo support for what Mr. Marcus said.”

Things have in many ways moved quickly since then, far faster than I might have ever dreamed. In 2017, I proposed a CERN for AI, in The New York Times, to relatively little response. This time, things (at least nominally) are moving at breakneck speed. Earlier this week, British Prime Minister Rishi Sunak explicitly called for a CERN for AI, as well as something like an IAEA for AI, all very much in line with what I and others have hoped for. Earlier today, President Biden and Prime Minister Sunak publicly agreed “to work together on A.I. safety.”

All that is incredibly gratifying. And yet … I am still worried. Really, really worried.

§

What I am worried about is regulatory capture: governments making rules that entrench the incumbents, whilst doing too little for humanity.

The realistic possibility of this scenario was captured viscerally in a sharp tweet earlier today from British technology expert Rachel Coldicutt.

I had a similar pit-of-my-stomach feeling in May, after VP Kamala Harris met with some tech executives, with scientists scarcely mentioned.

§

Putting it bluntly: if we have the right regulation, things could go well. If we have the wrong regulation, things could go badly. If big tech writes the rules, without outside input, we are unlikely to wind up with the right rules.

In a talk I gave earlier today to the IMF, I painted two scenarios, one positive and one negative.

§

We still have agency here; we can still, I think, build a very positive AI future.

But much depends on how much the government stands up to big tech, and a lot of that depends on having independent voices—scientists, ethicists, and representatives of civil society—at the table. Press releases and photo opportunities that highlight governments hanging out with the tech moguls they seek to regulate, without independent voices in the room, send entirely the wrong message.

The rubber meets the road in implementation. We have, for example, Microsoft declaring right now that transparency and safety are key. But its current, actual products are definitely not transparent and, at least in some ways, demonstrably not safe.

Bing relies on GPT-4, yet we in the scientific community don’t have access to how GPT-4 works, nor to what data it is trained on (vital, since we know that such systems can bias, e.g., political thought and hiring decisions, based on those undisclosed data). That’s about as far away from transparency as we could be.

We also know, for example, that Bing has defamed people, and that in doing so it has misread articles as saying the opposite of what they actually say. Recommending that Kevin Roose get a divorce wasn’t exactly competent, either. Meanwhile, ChatGPT plugins (produced by OpenAI, with which Microsoft has close ties) open up a wide range of security problems: those plugins can access the Internet, read and write files, and impersonate people (e.g., to phish for credentials), all alarm bells to any security professional. I don’t see any reason to think these plugins are in fact safe. (They are far less sandboxed and less rigorously controlled than Apple App Store apps.)

This is where the government needs to step up and say, “Transparency and safety are indeed requirements; you’ve flouted them; we won’t let you do that anymore.”

We don’t need more photo opportunities, we need regulation—with teeth.

§

More broadly, at an absolute minimum, governments need to establish an approval process for any AI that is deployed at large scale, showing that the benefits outweigh the risks, and to mandate post-release auditing—by independent outsiders—of any large-scale deployments. Governments should demand that systems use copyrighted content only from content providers that opt in, and that all machine-generated content be labeled as such. And governments need to make sure that there are strong liability laws in place, to ensure that if the big tech companies cause harm with their products, they are held responsible.

Letting the companies set the rules on their own is unlikely to get us to any of these places.

§

In the aftermath of the Senate hearings, a popular sport has been to ask, “Is Sam Altman sincere when he asks for government regulation of AI?”

A lot of people doubted him; having sat three feet away from him throughout the testimony and watched his body language, I actually think that he is at least in part sincere: that it is not just a ploy to keep the incumbents in and small competitors out, and that he is genuinely worried about the risks (ranging from misinformation to serious physical harm to humanity). I said as much to the Senate, for what it’s worth.

But it doesn’t matter whether Sam is sincere or not. He is not the only actor in this play; Microsoft, for example, has access, as I understand it, according to rumor, to all of OpenAI’s models, and can do as they please with them. If Sam is worried, but Nadella wants to race forward, Nadella has that right. Nadella has said he wants to make Google dance, and he has.

What really matters is what governments around the world come up with by way of regulation.

We would never leave the pharmaceutical industry to regulate itself entirely, and we shouldn’t leave AI to do so, either. It doesn’t matter what Microsoft or OpenAI or Google says. It matters what the government says.

Either governments stand up to Big Tech, or they don’t; the fate of humanity may very well hang in the balance.

Gary Marcus (@garymarcus) is a scientist, bestselling author, and entrepreneur, deeply concerned about current AI but really hoping that we might do better. He spoke to the U.S. Senate on May 16, and is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.
