
U.S. AI Policy Still Effectively A Closed Box

[Image: Artist's representation of artificial intelligence in the U.S.]

If the U.S. government's awareness of its own agencies' use of facial recognition technology were translated into a common baseball term, its batting average would be .071—quantitatively awful.

Such an observation was made possible after the Government Accountability Office (GAO) recently released a report on federal agencies' use of the technology, one of the most visible artificial intelligence (AI) applications, and found 13 of 14 agencies had no awareness of exactly which non-federal facial recognition technologies their employees were using. Another GAO report exploring an accountability framework for federal agencies "and other entities" underscores what some policy experts are calling a scattershot and ineffective effort thus far at defining a coherent U.S. federal AI policy.
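For readers checking the arithmetic behind that batting-average analogy, a worked version using only the GAO's figures: each of the 14 agencies counts as an at-bat, and the single agency that knew which technologies its employees were using counts as the lone hit.

\[ \text{batting average} = \frac{\text{hits}}{\text{at-bats}} = \frac{1}{14} \approx 0.071 \]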

"The report primarily reflects the lack of coherence in federal AI policy," said Marc Rotenberg, executive director of the Center for AI and Digital Policy. "It is precisely because there is no legal framework for AI and no clear direction from the White House—the OMB was to have launched the rule making in June, but that did not happen—that agencies are largely improvising."

Another close observer of AI policy, Brookings Institution Rubenstein Fellow Alex Engler, said that at this point, discerning the Biden administration's approach to AI is largely a matter of educated guesswork.

"No one, I think, reasonably expected this to be the first priority of the Biden administration, given the pandemic and the economy and the transition," said Engler. That said, he mentioned several moves that signaled perhaps the Biden administration would take a more robust regulatory stance toward AI than its predecessor.

One signal was what Engler called a "small but interesting" blog post from the Federal Trade Commission outlining the importance of deploying AI that strives unequivocally toward fairness. Others were the appointments of social scientist Alondra Nelson to the inaugural position of deputy director for science and society in the White House Office of Science and Technology Policy (OSTP), and of University of Utah algorithmic fairness scholar Suresh Venkatasubramanian as OSTP assistant director. Their hiring, Engler said, could be interpreted to mean the Biden administration is going to take AI bias quite seriously.

It's complicated

Engler also noted that trying to distill a blanket "AI policy" will be extremely difficult. "AI is too different a thing in too many circumstances to conceivably talk about housing discrimination, and banking and finance, and electronic medical devices, and autonomous weapons in the same coupled breath," he said. "There have to be several separate conversations. Then you can say, 'Hey, here's a commonality', but really, when you talk about policy implementations, they are all different."

A handful of common threads weave through much of the discussion about AI policy in the U.S. One is consensus, in principle anyway, that AI should be implemented in a way that earns the public's trust in both the technology and those who use it, and that rigorously protects civil rights. Another is that U.S. policy makers and their counterparts in other regions, such as the European Union, should work together as closely as possible to harmonize technical standards and policy. A daunting third thread is concern that China's drive for AI superiority presents a serious security and economic risk to the U.S.

"The race to research, develop, and deploy AI and associated technologies is intensifying the technology competition that underpins a wider strategic competition," the National Security Commission on Artificial Intelligence (NSCAI) wrote in the executive summary of its final report, published in March. "China is organized, resourced, and determined to win this contest. The United States retains advantages in critical areas, but current trends are concerning."

Those trends are also by-products of policy and market choices made over decades; some, such as the wholesale export of manufacturing processes in many industries to lower-cost locations overseas, extend far beyond narrowly defined "high technology" categories. For example, one trend mentioned in several policy recommendations is the large-scale offshoring of semiconductor manufacturing, which has left the U.S., in the NSCAI's words, "dependent on foreign supply chains and manufacturers in Asia that are vulnerable to coercion or disruption."

There is no shortage of intellectual rigor expended on how U.S. federal AI policy makers should consider moving forward, both in relation to other nations (as competitors and global market partners) and between the public sector, private industry, and individual citizens domestically. The 750-page NSCAI report is one of several touchstone AI policy documents published recently. Another is the MITRE Corporation's A Horizon Strategy Framework for Science and Technology Policy, published in May. CAIDP has released numerous reports about AI policy, including a comprehensive overview of global AI policies in Artificial Intelligence and Democratic Values: The AI Social Contract Index 2020.

Additionally, the ethics of AI deployment are front and center for many organizations advising government at all levels. For example, MITRE's chief scientist for responsible AI, Chuck Howell, helped support the NSCAI effort. Another prominent scholar of AI ethics, Julia Stoyanovich, is the co-founder and director of New York University's Center for Responsible AI, housed in the university's Tandon School of Engineering in Brooklyn, NY.

A Cautionary Tale

Stoyanovich served on New York City's Automated Decision Systems Task Force, established in 2018 to examine the city government's use of algorithms; in principle, she said, national policy could build on the granular experience local policy experts have gained in crafting relevant policies.

She acknowledged that calibrating local and state AI policy with national policy is a complex subject, but also said local efforts such as New York's could offer crucial insights into expanding AI policy beyond municipal boundaries.

"This gives us a really good sandbox, and not a small sandbox," Stoyanovich said. "New York City is the size of a small- to medium-sized European country. I think it's absolutely imperative for the federal government to work with people who are doing this work locally. Part of the reason is, we are closer to the ground and have a more immediate way to speak with constituents. Another is that we simply don't know how to regulate the use of these technologies; not locally, not globally, not federally. So every example we can use is really crucial."

A pessimist might say New York City offers an example of how not to go about setting AI policy. Nearly a year after the task force was commissioned, Stoyanovich and fellow task force member Solon Barocas testified in front of the New York City Council that they had never been granted access to information about a single automated decision-making system, and the task force's final report was widely seen as a disappointment.

"Unfortunately, despite much hope and pomp, we failed in that effort because there really wasn't a lot of good will on the part of the city to engage in the conversation," Stoyanovich said. "There was also an adversarial environment set up in which city agency representatives were worried about engaging in good faith with some representatives of the public. Things failed not because they didn't know how to deal with technology, but because they didn't know how to deal with people."

Others in the technology policy community have been vocal about their priorities as the policy-setting process has begun to move. CAIDP, for example, has offered numerous comments critical of the way the NSCAI report process was conducted. MITRE's Howell, however, said he thought appreciation of transparency and respect for civil liberties is baked into any AI policy discussion in the U.S., and that the NSCAI report authors respected that sentiment.

"To my pleasant surprise, the topics around what we called establishing justified confidence in AI systems and the topics around privacy, civil liberties, and civil rights were first-class citizens. If the American people lose confidence in the government's adoption of AI, they will push back. That pushback will be reflected in that Congress will stop funding it."

Follow the money – another Sputnik moment?

If anything, Congress has gone the other way, allocating generous resources to AI under the Biden administration.

To underscore the potential threat of Chinese dominance of AI, the specter of the Soviet Sputnik satellite (which in 1957 shocked the U.S. government out of its complacency and into investing heavily in science and technology) has been raised numerous times in reports on how to advance AI policy. For example, the NSCAI report, citing research by the Council on Foreign Relations, states that in 2018, U.S. federal R&D funding amounted to 0.7% of GDP, down from its peak of more than 2% in the 1970s.

In response to the sense of urgency the technical policy community has communicated, in June the U.S. Senate passed the United States Innovation and Competition Act, a nearly 2,400-page bill that lays out a $250-billion attempt to bring scientific R&D once again to the fore. Among the bill's highlights is $52 billion to stimulate domestic semiconductor fabrication, an industry that produced 37% of the world's chips in 1990, a share that has since declined to 12%.

While the House of Representatives has yet to act on the legislation, the fact that it passed a polarized Senate by a 68-32 vote signals even staunch political foes believe funding for STEM research needs a significant increase.

One such example was highlighted in a recent essay by the Brookings Institution's Engler; he notes the National Science Foundation (NSF) requested $868 million for AI research in fiscal 2021 (covering both basic AI research and AI applications in other fields), and that Congress "approved every penny."

'Teach somebody like your grandmother'

In the same essay, Engler expanded on part of the NSF AI appropriation, a grant-making program called Fairness in Artificial Intelligence (FAI), through which NSF is providing $20 million to researchers working on difficult ethical problems in AI. The program, run in collaboration with Amazon, funded 21 projects in its first two years. It is, Engler said, an example of "use-inspired research," motivated by real-world challenges.

In the rapidly expanding world of AI, such relative micro-initiatives might very well be the only way to proceed. Even the National Artificial Intelligence Initiative Office, the federal government's hub of AI administration, says it is overseeing and implementing the U.S. national AI strategy, not policy, and serves as a central repository of AI information across the federal government (NAIIO officials had no comment for this story).

Despite the failure of the New York City AI policy task force, Stoyanovich said she would be more than happy to speak with federal officials about setting a national course for AI. "Speaking on my own behalf, and on behalf of my colleagues, we would be thrilled to engage with the federal government now that the climate is such where we actually expect there to be positive change, including meaningful useful regulations that actually look to include the interest of many constituencies."

Beyond talking policy with politicians, Stoyanovich said computing professionals need to step up now to educate the public about what AI is, how it works, and how it affects them.

Having served on the task force that wrote the ACM's 2018 Code of Ethics and Professional Conduct, Stoyanovich has spent a lot of time preparing educational materials about AI for non-technologists, including online courses and comic books in a campaign called We Are AI. "The code says it is our responsibility as computing professionals to share our knowledge with people broadly, to make our best effort to explain to anybody who asks 'What's an algorithm?'" she said, "and what it can and can't do.

"It's very, very difficult to develop material where you can teach somebody like your grandmother about what AI is," she said. "And everybody's grandmother is impacted by this, not just hyper-educated computing professionals."

 

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
