In early March, my colleague Merve Hickok testified before the House Oversight Committee at the first hearing on AI policy in this Congress. The House Committee asked a simple question: “Are We Ready for the Tech Revolution?” Her answer was direct: “No, we do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place.”
Washington got the message. This week, the Senate Judiciary Committee held one of the most productive Congressional hearings in many years, taking up the challenge Hickok had set out. With expert testimony from OpenAI CEO Sam Altman, IBM's Christina Montgomery, and leading AI thinker Gary Marcus, a well-prepared Senate Committee focused on the next steps for "Oversight of A.I.: Rules for Artificial Intelligence." Several Senators expressed hope that the U.S. could become a global leader on AI policy.
Here is a quick assessment of the hearing's outcomes, noting the highlights and also the warning signs.
Highlights
The Senators were well prepared and engaged. When Mark Zuckerberg testified before Congress several years ago, members of Congress were mocked for their lack of understanding of Facebook's business model. By contrast, the members of the Senate Committee came to the discussion about AI well prepared. Senator Coons, for example, discussed with Sam Altman the training of models on constitutional values, a hot topic in the AI field.
Senator Blumenthal’s Framing Outlined Key Goals. It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, “This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past.”
Nonpartisan Approach to AI. Congress has struggled in recent years because of increasing polarization, which makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for nonpartisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement. There is a long road ahead. Still, this is a favorable sign.
Acknowledgment of Past Mistakes. Members of Congress are reluctant to admit past mistakes, but the Senators acknowledged at the hearing that there were many mistakes to avoid: negative impacts on creators and journalists, monopoly concentration, and waiting too long to legislate. Most notable was the criticism of Section 230, the provision of the 1996 Communications Decency Act that gave Internet companies broad immunity from liability for user content and contributed to disinformation, teen depression, polarization, and the near-collapse of the news industry. The Senators made clear there will be no comparable immunity for the AI industry.
Risks Made Clear. Gary Marcus was particularly effective in outlining the many known, and likely unknown, risks with the further deployment of AI. He explained to the Committee that AI will be “destabilizing” and poses risks to democracy. “We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability,” Dr. Marcus said. There was a sense of urgency in the room. It was not business as usual.
Agreement on Need for Legislation. As Senator Durbin said, it is unusual for representatives of companies to call for legislation, but that is precisely what happened when OpenAI's Altman and IBM's Montgomery said legislation was necessary. Montgomery pointed to the EU AI Act as a good model for legislation and expressed support for a "precision regulatory approach," including impact assessments for high-risk uses. More remarkable still was the number of concrete proposals put forward by the witnesses: impact assessments, safety standards, transparency requirements, privacy rules, and limits on compute and AI capability. There was support for Gary Marcus's proposal for an agency to govern AI, modeled on the FDA, with scientists and sufficient resources, perhaps at the cabinet level.
Warning Signs
Still, there were reasons for concern.
Generative AI is Not The Whole AI Picture. It was not until Senator Padilla raised concerns about non-English-language training data that the issue of fairness entered the Committee's deliberations. For many in the AI community, the cause for concern is precisely this tendency to let the emerging risks of generative AI overshadow the immediate harms of AI systems already in use. AI systems in the U.S. routinely make decisions about housing, employment, credit, and education. AI powers facial surveillance, emotion detection, and biometric categorization. Many academics and organizations have documented growing problems with machine learning systems that embed bias and make it difficult to contest adverse outcomes. Meaningful AI legislation will need to address these issues.
Lack of Relevant Knowledge. There were few references to other AI policy frameworks during the hearing, though Gary Marcus did mention the OSTP AI Bill of Rights, the OECD AI Principles, and the UNESCO Recommendation on AI Ethics. Senator Blumenthal referenced the EU Artificial Intelligence Act, which is approaching the finish line as EU institutions begin negotiations over the final version. Members of Congress will need to learn more about these influential AI policy frameworks.
Risk of Repeating Past Mistakes. When asked about solutions for privacy, the witnesses tended toward proposals, such as opt-outs and privacy notices, that will do little to curb the misuse of AI systems. The key to effective legislation will be to allocate rights and responsibilities between AI developers and users. This allocation will necessarily be asymmetric, as those who design the large models are far better able to control outcomes and minimize risk than those who are subject to the outputs. That is why regulation must start where control is most concentrated. A good model for AI policy is the Universal Guidelines for AI, widely endorsed by AI experts and scientific associations.
Deference to Tech CEOs. The news media is still captivated by tech CEOs. Much of the post-hearing reporting focused on Altman's recommendations to Congress. But that is not how democratic institutions should operate. Industry support for effective legislation will be welcomed by Congress, but industry does not get the final say. There are still too many closed-door meetings with tech CEOs. Congress must be wary of adopting legislation favored by current industry leaders. There should be more public hearings and opportunities for meaningful public comment on the nation's AI strategy.
In the realm of AI policy, the U.S. has lagged behind allies and adversaries alike. The Senate hearing "Oversight of A.I.: Rules for Artificial Intelligence" signals a turning point and an opportunity for the U.S. to promote AI that is human-centric and trustworthy, to establish overdue guardrails, and perhaps even to become a global policy leader.
Marc Rotenberg is founder of the Center for AI and Digital Policy and a former Staff Counsel for the Senate Judiciary Committee.