Regulation of AI Remains Elusive

[Illustration: a digital version of Justice.]
Despite the wave of national strategies on artificial intelligence that has washed over the world, none but the European Union has yet proposed or published specific ethical or legal frameworks for the technology.

Over the past several years, a wave of national strategies on artificial intelligence (AI) has washed over the world, with many jurisdictions introducing policies for its regulation. With the exception of the European Union (EU), none have yet proposed or published specific ethical or legal frameworks for AI. 

Canada led the way, announcing national AI policies in 2017, and has since been followed by many other jurisdictions. Early last year, the Organization for Economic Co-operation and Development (OECD) AI Policy Observatory released a continuously updated database of over 600 AI policy initiatives from 60 countries, territories, and the EU. Not all of these initiatives are alike, and some stand out.

Amba Kak, director of Global Policy and Programs at the AI Now Institute at New York University, notes that many national strategies focus on increasing competitiveness in AI; for example, governments provide additional funding to champion their countries' own commercial AI companies. With the U.S. and China widely recognized as the world's AI superpowers, Kak says, "All countries are in an arms race against China. The U.S. has said that if it put rules (for AI) in place, it would be slowed down. This is not a good dialogue."

Hodan Omaar, a policy analyst focusing on AI policy at the Center for Data Innovation in Washington, D.C., tracks differences in national policies. For example, the U.S. and China take a bottom-up approach, with government funding for AI acting as a facilitator of industry, while Singapore takes a top-down approach, with the government leading on AI and pushing out initiatives. In a similar vein, the U.S. to date has taken a light-touch approach to regulation, while the EU has been more prescriptive.

Data protection legislation is often seen as a starting point for AI regulation, as it influences data input to applications. On a global basis, about 130 jurisdictions have implemented data protection laws, but only the EU's General Data Protection Regulation (GDPR) embeds AI regulation based on data privacy. Says Kak, "Data privacy laws are critical to regulating AI and determining what AI tools are acceptable. They can control data collection and processing, and help to build trust in AI."

Looking at noteworthy national strategies for AI, India is often cited for creating an inclusive national strategy for AI, although it has yet to implement the Personal Data Protection Bill that would underpin AI programs. The bill, expected to become law this year, would support an AI For All strategy that focuses on leveraging AI for growth in line with the government policy of Sabka Saath, Sabka Vikas, Sabka Vishwas — Hindi for "together, for everyone's growth, with everyone's trust." This means the role of the government will be to develop a research ecosystem, promote the adoption of AI, and address AI skills in the population. The strategy also flags ethics, bias, and privacy issues relating to AI.

Chile, which has enacted privacy laws, is also prominent in its adoption of AI. The Chilean Artificial Intelligence Policy 2021-2030 aims to reverse a decade of economic downturn by catapulting the country into the fourth industrial revolution. The policy incorporates three pillars: enabling factors; use and development of AI, including ethics and regulatory aspects; and socio-economic issues.

On a broader scale, organizations including the OECD, the G20 group of the world's largest economies, and the Council of Europe (which includes 47 member states and is the continent's leading human rights organization) have issued nonbinding recommendations on global AI regulation, all of which, according to the Law Library of Congress, have in common a human-centric approach.

In May 2019, the OECD adopted the recommendation of its Council on AI, which includes five values-based principles for responsible and trustworthy AI. These principles provide guidance to help governments design national legislation in areas including:

  • Inclusive growth, sustainable development, and well-being
  • Human-centered values and fairness
  • Transparency and explainability
  • Robustness, security, and safety
  • Accountability

In June 2019, the G20 issued a set of human-centered AI principles based on the OECD recommendations, and there is no doubt AI will be a subject of discussion at this year's October G20 summit in Rome (although there is no certainty AI regulations will follow). 

"Regulation shouldn't be the goal, but it should be used to address specific harms," says Omaar, who endorses the multi-stakeholder Global Partnership on Artificial Intelligence (GPAI), which aims to bridge the gap between AI theory and practice "by supporting cutting-edge research and applied activities on AI-related priorities," according to its website. Omaar says the organization "will develop high-level principles on issues such as bias. It will be a guide for global conversation and an initiative to identify things we don't want to see, such as legal situations around AI."

GPAI was launched in June 2020, the fruition of an idea developed within the G7 under the Canadian and French presidencies. The Trump Administration initially was wary of joining the group on the grounds that it could limit AI development by setting standards and regulations, but soon made a U-turn after recognizing the need to work collectively to keep China from dominating how AI can be used.

Now including 19 international partners with a shared commitment to the values expressed in the OECD recommendations on AI, GPAI aims to guide the responsible development and use of AI grounded in human rights, inclusion, diversity, innovation, and economic growth, though it makes no mention of regulation. That said, with so much at stake as AI becomes increasingly capable and potentially more harmful, and as what is and is not acceptable sparks conflict, Kak and Omaar conclude there will be AI regulation; when it will come and what it will look like remain to be seen.

Sarah Underwood is a technology writer based in Teddington, U.K.
