Unintentional bias in the provision of public services, rapidly increasing academic and commercial research, and fears (to date unfounded) that artificial intelligence (AI) could run out of control are pushing the technology down the path of regulation. What will that regulation look like, and when will it be implemented?
As yet, there are no definitive answers to these questions, but there are proposals on the table. There are also moves to guide the usage and outcomes of AI by multilateral fora including the Council of Europe, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the World Trade Organization (WTO), and the Organization for Economic Cooperation and Development (OECD).
The OECD was an early mover, setting out principles that promote AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were adopted by OECD member countries in May 2019. In June 2019, the G20 adopted human-centered AI principles that draw on the OECD AI principles.
U.S. regulatory development
Principles such as these, along with emerging industry standards, play into the development of AI regulation. A case in point is the January 2020 memorandum Guidance for Regulation of Artificial Intelligence Applications from the U.S. Office of Management and Budget, which clarifies the aim of President Trump's February 2019 Executive Order 13859 in calling for "federal engagement in the development of technical standards" for AI:
"To promote innovation, use, and adoption of AI applications, standards could address many technical aspects. . . . Moreover, federal engagement with the private sector on the development of voluntary consensus standards will help agencies develop expertise in AI and identify practical standards for use in regulation."
The nascence of AI regulation in the U.S. is palpable in the memorandum, which is dedicated to AI applications outside the federal government and sets out 10 principles for 'weak' AI such as machine learning; 'strong' AI that may include sentience or consciousness is beyond its scope. The memorandum includes no timeline for regulation.
Beyond this White House approach, other U.S. efforts to regulate AI are fragmented. In the first half of 2020, members of Congress introduced a variety of relevant bills, including:
- The National Artificial Intelligence Initiative Act of 2020, which would provide nearly $6.5 billion over five years for AI research and development (R&D), education, and standards development.
- The Data Protection Act of 2020, which would create a federal data protection agency and require impact assessments of 'high-risk data practices'.
- The Ethical Use of Facial Recognition Act, which would prohibit all federal use of facial recognition technology without a warrant until Congress enacts further legislation, and the similar Advancing Facial Recognition Act.
- Bills limiting the use of AI-related technologies in law enforcement, including the George Floyd Justice in Policing Act of 2020 and the Facial Recognition and Biometric Technology Moratorium Act.
While these activities cover a range of AI possibilities, the stand-out topic is facial recognition. Reflecting broad concerns about this application of AI, in June 2020 the Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) issued a statement on principles and prerequisites for the development, evaluation, and use of unbiased facial recognition technologies:
"The technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society."
In conclusion, it states: "USTPC urges an immediate suspension of the current and future private and governmental use of face recognition technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights."
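Claims of bias like those cited by USTPC are typically demonstrated by comparing a system's error rates across demographic groups. Purely as a hypothetical illustration of how such a disparity can be measured (the data layout, group labels, and match threshold below are all assumptions, not any evaluator's actual protocol), consider computing per-group false match rates for a face verification system:

```python
from collections import defaultdict

# Hypothetical impostor trials: each compares images of two different people.
# "score" is the matcher's similarity output; "group" is a demographic label.
trials = [
    {"group": "A", "score": 0.91},
    {"group": "A", "score": 0.42},
    {"group": "B", "score": 0.88},
    {"group": "B", "score": 0.95},
    # ...in practice, many thousands of impostor pairs per group
]

THRESHOLD = 0.80  # assumed operating point for declaring a match

def false_match_rates(trials, threshold):
    """Fraction of impostor pairs wrongly declared a match, per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for t in trials:
        totals[t["group"]] += 1
        if t["score"] >= threshold:  # system wrongly matches two different people
            errors[t["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(false_match_rates(trials, THRESHOLD))  # e.g., {'A': 0.5, 'B': 1.0}
```

A substantially higher false match rate for one group at the same threshold is the kind of demonstrable, measurable bias the statement describes.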
Hodan Omaar, an AI policy analyst at the Center for Data Innovation, which tracks AI legislation, suggests the White House principles will let agencies improve their own guidance on AI rather than subject them to sweeping regulation. Regulation, she argues, should be contextual rather than tied to particular types of solution, and needs to move on from high-level concepts (ethics, accountability, explainability, and so on) to measuring the actual impact of AI systems, which she says remains an obstacle.
Regulatory development in Europe
The European Commission (EC) set out its latest approach to AI in the February 2020 White Paper on Artificial Intelligence: a European approach to excellence and trust. The document notes the EC supports "a regulatory and investment-oriented approach with the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology."
Like the U.S. memorandum, the EC white paper does not set a timeframe for regulatory implementation, and questions to the Commission on this point remain unanswered.
The paper says existing legal and regulatory regimes cover many aspects of AI, although aspects such as transparency, traceability, and human oversight are not specifically covered in many economic sectors, and need to be addressed in a European regulatory framework.
The paper says such a regulatory framework "must ensure socially, environmentally, and economically optimal outcomes and compliance with EU legislation, principles, and values. This is particularly relevant in areas where citizens' rights may be most directly affected."
The ambition is clear, but in the absence of a common European framework, member states are already moving on their own. The German Data Ethics Commission has called for a five-level, risk-based system of regulation that would range from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones. Denmark has launched the prototype of a Data Ethics Seal, and Malta has introduced a voluntary certification system for AI.
If the EU fails to provide an EU-wide approach, there is a real risk of fragmentation that could undermine the objectives of trust, legal certainty, and market uptake.
The white paper came under industry scrutiny during a consultation period that drew a healthy response, with most submissions calling for more detail and highlighting specific aspects of the paper.
The AI Now Institute, a research institute at New York University dedicated to studying the social implications of AI and algorithmic technologies, responded with 10 points of interest, most of them concerning a future regulatory framework for AI. They include:
- Rather than the risk-based approach proposed by the Commission, the scope of regulation should be determined based on the nature and impact of the AI system.
- Algorithmic Impact Assessments (AIAs) should be required before an AI system is deployed, to inform any decision of whether (and how) such systems are used (a sketch of what such an assessment might record follows this list).
- Regulation should ensure that external researchers and auditors have access to AI systems in order to understand their workings.
- Policymakers should impose moratoriums on all uses of facial recognition in sensitive social and political domains, including law enforcement, education, and employment.
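The AI Now Institute describes Algorithmic Impact Assessments in prose; there is no standard machine-readable form. Purely as a hypothetical sketch of the kind of pre-deployment record such a requirement might produce (every field name and example value below is an assumption, not part of any proposal), an AIA could capture at minimum the system's purpose, risks, and oversight arrangements:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    """Hypothetical pre-deployment record; fields are illustrative, not a standard."""
    system_name: str
    purpose: str
    affected_groups: list[str]      # who the system's decisions touch
    identified_risks: list[str]     # harms surfaced during assessment
    mitigations: list[str]          # steps taken to reduce those risks
    external_audit_access: bool     # can outside researchers inspect the system?
    human_oversight: str            # who can override or halt the system
    approved_for_deployment: bool = False

# Example: a fictional screening tool assessed before deployment.
aia = AlgorithmicImpactAssessment(
    system_name="benefits-eligibility-screener",
    purpose="Prioritize manual review of public-benefits applications",
    affected_groups=["applicants", "caseworkers"],
    identified_risks=["disparate error rates across demographic groups"],
    mitigations=["per-group error audits before each release"],
    external_audit_access=True,
    human_oversight="A caseworker reviews every automated denial",
)
print(aia.approved_for_deployment)  # False until the assessment is signed off
```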
The practitioner view
The concepts of transparency, trust, explainability, evaluation, human rights, and values that run through both the U.S. and EU documents, along with calls to restrict facial recognition technology, have been welcomed by AI practitioners, although they stress the need for further discussion before regulations are finalized. Practitioners also see international collaboration as key.
Virginia Dignum, professor of social and ethical AI at Sweden's Umeå University, studies the ethical and societal impact of AI. Says Dignum, "These are early proposals, a guide for discussion that must include more technologists. We need to look at what results we are comfortable with, what we don't want, and what is feasible. It is not yet time to set red lines or green lights."
Peter Bentley, honorary professor in the department of computer science at University College London and author of 10 Short Lessons in Artificial Intelligence and Robotics, says AI needs to be regulated; "otherwise, we could risk democracy." In terms of how regulation should evolve, he says, "It is the role of good governments to take responsibility and understand the effect on society and people's safety as technology develops. In the case of AI, experts haven't been listened to enough."
Sarah Underwood is a technology writer based in Teddington, U.K.