http://bit.ly/3sQQ67C October 12, 2022
A new technology, broadly deployed, raises profound questions about its impact on American society. Government agencies wonder whether this technology should be used to make automated decisions about Americans. News reports document mismanagement and abuse. Academic experts call attention to concerns about fairness and accountability. Congressional hearings are held. A federal agency undertakes a comprehensive review. Scientific experts are consulted. Comments from the public are requested. A White House press conference is announced. A detailed report is released; its centerpiece is five principles to govern the new technology.
The year is 1973, and the report "Records, Computers, and the Rights of Citizens" (http://bit.ly/3FAARqY) provides the foundation for modern privacy law. The report sets out five pillars for the management of information systems that come to be known as "Fair Information Practices" (http://bit.ly/3sUPsG9). The report will lead to the passage of the 1974 Privacy Act, the most comprehensive U.S. privacy law ever enacted. To this day, Fair Information Practices, developed by a commission led by computer scientist Willis Ware, remain the most influential conceptions of privacy protection.
Fast-forward 50 years: The "Blueprint for an AI Bill of Rights" (http://bit.ly/3WjAW8D) is announced by the Office of Science and Technology Policy. Whether the 2022 report marks a turning point in U.S. AI policy is too soon to assess, but like the 1973 report, it follows a familiar trajectory, and many criticisms are far off the mark. Like the "Rights of Citizens" report, the AI Bill of Rights sets out no new rights. And like the 1973 report, the recommendations in the Blueprint require action by others. The most remarkable parallel is the five principles at the center of both reports. The Rights of Citizens report set out the Fair Information Practices:

- There must be no personal data record-keeping systems whose very existence is secret.
- There must be a way for a person to find out what information about them is in a record and how it is used.
- There must be a way for a person to prevent information obtained for one purpose from being used or made available for other purposes without their consent.
- There must be a way for a person to correct or amend a record of identifiable information about them.
- Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for its intended use and must take precautions to prevent misuse of the data.

The 2022 Blueprint stated:

- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
The Fair Information Practices allocated rights and responsibilities in the collection and use of personal data. The 2022 Blueprint has set out "Fair AI Practices," allocating rights and responsibilities in the development and deployment of AI systems. This could well become the foundation of AI policy in the U.S.
In the years ahead, it will be interesting to see whether the AI Bill of Rights occupies a role in American history similar to that of the 1973 "Rights of Citizens" report. At the outset, one point is certain: the similarities are striking.
http://bit.ly/3Dsv5Fj September 14, 2022
Reflecting across many conversations in the past year, I have found four types of conversations about human-centered artificial intelligence (AI). My own work has focused on the need for policies regarding AI in education, so I have been involved in conversations about how teachers, students, and other educational participants should be in the loop when AI is designed, evaluated, implemented, and improved. I have been in many conversations about how surveillance and bias could harm teachers or students, and I have seen wonderful things emerging that could help them. Using education as an example, I reflect on what we talk about when we talk about human-centered AI.
The four types of conversations are:

- Conversations about opportunities and risks.
- Conversations about metaphors and mechanisms.
- Conversations about building trust and engineering trustworthiness.
- Conversations about policies and protections.
When we stay within one or two of the four conversations, we limit progress toward human-centric AI. For example, the opportunities-and-risks conversation tends to be hopeful and abstract; it can appear that merely by naming risks, we are making progress toward mitigating them. The future may be described in attractive terms, but it is always far off, and that distance makes the risks feel safer than they are. A complementary conversation about metaphors and mechanisms can defuse the sense of magic and help the conversants see the devil in the details.
Likewise, building trust and engineering trustworthiness are key conversations we need to have for any field of human-centric AI. Then again, the scale and power possible through AI do not always bring out the best in people, and even when people act their best, unintended consequences arise. We have to maintain skepticism. We need to distrust AI, too. Unless we talk about policies and protections, we are not exercising our rights as humans to self-govern our future reality. Not everything that becomes available will be trustworthy, and we have to create governance mechanisms to keep harm at bay.
I believe it is important to notice the four kinds of conversations and use them to achieve well-rounded overall discussions about human-centric AI. I would welcome your thoughts on the kinds of conversations you observe when people talk about human-centered AI, how typical conversations limit what we talk about, and what we can do to engage a broad diversity of people in the conversations we need to have.
©2023 ACM 0001-0782/23/01