Opinion

Who Is Responsible Around Here?

By Moshe Y. Vardi, CACM Senior Editor

In January 2012, I wrote[a] about the past and future of artificial intelligence (AI). I reiterated Bill Joy's 2000 question: Does the future need us? Little did I know then that a revolution was already brewing. By 2011, GPUs had considerably accelerated the training of deep neural networks, finally making competitive a technology whose roots go back to the early 1940s.[b] By 2012, deep neural networks such as AlexNet were winning international competitions, launching the deep-learning revolution. A decade later, generative AI, which refers to AI that can generate novel content rather than simply analyze or act on existing data, has become all the rage. Over the past few weeks, ChatGPT, the "newest kid on the generative-AI block," has been practically everywhere.

Media reporting on these new AI technologies often focuses on their societal risks, and various proposals have been put forward aimed at containing those risks. In October 2022, the U.S. Office of Science and Technology Policy (OSTP), part of the Executive Office of the President of the United States, published a Blueprint for an AI Bill of Rights. The blueprint identified five principles that "should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence." In November 2022, the ACM Technology Policy Council released a statement on principles for responsible algorithmic systems, identifying nine principles "intended to foster fair, accurate, and beneficial algorithmic decision-making."

Principlism is an approach, developed in biomedical ethics, that grounds decisions in a framework of universal ethical principles. Its recent rise in computing has, however, been criticized:[c] "they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas."

Consider the ACM policy statement. It addresses the builders, developers, and operators of algorithmic systems. In other words, the statement is not about responsible algorithmic systems but about responsible people and corporations. Nevertheless, as I have previously pointed out,[d] ACM has been reluctant to address the unethical behavior of technology corporations and their leaders, even when these corporations blatantly violate ACM's Code of Ethics and Professional Conduct.

OSTP's mission is to "maximize the benefits of science and technology to advance health, prosperity, security, environmental quality, and justice for all Americans." OSTP, however, is a governance body, not a philosophy department. Governance happens through action, whether executive orders or congressional bills. So far, OSTP has shared no plan to turn the AI Bill of Rights into actionable policy.

Worries about societal harm caused by AI are not new. About a decade ago, the philosopher Nick Bostrom worried about the existential risk posed by super-intelligent AI. In his "Paperclip Maximizer" thought experiment, he hypothesized a super-intelligent agent whose goal is to maximize the number of paperclips in its collection. In its zeal to accomplish its mission, the agent may transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities." The purpose of the experiment was to demonstrate that an artificial agent with apparently innocuous values could pose an existential threat to humanity. The value-alignment approach is a response to this super-intelligent-agent risk: AI-alignment research aims to steer AI systems toward their designers' intended goals and interests. In his 2021 report, Our Common Agenda, the UN Secretary-General called for ensuring that AI is "aligned with shared global values."

AI agents, however, are developed by tech corporations. Our fundamental problem is not paperclip maximization but unregulated profit maximization. Adam Smith argued in 1759 that the invisible hand of the free market would "advance the interests of the society as a whole." Since then, the argument that unregulated profit maximization advances the interests of society as a whole, made, for example, by former U.S. Federal Reserve chairman Alan Greenspan, has been shown to fail both theoretically[e] and practically.[f] While profit maximization has led to some impressive societal benefits (for example, mRNA vaccines), it has also led to serious adverse consequences (for example, financial crises). In general, technology advances faster than regulation. The unregulated use of AI in targeted advertising and content moderation has brought deep polarization to our society, seriously threatening[g] democracy.

So let us stop talking about Responsible AI. We, computing professionals, should all accept responsibility now, starting with ACM!
