
Rage Against the Intelligent Machines

Artist's representation of a large language model.
A number of high-profile organizations are questioning the risks inherent in the largely unregulated way emerging Large Language Models are being fielded.

If the launch of ChatGPT in November 2022 was the point at which generative artificial intelligence (AI) began to make an appreciable impact on the public consciousness, the final week of March 2023 was the start of a multi-faceted fightback against AI, one that could have deep ramifications for the freedom firms have to roll out machine intelligences into the public domain.

The AI counter-offensive that week involved a number of high-profile organizations questioning the risks inherent in the largely unregulated way emerging Large Language Models (LLMs)—like OpenAI's ChatGPT and GPT-4, Microsoft's Bing Chat and Google's Bard systems—are being fielded.

At issue, they say, is the way LLMs are being unleashed without prior, transparent, and auditable assessment of their risks, such as aiding and abetting cybercrime, fabricating facts that people might rely on, reinforcing dangerous disinformation, and exhibiting overt and offensive societal biases. Some are calling for LLM development to be halted while measures to make them safe are thrashed out.

This was not just an argument amongst AI cognoscenti. News of the spat even reached the White House, with President Biden reiterating on April 5 that artificial intelligence providers, like all technology companies, "have a responsibility to make sure their products are safe before making them public." 

First out of the gate, on March 27, was Europol, the joint criminal intelligence organization of the 27 nations of the European Union, which published a report on the "diverse range of criminal use cases" in which it predicts products like ChatGPT could be used.

Europol's digital forensics experts found that an LLM's ability to quickly produce convincing written text in many languages would hide the telltale typos and grammatical errors that normally give phishing messages away, and so boost the success of phishing campaigns.

Europol also said the ability to write messages in anybody's writing style is a gift to fraudsters impersonating employees to entice their colleagues to download malware, or to move large amounts of cash, as has happened in so-called "CEO fraud" cases. In addition, terrorist groups could prompt LLMs to help them generate text to promote and defend disinformation and fake news, lending false credibility to their propaganda campaigns, Europol says.

Perhaps worst of all, the code-writing capabilities of LLMs could be misused by criminals with "little to no knowledge of coding" to write malware or ransomware. "Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures," Europol says in its report.

Of particular worry, says the organization, is that LLMs are far from a done deal: they are constantly being improved, so their potential criminal exploitation could happen ever faster and at greater scale.

 "The Europol report seems exactly correct. I agree things look grim," says Gary Marcus, a professor of psychology and neural science at New York University, and an AI entrepreneur and commentator. "Perhaps coupled with mass AI-generated propaganda, LLM-enhanced terrorism could in turn lead to nuclear war, or to the deliberate spread of pathogens worse than Covid-19," Marcus later said in his newsletter.

OpenAI did not respond to questions on Europol's findings, and neither did the U.S. Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security.

However, two days later, on March 29, the AI fightback moved up another notch, when Marcus was one of more than 1,000 initial signatories to an open letter to AI labs calling on them to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

Drafted by the Future of Life Institute, in Cambridge, MA, which campaigns against technologies posing existential risks, the letter urged that "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

The letter was signed by some of the leading specialists in AI, including deep neural network pioneer (and ACM A.M. Turing Award recipient) Yoshua Bengio of the Quebec AI Institute (Mila) in Montreal, Canada, and Stuart Russell, head of the Center for Human-Compatible AI at the University of California, Berkeley.

From the commercial sector, high-profile signatories included Apple cofounder Steve Wozniak, Skype cofounder Jaan Tallinn, and electric car and spaceflight entrepreneur Elon Musk, who cofounded OpenAI but left the company in 2018. Reuters pointed out that CEOs Sam Altman at OpenAI, Satya Nadella at Microsoft, and Sundar Pichai at Alphabet (Google) did not sign.

"We have passed a critical threshold: machines can now converse with us and pretend to be human beings," said Bengio in a statement  explaining why he signed the letter. "No one, not even the leading AI experts, including those who developed these giant AI models, can be absolutely certain that such powerful tools now, or in the future, cannot be used in ways that would be catastrophic to society."

"The letter does not claim that GPT-4 will become autonomous—which would be technically wrong—and threaten humanity. Instead, what is very dangerous —and likely—is what humans with bad intentions, or simply unaware of the consequences of their actions, could do with these tools and their descendants in the coming years," Bengio says.

The Future of Life Institute letter sparked global debate on Twitter, and dominated TV and radio news and talk shows for a few days. It led to the United Nations Educational, Scientific and Cultural Organization (UNESCO) calling on governments to enact the global ethical AI principles it has been developing since 2018.

Joanna Bryson, a professor of Ethics and Technology at the Hertie School of Governance in Berlin, Germany, says there is no dearth of ethical principles for AI, like UNESCO's; the AI sector simply tends to ignore them.

Doing the right thing, Bryson adds, should be straightforward, if the will is there.

"What we need is accountability for the people who wrote it, and they need to be able to prove that they did appropriate testing as a sector. We need openness, transparency, accountability and audits, to make sure that people are following the proper procedures, just like in any other sector. That's all they need to do," Bryson said.

"They should all be demonstrating due diligence, they should start doing the transparency, the DevOps; all the basic things the [forthcoming] European Artificial Intelligence Act is going to mandate they do anyway. People using LLMs are already saying that early voluntary compliance is actually a business opportunity and that they can sell it as part of the product."

Bryson's words proved prophetic. On March 30, the AI fightback continued when a nonprofit advocacy group called the Center for AI and Digital Policy (CAIDP) in Washington, DC, filed a complaint with the U.S. Federal Trade Commission seeking the suspension of deployment of OpenAI's GPT products because, it alleges, they violate U.S. federal consumer protection law.

In its complaint, the CAIDP doesn't mince words, alleging that OpenAI "has released a product, GPT-4, for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment."

It goes on to say OpenAI has acknowledged the product's risks, such as its potential for fueling disinformation campaigns and the propagation of untruths, yet "disclaims liability for the consequences that may follow." The complaint then calls on the FTC to establish "independent oversight and evaluation of commercial AI products in the United States."

CAIDP's complaint will be a key case to watch, not least because it will likely have implications beyond LLMs for AI-based products such as driver assistance systems and autonomous vehicles, which have been under investigation by the U.S. National Highway Traffic Safety Administration over deaths and injuries on the roads.

Said Selim Alan, a director at CAIDP, "We are anticipating hearings on our FTC complaint in Congress later this year. We've sent copies of our complaint to the chair and the ranking members of both the Senate and House Commerce Committees."

Marcus, for one, has high hopes for the CAIDP's action bringing the AI sector to heel. "It's a smart and thoughtful complaint," he says. "And Biden picked up on its framing in his remarks, so I think it has a chance. And I would love to see it succeed."

 

Paul Marks is a technology journalist, writer, and editor based in London, U.K.
