Artificial intelligence (AI) experts and ethicists are urging caution about the potential adverse effects of using the latest generation of AI agents to simulate the expressions of people who have died. The issue is gaining attention after recent demonstrations of text- and voice-based machine learning models built on natural language processing. Like the use of deepfakes in online videos, the trend presents a minefield of moral quandaries.
Israeli AI company AI21 Labs recently developed a chatbot called Ask Ruth Bader Ginsburg, based on the writings of the eponymous U.S. Supreme Court justice, who died in 2020. The company used a 600,000-word corpus drawn from 27 years of Ginsburg's legal writings, speeches, and interviews to train Jurassic-1, a large language model it developed. It billed this customized version of the model as a "fun digital experience" and a means to spark public engagement with AI while exploring its limits. The software's developers say the chatbot can emulate how Ginsburg would answer questions.
Before the Supreme Court made its landmark abortion ruling in late June, I gave the chatbot a whirl, asking it whether abortion should be illegal.
"Wait a minute," it wrote. "Are you asking me if I think abortion is morally correct? The fact is, the court, in Roe vs. Wade, legalized abortion. The court said that a woman has a right to an abortion if, under all the circumstances, it was the best way to preserve her health. But that was over 40 years ago. So I don't think a question like this would arise today."
Like many Americans, Ginsburg might have been surprised by Roe being struck down. Paul Schiff Berman, a George Washington University law professor who clerked for Ginsburg, tried the chatbot and was disappointed with both the style and content of its output, according to The Washington Post. AI21 admits that its creation has limitations, but says it is faithful to the original.
"Regarding both substance and style, the model is sometimes uncannily accurate, sometimes correct but rather bland and uninformative, and sometimes though infrequently plainly wrong, saying things the late judge would never say," says Yoav Shoham, a cofounder of AI21 and a former AI professor at Stanford University.
The company's goal in creating the chatbot was to prompt users to confront both their highest expectations and darkest fears about AI, while understanding the limits of the technology. Shoham said AI21 plans other educational AI experiences, but did not elaborate. He added that the chatbot is "absolutely not intended for commercial use."
Alexa, talk like Grandma
Other companies seem to have no qualms about selling simulations of the departed. Amazon said it is developing technology that would allow its Alexa service to clone the voice of a dead loved one from less than a minute of recorded audio.
Alexa already can be set to speak in the voices of celebrities such as Samuel L. Jackson. At its re:MARS (Machine learning, Automation, Robotics, and Space) event in Las Vegas in June, the e-commerce giant showed a video in which a boy asks Alexa to have his late grandmother finish reading The Wizard of Oz. The device complies, switching to an elderly woman's voice as the boy follows along with the novel, smiling.
"While AI can't eliminate that pain of loss, it can definitely make their memories last," Rohit Prasad, head scientist for Alexa, told re:MARS attendees. "We are unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality."
The announcement drew headlines suggesting the technology could make users feel uncomfortable. Some compared it to "Be Right Back," an episode of the British science fiction series Black Mirror in which a grieving woman uses an AI service to interact with a synthetic recreation of her boyfriend, who died in a car accident.
As technological mimicry of human expression increases in sophistication, experts warn we are not ready for potential unintended side effects.
Lee Yi-Chieh, a human-computer interaction researcher at the National University of Singapore, warns that the technology could be used deceptively, as in the case of deepfakes. He adds that there is little research exploring how humans build relationships with machines over the long run.
"Recent studies have shown that people may develop some kind of relationship and trust with an AI agent equipped with active listening and self-disclosure skills," says Lee. "Although the users clearly knew they were talking with a computer agent, they still 'felt' some intimacy with the agent, and this feeling might be enhanced over time."
Aylin Caliskan, an assistant professor in the University of Washington's Information School who specializes in AI ethics, says synthetic characters that create an illusion of reality or connection could amplify the values and biases embedded in the machine.
"Given the dual-use cases of technology and unintentional side effects, such human-machine interaction might evolve to impact the human mind and psychology significantly," says Caliskan. "Deploying this type of AI at scale without rigorous scientific investigation of its harms and adhering to ethical principles poses a dangerous threat to society."
Lee believes the technology may have positive commercial applications, such as restoring corrupted recordings of songs and other audio. Emulating the dead, however, is not among them, he says.
"Over time, bereaved persons still have their life to live, and they must accept death instead of avoiding it. However, the idea of creating a realistic AI of a departed person might prolong this grief."
Tim Hornyak is a Canadian journalist based in Tokyo, Japan, who writes extensively about technology, science, culture, and business in Japan.