
Laureates Look Past the Hype in Generative AI

Attendees at this year's Heidelberg Laureate Forum anticipate future applications of the technology.
[Image: Heidelberg, Germany. Credit: Shutterstock]

When ChatGPT was released less than a year ago, it seemed to mark a turning point for artificial intelligence (AI). The chatbot created by OpenAI made headlines for its sophisticated, human-like text conversations, made possible by generative AI—algorithms that learn patterns from the data on which they are trained and produce output with similar characteristics. The technology had previously made waves for its image-generation capabilities, with systems such as DALL-E and Stable Diffusion producing photorealistic visuals from text descriptions provided by humans.

Generative AI has attracted plenty of hype. Some experts have even speculated that such systems could be sentient, as an engineer working on a Google AI called LaMDA did last year. Others, however, argue their output is less impressive than it seems.

In September, generative AI was a hot topic at the Heidelberg Laureate Forum, a networking conference in Heidelberg, Germany, where young mathematics and computer science researchers spend a week interacting with laureates in their fields. Several of the laureates present were recipients of the ACM A.M. Turing Award, given for major contributions to computer science and often referred to as the 'Nobel Prize of Computing'; they discussed potential uses of the technology and their outlooks for it.

Generative AI can now be used to write essays and computer code, accelerate drug discovery, create new designs, and make personalized product recommendations, among other things. However, Raj Reddy, a Turing Award laureate and professor of computer science and robotics at Carnegie Mellon University in Pittsburgh, is particularly interested in how it can help reduce language and literacy divides in society. In his native India, for example, there are 22 official languages, meaning people from neighboring communities often are unable to speak to each other.

At the same time, Reddy described how economic activity would increase if more people were able to communicate with each other. This follows Metcalfe’s law, first proposed by last year’s ACM A.M. Turing Award recipient Bob Metcalfe, which states that the value of a network is proportional to the square of the number of its users. “I think speech and language are going to be central to the progress of humanity in the future,” says Reddy.
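To make the arithmetic of Metcalfe's law concrete, here is a minimal worked example; the proportionality constant k is a modeling assumption, as the law itself only fixes the quadratic shape:

```latex
% Metcalfe's law: network value grows as the square of the user count.
V(n) = k\,n^{2}
\qquad\Longrightarrow\qquad
\frac{V(2n)}{V(n)} = \frac{k\,(2n)^{2}}{k\,n^{2}} = 4
```

On this estimate, bridging two equally sized language communities so that everyone can talk to everyone doubles the number of connected users and quadruples the modeled value of the network.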

Reddy is excited about how generative AI could be used for instant translation systems. The idea is that your phone would translate what you say, in real time, into the language of the person with whom you want to communicate, whether over a wireless connection or face to face. Systems such as Google Translate already help to some extent, but Reddy thinks generative AI will significantly improve what is possible over the next 15 to 20 years. "It's still not quite smooth and it doesn't work for all languages," he says. "That's the work we still have to do."
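Such a system implies a three-stage pipeline: recognize speech, translate the text, and synthesize speech in the target language. The sketch below shows that shape only; all three stage functions are hypothetical placeholders standing in for real models, not any actual API:

```python
def transcribe(audio_chunk: bytes, source_lang: str) -> str:
    """Hypothetical speech-to-text stage (stand-in for a real ASR model)."""
    return "<recognized text>"

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Hypothetical machine-translation stage (stand-in for a real MT model)."""
    return f"<{text} rendered in {target_lang}>"

def synthesize(text: str, target_lang: str) -> bytes:
    """Hypothetical text-to-speech stage (stand-in for a real TTS model)."""
    return text.encode()

def interpret(audio_chunk: bytes, source_lang: str, target_lang: str) -> bytes:
    """Chain the stages: what one person says comes out, spoken,
    in the other person's language."""
    text = transcribe(audio_chunk, source_lang)
    translated = translate(text, source_lang, target_lang)
    return synthesize(translated, target_lang)

# Example: a Hindi speaker talking to a Tamil speaker.
spoken_reply = interpret(b"<audio frames>", source_lang="hi", target_lang="ta")
```

In a real system the pipeline would run continuously on short chunks of audio, and keeping every stage fast and accurate enough for natural conversation is part of what makes smoothness hard.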

Generative AI systems are not always accurate, which can be a concern. ChatGPT and other chatbots are known to sometimes generate false information and present it as fact, a failure mode often described as hallucination. Leslie Valiant, a Turing Award laureate and T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University, thinks current generative AI systems are well suited to entertainment, or to applications in which a wrong decision has no serious consequences. However, he is skeptical about using such systems in applications where you need to trust the outcome. "[Using] AI when it may kill someone if you make the wrong decision, that's much more difficult," he says.

Generative AI produces output by learning from vast quantities of data, something it can do better than humans because it can process far more information than any person could. However, there are other aspects of intelligence that it does not incorporate. "Our cognitive abilities are [made up of] different things put together: some reasoning, some learning," says Valiant. "[Generative AI] is just doing one thing, but doing it very well."

Valiant thinks generative AI systems can be improved by giving them reasoning capabilities. Learned knowledge has a degree of uncertainty to it, but current systems such as ChatGPT are not able to reflect on whether what they have generated makes sense. “Traditional logic is very disparate from machine learning,” says Valiant. “I think the next step is to put reasoning front and center in AI systems and to integrate it with learning.”
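One simple way to picture that integration, offered here only as a sketch and not as Valiant's own proposal: let a learned model propose answers, and let a separate reasoning step accept or reject them. Both functions below are hypothetical placeholders:

```python
def generate(prompt: str, attempt: int) -> str:
    """Hypothetical learned generator (stand-in for a model like ChatGPT)."""
    return f"<candidate answer {attempt} for: {prompt}>"

def check_consistency(answer: str, constraints: list[str]) -> bool:
    """Hypothetical reasoning step: does the answer respect known facts
    or constraints? (A toy membership check stands in for real logic.)"""
    return all(term in answer for term in constraints)

def answer_with_reasoning(prompt: str, constraints: list[str],
                          max_attempts: int = 3) -> str | None:
    """Only return a candidate the reasoner accepts; otherwise abstain
    rather than present an unchecked guess as fact."""
    for attempt in range(max_attempts):
        candidate = generate(prompt, attempt)
        if check_consistency(candidate, constraints):
            return candidate
    return None
```

The point of the loop is the division of labor: learning supplies candidates, each carrying some uncertainty, and reasoning decides whether a candidate makes sense before it reaches a user.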

Using generative AI systems maliciously is another concern. Reddy is worried about the misuse of deepfakes—AI-generated images, audio, or video that seem real—which can, for example, depict a politician saying something controversial that they never actually said. "Governments are trying to come to grips with how to handle the good and the bad [of generative AI technology]," says Reddy.

At the same time, computer scientists are developing ways to fight AI impersonation. Many involve using AI itself to detect slight anomalies in synthesized media that identify it as deepfaked (in early deepfake videos, for example, the synthesized images of people did not blink). "Now, in addition to fact-checking text materials, you need to be able to fact-check audio and video materials," says Reddy. "That is something that can be done, and will increasingly be done."
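As a toy illustration of the blink-anomaly idea, not a real detector: suppose a facial-landmark model has already reduced a video to one 'eye openness' score per frame (0 = closed, 1 = open). A hypothetical checker can then flag clips whose blink rate is implausibly low, as it was in early deepfakes:

```python
def count_blinks(openness: list[float], closed_below: float = 0.2) -> int:
    """Count transitions from open to closed eyes across frames."""
    blinks, was_closed = 0, False
    for score in openness:
        is_closed = score < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_synthetic(openness: list[float], fps: float = 30.0,
                    min_blinks_per_minute: float = 2.0) -> bool:
    """Flag a clip whose subject blinks far less often than people do
    (humans typically blink roughly 15 to 20 times per minute)."""
    minutes = len(openness) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(openness) / minutes < min_blinks_per_minute
```

Modern deepfakes do blink, so real detectors look for subtler statistical traces, but the shape of the approach is the same: find a regularity of genuine footage that the generator fails to reproduce.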

There is also the fear that generative AI could threaten society by becoming superintelligent and hard for humans to control. Valiant's 'ecorithm' theory holds that algorithms interact with their environments and learn from them, a process that can occur in computational systems and in biological ones, like a brain. This suggests that in certain environments, an AI system could start to act in unexpected, and potentially harmful, ways.

However, Valiant is not worried about such a scenario taking place with generative AI, since he thinks we understand the intelligence we put in machines well enough. “Unless you put some characteristics into them which are against our interests, there is no reason that they [will] develop characteristics that are against our interests,” he says. “So I don’t see that [generative AI getting out of control] is a big issue in the foreseeable future.”

Sandrine Ceurstemont is a freelance science writer based in London, U.K.
