
Imbuing AI with Ethics

In the absence of regulations, companies are starting to develop their own guidelines to control the use of artificial intelligence.


In both the consumer and corporate realms, generative artificial intelligence is spreading like wildfire. The technology, a subset of artificial intelligence (AI), can generate high-quality text, images, audio, and other content. As such, questions about bias and ethics, issues that began cropping up as organizations started embedding AI into their systems and processes in the past few years, are garnering even more attention now.

AI is being used more frequently to automate processes and increase efficiencies, as well as to compensate for talent shortages and reduce costs. Now, with generative AI in the spotlight, experts say it has never been more important to consider ethical and safe practices.

“With generative AI gaining rapid momentum and entering the mainstream, its growing popularity is yet another reason responsible AI—developing and deploying AI in an ethical manner—should be a top concern for organizations looking to use it or protect themselves against its misuse,” according to a KPMG report.

One of the risks concerns intellectual property, because generative AI uses neural networks that are often trained on large datasets to create new text, images, audio, or video based on patterns it recognizes in the data it has been fed, KPMG notes.
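To make that mechanism concrete, the short sketch below (an illustration added here, not drawn from the KPMG report) uses the openly available Hugging Face transformers library and the small GPT-2 model to generate new text by extending a prompt according to patterns the model learned from its large training corpus; the model name, prompt, and generation settings are arbitrary examples.

```python
# Minimal sketch: text generation with a model pre-trained on a large public corpus.
# Assumes the Hugging Face "transformers" library is installed (pip install transformers).
from transformers import pipeline

# "gpt2" is a small, openly available generative model, used here only as an example.
generator = pipeline("text-generation", model="gpt2")

prompt = "Responsible AI means"
# The model continues the prompt based on statistical patterns in its training data.
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The point of the sketch is only that the output is synthesized from patterns in the training data, which is why questions about the provenance and ownership of that data, and of the content generated from it, arise.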

Other risks include the potential for employee misuse, potential inaccuracies in the material it generates, and use for the creation of deepfake images.

“Those risks we see today have always been part of AI, such as bias,” which became a greater concern when law enforcement started using facial recognition technology, notes Beena Ammanath, executive director of the Global Deloitte AI Institute, and author of the book, Trustworthy AI. “Generative AI is amplifying and accelerating the risks.”

It is likely only a matter of time before regulations are enacted around AI. The Biden Administration has secured voluntary commitments from AI companies Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI in a number of areas, including research on bias and privacy concerns, and transparency to identify AI-generated materials. The move to self-govern is seen by some as an effort to shape eventual legislative and regulatory measures. In 2022, the administration released a “Blueprint for an AI Bill of Rights.”

“Sooner or later organizations will be expected to comply with rules and regulations governing generative AI,” says Emily Frolick, U.S. Trusted Imperative leader at KPMG. “Governance around generative AI is being built in parallel with use and can help protect consumers and investors while building stakeholder trust.”

The good news is, organizations are paying closer attention to responsible AI practices.

For example, 58% of AI startups that use training data in product development have established a set of AI principles, according to a 2022 Brookings report based on a survey of 225 AI startups. Startups with data-sharing relationships with tech firms, those impacted by privacy regulations, and those with prior (non-seed) funding from institutional investors were found to be more likely to establish ethical AI principles, the report said.

In addition, the report said, startups with prior regulatory experience with the European Union’s General Data Protection Regulation (GDPR) “are more likely to take costly steps, like dropping training data or turning down business, to adhere to their ethical AI policies.”

Startups “must balance numerous trade-offs between ethical issues and data access without substantive guidance from regulators or existing judicial precedent,” the Brookings report advised.

Established companies are also heeding that advice. PepsiCo, for one, has begun collaborating with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to advance research, education, policy, and best practices for responsible AI adoption. Part of HAI’s research focus is “to improve the human condition,” the institute says.

“We felt there was a big change coming in technology through AI, and it would have a huge impact on society, and Stanford was well-positioned to guide this for humanity,” explains James Landay, vice director of Stanford HAI and a professor in the university’s School of Engineering. “We foresaw this five and a half years ago, and I have to say, it’s happening a little faster than we thought.”

There is no single agreed-upon definition of ethical AI at this point, and often, people also refer to it as “responsible AI” or, as Stanford characterizes it, “human-centered AI,” he says.

Landay doesn’t believe there is a serious lack of ethics around AI, but more “a lack of ethics in general in business. In the end, people will often do the wrong thing if it makes more money, even if it has a negative impact.”

He cites Facebook as an example of a company where evidence has surfaced indicating it has done things “purely to make money, like spread disinformation or content to make people feel bad about themselves.” AI drives a lot of those decisions, according to Landay, who adds that because people are motivated more by making money than by doing the right thing, “teaching people to be more ethical doesn’t always solve that.”

In addition to bias, there are also huge moral questions when AI is built into systems, such as its influence on decisions about who on a heart transplant list should get the next heart, or whether a weapon should be fired, Landay says. While these dilemmas have always existed, they become more problematic when placed in the hands of a machine without transparency into the data it is being fed.

“There is no silver bullet, there is no one answer,” Landay notes. “In the university realm, we have to educate students, especially those building the technology, but also those who will intersect with it in their lives.”

Students must be taught how AI works—and understand that it does not always come up with the right answer. “If people understood how the news they’re getting on Facebook and other social media is influenced by AI, they may have more skepticism,” he says, “so education is one aspect.”

Other approaches to establishing ethics in the context of AI include teaching companies and students to design AI processes that have a positive impact on society and considering legality and policies “to make sure people play by the rules or play safely and determine where liability may be,” Landay says.

Consumer packaged goods company Mars has taken the stance that generative AI must be used responsibly and in a way “that is fair, inclusive, and sustainable,” a spokesperson said. The company has also committed to protecting the privacy and security of its employees, consumers, and partners in its usage of generative AI.

Yet another approach is to ensure a company’s CEO is involved in driving responsible AI; when that is the case, nearly four out of five organizations (79%) report they are prepared for AI regulation, according to a June report from Boston Consulting Group. When the CEO is not involved, the report found, that figure drops to 22%.

Establishing fairness and impartiality guidelines is critical, says Deloitte’s Ammanath. So is privacy: making sure you have the right permissions when creating and training models, she says. Also paramount are safety and security, and ensuring there are guardrails in place.

Additionally, AI models should be transparent, explainable, and accountable. “If an AI misbehaves, who is accountable for it in an organization?” Ammanath says. “A model itself can’t be held accountable for its outputs in a meaningful way.”

The last element is about being responsible in the use of AI, she says. “Take a step back and ask, ‘Is this the right or responsible thing to do? What I’m building or creating, is that being a responsible citizen of the world?’ We live in a world where being responsible citizens can get fuzzy.”

Esther Shein is a freelance technology and business writer based in the Boston area.
