Opinion

Warnings!

Computer technologists have work to do to help societies cope with the potentially harmful effects of media scale while protecting the provisions of the Universal Declaration of Human Rights.


As I write this at the end of October 2024, artificial intelligence (AI) continues to be Topic A in many discussions. So too are recommendation algorithms in social media. Misinformation and disinformation rank high across many areas of socio-economic concern. We are even seeing misinformation about the Federal response to severe storms interfering with our ability to render aid. Why is it that we are attracted to, and respond so readily to, alarming information?

I have a rather unscientific theory about this. Well, it isn't grounded in solid data, but it is a cartoon model of the way I think about the phenomenon. I suspect sensitivity to warnings is a genetic survival trait for all species, especially those with some level of cognition, and I include non-human species in that category. Warning calls are common across many species. Humans have benefited from such warnings by surviving to contribute to the gene pool; many who ignored warnings did not survive and did not contribute. Thus, when we read, see, or hear warnings, we respond almost automatically. “It’s a bear! Run!” (Actually, I hear that running from a bear is bad advice.)

Social media influencers take advantage of recommendation algorithms that steer users toward perceived interests, and of the scale at which these systems operate. The same mechanisms that might select advertisements of interest may also steer users toward information, including warnings that appear to be of interest or concern. None of this is a new realization. My long-time friend and colleague, Peter G. Neumann, drew attention to this in a 2001 Communications article that is as relevant now as it was then, maybe even more so.

This is not the first time I have written about this phenomenon. The mix of accurate, inaccurate, and deliberately misleading information reinforces my belief that training in critical thinking is needed now more than ever. We rely on many more sources of information today than we did in the past, in part because virtually anyone with access to the Internet and World Wide Web is in a position to post his or her views to a global audience. In the past, fewer sources might have meant that information consumers could exercise more due diligence on the sources they chose to rely on. The proliferation of sources increases both the need for content provenance and the utility of assessing the sources themselves.

This kind of filtering is not new. We don’t read every book, newspaper, or magazine; watch every movie or television show; or listen to every broadcast. We don’t even pay attention to every social media site on the ‘Net. We select these based on recommendations from parties we trust, often including our friends or organizations we belong to.

We could use some technical help, however, as we wrestle with assessing the provenance of the information we encounter. Digital signatures and reliable registration of information sources might help. Anonymous speech, while of value in some circumstances (such as whistleblowing), is generally prone to harmful abuse because the source may believe itself immune to the consequences of spreading disinformation. The problem is exacerbated by people who spread information without checking it, either deliberately or out of a naive belief that it is correct or relevant. Elections in this century have been affected by deliberate misinformation campaigns sourced anonymously or by parties whose identity is deliberately obscured.
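To make the provenance idea concrete, here is a minimal sketch in Python of how signed content could be checked against a publisher's public key. It assumes the `cryptography` package; the function names `sign_statement` and `verify_statement` are illustrative inventions, not part of any deployed standard.

```python
# Minimal sketch of content provenance via digital signatures,
# using the Python "cryptography" package (pip install cryptography).
# Function names here are illustrative, not a standard API.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_statement(private_key: Ed25519PrivateKey, statement: str) -> bytes:
    """A publisher signs a statement; the signature travels with it."""
    return private_key.sign(statement.encode("utf-8"))


def verify_statement(
    public_key: Ed25519PublicKey, statement: str, signature: bytes
) -> bool:
    """A reader checks the signature against the publisher's public key,
    which would come from some reliable registry of sources."""
    try:
        public_key.verify(signature, statement.encode("utf-8"))
        return True
    except InvalidSignature:
        return False


# A publisher generates a key pair and signs a claim.
publisher_key = Ed25519PrivateKey.generate()
claim = "Storm relief supplies arrive Tuesday."
sig = sign_statement(publisher_key, claim)

# Verification succeeds for the original text...
assert verify_statement(publisher_key.public_key(), claim, sig)
# ...and fails if the content was altered in transit.
assert not verify_statement(publisher_key.public_key(), claim + " (edited)", sig)
```

The signature primitive is the easy part; the harder work is the "reliable registration of information sources" that binds a public key to an accountable identity, which is where the policy questions in this column come in.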

I have become persuaded that identity, provenance, and accountability are our friends in this proliferated, online space. Of course, I subscribe to the idea that privacy is an important societal value, but not at the price of ignoring the harms that arise from the abuse of anonymity. The veil of anonymity may need to be pierced under the right judicial conditions. I am not in favor of so-called “backdoor” processes, as they can be abused and have been in the recent past, for example, by hijacking wire-tapping provisions to gain unauthorized access to telephone conversations. I remember well the debate over the so-called “Clipper Chip” in the early 1990s, which would have provided “authorized parties” with the ability to decrypt content encrypted by the chip. Eventually, some unauthorized party will find a way to abuse any such capability.

Plainly, we computer technologists have work to do to help our societies cope with the potentially harmful effects of media scale while protecting the provisions of the Universal Declaration of Human Rights.
