Viewpoint

‘Have You Thought About …’: Talking About Ethical Implications of Research

Considering the good and the bad effects of technology.

How do researchers talk to one another about the ethics of our research? How do you tell someone you are concerned their work may do more harm than good for the world? If someone tells you your work may cause harm, how do you receive that feedback with an open mind, and really listen? I find myself lately on both sides of this dilemma—needing both to speak to others and to listen more myself. It is not easy on either side. How can we make those conversations more productive?

We are at an unprecedented moment of societal change brought on by new technologies. We create things with both good and bad possible uses, and with implications we may or may not be able to meaningfully foresee. Technologies have both intended and unintended consequences [5]. To complicate things, we inevitably lose control of the technologies we create. Our ethical responsibility as creators is difficult to pin down. But one thing we can and should do is talk about those implications—relentlessly. Even when the conversations are difficult.

This semester, I assigned the students in my class "Computing, Society, and Professionalism" to listen to a podcast from Planet Money, "Stuck in China's Panopticon" (see https://www.npr.org/2019/07/05/738949320/episode-924-stuck-in-chinas-panopticon). The podcast documents unprecedented levels of surveillance being used to oppress China's Uighur minority. The Chinese police are creating comprehensive profiles of each Uighur person, including their DNA, face, voice, and even their gait. The technology that can determine a person's ethnic background from their DNA was developed by Yale geneticist Kenneth Kidd. Years ago, Kidd allowed a researcher from the Chinese Ministry of Public Security to spend time in his lab learning how his techniques worked, and he shared DNA data with them. Those techniques and data were later used to oppress the Uighurs. But at the time, it was pure research. Planet Money asks Kidd if he regrets collaborating with the Chinese secret police, and he says he "[couldn't] know everything that's going to happen in the future." On the other hand, the Uighur man profiled in the story is outraged at what Kidd did, saying he should have known. How do we resolve this standoff?


The Dilemma of Good and Bad Uses

Another example of a technology with good and bad uses is face recognition. For many years, I have been disturbed by the social implications of face recognition technology. It is possible to use face recognition responsibly, of course. But any technology, once developed, will eventually become widely available, including to less-responsible individuals. Yes, we can use face recognition technology for simple conveniences like unlocking our phones, and also for important security applications. But this technology will also inevitably fall into the hands of state and non-state actors who will use it in oppressive ways (like the Chinese Ministry of Public Security). When you think about it, it is frightening.

I was against this technology for many years. That is, until we had a blind student in our department who told me that working face recognition software would change his life. If he could easily know who was in the room with him, it would be transformative.

Face recognition has good and bad uses. The number of people who will be harmed by it in less-free societies (and maybe in our own) greatly exceeds the number of blind people who will be helped by it. So all in all, I personally would not do research on it. But I realize I may not fully understand its implications, just as I initially did not understand its importance for the blind. How can any of us fully understand the implications of something so transformative?

It is tempting to just say, "face recognition is going to be developed no matter what I do," and shrug and go about our own business. But I think that's a cop-out. I am not an expert on face recognition, but I have colleagues who are. How can I talk with them about it? Ethics are discursive—ethical understandings emerge from conversation. But we're not talking about the ethical issues of new technology enough.

Here, I am using face recognition as a stand-in for all technologies that have potentially strong negative consequences (as well as good ones). And there are a lot of them. We are approaching an era of rapid change in privacy norms, in the jobs the economy supports, in the degree of education those jobs require, and in the economic inequality that the structure of the new workforce will generate if we do not have the political will to balance it better. While much of the power to shape what happens next is in the hands of policymakers, some of it is in the hands of the people inventing these technologies.


I teach professional ethics to our undergraduates, and one course topic is how best to raise ethical issues that come up in an organization. The first rule is: always go through internal channels before you contact people outside your organization. For conversations about ethical research, the analogous principle is: always talk to the researcher first, in private, before you make public pronouncements. If you are attending a public presentation about the work and there is a Q&A session, ask a polite question. You might, at the start, use hedging language: "Have you thought about …" or "I'm concerned about …" You could also offer to follow up with the author, and try to catch them privately. A key principle for delicate ethical discussions is to give people opportunities to save face. Someone is more likely to listen if they can plausibly think that doing the right thing was their idea all along. If you antagonize them, they will just dig in their heels.

You always need to think about who you are trying to influence. If you are trying to influence researchers, you need to talk to them in ways they can hear. Try to validate their goals and present the change you are suggesting as a modest deviation from their current plan (even if a bigger change is what you are really hoping for).

Criticizing the ethics of someone else's research requires humility. It is their research. Unless you are in exactly the same field, they know more about it than you do. They may have considered the issues you are worried about and thought them through. They may be several steps ahead of you. They may also share some of your ethical concerns but not express them explicitly because, to them, they are simply obvious.

Consider the possibility that your critique is misguided. When you use the phrasing "Have you thought about …," it should not just be an attempt at being polite but also a sincere acknowledgment that they may well have thought about it—a lot. They may be way ahead of you.

People do not adjust the way they think about things right away. Helping someone to rethink ethical implications of their work takes time—often years. And it is not fun. It is so much easier to "stay in your lane" and shrug off problematic work. But maybe we can help one another to see important issues if we focus on politeness and humility.

If you see something that you sincerely believe (after checking your knowledge and assumptions more than once) is going to cause immediate harm, it may be necessary to set politeness and humility aside. But that is a rare occurrence. Most of the time, things are subtle, and a more delicate approach is strategic.


Changing the System

ACM had the great idea a few years ago to create a "Future of Computing Academy," gathering together some of the brightest young computing researchers. The group came up with an ambitious plan to help draw more attention to the social implications of technology. They proposed that "peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative" [3]. The plan is forward-thinking and insightful, though there are practical details that need working out. Presumably, under this plan, work would be evaluated on how honest you are about possible implications, not on how bad the implications are. But what if the consequences are difficult to foresee? What if the consequences are potentially really scary? The details make my head hurt. That said, it is a first step toward taking these implications more seriously. The challenge is how to convince ACM and other professional societies and publishers that this is important and necessary.


A Portfolio of Approaches

There are many things we need to do to better shape the future of technology. At the individual level, we can encourage technologists (especially students and young professionals) to choose to work on things they believe will make a positive difference.

While individual choices matter, the big and pressing problems require a more coordinated approach. I am encouraged to see the beginnings of collective action in the technology industry. For example, in 2018 over 3,000 Google employees signed a letter protesting the company's participation in a military AI initiative called Project Maven [2], and as a result the company chose not to renew its contract for this work [4].

In addition to individual choice and collective action, the third key strategy is policy. In May 2019, San Francisco proactively outlawed the use of face recognition technology by police [1]. San Francisco police were not actually using face recognition, and the technology does not yet work reliably, but policymakers are anticipating consequences and doing something about them in advance. We need more politicians who really understand technology, and we need to hold them accountable for making forward-thinking policy changes rather than simply reacting to disastrous situations after the fact.


Geneticist Kenneth Kidd says he could not have predicted how his technology would be used by the Chinese secret police. Maybe he would have realized the implications if colleagues had talked with him about it. Talking to one another is just one of a host of strategies needed to take the implications of new technologies seriously and to help shape them. We all need to make the effort (even when it is uncomfortable) to say, "Have you thought about …," and to listen with an open mind when someone says that to us.
