Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called “deepfakes” (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakesa poses a new challenge to the trustworthiness of visual experience and has already produced negative consequences such as nonconsensual pornography,11 political disinformation,19 and financial fraud.3 Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by causing reputational damage, and harm society by undermining societal values such as trust in institutions.7 What can be done to mitigate these harms?
It will take the efforts of many different stakeholders, including platforms, journalists, and policymakers, to counteract the negative effects of deepfakes. Technical experts can and should play an active role. They must marshal their expertise, including their understanding of how deepfake technologies work and their insight into how the technology can be further developed and used, and direct their efforts toward solutions that allow the beneficial uses of synthetic media technologies while mitigating the negative effects. Although successful interventions will likely be interdisciplinary and sociotechnical, technical experts should contribute by designing, developing, and evaluating potential technical responses and by collaborating with legal, policy, and other stakeholders to implement social responses.
The Responsibilities of Technical Experts
Deepfakes pose an age-old challenge for technical experts. Often as new technologies are being developed, their dangers and benefits are uncertain and the dangers loom large. This raises the question of whether technical experts should even work on or with a technology that has the potential for great harm. One of the best-known and weightiest versions of this dilemma was faced by scientists involved in the development and use of the atomic bomb.18 The dilemma also arose for computer scientists as plans for the Strategic Defense Initiative were taking shape14 as well as when encryption techniques were first debated.13
Although some technical experts may decide not to work on or with the synthetic media technologies underlying deepfakes, many will likely attempt to navigate more complicated territory, trying to avoid doing harm while reaping the benefits of the technology. Those who take this route must recognize that they may nonetheless enable negative social consequences and must take steps to reduce this risk.
Figure. A deepfake video from a December 25, 2020, posting “Deepfake Queen: 2020 Alternative Christmas Message” (source https://youtu.be/IvY-Abd2FfM).
Responsibility can be diffuse and ambiguous. Any deepfake involves multiple actors who create the deepfake, develop the tool used to make it, provide the social media platform for amplification, redistribute it, and so on. Since multiple actors contributed, accountability is unclear, setting the stage for a dangerous blame game in which no one is held responsible. Legal interventions will also be stymied by difficulties in determining jurisdiction for punishing deepfake creators,5 and by the need to strike a balance with free speech concerns for platform publication.18 Still, ethically, each actor is responsible for what they do as well as what they fail to do, particularly if a negative consequence might have been averted. Technical experts have an ethical responsibility to avoid or mitigate the potential negative consequences of their contributions.
Consider DeepNude, an app that converts images of clothed women into nude images. It is not only end users who do harm with the app. The developer is reported to have said that he did not expect the app to go viral, and later withdrew it from the marketplace.6 In the developer's defense, some could consider him thoughtless but not ill-intentioned. This, however, misses the fact that the tool was designed for a purpose that inherently objectifies women. The negative outcome of the app was not difficult to foresee, and the developer bears some responsibility for the harm caused.
Many technical experts will work on more generic synthetic media technologies that have diverse applications and uses that even they cannot foresee. But despite the uncertainty of future uses, they are still not entirely off the hook ethically. Responsibility in this case is less about blame than about making conscientious efforts to identify the potential uses of their creations in the hands of a variety of users with ill as well as good intent.4 NeurIPS, a premier conference in the field of AI, is trying to enforce this ethical responsibility by requiring submissions to include a “Broader Impact” section that addresses both potential positive and negative social impacts.b Technical experts must go a step further, though: not just thinking or writing about social impacts, but designing tools and techniques that limit the possibility of harmful or dangerous use.
How to Be Part of the Solution
Individually and collectively, the behavior of technical experts in the field of synthetic media is coming under scrutiny. They should be expected to, and should expect one another to, behave in ways that diminish the negative effects of deepfakes. Research and development of synthetic media will be better served if technical experts see themselves as part of the solution, and not the problem. Here are three areas where technical experts can make positive contributions to the development of synthetic media technologies: education and media literacy, subject defense, and verification.
Education and Media Literacy. Technical experts should speak out publicly (as some already have) about the capabilities of new synthetic media. Deepfakes have enormous potential to deceive viewers and undermine trust in what they see, but the possibility of such deception is diminished when viewers understand synthetic media and what is possible. For example, if individuals were taught to spot characteristic flaws that might give deepfakes away, they would be empowered to use their own judgment about what to believe and what not to believe. More broadly, media-literate people can verify and fact check the media they consume and are, therefore, less likely to be misled. While many stakeholders, from journalists to platforms and policymakers, can contribute to increased education and media literacy, technical experts are crucial.
Because of their knowledge, technical experts are in the best position to identify the limitations of deepfakes and recommend ways that viewers and fact checkers can learn to recognize those limitations. For example, some early deepfake methods could not convincingly synthesize eyes, so individuals could be taught to examine eyes and blinking carefully. Of course, the technology is changing rapidly (newer methods synthesize eyes accurately), so technical experts must be at the forefront of translating the latest technical capabilities into guidelines. Technical experts could also support media literacy by pushing for a norm that those who publish new methods for media synthesis also specify how media synthesized with those methods could be detected; including this information in publicly available publications would make such guidance broadly accessible.
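To make this concrete, the sketch below shows the kind of simple heuristic such guidelines once pointed to: computing an eye aspect ratio from facial landmarks and flagging clips whose subject blinks implausibly rarely. The extract_eye_landmarks helper, the thresholds, and the frame interface are all assumptions for illustration (any off-the-shelf facial-landmark detector returning six points per eye would do); newer synthesis methods reproduce blinking well, so this is a teaching aid rather than a reliable detector.

```python
# A minimal sketch of a blink-rate heuristic, assuming a hypothetical
# landmark extractor that returns six (x, y) points per eye.
import numpy as np

EAR_BLINK_THRESHOLD = 0.21   # assumed cutoff: eye treated as closed below this
MIN_BLINKS_PER_MINUTE = 5    # assumed lower bound for natural blinking

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """Eye aspect ratio from six landmarks ordered p1..p6 around the eye."""
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def blink_rate(frames, fps: float, extract_eye_landmarks) -> float:
    """Count closed-to-open transitions and convert to blinks per minute."""
    blinks, eye_closed = 0, False
    for frame in frames:
        left, right = extract_eye_landmarks(frame)   # hypothetical helper
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < EAR_BLINK_THRESHOLD:
            eye_closed = True
        elif eye_closed:
            blinks += 1          # eye reopened: one blink completed
            eye_closed = False
    minutes = len(frames) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(frames, fps, extract_eye_landmarks) -> bool:
    """Flag clips whose subject blinks implausibly rarely."""
    return blink_rate(frames, fps, extract_eye_landmarks) < MIN_BLINKS_PER_MINUTE
```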
Subject Defense. Technical experts should contribute to the development of technical strategies that help individuals avoid becoming victims of malicious deepfakes. While viewers can be deceived by deepfakes, those who are depicted in deepfakes can also be harmed. Their reputations can be severely damaged when they are falsely shown to be speaking inappropriately or engaged in sordid behavior. In addition, the subjects of deepfakes have their persona (their likeness and voice) taken and used without their consent, resulting in misattribution that either exploits or denigrates their reputation, depending on the goals of the deepfake creator. Deepfakes may also be used to threaten and intimidate subjects.
Here there are a variety of technical approaches that experts could take. They can develop more sophisticated identity monitoring technology that could alert individuals when their likeness appears online. An individual could enroll using a sample photo, video, or audio clip, and be notified if their likeness (real or synthetic) appeared on particular platforms. Of course, this type of response would come with difficult sociotechnical challenges, including obtaining the cooperation of platforms to provide data for monitoring and addressing the resulting privacy implications. Other approaches to subject defense could involve everything from watermarking and blockchain to new techniques to limit the accessibility, usability, or viability of training data for deepfake model development. Chesney and Citron5 suggest the development of immutable life logs tracking subjects’ behavior so that a victim can “produce a certified alibi credibly proving that he or she did not do or say the thing depicted.” These are only a few suggestions; the point is that technical experts should help develop ways to counteract the negative effects of deepfakes for individuals who may be targeted.
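As an illustration of the enrollment-and-alert idea, the sketch below matches newly observed images against enrolled reference embeddings using cosine similarity. The face_embedding function, the unit-length embedding assumption, and the similarity threshold are stand-ins for whatever face-recognition model and tuning a real monitoring service would use.

```python
# A minimal sketch of identity monitoring, assuming a hypothetical
# face_embedding function that maps an image to a unit-length vector.
import numpy as np

MATCH_THRESHOLD = 0.7  # assumed cosine-similarity cutoff; tuned in practice

class IdentityMonitor:
    def __init__(self, face_embedding):
        self.face_embedding = face_embedding   # hypothetical embedding model
        self.enrolled = {}                     # subject id -> reference embedding

    def enroll(self, subject_id: str, sample_image) -> None:
        """Register a subject from a sample photo they provide."""
        self.enrolled[subject_id] = self.face_embedding(sample_image)

    def check(self, platform_image):
        """Return subjects whose likeness appears to match the new image."""
        candidate = self.face_embedding(platform_image)
        matches = []
        for subject_id, reference in self.enrolled.items():
            similarity = float(np.dot(candidate, reference))  # cosine for unit vectors
            if similarity >= MATCH_THRESHOLD:
                matches.append((subject_id, similarity))
        return matches  # the platform or service would notify these subjects
```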
Verification. Technical experts should develop and evaluate verification strategies, methods, and interfaces. The enormous potential of deepfakes to deceive viewers, harm subjects, and challenge the integrity of social institutions such as news reporting, elections, business, foreign affairs, and education makes verification strategies an area of great importance.
Verification techniques can be a powerful antidote because they make it possible to identify when video, audio, or text has been manipulated. While state-of-the-art detection systems may reach accuracy in the 90%+ range,1 they are also typically limited in scope; that is, they may work on familiar datasets but struggle to achieve comparable accuracy on unseen data or media “in the wild.”8 For instance, a reduction in visual encoding quality or the fine-tuning of a model on a new dataset may challenge the detector.2,16 Technical research on automated detection continues, with the recent Deepfake Detection Challenge drawing thousands of entries and resulting in the release of a vast dataset to help develop new algorithms.8 To spur work in this area, NIST has organized the Media Forensics Challenge over the past several years,c and other workshops on Media Forensics have also convened to advance research and share best practices.d Another avenue for further technical work is in building human-centered interactive tools to support semiautomated detection and verification workflows.9,10,17
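The sketch below illustrates the kind of cross-dataset check that exposes this generalization gap: evaluate the same detector on the data distribution it was built for and on unseen, “in the wild” media, and report the accuracy drop. The detector scoring function and the two labeled evaluation sets are assumptions, not a specific published system.

```python
# A minimal sketch of a cross-dataset generalization check, assuming a
# detector that returns P(manipulated) and datasets of (clip, label) pairs
# where label 1 means manipulated.

def accuracy(detector, dataset, threshold: float = 0.5) -> float:
    """Fraction of clips the detector labels correctly."""
    correct = sum(
        int((detector(clip) >= threshold) == bool(label))
        for clip, label in dataset
    )
    return correct / len(dataset)

def generalization_gap(detector, seen_set, unseen_set) -> float:
    """Accuracy drop from familiar data to unseen, in-the-wild data."""
    return accuracy(detector, seen_set) - accuracy(detector, unseen_set)
```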
In practice a combination of automated and semiautomated detection may be most prudent.15 Ultimately, once verification tools are developed there will be yet another layer of sociotechnical challenges for tool deployment, from considering adversarial scenarios and access issues, to output explanations and integration with broader media verification workflows.12
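One way to picture such a combination is a simple triage policy: confident detector scores are handled automatically, while ambiguous cases are routed to human analysts. The sketch below assumes a hypothetical detector returning a probability of manipulation and illustrative thresholds; a deployed workflow would also carry provenance and context to the reviewer.

```python
# A minimal sketch of semiautomated triage, assuming a hypothetical detector
# that returns an estimated probability that a clip was manipulated.

LIKELY_FAKE = 0.9    # assumed upper threshold for automatic flagging
LIKELY_REAL = 0.1    # assumed lower threshold for automatic passing

def triage(clip, detector) -> str:
    """Route a clip based on the detector's confidence."""
    score = detector(clip)            # hypothetical model: P(manipulated)
    if score >= LIKELY_FAKE:
        return "auto-flag"            # e.g., label or downrank pending review
    if score <= LIKELY_REAL:
        return "auto-pass"
    return "human-review"             # uncertain band goes to analysts
```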
There is no doubt that synthetic media can be used for beneficial purposes, such as in entertainment, historical reenactment, education, and training. The pressing challenge is to reap the positive uses of synthetic media while preventing or at least minimizing the harms. We are encouraged by efforts in industry and academia to grapple directly with ethics and societal impact as new innovations in synthetic media advance.e And, as we laid out in this column, there are numerous opportunities to direct effort toward guarding against some of the worst outcomes. The challenge can only be met with the sustained efforts of technical experts. Let’s get to it!