New paper imagines scenarios for deepfakes, elections and ethics
A new paper by Nicholas Diakopoulos, Assistant Professor in Communication Studies and Computer Science at Northwestern University and Director of the Computational Journalism Lab, and Deborah Johnson, who recently retired as the Anne Shirley Carter Olsson Professor of Applied Ethics in the STS Program within the Department of Engineering and Society at the University of Virginia, explores the ethical implications of so-called “deepfake” technologies for elections.
The paper employs a novel method to collect hypothetical scenarios for how deepfake technologies might affect the 2020 elections. Using Amazon’s Mechanical Turk, the authors crowdsourced short stories written in response to stimulus materials that included examples of face swapping and audio synthesis. Focusing on the eight most plausible scenarios the crowd produced, the authors developed a framework for understanding the potential harms and mitigating them, including an analysis of which stakeholders bear responsibility for doing the work of mitigation.
The authors identify four intervention strategies:
- Education and Media Literacy: the first line of defense against deceptive media tactics is an informed public. Since some efforts to expose the existence of deceptive practices may indeed depress trust and make individuals more skeptical of all media, education must be finely tuned and paired with other forms of intervention.
- Subject Defense: Campaigns should develop strategies to prepare for potential attacks that use deepfakes and related synthetic media technologies. “The plans could include legal response strategies for harms specifically relating to defamation, false light, or right of publicity.” To strengthen such strategies, new policies are needed to “buttress legal actions by campaigns while being careful not to chill free expression.” The authors also imagine self-defense tactics such as lifelogging: recording a verifiable record of events with which to debunk potential attacks.
- Verification: Technologies to identify synthetic media exist and are improving. Platforms and publishers will need to invest in and adopt such technologies, and the authors suggest that “technologists, especially those who are building and therefore most familiar with media synthesis techniques, should take responsibility for developing new and better automated detection algorithms and semi-automated verification tools that are easy to use by these various stakeholders.” (A minimal sketch of one such verification aid appears after this list.)
- Publicity Modulation: The authors suggest that one of the main ways to diminish the impact of deepfakes (and, one might argue, of disinformation more generally) is to “throttle the degree of publicity a deepfake can receive by strategically moderating a deepfake’s amplification”. Right of redress, or “counterspeech,” is another important method to modulate the impact of such attacks. (A toy sketch of such throttling follows this list.)
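To make the verification idea concrete, here is a minimal sketch of one semi-automated verification aid: matching incoming images against a database of media already debunked as synthetic, using a perceptual hash. This is an illustration only, not the detection algorithms the authors describe; the hash function, threshold, and known-fakes set are invented stand-ins.

```python
# Toy sketch: flag images that are perceptually close to known fakes.
# Perceptual hashing is a common platform technique for re-upload
# matching; it is a stand-in here, not a full deepfake detector.
from PIL import Image

def average_hash(image_path: str, size: int = 8) -> int:
    """Compute a simple average-hash fingerprint of an image."""
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes for media already debunked as synthetic.
KNOWN_FAKE_HASHES = {0x8F3C21A0B4D5E6F7}  # placeholder value

def flag_for_review(image_path: str, threshold: int = 10) -> bool:
    """Flag an image if it is perceptually close to a known fake."""
    h = average_hash(image_path)
    return any(hamming_distance(h, k) <= threshold for k in KNOWN_FAKE_HASHES)
```

A matched item would then go to a human fact-checker rather than being removed automatically, which fits the paper’s framing of semi-automated tools that are easy for journalists and platforms to use.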
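And to illustrate publicity modulation, here is a toy sketch of throttling a post’s amplification in proportion to a classifier’s confidence that it contains synthetic media. The scoring formula, field names, and penalty cap are all invented for illustration; they are not the authors’ proposal, only one way a platform might implement the throttling idea.

```python
# Toy sketch: scale down a post's ranking score as synthetic-media
# confidence rises, so suspected deepfakes receive less amplification.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement_score: float      # baseline ranking signal (hypothetical)
    synthetic_confidence: float  # 0.0 (likely authentic) to 1.0 (likely fake)

def throttled_score(post: Post, max_penalty: float = 0.9) -> float:
    """Reduce amplification linearly with synthetic-media confidence."""
    penalty = max_penalty * post.synthetic_confidence
    return post.engagement_score * (1.0 - penalty)

feed = [
    Post("candidate speech clip", 120.0, 0.05),
    Post("suspected face-swap video", 300.0, 0.95),
]

# Rank the feed with throttling applied: the suspected deepfake drops
# below the authentic clip despite its higher raw engagement.
for post in sorted(feed, key=throttled_score, reverse=True):
    print(f"{post.text}: {throttled_score(post):.1f}")
```

Note that throttling reduces reach without deleting content, which is why the authors pair it with counterspeech rather than treating it as censorship.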
These intervention concepts apply not only to elections but to synthetic media disinformation more generally. Certainly, more of this kind of “anticipatory ethics” work is needed to prepare for the coming age of synthetic media. There is much to do to defend our elections and to build a healthier and more just media environment.