
Taking the Wide-Angle View of the Business and Legal Risks of Disinformation and Deepfakes

In the midst of the coronavirus pandemic—as markets struggle, schools and businesses shutter, and the healthcare system strains under the weight of increased morbidity and mortality—the world confronts another viral threat: a torrent of disinformation about the virus itself.  In what the World Health Organization has labeled an “infodemic,” charlatans have pushed alarmist, false information about the disease via social media and text messages.  The Trump Administration has alleged that foreign states are behind false rumors that a nationwide quarantine is in the offing, and it has taken legal action against some peddling phony “cures.”

The disinformation spreading about the coronavirus is the latest example of an ancient vice (innuendo, rumor, false witness) given new life in our interconnected age.  Too often, the conversation about disinformation has been restricted to a few well-known use cases—fake news about electoral candidates or manipulated media of Hollywood celebrities. 

On March 12, 2020, two WilmerHale colleagues and I published a Client Alert to broaden the aperture of our discussion of issues surrounding misinformation: “Identifying the Legal and Business Risks of Disinformation and Deepfakes: What Every Business Needs to Know,” which is also forthcoming in Pratt’s Privacy and Cybersecurity Law Report.

Our goal is to explain the many legal and business exposures private entities face from disinformation and deepfakes, or media manipulated using artificial intelligence (AI).  We show the many ways the contemporary information environment can threaten businesses, and what businesses can do about those threats.

We note that disinformation is not an issue associated merely with social media.  Realistic forgeries can be used in many contexts to contribute to such harms as social engineering, credential theft, business and insurance fraud, and falsified court evidence, among others.  Solutions to these challenges go beyond what social media platforms can do alone, especially when such media is spread by email, telephone or offline, or is intended for innocuous purposes—like an AI-enabled face-swapping app that can be innocent fun but can also implicate another’s right of publicity or be used to defraud.

We review eleven key risk areas:

  1. Reputational Damage. Disinformation about companies can tarnish brand identities and harm valuations.
  2. Extortion and Harassment. Over 95 percent of all deepfake videos now on the internet are nonconsensual pornography—a nonconsenting person’s face superimposed on a nude body. Such images can be used to extort or embarrass individuals, including business leaders.
  3. Social Engineering Fraud. Recent accounts of fraudsters using AI-manipulated audio to trick victims into sending them money illustrate a growing risk in this area. These swindles are a variant of what has been called “business email compromise,” in which a scammer impersonates an employee’s email account to trick a company official into sending money. Through these cons, businesses not only stand to lose the defrauded funds, but they can also face litigation by shareholders, investigations by regulators and, potentially, loss of access to the capital markets.
  4. Credential Theft. Through convincing impersonation, bad actors can extract information technology credentials from unwitting corporate employees. With those credentials, they may be able to steal valuable company information or lace company computer systems with malware or ransomware.
  5. Market Manipulation. Fraudsters long ago discovered how to profit by using conventional disinformation to move the financial markets. We describe several examples of disinformation being used in pump-and-dump and short-selling schemes. The dangers of these scams will only grow as believable manipulated media (perhaps a fake video of a CEO announcing a merger) can be made at scale.
  6. Insurance Fraud. The insurance industry often relies on the veracity of digital media to adjudicate claims. Research on the potential use of AI-manipulated radiology scans for insurance fraud demonstrates just one of a number of liabilities in this area.
  7. Labor and Employment Issues. Employees are relying increasingly on clandestine recordings to support their claims of harassment or mistreatment. Given the prevalence of legitimate recordings of such behavior, the public could be expected to lend credence to manufactured video and audio recordings purporting to substantiate claims of employer wrongdoing. Companies will need to prepare for this additional layer of legal and business risk.
  8. Entertainment Industry Issues. Content creators may wish to use AI-enabled manipulated media in their expressive works, while actors will seek to protect their digital likenesses from studio use without their consent. Navigating this tension, and evolving regulations, will take deliberation. So too will questions of who owns deepfakes, which are often created by stitching together copyrighted material.
  9. Cybersecurity Insurance. While many businesses carry cybersecurity insurance, very few policies address the threat of disinformation. Businesses will want to consider how to modify or supplement their current policies to address these issues.
  10. Falsified Court Evidence. Highly realistic falsified media pose challenges to the authenticity of evidence admitted into court. Court rules will need to adapt, and judges will need to be trained to scrutinize potentially manipulated media with care.
  11. Regulatory Scrutiny. Some companies have recently developed deepfake technology for innocuous commercial uses. These technologies may draw greater regulatory scrutiny as Congress and state legislatures pass laws governing certain types of deepfakes.

In addition to spotting the legal and business risks, we identify potential causes of action that could be brought by victims of disinformation against the individual creators or propagators of malicious content, whether under long-established state tort laws, federal intellectual-property protections, or laws enacted within the past year that specifically address deepfakes.  Finally, the paper reviews seven best practices businesses can undertake today to prepare for these hazards.

Protecting private entities and individuals from the risks of disinformation and deepfakes will take new thinking, preparation and diligence.  Identifying the threat vectors, reasonable responses, and prudent preparatory measures is an important place to start.  Our full Client Alert is available here and via PDF.