OpenAI Experiment A Wake-Up Call for Policymakers: Cost of Producing Disinformation Set to Plummet As AI Improves

Regulating content is hard; start instead with privacy and data protections


OpenAI, a nonprofit research organization initially backed by wealthy Silicon Valley individuals such as Elon Musk, Sam Altman, Reid Hoffman, and Peter Thiel, as well as companies such as Microsoft and Amazon, seeks to advance artificial intelligence with the goal of ensuring humanity develops an ethical version of artificial general intelligence, or AGI. The organization has roughly 60 star researchers, some earning nearly $1 million, and is known for teaching robots human-like dexterity, measuring the success of complex tasks, beating humans at Pong, and playing competitively against teams of humans in more complex games like Dota 2.

Last week, OpenAI announced it had built a system “that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization,” in effect producing text that appears to have been written by a human. For instance, researchers fed the system, called GPT-2, a fanciful paragraph about unicorns:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

After a few tries, the system delivered a coherent story based on this premise (a brief code sketch of this kind of prompting follows the quoted material below). The potential for mischief is obvious: OpenAI’s researchers are so concerned the technology may be misused that they decided not to release it to the public. (Elon Musk distanced himself from OpenAI after reading reports of the project.) The researchers summarize the potential implications:

Large, general language models could have significant societal impacts, and also have many near-term applications. We can anticipate how systems like GPT-2 could be used to create:

  • AI writing assistants
  • More capable dialogue agents
  • Unsupervised translation between languages
  • Better speech recognition systems

We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate):

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

It is not difficult to imagine versions of this technology deployed in the near future. State actors are racing to develop AI for more than benevolent reasons. And eventually, this technology will be available to the general public, just like other computational disinformation tools such as deepfakes, impostor videos produced using generative adversarial networks. The challenge for policymakers, then, is how to prepare for a world in which AI has proliferated.

Some argue no new rules are necessary for these innovations. The Electronic Frontier Foundation, for instance, “sees no reason why the already available legal remedies will not cover injuries caused by deepfakes.” John Villasenor, a fellow at Brookings, wrote last week about the difficulty of creating good rules around deepfakes. More needs to be done, “but it is very hard to draft deepfake-specific legislation that isn’t problematic with respect to the First Amendment or redundant in light of existing laws,” he wrote. The same is certainly true of systems like GPT-2.

As I wrote with David Carroll for MIT Technology Review, the greatest fear for those worried about malicious uses of these technologies is the combination of tools that automatically generate content with enormous amounts of personal data, which makes the resulting content dramatically more effective. OpenAI’s advances are a further indication that this is not science fiction; it is a near-term threat. It is not inconceivable that prototypes of such technologies will be deployed in the 2020 election cycle, and they will certainly appear in subsequent ones. While policymakers may be stuck on the problem of regulating content, they should move quickly to institute privacy and data protections to avoid worst-case scenarios.

“We must make sure that people stay in charge of the machines,” concludes a report on disinformation published this week by a UK parliamentary inquiry, which is replete with recommendations on new oversight structures, privacy protections, and digital literacy initiatives. The OpenAI experiment is another wake-up call to American policymakers, who have collectively failed to respond to the proliferation of disinformation in a manner commensurate with the threat, and who have failed to protect the privacy of citizens. Will they be jolted from their slumber? Time is up.
