WE’VE DIAGNOSED THE DISINFORMATION PROBLEM. NOW, WHAT’S THE PRESCRIPTION?

Fake news headline on a newspaper

The following was previously published on Defusing Disinfo.

Over the past few years, the age-old practice of propaganda has mixed with social media to create a formidable new problem: computational disinformation powerful enough to disrupt democracies.

From 2014 through 2017, Russia’s Internet Research Agency (IRA) targeted the American people with tens of millions of social media posts designed to inflame societal divisions. It was part of a comprehensive, multi-year operation that fostered tribalism, disseminated propaganda directly to the American people, and turned activists into unwitting tools of foreign provocateurs. The end goal? To disrupt American societal cohesion and undermine our ability to trust what we see. And, while they were at it, to influence the 2016 and 2018 elections.

I led one of the teams that investigated and analyzed those social media posts. In my research, I encountered videos pontificating about the evils of Hillary Clinton; Russians masquerading as African-American, liberal, and pro-Trump activists; and polemic memes about Islam, refugees, and the Second Amendment.

But Russia is not the only state actor to interfere in American politics by way of social networks; cybersecurity experts at FireEye uncovered evidence of an Iranian operation targeting the United States’ 2018 midterms. Outside of the United States, researchers, civil society, and the press spent the last several months regularly uncovering disinformation campaigns in Europe, the Philippines, Myanmar, Sri Lanka, India, and beyond.

2018 was the year of exposing and quantifying manipulation in its various forms. 2019 has to be the year that we take significant steps to mitigate the systemic problems that enable it.

Before discussing potential solutions to disinformation, it’s important to understand what we learned in 2018. A collection of reports, investigations, and hearings on a myriad of topics — and global in scope — led to near-universal agreement that something is wrong on the internet and that disinformation itself is a problem enabled by a confluence of systemic flaws in the information ecosystem. But disinformation, although the subject of this essay, was far from the only critical tech industry problem to capture the attention of journalists, academics, and regulators.

The first major investigative thread that took center stage in 2018 was privacy. The Cambridge Analytica saga, and the idea that it had been far too easy to harvest and misuse user data, captivated the media and both houses of Congress in the United States, as well as Parliament in the United Kingdom. The conversations focused on the extent to which users were aware of what platforms collected, what degree of data gathering was appropriate, and what protections people should be entitled to. Towards the end of the year, California passed its own version of the General Data Protection Regulation (GDPR), the sweeping European privacy legislation that took effect in May 2018.

The second major thread was monopoly: the idea that online platforms had grown too big to govern themselves gained momentum in 2018. The notion that platforms had too much power over consumers, and that some of their failures could be attributed to their size, appeared to resonate with lawmakers. Academics, policy experts, and think tanks on both sides of the political spectrum debated what came to be called “hipster antitrust” – a new way of thinking about how monopoly law, which traditionally focused on protecting consumers from predatory pricing, should apply to companies that give their services away for free.

The third thread was the algorithms: stories and studies of radicalization, polarization, gaming, and manipulation of all types, present (to varying degrees) across all platforms, combined to create a foreboding impression that social network algorithms were exacerbating societal problems. However, although both sides of the aisle agreed that something was amiss, the conversation about solutions was itself polarized and politicized. Democrats accused the platforms of silencing voices on the left, while Republicans held Congressional hearings devoted to ‘exposing’ anti-conservative bias.

Together, the tapestry of exposés and investigations, and the calls for reform, came to be known as “the techlash.” Public approval of the tech industry decreased as lawmaker ire increased. Industry executives, including Facebook’s Mark Zuckerberg and Sheryl Sandberg, and Twitter’s Jack Dorsey, appeared before Congress, sometimes contrite – and sometimes standoffish.

Although 2018 delivered the diagnoses, pessimism colored much of the discussion about prescriptions. This was particularly true when it came to how best to mitigate disinformation, which is enabled by a combination of the three larger issues.

The idea of addressing disinformation indirectly, as part of broader privacy reform, was floated: perhaps stronger privacy protections could make ad targeting less precise. That would make it harder for bad actors to reach users via ads, but it wouldn’t stop them from simply posting their memes in the online groups and message boards where people naturally congregate.

Since the mass consolidation of audiences onto a handful of platforms makes it easy for propagandists to spread their material, antitrust action might similarly have an indirect impact on disinformation. Theoretically, breaking up the behemoths would result in users scattered across more, smaller social networks. Those sites would be easier for Trust and Safety teams to wrangle. But bad actors have shown a commitment to spreading information wherever they can find an audience; they were on Tumblr and Reddit as well as Facebook and YouTube.

What if we tempered algorithms so they were less likely to polarize or radicalize? This at first seemed like low-hanging fruit — but quickly triggered a fierce battle over free speech. Unfounded allegations of partisan bias in ranking algorithms led to Congressional hearings about the “censorship” inherent in not appearing at the top of Google search results.

And in his 2019 New Year’s resolution, Mark Zuckerberg, perhaps the person with the greatest degree of direct change-making ability, expressed a desire to engage with the problem, then undermined it in the same post:

“Do we want technology to keep giving more people a voice,” he wrote, “or will traditional gatekeepers control what ideas can be expressed?”

Zuckerberg’s false binary turned his resolution into a deflection. Social platforms are the gatekeepers now. Their CEOs need to acknowledge that they are the first line of defense against disinformation.

As we start 2019, there is near-universal consensus that things need to change. However, the problem with something as massive and powerful as the information ecosystem being broken is that the idea of fixing it feels overwhelming. It’s difficult to know where to start. It’s widely accepted that we need “regulation” and “accountability” for the technology industry, but there is little agreement about what, exactly, that will look like.

When it comes to addressing disinformation specifically, the policies have to be nimble, capable of evolving in response to what will be an ongoing tactical arms race. To that end, there are four promising areas for engagement: oversight, cooperation, education, and a new national security doctrine.

The technology companies that operate large online platforms such as Facebook, Instagram, Twitter, and YouTube are the first line of defense in detecting new tactics and mitigating disinformation and propaganda. In their current state, these large platforms gather substantial quantities of data, sell targeted ads, curate content using feed-ranking, trending, and recommendation algorithms, and serve more than 50 million monthly United States users (a threshold that Congress has used to bracket legislation so that it impacts only major entities). Because they will always have unique insight into metadata and other means of detecting malign actors, they need to be empowered to police their platforms. But these companies also need oversight from government regulators to ensure that they actually do so, which requires regulatory and legislative reform.

In Foreign Affairs late last year, former OECD Ambassador Karen Kornbluh suggested changing the incentive structure surrounding the obligation to monitor for disinformation via narrow changes to Section 230 of the Communications Decency Act (CDA), the legislation that governs platforms’ responsibility for the content they host. Kornbluh, who is now a director of technology programming at the German Marshall Fund, advocates for eliminating immunity for platforms that leave up content that threatens or intentionally incites physical violence. This is a small percentage of the content underlying disinformation campaigns, to be sure, but reevaluating our long-accepted policy of blanket indemnity with no obligation to investigate is a place to start. Senator Ron Wyden, who co-wrote CDA 230 in 1996, recently indicated that his own thinking on the appropriate degree of indemnification protection has evolved.

Another oversight solution is to enact common-sense advertising reform, perhaps overseen by the FTC. For instance, legislators might curb targetability and tracking. This approach has been considered in the past, including via Representative Jackie Speier’s “Do Not Track Me Online Act of 2011”. There was not much enthusiasm for it then, but sentiment and priorities have changed.

The Honest Ads Act, a bipartisan bill, proposes regulating political advertising on the Internet similarly to television, radio, and print advertising, with the Federal Election Commission playing a key role. If enacted, companies like Facebook, Google, and Twitter would have to disclose how much specific political ads on their platforms cost; the number of ad views; how the ad was targeted; and the contact information of the buyer.

One necessary component of oversight is allocating the responsibility for investigations and monitoring to a specific party. Current suggestions for a responsible body in the United States include granting federal agencies like the FTC or FEC greater insight into the platforms’ self-regulation plans, to ensure the features and policies that platforms are introducing themselves are in the public interest. There are interesting lessons to draw from the financial industry, which operates using a combination of government oversight (the SEC), self-regulation via industry associations, and nimble responses from the exchanges themselves. This tiered system protects both consumers and the industry itself, in a domain where trust and information integrity are paramount to a well-functioning market. We might consider a parallel model for our information ecosystem: an SEC for the technology industry.

Cooperation is another key area for mitigating disinformation — especially when it comes to detecting campaigns early, before they reach millions of people. This multi-stakeholder approach entails companies that operate platforms, governments, and independent researchers working closely together, rather than in silos. This approach is the status quo in other industries: Information Sharing and Analysis Centers, or ISACs, for example, exist in the realms of healthcare, financial services, and aviation, where they facilitate threat information sharing between the public and private sectors.

There are a few similar coalitions in the technology industry. One is the Global Internet Forum to Counter Terrorism, where the United Nations, leading tech companies, NGOs, and academics collaborate to disrupt extremist content online. Another is the Global Engagement Center, where the Department of State and private-sector experts collaborate to counter terrorist and foreign disinformation online. Initiatives like these should be staffed robustly, funded fully, and replicated liberally.

Effective cooperation can be simpler, too. Technology companies, government agencies, and researchers could engage in regular penetration testing, or “pentesting,” to identify platforms’ vulnerabilities before an adversary does. Governments and platforms alike could incentivize responsible disclosure through the creation and expansion of “bug bounty” programs. The financial industry does routine pentesting on technology around personal data, credit cards, and the like. Why shouldn’t platforms do the same?

The third area for engagement, education, entails raising awareness among those targeted by disinformation campaigns: voters, consumers, and everyday internet users. Imagine a government-led media literacy campaign, complete with PSAs that explain how algorithmic ranking works, and why disinformation spreads.

Here, the United States can learn a lot from countries like Estonia, Sweden, and Finland, which have long dealt with foreign disinformation campaigns. Beginning in 2015, the Finnish government introduced a comprehensive strategy for combating Russian disinformation, including public education efforts and simple-but-effective tactics (“don’t repeat lies”). The Swedish government has produced content for its citizens explaining what to look for. The Estonians regularly debunk stories emerging from Russian-language media targeting their citizens, and their elected officials carefully consider which outlets to give comment to.

One of the Senators with the greatest expertise on the topic, Mark Warner, recently released 20 policy recommendations for regulating the tech industry. Prominent among them was a call to empower citizens by ensuring that they are better informed about the challenges we face. He writes: “Addressing the challenges of misinformation and disinformation in the long-term will ultimately need to be tackled by an informed and discerning population of citizens who are both alert to the threat but also armed with… critical thinking skills.”

There are worthwhile debates to be had about how to execute media literacy programs – many focus on schoolchildren and college students when research appears to indicate that older Americans are more prone to sharing disinformation – but a pilot program is a worthy endeavor.

Senator Warner also highlighted another important gap: the lack of a cohesive whole-of-government strategy to address this new asymmetric threat, and the need for a new cybersecurity doctrine.

Despite a flurry of strategy documents from the White House and Department of Defense, the federal government is still not sufficiently organized or resourced to tackle this hybrid threat. We have no White House cyber czar, no cyber bureau, and no senior cyber coordinator at the State Department. And we still have insufficient capacity at State and DHS when it comes to cybersecurity and disinformation. The Global Engagement Center at the State Department is not sufficiently equipped to counter propaganda from our adversaries. And this White House has still not clarified roles and responsibilities for cyber across the U.S. government.

In his December 2018 speech to the Center for a New American Security (CNAS), Senator Warner expanded the call from a whole-of-government to a whole-of-society approach, recognizing that solutions require cooperation from private companies and individual citizens alike.

National governments need to develop new norms and take executive action in service to a new cybersecurity doctrine that addresses the changes to media and information ecosystems. This includes the development of new rules and norms for the use of cyber and information operations, as well as better enforcement of existing norms. We need to link clearly articulated principles to predetermined responses according to the target and severity of the attack, including sanctions, export controls, indictments, and military action.

Ultimately, a new cyber doctrine will require executive leadership. Tangible steps the White House could take include restoring the cybersecurity coordinator position, working with Congress and other stakeholders to pass bipartisan oversight legislation, and spearheading efforts to create media literacy programs. Given the President’s own proclivity for tweeting politicized conspiracy theories, and his resistance to acknowledging Russia’s 2016 election interference operation, it’s unclear whether the appetite for this much-needed effort exists in our current government. However, it should be part of the campaign conversation during Election 2020.

Disinformation is one of the defining threats of our generation. We must come together as individuals, private corporations, experts, and governments to fight it successfully. Computational disinformation is a systems-level problem resulting from precision targeting, centralized platforms, and an array of amoral algorithms. As a result, it requires a thoughtful, multi-faceted solution, not digital security theater. There is no simple technical fix, no silver-bullet feature that Facebook or Twitter engineers can deploy to “solve” disinformation.

We can’t afford to spend another year establishing that there’s a problem. It’s time to come together to implement potent prescriptions to inoculate society against disinformation and build resilience.
