A doctored video of House Speaker Nancy Pelosi went viral last week; Facebook de-ranked it as false but refused to remove it. Platform politics have rubbed the public’s nerves raw, so that each new incident like this – whether Facebook de-platforms a hateful speaker or refuses to – makes tech look bad and creates new anxieties. So it is here. The video was edited to make the Speaker seem as if she slurred her words – a “cheap fake,” as opposed to a “deep fake” with AI-generated spoofing. Some called on Facebook to remove the video and adopt a rule against … something, some category of false or malicious videos. How to define that category is not obvious to anyone. Senator Sasse has proposed legislation to criminalize the creation and distribution of deep fakes, using language so vague it can’t possibly survive judicial review.
What Facebook did after the video had already been shared millions of times was to put up a warning before it could be shared again: “Before you share this content, you might want to know that there is additional reporting on this.”
Justin Hendrix and Bryan Jones propose a different solution: revive the FCC’s old Personal Attack rule, which required broadcasters to give what amounted to a “right of reply” to persons whose “honesty, character, integrity or like personal qualities” were attacked “during the presentation of views on a controversial issue of public importance,” EXCEPT during “bona fide news” coverage or as part of a political campaign. The rule fell by the wayside during the Reagan-era paroxysm of deregulation, when the agency tossed out the Fairness Doctrine. Hendrix and Jones say the rule was repealed in 1987, but this isn’t quite right; the facts are actually better for them. The FCC axed the Fairness Doctrine in 1987. But the Personal Attack rule was considered a “corollary” to the Fairness Doctrine and survived. The FCC didn’t actually suspend the rule until 2000, and it did so with a shrug rather than a clear condemnation. Well into the 2000s there were still serious requests to revive the rule.
For two reasons, I appreciate the proposal; for many more, I disagree with it.
The idea that we should be looking to media law principles and values in regulating the digital platforms is right. I’m working on a project run by Karen Kornbluh at the Digital Innovation & Democracy Initiative to develop these ideas, and I am part of a University of Chicago research team drilling down on the specifics of market and regulatory interventions. The proposals all involve new regulation and a new regulator. What I also like about the Hendrix/Jones proposal is that it doesn’t end in take-downs, codifying the power of unaccountable and opaque platforms we actually need to subject to more transparency and checks. As Evelyn Douek tweeted:
“What do we want! Facebook to have less power!
When do we want it! Right after they take down the false stuff _I_ don’t like!”
To its credit, the Hendrix/Jones proposal doesn’t fall into this trap.
Now to the negatives.
1. The Personal Attack rule never worked well. Steven Simmons, in a wonderful 1977 Penn Law Review piece, runs through the FCC’s arbitrary and impenetrable reasoning on what counted as a personal attack and what didn’t. “Calling a person an ‘extremist’ and a ‘patriotic extremist’ is not an attack; but asserting that an institute and its newsletter are ‘subversive,’ to the ‘Far Left,’ and run by a ‘Communist,’ is…. Calling two United States Senators ‘liberals and socialists’ is not an attack; but declaring that a university professor is a ‘Communist,’ is.”
2. In part because it was impossible to predict when an attack would qualify as a “Personal Attack” under the rule, its very existence chilled editorial independence and First Amendment-protected speech. Now, it’s true that the rule was never struck down by a court. But it’s also pretty clear that had the FCC not stopped enforcing it, the rule would have been declared unconstitutional. We can fight about whether digital platforms enjoy even more protection than broadcasters because they don’t use licensed airwaves, or less because they should be regarded as mere carriers, engaged in “machine speech.” But even if a rule for digital platforms survived, it would almost certainly encourage platforms to over-censor lest they expose themselves to the same kinds of trouble broadcasters sought to avoid. That, then, would be worse in terms of concentrated power, unaccountability, and private censorship than a take-down rule.
3. That raises the question of who would enforce a Personal Attack rule against digital platforms. The FCC has no jurisdiction. The FTC has neither the authority nor the enforcement personnel to do it. There would have to be a new agency, which might not be a bad idea, but we’re nowhere near that, and we need solutions now.
OK, so what instead? Most immediately, platforms need much clearer and faster responses to fake video and audio. They should adopt a policy that any alteration of an original must be labeled as altered. If it’s not, the platform will take it down. Upon complaint, the platform will ensure that the altered content is labeled after the fact. Instead of asking users whether they want more information about altered content, the platform will simply watermark the content as ALTERED. Detecting alterations from an original is something AI should be able to handle well, unlike deciding when an attack is a “Personal Attack” meriting a response. Now, it’s true that edited or altered audiovisual content is everywhere, including in satire and journalism. Most journalists edit original audio and video without intending to deceive. Should they have to represent their content as edited? Maybe. Or the platforms could come up with their own “bona fide news” exemption for labels, as the FCC did. There is really no way around the fact that labels, like take-downs, will require judgments based on context and intent to deceive. But they would require less judgment and cause less chill than a revival of the Personal Attack rule.