Facebook’s Oversight Board needs greater authority
Facebook announced the inaugural twenty members of its independent Oversight Board earlier this month. But while the board is filled with esteemed and respected names, we believe its underlying concept needs rethinking.
The board’s scope, which focuses primarily on take-down decisions, is limited and unlikely to address the myriad misinformation and disinformation problems that plague Facebook’s spaces. The board is also reactive rather than proactive: it seeks to close the barn door after the horse has bolted. Board decisions simply cannot come fast enough to protect the flow of information.
Finally, the board cannot succeed because it is not truly independent. As long as it is tied to a single corporation rather than to the industry as a whole, Facebook will continue to face the same questions about its take-down decisions. Instead, Facebook should expand its board to encourage the creation of an industry-wide, independent content policy oversight board.
Such a board would set policy and act as a self-regulatory body at a time when elected officials are debating regulation and the removal of legal protections that insulate tech giants from liability for the way people use their services. To be taken seriously, an oversight board must give the industry guidelines on “offending” content, that is, content that violates socially acceptable norms.
Rather than relying on ever-changing definitions of hate speech, disinformation, child exploitation or other offending content, the board must collaborate with technology, legal and policy experts to work out the technical details of those norms and guide the industry in enforcing them collectively.
The key is to uphold citizens’ rights to act as independent speakers in the public square while recognizing that online spaces are not traditional public squares. Online dynamics let some speakers drown out others because platforms prioritize some speech over other speech. That is a consequence of the way search engines and social media firms work, and of human nature. Some content, whether true or false, salient or salacious, will keep users glued to their screens and scrolling through their social feeds. As experts have contended, today’s leading technology firms exploit individual users’ psychological dependence on addictive content.
We should be concerned with the flow of information in a democratic society because we cannot function without quality information. We must demand that the industry that amplified the idea that snorting cocaine or drinking bleach prevents coronavirus do more to police the spaces it created and manages. Facebook, Google, Amazon, Apple and Twitter have fundamentally transformed how information flows through our society.
Internet and social media platforms are becoming ever more effective at capturing and holding people’s attention through highly sophisticated algorithms. This engagement-maximization model is a central reason the commercial strength of local news has diminished in recent years. With these technological advances come new paradigms for how society interacts with media. While this has produced real benefits, such as the role social media played in the Arab Spring, it has also poisoned the information environment in myriad ways; we need look no further than the flood of coronavirus misinformation.
We must demand that the platform firms protect us from their creations.
Facebook first took a stab at an oversight board in November 2018, when Mark Zuckerberg announced the idea. But the board we envision needs to be autonomous and empowered to create guidelines on issues like misinformation, data privacy and revenge pornography, as well as penalties for offenders.
Regulation is coming. Technology firms such as Google and Facebook would be wise to choose a voluntary oversight board before leaving the task to aggressive government regulators. For instance, American lawmakers and politicians, including former Vice President Joe Biden, want to update Section 230 of the Communications Decency Act, which shields online platforms from liability for how people use their services. An industry content oversight board would limit the need for such an effort.
The board could blunt criticism generally reserved for specific firms. Backlash about a decision by Twitter to remove manipulated content or a deepfake, for example, would be less about the firm and more about industry-wide policy. After the recent Facebook controversies, the public might feel reassured if internet corporations owned up to shortcomings and instituted changes.
Organized technology-industry self-regulation of content policy would serve corporate interests while protecting the crucial, nourishing role information plays in a democratic society, without harming First Amendment rights. American society has already given the technology industry vast quantities of hard currency in the form of our data and willing attention. What we have witnessed in return is a distortion of the public square. It is time to call for meaningful, lasting change that can stop the harms propagated by the industry.