
Protego Press Weekly Roundup - January 9th, 2020
Happy New Year! 2020 kicks off with a bang in tech policy: Facebook is in the news for deepfakes and targeted ads, there are calls for national privacy legislation, privacy concerns are growing in China, and the White House has issued new AI guidelines. Fortunately, it’s all in this week’s Protego Press Weekly Roundup.
In Case You Missed It: The 2010s shook the open web as many countries came to grips with the risks of a global, open internet. Public attention to corporate data collection was front and center, underscoring just how much data private companies collect and analyze, often without the knowledge or consent of citizens. Justin Sherman writes that, looking to the next decade, it’s time for the U.S. Congress to pass long-overdue federal privacy legislation. The longer American legislators fail to act, the greater the harms become.
In Case You Missed It II: Facebook released its policy on enforcing against manipulated media, detailing how the company will respond to deepfakes and, to a lesser extent, shallowfakes. As Sam Gregory points out, setting criteria for content removal is a good start, but the platform must do more to address simpler forms of media manipulation.
China’s Increasing Privacy Concerns: Emily Feng, NPR’s Beijing correspondent, highlights growing privacy concerns in a country one may not typically associate with privacy in “In China, A New Call To Protect Data Privacy.” While China produces huge amounts of online data, little of it is protected, which has fueled a thriving market for stolen personal information, from national identification numbers to home addresses. Some of it is used for state surveillance, while much of it is used for private extortion and fraud.
White House Issues AI Guidelines: In a follow-up to President Donald Trump’s executive order on artificial intelligence, the White House’s Office of Science and Technology Policy has released what it has described as a “first of its kind” set of principles that agencies must meet when drafting AI regulations. The White House directed federal regulators to consider “fairness, non-discrimination, openness, transparency, safety, and security” when weighing regulatory action related to AI, and to consider public feedback on proposed regulations.
Facebook and Voter Manipulation: More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world, via a cache of internal documents being released by former employee and self-styled whistleblower Brittany Kaiser. Emma Briant, an academic at Bard College in New York who specializes in investigating propaganda and has had access to some of the documents for her research, said that what has been revealed so far is “the tip of the iceberg.”
Meanwhile, Andrew Bosworth, a Facebook VP who headed the ads platform in 2016, wrote in an internal Facebook memo leaked to The New York Times that Cambridge Analytica was selling “snake oil” and was not as powerful as it has been portrayed. However, Alex Stamos, Facebook’s former chief security officer, wrote: “Where I disagree with Boz is that I think limits on targeting for political and issue ads are neutral and fair in the long-run and conducive to healthier democracy. Same with a tightly drawn standard on false claims about opponents. Neither are an attack on Trump.”
Quick Hits:
- AI researchers have taught the GPT-2 text generator to “learn” chess
- How a Swiss programme is teaching online privacy to children
- Ten things technology platforms can do to safeguard the 2020 U.S. election