As we move into October, the tech policy world moves onward with us. From deepfakes and AI to bots and disinformation, facial recognition and new regulations, we have it all and more in this week’s Protego Press Weekly Round Up.
In Case You Missed It: One of the most insidious problems facing humanity is the power of synthetic media to erode our understanding of what is real. Currently, society’s programmed response to contrived images, videos, audio tracks and documents is to wage a battle of technology over the authenticity of disputed content. But ultimately this is a losing proposition. When alleged deepfakes or other disinformation emerge online, laborious efforts to unwind and explain the underlying artifice fail to make victims whole or promote restraint while the veracity of the content is adjudicated. That’s why Jono Fischbach argues that developing “truth analytics,” tools that expose and explain the probabilistic relationships connecting every event and condition documented online, is essential: they would allow us to process and make sense of deepfakes and other disinformation.
In Case You Missed It II: Artificial intelligence and facial analysis software are becoming commonplace in job interviews. However, as Ivan Manokha writes, AI is created within our existing society, marked by a whole range of biases, prejudices, inequalities and discrimination. The data from which algorithms “learn” to judge candidates contains these existing sets of beliefs. This means that technologies developed using data from our existing society, with its various inequalities and biases, are likely to reproduce them in the solutions and decisions they propose.
Senate Intel Committee Report: The Senate Intelligence Committee released the second volume of its report on Russian interference in the 2016 presidential election, which focuses on the social media disinformation campaign led by the Kremlin-backed Internet Research Agency. The report, which provides further bipartisan evidence of Russia’s election meddling in 2016, finds “the IRA sought to influence the 2016 U.S. presidential election by harming Hillary Clinton’s chances of success and supporting Donald Trump at the direction of the Kremlin.”
Google’s Facial Recognition Research Program: Google has suspended a facial recognition research program designed to make its software less racially biased after a report emerged that its contractors had been targeting homeless black people. The program, according to the Daily News, was designed by Google to avoid the past pitfalls of facial recognition technology identifying people with darker skin. Google told the Daily News it was investigating the matter, and on Friday a Google spokesperson told the New York Times the company had suspended its facial recognition research pending the investigation.
Facebook’s Threads App: Facebook on Thursday launched Threads, an image-centric messaging app designed to weave tight circles of Instagram friends together, while ramping up its challenge to rival Snapchat. The app, however, has drawn attention from privacy experts, as it asks for a significant amount of data about everyone who uses it, from continuous, 24/7 access to your physical location to your movement and whether you’re working out, and even your battery level. The iOS and Android app’s requests illustrate how Facebook’s thirst for detailed and intimate data about its userbase continues unabated, even after two years of scandals and scrutiny over the company’s use and misuse of the personal information of its 2.7 billion users.
California’s New Tech Regulations: California’s governor, Gavin Newsom, signed into law two new bills designed to restrain tech companies. The first bill, signed on Thursday, makes it illegal to create or distribute videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election. The move is designed to protect voters from misinformation but may be difficult to enforce. And on Tuesday, Newsom signed a bill blocking law enforcement from using facial recognition technology in body cameras. The bill, AB 1215, bars police from installing the software on their cameras through Jan. 1, 2023. California is now the largest state to take steps to limit police use of the technology, following New Hampshire and Oregon.
Facebook’s Lack Of Censoring Policy: President Donald Trump’s reelection campaign is running a false ad about former Vice President Joe Biden on Facebook, and the company is not going to do anything about it. Even false statements and misleading content in ads, the company has said, are an important part of the political conversation. “Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” Facebook’s head of global elections policy, Katie Harbath, wrote in a letter to the Biden campaign.
The decision also comes a few days after presidential candidate Sen. Elizabeth Warren, D-Mass., took aim at Facebook CEO Mark Zuckerberg in a series of tweets on Monday night. In those tweets, Warren drew a line from Zuckerberg’s meeting with President Donald Trump in Washington, D.C., to Facebook’s policy change regarding political ads. “The public deserves to know how Facebook intends to use their influence in this election,” Warren wrote in the first of a string of tweets Monday. “For instance, Trump and Zuckerberg met at the White House two weeks ago. What did they talk about?” she wrote in a later tweet.