Today, the House Permanent Select Committee on Intelligence will convene an open hearing on “the national security challenges of artificial intelligence (AI), manipulated media, and ‘deepfake’ technology.” Specifically, the Committee is examining risks to “democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.”
Among other topics, the Committee will seek testimony on:
- Future advancements in deepfake technology;
- Detecting and tracking deepfakes;
- How deepfakes could allow people to deny legitimate media;
- The enduring psychological impact of deepfakes;
- Counterintelligence risks;
- Internet platforms’ role in policing fake content;
- The difficult legal challenges raised by deepfakes; and
- The appropriate role for the U.S. government.
The Committee has invited the following witnesses to attend:
- Danielle Citron, Professor of Law, University of Maryland Francis King Carey School of Law
- Jack Clark, Policy Director, OpenAI
- Dr. David Doermann, Professor, SUNY Empire Innovation and Director, Artificial Intelligence Institute, University at Buffalo
- Clint Watts, Distinguished Research Fellow, Foreign Policy Research Institute, and Senior Fellow, Alliance for Securing Democracy, German Marshall Fund
In preparation for the hearing, the Masthead at Protego Press has pulled together the following questions that we hope will be asked and discussed. What would you like to see asked?
Question 1: How are deepfakes different, qualitatively, from photoshopped images?
Question 2: How difficult is it to make deepfakes today that are high enough quality to fool most consumers? What level of expertise is needed to create this level of deepfake?
Question 3: What technologies exist to detect deepfakes? How quickly can detection occur?
Question 4: How can the government best interact with the private sector to identify and address deepfakes whose rapid spread might have national security or law enforcement implications?
Question 5: What does it mean to “address” deepfakes effectively—for example, whose voices can best debunk mistaken beliefs in deepfakes, and how?
Question 6: Which malicious actors are you most concerned about creating and spreading deepfakes in the immediate future, and with what goals?
Question 7: Are you aware of, or do you have evidence of, any foreign state actors developing the ability to produce deepfakes en masse?
Question 8: What responsibilities do the platforms have in regulating deepfakes? In notifying users that content they consumed was a deepfake? In adding disclaimers that deepfakes are, in fact, fake?
Question 9: Which audiences (by age, education, etc.) are most susceptible to which forms of fake media: fake news (text) vs. fake audio vs. fake photos (spread with the wrong context or doctored) vs. fake video (manipulated and/or deepfake)?
Question 10: Is the debate on these issues robust enough? How do we tailor awareness-raising, training, and/or education to each audience?
Question 11: Do you believe there should be a Select Committee or a bipartisan Congressional Committee to address social media regulation?
Question 12: What agency do you think is best positioned to regulate social media? Are they doing a good job? Do they have the resources and expertise that they need to do a good job?
Question 13: How would you suggest that we address deepfakes while still respecting the First Amendment?