Deception operations using high-quality fake videos produced with artificial intelligence are the next phase of information warfare operations by nation states aimed at subverting American democracy.
Currently, "deepfakes," or AI-synthesized human-image videos, mainly involve superimposing celebrity faces and voices onto performers in pornographic videos.
But the weaponization of deepfakes for political smear campaigns, in commercial operations to discredit businesses, or subversion by foreign intelligence services in disinformation operations is a looming threat.
"I believe this is the next wave of attacks against America and Western democracies," said Sen. Marco Rubio (R., Fla.), a member of the Senate Select Committee on Intelligence.
Rubio is pushing the U.S. intelligence community to address the danger of deepfake disinformation campaigns from nation states or terrorists before the threat fully emerges.
Deepfake videos are produced by first collecting still images of a person's head and running them through face-swapping software, such as the program FakeApp, to produce life-like videos.
Simulated voices are then synced to the fabricated videos using sound-editing software. Adobe's VoCo, a prototype software, for example, was demonstrated last year and revealed how to capture a person's voice and then simulate near-perfect speech simply by input from a keyboard.
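The idea behind a VoCo-style voice tool can be illustrated with a toy concatenative-synthesis sketch: once enough of a speaker's voice has been sampled into labeled snippets, new sentences can be assembled from the keyboard. The "voice bank" and all names below are hypothetical illustrations, not any real tool's API.

```python
# Toy sketch of concatenative voice synthesis (the concept behind
# keyboard-driven voice tools). The voice bank is entirely made up:
# each word maps to pretend audio samples "captured" from a speaker.

VOICE_BANK = {
    "the":  [0.1, 0.2, 0.1],
    "deal": [0.4, 0.5, 0.3, 0.2],
    "is":   [0.2, 0.1],
    "off":  [0.6, 0.4, 0.1],
}

def synthesize(text: str) -> list:
    """Stitch stored voice snippets together to 'speak' typed text."""
    samples = []
    for word in text.lower().split():
        if word not in VOICE_BANK:
            raise KeyError("no recording of speaker saying %r" % word)
        samples.extend(VOICE_BANK[word])
    return samples

fabricated = synthesize("the deal is off")
print(len(fabricated))  # audio the speaker never actually uttered
```

Real systems model phonemes and prosody rather than whole words, but the core risk is the same: recorded speech becomes raw material for arbitrary new utterances.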
Last year, University of Washington researchers demonstrated the use of deepfake technology that turned audio clips into a realistic, lip-synced video of former President Barack Obama.
Hollywood has utilized a similar process for movies. For example, a digital representation of the late Star Wars actor Peter Cushing was created for the character of Imperial Officer Grand Moff Tarkin in the 2016 movie Rogue One.
The quality of deceptive videos is increasing rapidly as artificial intelligence and its subset machine learning are applying advanced algorithms to the video and audio simulation process.
The result is life-like videos of someone saying or acting in ways they never did. Currently, such fakes can only be debunked through technical analysis capable of spotting the manipulation of video pixels—usually long after the fabrications have gone viral on the internet and social media.
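One family of pixel-level checks mentioned above looks for statistical inconsistencies: a spliced-in region often carries a different noise signature than the camera sensor's. The following is a minimal sketch of that idea on a synthetic frame, with made-up sizes and thresholds; it is not a production forensic method.

```python
# Toy pixel-statistics forensic check: flag 4x4 blocks whose local
# variance deviates sharply from the frame-wide norm, as a pasted-in
# (suspiciously smooth) region would. Synthetic data; seeded RNG.
import random
import statistics

random.seed(0)
SIZE, BLOCK = 16, 4

# Synthetic "frame": uniform sensor noise everywhere...
frame = [[128 + random.gauss(0, 2) for _ in range(SIZE)] for _ in range(SIZE)]
# ...except one pasted-in block with no noise at all.
for y in range(4, 8):
    for x in range(8, 12):
        frame[y][x] = 128.0

def block_variance(bx, by):
    vals = [frame[by * BLOCK + j][bx * BLOCK + i]
            for j in range(BLOCK) for i in range(BLOCK)]
    return statistics.pvariance(vals)

variances = {(bx, by): block_variance(bx, by)
             for by in range(SIZE // BLOCK) for bx in range(SIZE // BLOCK)}
median = statistics.median(variances.values())

# A block far below the frame's typical noise level is suspicious.
suspicious = [pos for pos, v in variances.items() if v < median * 0.1]
print(suspicious)
```

Real forensic tools examine compression artifacts, lighting, and camera-model noise fingerprints in far more sophisticated ways, but the logic is the same: manipulated regions rarely match their surroundings statistically.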
Deepfake pornographic videos have been made of several well-known actresses and public figures. Some digital platforms, such as Twitter, Reddit, and Pornhub, have removed deepfake videos when alerted. Others, such as Google, have been slow to respond when notified of fabrications.
Rubio said in an interview he has launched a staff investigation into the threat posed by deepfakes. The inquiry includes gathering examples and consulting experts. The Senate intelligence panel may examine the problem and look for solutions, he said.
"It's largely been used as a gag in the short term but you could see how deepfakes could be weaponized, both in a political campaign by people looking to create all sorts of chaos, and ultimately by nation states that have higher technical capabilities," Rubio said.
For example, the Lebanon-based Iranian proxy terrorist group Hezbollah could produce deepfake videos falsely showing Israeli soldiers committing atrocities against Palestinian children.
"That could spark all sorts of riots, violence, maybe even a war," Rubio said.
Another potential use of deepfake video would be creating a false but high-quality video showing a politician accepting a bag of cash, or saying something incriminating that was never spoken. The deepfake could then be publicized on the eve of an election.
"Given the nature of the media narrative in America today, it is hard to believe something like that would not be widely distributed online," Rubio said.
"There might even be media outlets reporting it as real, and by the time that that's all cleared up, the damage is already done."
Rubio said there is an awareness of the threat posed by deepfakes within the U.S. government but mainly at lower levels in the intelligence agencies among people that specialize in countering foreign intelligence disinformation.
"They know that it's a possibility, but since it hasn't been widely deployed yet and they're dealing with plenty of threats that already exist, there's not a lot of work going on," Rubio said.
Foreign intelligence services, such as Russia's FSB, noted for its interference in the 2016 presidential election, could produce a deepfake video of a politician using a racial slur or taking a bribe and use the disinformation to help defeat one candidate or boost the election fortunes of another.
Michael Waller, an information warfare expert, said deepfakes also could be used by U.S. intelligence agencies for offensive information warfare operations against hostile states or terrorist groups.
"Obviously the U.S. can use deepfake capabilities against foreign adversaries, but American intelligence has, with a few exceptions, been so poor at strategic messaging already, that the technology is far more of a boon to our adversaries than it is to us," said Waller, vice president of the Center for Security Policy.
U.S. soldiers, for example, could be targeted by foreign adversaries using deepfakes to discredit American military operations and generate international opposition.
The Pentagon's Defense Advanced Research Projects Agency, or DARPA, last month contracted with the Silicon Valley firm SRI International to develop techniques for identifying deepfake videos and images.
"We expect techniques for tampering with and generating whole synthetic videos to improve dramatically in the near term," an SRI official told the online outlet TechCrunch.
DARPA has set up a media forensics group and is working with SRI on developing new counter-deepfake tools. The project is aimed at identifying manipulated digital imagery, a problem made worse by the vast expansion in the use of digital photography.
"Mirroring this rise in digital imagery is the associated ability for even relatively unskilled users to manipulate and distort the message of the visual media," DARPA said on its website. "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns."
Current visual media forensics are inadequate for spotting high-quality fakes.
DARPA is tasking "world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform."
The objective is to automatically detect video fabrications and provide detailed assessments showing how the fakes were produced.
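One automated check of the kind such a platform might run is a temporal-consistency test: a wholesale-replaced frame tends to break the smooth change between neighboring frames. The sketch below is a hedged toy on synthetic one-row "frames," with an arbitrary threshold; actual DARPA tooling is far more elaborate.

```python
# Toy temporal-consistency check: flag a frame whose transitions both
# into and out of it are statistical outliers, as a spliced-in frame
# would be. Frames here are simple lists of pixel intensities.

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Synthetic "video": intensities drift slowly over time...
frames = [[10 + t + i for i in range(8)] for t in range(6)]
frames[3] = [200] * 8  # ...except one frame replaced wholesale.

diffs = [mean_abs_diff(frames[t], frames[t + 1])
         for t in range(len(frames) - 1)]
typical = sorted(diffs)[len(diffs) // 2]  # median inter-frame change

# Transitions that dwarf the typical change are spikes; a frame
# bracketed by two spikes is the likely fabrication.
spikes = {t for t, d in enumerate(diffs) if d > typical * 10}
suspicious_frames = [f for f in range(1, len(frames) - 1)
                     if f - 1 in spikes and f in spikes]
print(suspicious_frames)
```

Detectors built on this principle also examine blink rates, head-pose jitter, and lighting continuity, since generated faces often fail to stay physically consistent from frame to frame.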
Rubio first raised the issue during a Senate Intelligence Committee hearing earlier this month with Bill Evanina, director of the National Counterintelligence and Security Center.
Evanina said he was not familiar with the term deepfake but noted that spy agencies are stepping up efforts against hostile foreign spy services.
"The entire intelligence community and federal law enforcement [community] is actively working to not only understand the complexities and capabilities of our adversaries, but what, from a predictive analysis perspective, we may face going forward, particularly with the election this fall as well as in 2020," Evanina said.
Solutions to deepfakes will be difficult even with advanced technology tools that use artificial intelligence to deal with the problem.
"I can't tell you what the solution is or what the intelligence community can do about it," Rubio said.
Some observers have noted that a draconian solution to deepfakes would require those concerned about being targeted to submit to 24/7 personal surveillance that could verify their every move.
Rubio believes an important first step is raising public awareness, especially among media outlets, to carefully assess any hard-to-believe videos before disseminating them.
"I think a lot of the responsibility is going to be making people aware of this, and the press and others being highly suspicious of videos that look so over the top and outrageous that they demand more careful scrutiny before we put that stuff out there," he said.
As for foreign intelligence, Rubio said a sophisticated disinformation operation will likely use a combination of accurate information, such as hacked emails, and deepfake videos.
"Ten hacked emails that are real in order to gain credibility, and a deepfake video is the way they would weaponize it," Rubio said.
Economic security also could be targeted using deepfakes. For example, malicious actors could produce deceptive videos designed specifically to subvert a business enterprise or its executives and employees.
"We live in a day and age where videos are widely available and a lot of things these days are captured and reported because some bystander with a cell phone captures video and shares it," Rubio said.
"I just think people in the press moving forward, particularly with high profile figures and if what they've captured seems to be extraordinary, I think they need to be a little bit more careful in reporting on some video that shows up until they can absolutely verify it because especially near an election date, our predisposition to want to believe things that are over the top makes us very vulnerable to this being used against us to disrupt democracy."
Waller, the information warfare expert, said deepfakes have the potential to inflict serious harm to U.S. and allied interests.
"They can be created as phony raw intelligence designed to mislead our intelligence collectors and analysts and lead them down the wrong trails," he said.
"They can be created to frame people to become false targets of the FBI, Justice Department, and other investigative agencies," Waller added. "They can become false evidence in legal cases and trials. They can become realistic false sources for media leaks and social media memes to destroy people."
Waller said China, for example, could use sensitive information stolen in cyber attacks against U.S. networks over the years to create deepfakes that could discredit Americans involved in military, intelligence, and security affairs, or to blackmail them into becoming agents.
"This means that China can target American diplomats, political appointees, military personnel, and intelligence officers based on their personal weaknesses, vulnerabilities, personalities, and psychological traits by using the raw data stolen from [Office of Personnel Management] records," Waller said.
"China can then exploit that personal information to create convincing deepfakes to compromise or destroy the targeted individuals."