January 30, 2020
IntelBrief: The Challenge of Deep Fakes
Deep fakes are videos that have been digitally altered or manipulated, typically with the assistance of machine-learning tools, to produce human bodies and/or faces that look and sound authentic. As witnessed with comedian Jordan Peele’s deep-fake video impersonating President Barack Obama, they can be used for entertainment. Other benign uses include depicting now-deceased actors in films as if they were still alive. But there is concern over more sinister uses of deep fakes, in which these videos deliberately mislead people into believing an individual said or did something entirely fabricated. The implications are dire. Consider the fallout from a deep fake depicting a world leader announcing a military strike on an adversarial nation. The threat is compounded by the prospect of deep fakes being paired with other types of attacks, whether kinetic strikes or cyberattacks, and by how quickly a deep fake can spread across the internet and social media platforms.
There are also variants of deep fakes known as ‘cheap fakes’: attempted manipulations done with cheaper, more accessible software, sometimes nothing more than a Photoshopped image or a crudely re-edited video. With the proliferation of social media and readily available editing tools, anyone seeking to experiment with video and image manipulation has a range of options. Last year, a so-called cheap fake emerged of Speaker of the House Nancy Pelosi. The video was deliberately slowed down to make it appear that Speaker Pelosi was impaired or slurring her words. While the video was eventually revealed to be doctored, it was still viewed millions of times on social media, disseminated, and discussed widely. Finally, text-based deep faking, in which a user edits a transcript with off-the-shelf software, inserting new language or deleting authentic language, has helped make deep fakes a household activity. Deep fakes have become a go-to tool in the growing disinformation portfolios of both nation-states and non-state actors.
There are serious implications for dealing with the threat of deep fakes and similar technology. First, the potential threat posed by emerging technologies like deep fakes elevates the importance of diplomacy. One could easily imagine a deep-fake video depicting Kim Jong Un or another world leader threatening an attack. This reinforces the necessity of diplomats being able to establish contact with foreign governments, including adversaries, to verify the authenticity of, or in most cases quickly discredit, deep-fake videos designed to inflame tensions. Given the pressure to act and the speed of modern warfare, fake images and videos could have real-world consequences. Second, there is a danger of deep fakes becoming so frequent that governments and individuals grow numb from the constant barrage of manipulated videos and images and stop paying close attention. This could prove problematic on the rare occasion that one of these videos is indeed authentic and authorities are slow to respond. Third, intelligence analysts and others whose mission is to analyze data and identify trends will suffer, because so much more time and effort is now required simply to verify whether something is real, leaving less bandwidth for actual analysis. This holds true despite the development of new tools designed to help analysts separate ‘signals from noise.’
Two states, California and Texas, have tried to curb the proliferation of deep fakes by passing laws banning known deceptive videos intended to influence voting in U.S. elections. Maine has a comparable bill under consideration as well. The federal government, however, has not demonstrated a capacity to tackle the challenge of deep fakes in a bipartisan manner. As such, the proliferation of deep-fake technology will continue to serve as a disinformation force multiplier. While diplomatic and intelligence solutions to hard national security challenges will be key in confronting deep fakes, there is no silver bullet, and any solution must be comprehensive. Only when a mix of technological, regulatory, intelligence, diplomatic, and civil society measures is deployed, including efforts to increase the media and digital literacy of all strands of society, will the challenge of deep fakes and their threat to society be even partially mitigated.
For tailored research and analysis, please contact: firstname.lastname@example.org