The rise of deepfakes beyond social media

Manipulated media has become a threat even beyond social networks
Image created with the assistance of DALL·E 2

Deepfakes have blended naturally into the muddled information environment of social media, already filled with bots and trolls spreading falsehoods. Recently, however, states have begun to deploy deepfake-based manipulations in new settings, including TV news broadcasts, phone calls, and classified networks.

Recent cases, such as the Biden robocall and the false military orders attributed to Putin, threaten to further undermine democratic elections, escalate conflicts, trigger diplomatic crises, and ultimately challenge states’ ability to communicate effectively.

Novel information manipulation

Deepfakes are hyper-realistic, hard-to-detect synthetic media created using artificial intelligence (AI). While information manipulation is not new, deepfakes set a new precedent in deceit and media manipulation: virtually anyone can now create realistic digital falsifications of images, video, and audio rapidly, easily, and at scale, and even tailor them to a specific audience. Uses of deepfakes range from generating compelling spear-phishing emails at scale to faking a leader’s speech or a political candidate’s video call.

Integrating deepfakes across a spectrum of information sources—from digital platforms notorious for manipulation to traditional media, often considered more credible—provides state and non-state actors with unprecedented tools for information operations designed to influence adversaries.

Synthetic media, real threats

In January, New Hampshire voters received a call featuring a voice that impersonated President Biden and discouraged them from voting in the upcoming primaries. Though traced to a US political consultant, this robocall, generated with voice-cloning technology, showcases the risk of AI tools being exploited to sway voter behaviour via unconventional information channels. Especially if supported by a targeted social media campaign, such a call could deter undecided voters from casting their ballots, raising concerns about the threat deepfakes pose to the integrity of democratic elections.

Beyond elections, infiltrating AI-generated falsehoods into seemingly authentic information streams can profoundly impact military conflicts. Amid the Russia-Ukraine war, a fake video of Russia’s President, Vladimir Putin, urging drastic military actions was injected into Russian national live broadcasts, introducing a new weapon in information warfare—televised deepfakes. Although this attack did not significantly change the course of events, it is likely a precursor to more sophisticated forms of manipulation designed to influence military decisions, such as the voice cloning of commands within military networks.

The flooding of information channels with disinformation is also likely to thicken the fog of war, making it harder for the public to know what has actually happened. It was recently revealed that Iran interrupted TV streaming services in the United Arab Emirates (UAE) and replaced them with a fake news video featuring an AI-generated news anchor reporting on the situation in Gaza. China, too, is increasingly using deepfake news anchors to promote its narrative, particularly regarding the conflict with Taiwan.

Televised deepfakes can also be weaponised to undermine diplomatic relations. In 2017, hackers, allegedly from the UAE, injected a false news ticker displaying controversial quotes from Qatar’s Emir into Qatar’s News Agency broadcast. Despite denials by the Qatari government, several countries used the seemingly authentic news reports as a pretext to sever diplomatic ties with Qatar. While not deepfake-based, this incident demonstrates the far-reaching consequences of such strategic ‘hack-and-plant’ attacks.

States can also exploit generative AI to fabricate pretexts for aggressive policies. At the beginning of Russia’s invasion of Ukraine, the US accused Russia of plotting to release a fabricated video of a Ukrainian attack as a pretext for invasion. Although false flag operations are an age-old ruse, today one need not stage a scene: it is enough to enter a text prompt into AI software, plant the artificial video in a hacked strategic network, and capitalise on it to justify aggression.

Finally, generative AI technologies, such as large language models (LLMs), can be used to fabricate seemingly authentic leaks to sow international discord. Last April, leaked Pentagon documents revealed US spying activities against allies. Although their authenticity was not denied, some documents were reportedly doctored; Ukraine accused Russia of inserting fake documents among authentic ones to mislead and sow discord. When embedded within genuine intelligence disclosures, fake leaks can appear reliable, allowing adversaries to exploit them for strategic gain.

Communication crisis

In addition to the above risks, a more fundamental risk pertains to inter-state communication.

The emergence of deepfake technology has led scholars to warn of an “infocalypse” in which people can no longer believe their eyes and ears, constantly questioning the authenticity of the information before them. This situation fosters scepticism, erodes trust in institutions, and intensifies polarisation as each side becomes entrenched in its position. Political candidates can take advantage of this crisis to evade responsibility by discrediting allegations against them as fake, a strategy known as “the liar’s dividend”.

Similar processes occur at the international level. As realistic fake media grows common and more information environments turn into battlegrounds for political struggles, it becomes easier for states to dismiss information that does not align with their interests as fake, making it harder to reach agreements and compromises. Over time, states may even become “reality apathetic” and give up on trying to discern authentic signals from inauthentic ones, raising concerns about the stability of the international system.

A layered approach

To confront these looming threats, states must adopt a dual strategy of prevention and mitigation. First, states need to strengthen cybersecurity measures across digital networks to thwart attacks of this kind. Since complete prevention is unattainable, states should also consolidate their detection efforts and media-forensics techniques to identify forgeries rapidly. Media literacy education is likewise essential to build a society resilient to disinformation once it spreads.

Beyond these measures, states should collaborate to establish a normative framework for countering the rising challenge posed by synthetic media. The upcoming elections in numerous countries have prompted governments to enact legal measures to mitigate deepfakes’ harmful impact, such as labelling manipulated content on social media platforms or introducing penalties for those who create and distribute it maliciously. Such efforts must be expanded to address the dissemination of manipulated media across the full range of information environments.
