Deepfakes: A Threat to National Security in the Digital Era
Anurag Sharma, Associate Fellow, VIF

मिथ्याज्ञानं तमोगुणः संशयात्मा यदा जायते।
तदा जगदिदं सर्वं भ्रान्तं व्यामोहमोहितम्॥

(English Translation: When ignorance born of darkness and doubt arises, then this entire world appears delusory and veiled by illusion.)

Since its inception, artificial intelligence (AI) has transformed many parts of modern life, from improving medical diagnosis to streamlining supply networks. Alongside these benefits, however, AI has also given rise to technologies that pose significant risks. One such technology is the deepfake, which produces hyper-realistic but entirely fabricated audio and visual content. Deepfakes have become a pressing concern for governments, security agencies, and civil society worldwide in recent years. Defined as highly realistic and convincing fake audio, video, or images created using deep learning algorithms[1], deepfakes can manipulate public perception and disrupt societal norms.

For a diverse and populous nation like Bharat, the potential misuse of deepfakes poses unique challenges to national security. This write-up explores the implications of deepfakes for Bharat’s national security. It introduces the technology behind deepfakes, their potential misuse, and the threats they pose to Bharat’s political stability, social harmony, and defence mechanisms. It also discusses regulatory and technological measures Bharat can adopt to mitigate these threats.

Understanding Deepfakes

Deepfakes are synthetic media generated through advanced artificial intelligence techniques, most commonly Generative Adversarial Networks (GANs), a machine learning (ML) approach. In a GAN, two neural networks are trained against one another. The first network, the generator, creates synthetic data, such as photos, audio files, or video clips, with the same properties as the original data set. The second network, the discriminator, attempts to detect the fake data. After each iteration, the generator adapts according to the discriminator’s results to produce more convincing data. The networks often compete for thousands or millions of these cycles, until the generator improves to the point where the discriminator can no longer distinguish real data from fake. [2]
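
To make this adversarial loop concrete, the following is a minimal, illustrative PyTorch sketch of a GAN training cycle. The tiny fully connected networks and the random one-dimensional “data” are assumptions made for brevity; real deepfake systems use deep convolutional networks trained on large image, audio, or video datasets.

```python
# Illustrative GAN training loop (PyTorch). Random noise stands in for a
# real dataset; the architectures and dimensions are toy-sized.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1_000):  # real systems run vastly more iterations
    real = torch.randn(BATCH, DATA_DIM)  # stand-in for genuine training data
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator so the discriminator labels its output "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```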

In 2017, FaceApp Technology Limited, a Cyprus-based software firm, launched “FaceApp”, a photo and video editing app for iOS (Apple) and Android (Google) mobile devices. [3] The application used AI-powered neural networks to create strikingly realistic alterations of human faces in still photos. In August 2019, Zao, a Chinese mobile face-swap application, gained widespread popularity thanks to a feature its competitors lacked: it allowed users to replace the faces of celebrities in a video clip with their own.

As Zao was not available through the primary iOS and Android application stores, users had to download it from other sources, raising further digital privacy concerns. In 2019, DataGrid, a Japanese start-up based at Kyoto University, developed a deepfake system that automatically generates full-body models of non-existent humans.

DataGrid’s AI algorithm can create an effectively unlimited number of realistic-looking people who do not exist, constantly morphing the generated figure into new faces, poses, and looks. The company expects such deepfakes to find application in advertising, potentially allowing marketers to generate exactly the type of person they want to represent their product. [4]

As a technology, deepfakes can convincingly alter an individual’s speech, facial expressions, and other identifiable characteristics, making it appear that they said or did things they never did. While initially seen as a novelty, the technology’s potential for misuse has become increasingly apparent.

Deepfake as a Threat

Deepfake technology poses significant threats across various domains. By producing realistic but fake audio and video content, deepfakes can spread misinformation and influence political outcomes. They can also compromise confidential information stored in Critical Information Systems (CIS) by enabling sophisticated social engineering. The following sections discuss some of the domains in which deepfakes pose a severe challenge.

Threat to Political Stability

One of the most significant threats deepfakes pose in Bharat is to political stability. Bharat’s political landscape is characterised by a high level of polarisation and intense electoral competition. In such an environment, deepfakes can be weaponised to undermine political opponents, spread misinformation, and manipulate public opinion. For example, fabricated videos of political leaders making inflammatory statements could incite violence, disrupt elections, or diminish public trust in democratic institutions.

The ability to rapidly disseminate these videos through social media platforms exacerbates the threat, as false information can reach millions before it can be debunked. In the 2020 Delhi elections, for instance, a political party used deepfake technology to create videos of a leader delivering speeches in different languages, targeting specific voter demographics. Although these videos were not malicious, they underscored how deepfakes could be leveraged for political gain, raising concerns about potential misuse.

Threat to Social Cohesion and Communal Harmony

Bharat is a diverse nation of many religions, languages, and cultures, and maintaining social cohesion and communal harmony is a perennial challenge. Deepfakes can be exploited to stoke communal tensions by creating fake videos that appear to show members of one community attacking another. Such content can go viral quickly, leading to violence and riots, as seen with less sophisticated misinformation, notably the misinformation regarding the Citizenship Amendment Act (CAA) that created a law-and-order situation in Delhi in February 2020.[5] The psychological impact of seeing false evidence of attacks can be profound and lasting, making it difficult to quell unrest once it has started.

Threat to National Security and Defence

In March 2022, a deepfake video of Ukrainian President Volodymyr Zelensky emerged in which he appeared to tell Ukrainians that their troops had surrendered. [6] The Zelensky deepfake, disseminated partly through hacked media services, highlighted the use of deepfake technology to spread misinformation during a military conflict. Deepfakes can also be used in psychological operations (PsyOps) to demoralise troops, spread false orders, or create confusion within the ranks. For instance, a deepfake video showing a senior military official surrendering or issuing a false command could have devastating consequences for the morale and operational effectiveness of the armed forces. Deepfakes could likewise be used to manipulate diplomatic communications or create false narratives that influence a country’s international relations.

Impact of Deepfakes on the Economy

The economic implications of deepfakes are also significant. In a rapidly digitalising economy, trust is paramount, and deepfakes can erode confidence in digital transactions, electronic communications, and even the integrity of financial markets. In a 2019 cyber scam, fraudsters used AI-enabled deepfake audio to impersonate the chief executive of a German parent company; the CEO of its UK-based subsidiary, believing he was on the phone with his boss, wire-transferred EUR 220,000 (approximately ₹1.7 crore) to the bank account of a Hungary-based supplier, according to Rudiger Kirsch of insurer Euler Hermes Group SA. [7]

This instance highlighted how criminals have adapted, using emerging technologies such as deepfakes to commit financial fraud. Fraudsters might employ deepfakes to impersonate company officials, authorise fraudulent transactions, or propagate misinformation that affects stock prices, resulting in significant financial losses and eroding trust in the economic system.

Countermeasures and Policy Responses

Deepfakes are one of the more consequential developments in artificial intelligence, and fake news and misinformation generated by AI/ML algorithms have gained traction in recent years. Between November 2018 and 2019, three scientific papers proposed detecting deepfakes by analysing ‘face-warping artefacts and inconsistent head poses’, along with the facial expressions and movements of an individual’s speaking pattern. In the long run, however, such techniques may lose effectiveness, as deepfake developers will keep training against stronger discriminator networks and further refine their fakes.
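
For illustration, the sketch below shows the kind of frame-level binary classifier that much of this detection research builds on: a small convolutional network trained to label face crops as real or fake. The architecture and the random tensors standing in for labelled data are assumptions for brevity, not the method of the cited papers.

```python
# Hypothetical frame-level deepfake classifier (PyTorch): a small CNN that
# labels 64x64 face crops as real (0) or fake (1). Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: > 0 suggests "fake"
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient update on a batch of (N, 3, 64, 64) face crops."""
    opt.zero_grad()
    loss = loss_fn(detector(frames).squeeze(1), labels)
    loss.backward()
    opt.step()
    return loss.item()

# Random tensors stand in for a labelled dataset of real/fake face crops.
dummy_frames = torch.randn(8, 3, 64, 64)
dummy_labels = torch.randint(0, 2, (8,)).float()
print(train_step(dummy_frames, dummy_labels))
```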

At present, no laws or regulations specifically address deepfake content. However, Section 66D (punishment for cheating by personation by using computer resources) and Section 66E (punishment for violation of privacy) of the Information Technology (IT) Act 2000 prescribe imprisonment and a fine for impersonating an individual through a computer resource, and for publishing or transmitting images of a person’s private parts in electronic form without consent. [8] These provisions are insufficient to identify and prevent the spread of deepfake content on the Internet. On 23 November 2023, the Minister of Electronics and IT, Ashwini Vaishnaw, met with representatives from academia, industry, and social media firms to discuss the need for an effective response to deepfakes. [9]

Given the multifaceted threat deepfakes pose, no purely technical solution will be fully effective. Bharat therefore needs a comprehensive strategy to mitigate their impact, comprising several elements:

  1. Technological Solutions: Investing in and improving AI-based detection systems that can identify deepfakes with high accuracy is essential. These systems should be integrated into social media platforms and other critical digital infrastructure.
  2. Legislation and Regulation: Enacting robust laws that criminalise the creation and dissemination of malicious deepfakes is crucial. This should be coupled with regulations that mandate digital platforms to take down harmful deepfake content swiftly; a minimal sketch of one supporting takedown mechanism appears after this list.
  3. Public Awareness: Educating the public about the existence and dangers of deepfakes can reduce the likelihood of such content being believed and spread. Media literacy programmes should be integrated into the education system and public information campaigns. Governments and organisations must develop a ‘response’ campaign against the spread of misinformation[10], treating it as a security incident.
  4. International Cooperation: Deepfakes are a global issue that requires international collaboration. Bharat should work with other nations to develop global standards for the detection and management of deepfakes and to share intelligence on emerging threats.
  5. Cyber Security Infrastructure: Strengthening cybersecurity measures to protect against the unauthorised creation and dissemination of deepfakes is essential. This includes securing government and military communication channels against infiltration.
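
As referenced in point 2 above, the following is a minimal, hypothetical sketch of one mechanism platforms can use to catch re-uploads of already-identified deepfakes: perceptual hashing. It assumes the third-party Pillow and ImageHash Python packages; the stored hash value, the distance threshold, and the file name are illustrative, and production systems would combine such matching with ML-based detectors and human review.

```python
# Hypothetical re-upload matcher using perceptual hashing (Pillow + ImageHash).
# The stored hash and the distance threshold below are illustrative values.
from PIL import Image
import imagehash

# Perceptual hashes of frames from previously confirmed deepfakes.
KNOWN_FAKE_HASHES = [imagehash.hex_to_hash("e0f0e8c4c2c68686")]

def looks_like_known_fake(path: str, max_distance: int = 6) -> bool:
    """True if the image is perceptually close to a known deepfake frame."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_FAKE_HASHES)

# Illustrative usage ("upload.jpg" is a hypothetical incoming file):
# if looks_like_known_fake("upload.jpg"):
#     print("Flag for human review and possible takedown")
```
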
The Way Forward

Deepfakes represent a significant and evolving challenge to Bharat’s national security; their potential to disrupt political stability, undermine social cohesion, impact national defence, and destabilise the economy necessitates a proactive and comprehensive response. As deepfake technology keeps improving, it is safe to predict that trusting videos of political leaders will become ever more challenging. The deepfake video of Joaquin Oliver, one of the students killed in the 2018 Parkland shooting, was made two years after the incident; it is a noteworthy example of a deepfake ‘resurrecting’ a deceased person, who delivered a message to lawmakers against gun violence in the US. [11] In the future, a prominent figure in politics or society could pass away and be replaced by a digital clone that conceals the change in leadership.

As deepfake technology advances and becomes more accessible, governments and organisations must develop strategies to detect and counter these deceptive practices in order to safeguard national security and public trust. By implementing technological advances effectively, investing money and trust in Bharatiya firms developing indigenous anti-deepfake measures, establishing solid legislative frameworks, launching public awareness programmes, and collaborating internationally, Bharat can address the issue of deepfakes and protect national security in the digital era.

Endnotes

[1] Note: Deep learning algorithms enable a machine to learn automatically, without human intervention, by mimicking the structure of the human brain; systems built on them can often provide the experience of interacting with an actual human.
[2] “Deep Fakes and National Security”, Congressional Research Service, 17 April 2023, available from: https://crsreports.congress.gov/product/pdf/IF/IF11333; “Report on Deep Fakes and National Security”, US Naval Institute, 08 June 2022, available from: https://news.usni.org/2022/06/08/report-on-deep-fakes-and-national-security; Joseph, “An Example of a GAN in Pytorch”, Reason.Town, 15 August 2022, available from: https://reason.town/gan-example-pytorch/
[3] “Introducing FaceApp: the Year of the Weird Selfies”, Forbes, 06 May 2017, available from: https://www.forbes.com/sites/haroldstark/2017/04/25/introducing-faceapp-the-year-of-the-weird-selfies/#489efb9543d2
[4] “Japanese Startup Generates Photorealistic AI-Humans”, SpringWise, 21 May 2019, available from: https://springwise.com/japanese-startup-generates-photorealistic-ai-humans/
[5] Chaudhuri, Pooja. “The Year That Was: Misinformation Trends of 2020”, AltNews, 08 January 2021, available from: https://www.altnews.in/the-year-that-was-misinformation-trends-of-2020/
[6] Twomey, John Joseph, Conor Linehan, and Gillian Murphy. “Deepfakes in Warfare: new concerns emerge from their use around the Russian invasion of Ukraine”, The Conversation, 26 October 2023, available from: https://theconversation.com/deepfakes-in-warfare-new-concerns-emerge-from-their-use-around-the-russian-invasion-of-ukraine-216393
[7] Damiani, Jesse. “A Voice Deepfake was used to scam a CEO out of $243,000”, Forbes, 03 September 2019, available from: https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
[8] “Section 66D and Section 66E of the Information Technology Act 2000”, Page no. 24. Available from: https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf ; “Cybercrime Against Women”, Ministry of Electronics and IT, 07 December 2022, available from: https://www.pib.gov.in/PressReleasePage.aspx?PRID=1881404
[9] “Interaction of Minister of Railways, Communications and Electronics & IT Shri Ashwini Vaishnaw with stakeholders on issues arising out of deepfake”, Press Information Bureau- Ministry of Electronics and IT, 23 November 2023, available from: https://pib.gov.in/PressReleasePage.aspx?PRID=1979042
[10] “How to mitigate the impact of deepfakes”, Kaspersky, 17 March 2020, available from: https://kfp.kaspersky.com/news/how-to-mitigate-the-impact-of-deepfakes/
[11] Diaz, Ann-Christine. “Parkland victim Joaquin Oliver comes back to life in heartbreaking plea to voters”, AdAge, 02 October 2020, available from: https://adage.com/article/advertising/parkland-victim-joaquin-oliver-comes-back-life-heartbreaking-plea-voters/2285166

(The paper is the author’s individual scholastic articulation. The author certifies that the article/paper is original in content, unpublished and it has not been submitted for publication/web upload elsewhere, and that the facts and figures quoted are duly referenced, as needed, and are believed to be correct). (The paper does not necessarily represent the organisational stance.)

