Deepfakes are synthetic media that use artificial intelligence (AI) to manipulate or generate visual and audio content, usually with the intention of deceiving or misleading the viewer.
How does deepfake technology pose a threat to the integrity of elections in India?
Emergence of generative adversarial networks (GANs) – GANs are a class of generative AI that can produce convincing deepfakes in near real time. Because the technology is easily accessible, it can fuel a large number of deepfake accounts that drown out factual information in every constituency, as seen in deepfake videos of the Indian Prime Minister.
Weaponization of social media – Deepfake videos play on emotions and reinforce confirmation bias, accelerating the spread of misinformation on social media platforms.
External election interference – Digital firms can produce context-specific fake videos of politicians on commission, and hostile foreign powers may exploit such content to undermine the integrity of Indian elections.
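The adversarial setup behind GANs can be illustrated with a toy example: a tiny "generator" learns to imitate samples from a target distribution while a "discriminator" learns to tell real samples from generated ones. Everything here (the one-dimensional data, the linear models, the hyperparameters) is an illustrative assumption for the sketch; real deepfake systems pit deep neural networks against each other over images or audio.

```python
# Toy 1-D GAN: the generator learns to mimic samples from N(4, 0.5)
# while the discriminator learns to tell real from generated samples.
# All model choices and hyperparameters are illustrative assumptions.
import math
import random

random.seed(0)

def sigmoid(u):
    u = max(-60.0, min(60.0, u))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-u))

# Generator: x = w*z + b  (maps noise z ~ N(0,1) to a sample)
w, b = 1.0, 0.0
# Discriminator: p = sigmoid(a*x + c)  (probability that x is real)
a, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 0.5)
    x_fake = w * z + b

    # Discriminator update: push p(real) toward 1 and p(fake) toward 0
    p_real = sigmoid(a * x_real + c)
    p_fake = sigmoid(a * x_fake + c)
    a += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    c += lr * ((1 - p_real) - p_fake)

    # Generator update: push p(fake) toward 1, i.e. fool the discriminator
    p_fake = sigmoid(a * x_fake + c)
    grad_x = (1 - p_fake) * a          # d log p_fake / d x_fake
    w += lr * grad_x * z
    b += lr * grad_x

# After training, generated samples cluster near the real mean of 4
fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
```

The arms race in this sketch is the point: as the discriminator gets better at spotting fakes, the generator is pushed to produce more realistic output, which is why GAN-made deepfakes are hard to detect at scale.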
What are the social impacts of deepfakes?
Trust is essential for the adoption of technology because it ensures users feel secure and confident in using new innovations.
Deepfakes erode trust by creating believable fake content that spreads rapidly on social media, causing severe harm.
Women and children are frequent targets of deepfakes, facing significant psychological distress.
Deepfakes can manipulate evidence, threatening the judiciary and leading to wrongful convictions.
They undermine user-verification methods like facial recognition, which are critical for accessing digital services in India.
Deepfakes spread misinformation, impacting democratic processes. The World Economic Forum’s 2024 risk report highlights misinformation as a critical global risk.
What are the legal challenges related to deepfakes in India?
Sections 66D, 66E, 67, 67A, and 67B of the IT Act penalize impersonation and obscene material but do not fully address deepfakes.
The Digital Personal Data Protection Act could be more effective if it included reputational loss in its definition of “loss.”
Data fiduciaries are required to notify individuals of data breaches but need stricter measures like disabling private-media downloads.
Rule 4(2) of the 2021 IT Guidelines mandates social media to identify originators of harmful content, but platforms like WhatsApp and Meta contest this, citing privacy concerns.
The Anil Kapoor vs. Simply Life India case highlights privacy and publicity rights violations by deepfakes.
What should be done?
Social media platforms must limit the spread of deepfake content and crack down on bots amplifying misinformation.
Tech developers should incorporate consistent labeling features to identify artificial content, as suggested by the Union IT ministry’s advisory.
Implement mandatory user verification for content creation to establish accountability.
Provide clear legal paths and psychological support for deepfake victims.
Criminalize the creation of non-consensual deepfakes, like the proposed UK law.
Invest in media literacy efforts and promote responsible digital citizenship to help individuals critically evaluate online content and identify deepfakes.
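The labeling recommendation above needs to be tamper-evident to be useful: a plain "AI-generated" tag can simply be stripped. Below is a minimal sketch of how a publishing tool might attach a signed synthetic-content label that a platform can verify before display. The key handling and tag format are illustrative assumptions, not any official standard (real deployments use provenance schemes such as C2PA content credentials).

```python
# Minimal sketch: attach and verify a tamper-evident "AI-generated" label.
# The shared signing key and tag fields are illustrative assumptions.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative; real systems use managed keys

def label_content(media_bytes: bytes, is_ai_generated: bool) -> dict:
    """Attach a signed label binding the media hash to its AI status."""
    tag = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": is_ai_generated,
    }
    payload = json.dumps(tag, sort_keys=True).encode()
    tag["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return tag

def verify_label(media_bytes: bytes, tag: dict) -> bool:
    """Check that the label matches the media and was not altered."""
    unsigned = {k: v for k, v in tag.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag["signature"])
            and unsigned["sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"...synthetic video bytes..."
tag = label_content(video, is_ai_generated=True)
ok_intact = verify_label(video, tag)        # intact media and label verify
ok_edited = verify_label(b"edited", tag)    # edited media fails verification
```

Because the signature covers both the media hash and the `ai_generated` flag, neither can be altered without detection, which is what gives a labeling mandate practical teeth.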