Abstract
In an algorithmic world, a woman’s face is no longer her own: it is data, manipulable, replicable, and weaponizable. When a hyper-realistic deepfake video of Indian actress Rashmika Mandanna cascaded across Indian social media in November 2023, it was not merely a celebrity scandal but a harbinger of a terrifying new frontier in gender-based violence. The offender? A 24-year-old engineer armed with freely accessible AI tools and driven by nothing more than a desire for more followers on Instagram. His arrest exposed a paradox more disturbing than the crime itself: India possesses no specific law against the creation of non-consensual synthetic intimate imagery. This article examines the legal vacuum surrounding deepfake pornography and non-consensual intimate imagery (NCII) in India. Through a critical analysis of the Information Technology Act, 2000, the Indian Penal Code, 1860, and the emerging jurisprudence on personality rights, including cases such as Kamya Buch v. JIX5A & Others and Shilpa Shetty Kundra v. Getoutlive.in, it demonstrates that Indian criminal law remains silent on the creation of synthetic intimate imagery, leaving victims reliant on proactive judicial activism rather than preventive statutory protection. A comparative study of the United Kingdom’s Online Safety Act, 2023, which criminalizes the very act of creating non-consensual deepfakes, reveals that Indian law lags significantly behind that of a comparable democracy. The implications extend beyond individual victims to the constitutional foundations of privacy, autonomy, and bodily integrity in the digital era. The question is no longer whether India will face a deepfake epidemic; the McAfee survey confirming that 75% of Indians have been exposed to deepfake content suggests it already has. The question is whether the law will arrive in time. The article concludes with recommendations for legislative reform, including the introduction of a dedicated statutory offence for AI-generated intimate imagery, mandatory platform accountability mechanisms, and a victim-centric takedown framework.