Indian Journal for Research in Law and Management

Advancing Law and Management

ISSN No. : 2583-9896

The Legal Vacuum in Regulating Non-Consensual AI-Generated Pornography in India


Cite this Article

Subha Venkatraman (2025). The Legal Vacuum in Regulating Non-Consensual AI-Generated Pornography in India. The Indian Journal for Research in Law and Management, Volume II(Issue 9). Retrieved from https://ijrlm.com/journal/the-legal-vacuum-in-regulating-non-consensual-ai-generated-pornography-in-india/

Abstract

The rise of AI-enabled non-consensual pornography, commonly referred to as “deepfake pornography,” poses significant challenges to modern legal frameworks around the globe. In India, where current laws are insufficient to address this growing problem, victims, who are primarily women, continue to suffer severe emotional, psychological, and reputational harm. Privacy, dignity, and personal autonomy are all infringed by non-consensual intimate image abuse, particularly when deepfakes are involved, yet the current legal system imposes neither adequate penalties nor preventive measures. This paper examines the regulatory gap that allows AI-generated pornography to spread, evaluating the effectiveness of the Information Technology Act, the Indian Penal Code, and the recently enacted Digital Personal Data Protection Act, 2023. While each statutory instrument addresses certain cybercrimes, none articulates an unequivocal prohibition against deepfake material, and the absence of a discrete statute prohibiting the manufacture and circulation of non-consensual AI-generated pornography further undermines the protection of victims. The Indian Penal Code includes provisions such as Section 354, which penalises outraging the modesty of a woman, and Section 500, which criminalises defamation, yet their application to AI-generated sexualised imagery remains too general to afford victims consistent protection. Similarly, the Information Technology Act, 2000, contains sections such as 66E (violation of privacy), 66D (cheating by personation), and 67 (publishing obscene material), but falters in addressing the novel techniques of synthetic media production. The Indian Cyber Crime Coordination Centre and recent Meta Oversight Board proceedings document how the content moderation protocols of platforms such as Facebook and Instagram have been reactive rather than anticipatory, resulting in the slow removal of harmful material and leaving affected individuals exposed.
The delayed removal of deepfake pornography targeting Indian public figures underscores this inadequacy: repeated notifications to the platforms did not deter the further spread of the material, as confirmed by a 2024 Reuters report. Such lapses reveal an urgent gap between the statutory framework and the emerging technological realities of harm. This study contends that the absence of tailored statutory safeguards in India renders individuals especially vulnerable to the harms of deepfake pornography, necessitating prompt legislative redress. Through a comparative examination of jurisdictions such as the United States, the United Kingdom, and Australia, which have progressively enacted prohibitions on the manufacture and dissemination of non-consensual synthetic sexual imagery, the study articulates a prospective Indian legal architecture. Central to the design is the introduction of a discrete statutory offence criminalising the production and circulation of such fabrications absent affirmative, prior consent, accompanied by graduated custodial and pecuniary sanctions. The framework is predicated on a consent-oriented paradigm, stipulating that any likeness deployed in AI-mimetic sexual content must rest on antecedently secured, unequivocal authorisation. Furthermore, the analysis recommends integrating synthetic pornography within the identity misappropriation stipulations of the Information Technology Act, thereby extending the purview of Section 66C to unequivocally encompass the unauthorised appropriation of digital likenesses enabled by generative algorithms. This study further examines how innovations in detection technology, including platforms such as Vastav.AI, can be augmented by collaborative frameworks joining law enforcement, service providers, and forensic specialists to strengthen both identification and legal redress.
Alongside these measures, the analysis highlights the imperative of developing victim-oriented support structures, encompassing streamlined content removal protocols, restitution mechanisms, and psychological services, which together form essential pillars of an effective regulatory framework. Ultimately, the investigation concludes that, in order to shield individuals from the pervasive dangers of AI-mediated non-consensual pornography, India must institute a comprehensive legislative framework.

Journal Information

The Indian Journal for Research in Law and Management
Licensing
All research articles published in The Indian Journal for Research in Law and Management are fully open access, i.e., immediately freely available to read, download, and share. Articles are published under the terms of a Creative Commons license, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.
Disclaimer
The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJRLM or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJRLM.
