Abstract
India’s legal system is not ready for agentic AI, and the gap is already consequential. Unlike a chatbot, an agentic AI system executes transactions, makes hiring decisions, sends communications and pursues goals across dozens of steps without human approval at each stage. When such a system causes harm, existing law offers no clear answer on who bears responsibility. This article argues that the answer requires assembling and, where necessary, extending three existing instruments. The Indian Contract Act 1872’s law of agency provides a workable basis for deployer liability: a deployer who authorises an AI system to act on its behalf is a principal and cannot escape liability by pointing to the system’s autonomy. The Consumer Protection Act 2019’s product liability framework provides the corresponding route for developer liability, subject to the unresolved question of whether AI qualifies as a ‘product’ under Section 2(34), a gap requiring statutory clarification. The Supreme Court’s absolute liability doctrine from M C Mehta v Union of India supplies the appropriate standard for high-risk deployments, where negligence is structurally inadequate. MeitY’s IT (Intermediary Guidelines) Amendment Rules 2026 encode the right normative logic, that the deploying entity bears accountability for AI-enabled harm, but stop short of reaching autonomous AI action, covering only synthetically generated audio-visual content. Drawing further on Mobley v Workday Inc (ND Cal 2024), Moffatt v Air Canada (BCCRT 2024) and the Delhi High Court’s Jackie Shroff order, this article proposes a statutory codification of AI agency, risk-tiered liability, and a targeted extension of the 2026 Rules to agentic action.