Abstract
The use of artificial intelligence in arbitration has moved from experimental pilots to institutional practice in several leading arbitral fora. The American Arbitration Association–International Centre for Dispute Resolution (AAA–ICDR) has launched an “AI arbitrator” for documents-only construction disputes, which analyses submissions, structures issues, and drafts awards subject to human oversight, while expressly preserving the final decision-making authority of human arbitrators. In parallel, the China International Economic and Trade Arbitration Commission (CIETAC) has issued the Asia-Pacific region’s first Guidelines on the Use of Artificial Intelligence Technology in Arbitration, emphasizing party autonomy, transparency, proportionality, and the non‑delegability of adjudicative functions, and the Chartered Institute of Arbitrators (Ciarb) has promulgated a global guideline on AI in arbitration built around similar principles.
These developments reflect a broader turn toward efficiency-driven adjudication, in which AI tools support fact management, legal analysis, and the generation of reasoned awards. By contrast, the Indian Arbitration and Conciliation Act, 1996 (“the 1996 Act”), though technologically neutral and heavily grounded in party autonomy, is silent on AI-assisted decision-making. Key provisions (Section 19 on procedural flexibility, Section 28 on the applicable substantive law, Section 31 on reasoned awards, and Section 34 on judicial review) have been interpreted by the Supreme Court to require an independent application of mind by arbitrators and strict adherence to principles of natural justice, especially through the doctrinal evolution of “public policy” and “patent illegality” in cases such as ONGC v Saw Pipes and Associate Builders v DDA. Against this background, this article pursues two aims. First, it examines whether AI-assisted arbitration, in which AI tools provide analytical and drafting support but humans retain decisional authority, is feasible within the existing statutory framework of the 1996 Act and the jurisprudence on Section 34. Second, it critically evaluates whether such AI integration is normatively desirable for Indian-seated arbitrations, balancing putative efficiency gains against systemic concerns relating to transparency, bias, data protection, explainability, and accountability.
The article argues that the 1996 Act, read purposively and in light of international practice, already permits a human‑in‑the‑loop model of AI-assisted arbitration without legislative amendment, provided that the use of AI is disclosed, party participation rights are respected, and the tribunal’s reasoning remains intelligibly its own. However, the absence of express statutory or soft‑law guidance generates uncertainty for Section 34 review, particularly around procedural regularity, public policy, and patent illegality where AI tools hallucinate facts, rely on extra‑record material, or embed undisclosed biases. To reconcile technological innovation with due process, the article proposes a calibrated reform strategy comprising (a) India‑specific soft‑law guidelines modelled on the CIETAC and Ciarb frameworks, (b) measured amendments to the 1996 Act recognizing the permissibility of AI assistance while codifying non-delegation and disclosure duties, and (c) a Section 34‑sensitive standard that treats AI‑related defects as grounds for challenge only where they result in demonstrable prejudice to party rights.