Abstract
The concept of legal personality determines who or what can hold rights and bear responsibilities under the law. Traditionally, this status belongs to human beings and extends to artificial legal entities such as corporations, and in some jurisdictions even to idols, temples, and rivers. With the rise of the digital age, a new question has emerged: should artificial intelligence (AI) and robots be recognized as legal persons? They can make decisions, perform tasks, and even cause harm, yet they do not experience emotion or form intent as humans do. In traditional legal thought, intent and will were central to the recognition of legal personhood. That assumption is now being questioned, as AI systems can operate independently in ways that affect people and society.
AI technologies are no longer merely theoretical; they are actively deployed in finance, healthcare, transportation, defense, and governance. These systems often make autonomous decisions with significant consequences. Self-driving cars, medical diagnostic tools, and humanoid robots exhibit a degree of independence that complicates traditional doctrines of liability and accountability. When an AI system causes harm, it becomes difficult to determine who is responsible: the manufacturer, the programmer, or the user?
In this context, modern jurisprudence faces a dilemma: should AI and robots be treated as legal persons in the manner of corporations, with rights and duties of their own, or should they remain mere instruments of human agency?