LEGAL STATUS OF AN "E-PERSON": FROM A BINARY OPPOSITION TO A MODULAR APPROACH
Abstract
The rapid advancement of artificial intelligence (AI) challenges traditional legal frameworks, particularly concerning liability. The European Parliament's 2017 proposal to create the legal status of an "electronic person" for sophisticated autonomous systems ignited a critical debate on whether machines can or should be granted legal personality. This article scrutinizes that controversial concept.
Purpose. The study aims to critically analyze the "e-personhood" proposal, expose its fundamental flaws, and introduce a more viable alternative framework for conceptualizing the legal status of AI.
Methods. The research combines a comparative legal analysis of key EU documents, including the European Parliament's Resolution and the reports of the EESC and the AI HLEG, with a review of influential scholarly publications in law and AI ethics.
Results. The analysis concludes that granting legal personality to AI is a flawed approach: it creates a dangerous "responsibility gap" and is ethically questionable. The study finds that existing legal instruments, such as strict product liability and insurance schemes, can be adapted to address damage caused by AI more effectively.
Conclusion. The binary "person-or-thing" approach is insufficient for regulating AI. The article puts forward the hypothesis of a "modular approach" to AI's legal status: instead of full personhood, this framework endows AI systems with specific, limited "modules" of legal capacity tailored to their function and degree of autonomy. This pragmatic model ensures compensation for victims while firmly anchoring ultimate legal responsibility with human developers, manufacturers, and operators.
References
2. Bertolini, A. & Episcopo, F. (2021). The Expert Group's Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: a critical assessment. European Journal of Risk Regulation. https://doi.org/10.1017/err.2021.30
3. Bryson, J., Diamantis, M. & Grant, T. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25, 273–291. https://doi.org/10.1007/s10506-017-9214-9
4. European Economic and Social Committee. (2017). Opinion – Artificial intelligence – The consequences of artificial intelligence for the (digital) single market, production, consumption, employment and society. EESC Website. https://www.eesc.europa.eu/en/our-work/opinions-in-formation-reports/opinions/artificial-intelligence-consequences-artificial-intelligence-digital-single-market-production-consumption-employment-and
5. European Parliament. (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). European Parliament Website. https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
6. Soyer, B. & Tettenborn, A. (2023). Artificial intelligence and civil liability – do we need a new regime? International Journal of Law and Information Technology, 30(4), 385–397. https://doi.org/10.1093/ijlit/eaad001
7. Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control. Akron Intellectual Property Journal, 4(2). https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1
8. High-Level Expert Group on AI. (2019). Ethics Guidelines for Trustworthy AI. European Commission Website. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
9. Chesterman, S. (2021). We, the Robots? Regulating Artificial Intelligence and the Limits of the Law. Cambridge University Press. https://doi.org/10.1017/9781009047081.002
10. Shamov, O. (2025). Artificial Intelligence Renders Verdicts: A Developer's Take vs. a Lawyer's Stand. Amazon KDP. https://www.theusreview.com/reviews-1/Artificial-Intelligence-Renders-Verdicts-by-Oleksii-Shamov.html
11. Solaiman, S. (2017). Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artificial Intelligence and Law, 25, 155–179. https://doi.org/10.1007/s10506-016-9192-3