Legal personhood for artificial intelligence: citizenship as the exception to the rule

  • Original Article
AI & SOCIETY

Abstract

The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Given the several decades’ worth of writing on the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.


Notes

  1. Far more commonly referred to as “artificial” intelligence given that the intelligence is non-biological in nature.

  2. Either resulting from ego or lack of definitive empirical research into the topic.

  3. Leaving leeway to grant rights for potentially intelligent organisms not found on Earth. It may also apply to humans with bionically enhanced bodies, as their intelligence would not be naturally found in our terrestrial environment.

  4. E.g., an anthropomorphic mechanised entity.

  5. Determining what these differences are will require the full development of AGI and other NBI systems, along with an unbiased opinion about what their basic needs are.

  6. As can be shown in Schwitzgebel and Garza’s “No-Relevant-Difference Argument,” which they claim is their main argument for granting MI systems rights and possesses a “humanocentric [sic]” value standard.

  7. Alternatively, need to be granted to, as it were.

  8. In this context, Barfield’s arguments are less targeted towards the argument that legal personhood is necessary and more towards the case of the civil liberties enjoyed by an MI system and how lack of legal personality affects them. Though these are similar ideas in a broad context, discussing whether an MI system can make a claim to civil liberties is only possible insofar as the MI personality possesses citizenship in any given nation—and arguably, it could possess a limited number of civil liberties as a nationless entity. Whether the International Court of Justice would allow such an argument to pass is still unclear, however, as no such case has been brought before them regarding the citizenship status of MI personalities at the time of this writing.

  9. Again, making the case that computer software has no legal personhood at the time of this article’s writing. See Bridy, p. 21.

  10. Hristov also displays the discrepancy between humans under the law and MI systems using Naruto v. Slater, effectively displaying how non-human authors are not traditionally considered to be legally capable of owning a creative copyright. See Hristov, pp. 447–451 for his full argument.

  11. Whether they ever will is a discussion for a different time. However, there are still benefits to us contemplating whether machines will ever feel at all—and if so, what those experiences allow the MI to determine about its environment. This point is moot regarding machine-enhanced human beings, though we must still be concerned as to the degree of emotion felt by these individuals. It may be prudent, for instance, to treat them as sociopathic or emotionally depressed individuals given that their actions will be more unpredictable than those of an MI alone.

  12. See Dehaene (2014) Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking, New York. Contrary to common belief, research such as that compiled by Dehaene is displaying how much more we have in common with animals than we admit. Though consciousness may not be a metaphysical concept that can be proven beyond a shadow of a doubt, we would be ignorant to assume that the behavioural similarities that exist between specific animal groups and humans are nothing more than our personification of them, as argued by Harari and others.

  13. This statement runs contrary to Johansson’s essay. Her essay is quoted here to display one of the proposed adaptations of the Turing Test that may yield a more satisfactory result, whereas Harari is quoted to emphasise the inability of the Turing Test to have any true effect on a human’s opinion regarding the conscious state of an NBI.

  14. Either face-to-face or through writing.

  15. Though with privacy laws becoming as stringent as they have been, any such tests conducted with the MI being filmed with other humans would require waivers to be signed—thus defeating the purpose of the experiment altogether.

  16. Such as those that power Amazon, Google and Netflix.

  17. Whether because someone has not “tagged” someone else in other images or provided a “satisfaction rating” for the product they have bought or the show they have viewed.

  18. Perhaps with the use of genetic programming, as was first suggested by Richard Forsyth in his 1981 publication entitled “BEAGLE—A Darwinian Approach to Pattern Recognition.” Though the field of genetic programming has significantly developed since this publication, Forsyth should not be forgotten as the first academic to use the term “genetic programming” in relation to computer intelligence.

  19. Though this does not mean that NBIs are limited to existing in the mechanised form we have come to view them in. Development towards cybernetic beings may change this understanding—though only insofar as a mechanised brain is implanted into a human form—and is not a pressing concern at the time of this writing.

  20. Even with the implementation of an artificial brain in a human form, the intelligent being will require bioelectrical power.

  21. Exposing electrical components to the elements increases the rate at which a computer system will fail. Thus, they need to be stored in such a manner that they are not exposed to water or other agents that could damage the internal structure of the device.

  22. Following the logic that NBI systems may incorporate bionically enhanced humans, this point is moot for those specific organisms. It must still be emphasised, however, that a bionically enhanced human may develop emotional or spiritual needs or beliefs separate from those held by BIs that may be difficult to integrate into society.

  23. E.g., shelter, a source of energy, systems to cool internal structures.

  24. Thus attempting to circumvent the traditional bias that MI systems cannot be “persons” under the law de lege lata. Such comparisons are necessary to circumvent these currently restrictive definitions and develop precedent to allow a wider variety of NBI systems to gain protections under current and potentially drafted laws.

  25. Such as a smartphone, tablet, laptop, or desktop computer.

  26. Assuming that AGI and other NBI systems will adopt anthropomorphic or quadrupedal forms to navigate the same world humans do.

  27. With other NBI systems, humans, or animals commonly kept by humans as pets.

  28. While humans can live freely in the elements, a shelter is essential in that it protects the body from becoming too chilled (which may suppress the immune system enough to cause illness), provides space to store excess foodstuffs or materials, provides privacy, and protects the skin from excessive sun damage. For computer systems, a shelter would act as both a bastion against foreign particulate (such as sand) and a place to receive power. Until NBI systems gain the ability to protect themselves from the elements (possibly through the use of nanobots to extract foreign matter from their internal systems), we cannot expect them to live out of doors for the same amount of time humans would be able to.

  29. See Stone, p. 468.

  30. See Stone, p. 456, footnote #26.

  31. As speculated by futurists and the author.

  32. See Etymonline.com entry “robot.” Given that the word “robot” had been circulating in English society since 1923, and that several black-and-white films incorporated robots (e.g., The Day the Earth Stood Still [1951], The Earth Dies Screaming [1964]), this idea would have been novel at the time—especially considering that robots were cast as antagonists to humans in the majority of the cultural literature of the period.

  33. This assumption is made to mimic Solum’s argument that an AI system passing the Turing Test would be sophisticated enough to serve as a legal trustee, given that the system would need a combination of logical and abstract information to pass as a human. For instance, we would expect a human to exhibit subtle emotional quirks throughout the exam set forth by Turing. Assuming that testing the AGI would take place face-to-face (as other methods would defeat the purpose of the Test), the AI system necessarily needs to mimic human body motion and tonal inflexions. The system would be unable to do this if it could not “think” similarly to a human.

  34. That is if the courts ultimately decide that the AGI or NBI system does not need to possess a determinable consciousness (which it may be able to exhibit, regardless of whether humans can determine that consciousness to be “true” in nature) to understand the harm being inflicted upon it. To this end, the most straightforward conclusion is that the system is treated like a grown plantation-born slave child and not one in its infancy. This point is moot for humans that are bionically enhanced, as there would theoretically be precedent enough to understand the harm that could be caused to these entities.

  35. Such as a sentencing for death, extended periods in a correctional facility, or community service.

  36. Or otherwise replicated form of the NBI system before its sentencing for criminal charges.

  37. Which is arguably unconstitutional in the USA, where the trial of the replicated NBI system would necessarily need to begin from scratch—meaning that evidence may be unavailable for submission (given that it was used in the prior proceedings), or that the new jury would rule in favour of the replicated NBI’s innocence.

  38. Which is arguably against the notion that someone charged with criminal actions is innocent until proven guilty.

  39. It should be noted that these authors argue that the position for MI having personhood is a weak argument and that presenting a change to a system developed by “people currently recognised as such” would institute a change in how that legal system should function. Their arguments are logically valid in that few humans would actually hold legal personhood—especially considering that humans tend to de-humanise groups considered to be their enemies (or rather, create pseudohumans to justify cruelties in war). By using Solaiman’s criteria for legal personhood, they demonstrate that our conceptual reality surrounding a “legal person” is highly tenuous, which raises the question as to whether legal personhood should be a criterion for granting legal protections.

  40. If any are granted at all, which is a subject that has not even been adequately addressed by the nation of Saudi Arabia. There are further questions to ask, including how NBIs can gain citizenship in nations without a monarchist system of government or being based within a human subject, which cannot be adequately addressed in this paper.

  41. This can be circumvented by demanding restitution from the owner of the deep-learning system—though the legal question as to whether the computer was acting of its own will would never be appropriately examined or pursued if this course is taken. There is also the concern that the owner of the deep-learning system could be wrongfully charged with criminal accusations when their intent deviates from the behaviour of the system.

  42. This intent can be questioned at any point in the software development process, as there exists the potential for the author’s intent to change during the development process.

  43. Given that the system is a logical platform (and supposedly devoid of emotion), legal analysts should be capable of determining if the harm caused by the system was intentional or accidental following the same set of logical rules.

  44. As was mentioned, there exists the possibility that the “will” of the NBI and the will of the programmer will diverge as the NBI develops. Assuming that the information gathered by the NBI will influence how it will proceed to gather future information, that is.

  45. Though this fact could be contested and would require the development of a database tracking each of these genetically modified humans. This would potentially create a world akin to Gattaca, however, and thus the topic of genetic manipulation in vitro should still be carefully considered by lawmakers.

  46. Instead, by the quality of its components.

  47. Alternatively, a body.

  48. We still need to consider that dignity may not be a quality possessed by the NBI when making this claim, and that it will have the means to own property whose ownership is not tied to a human entity.

  49. Whether through the addition or deletion of the code that makes up the NBI’s “personality.”

  50. This is, of course, due to the nature of the programmed code of the NBI. It would be akin to depriving the human body of a single meal. While the human would be annoyed that they had to go hungry, their fundamental personality would not change.

  51. The point at which many, if not all, physicians would state that a patient has died.

  52. Where provided by the law and realistically within the budgetary constraints of a court session when not provided.

  53. Regardless of current copyright or patent held by humans.

  54. Such as human emotion, as would be found in a suit where an AGI or NBI’s actions are emotionally disturbing to a human.

  55. Which are already handled by courts of law and will not unnecessarily increase their caseload as a result.

  56. Alternatively, even with the freedom of African–American slaves in the USA.

  57. As determinations will have to be made regarding how courts of law can punish NBI systems.

  58. This, of course, ignores the fact that these individuals may still take out monetary loans to afford these enhancements. Though not entirely a form of slavery in and of itself, the amounts and rates of these loans will need to be heavily regulated by government authorities to keep the costs of these enhancements from skyrocketing. If there is one thing to be learned from the USA’s stint with “Obamacare” and other similar market-limiting efforts, it is that a lack of reasonable price regulation will inevitably leave this technology affordable only to a select few. The downside to having only certain members of society bionically enhanced is that they effectively form a class of “ruling elites,” using their money and influence to keep other populations too poor or undereducated to reach a level footing with this elite class.

  59. Unlike other animals that occur naturally, or through human intervention.

  60. Sophia the Robot was granted citizenship by the king of Saudi Arabia in October of 2017. Given that Sophia is the first NBI to be granted citizenship, we can assume that Sophia will be the standard to which other NBIs will be compared, as no other computerised entity has been granted this status at the time of this writing.

  61. Though the argument may still remain that these devices are only an extension of a human, and thus cannot independently bear rights. The law will still protect them, but only because these systems are incorporated into a person’s right of expression and various privacy laws.

  62. As suggested in Locke’s works, in that man has a right to the fruits of the “labour of his body and the work of his hands.”

  63. Such as the USA.

  64. For instance: As the law is written, workers in the USA are required to prove their citizenship or immigration status to become legally employed. If the NBI system was developed in the USA, a case would need to be made that it is a US citizen. If this citizenship is denied on the grounds that an NBI system cannot be “born” as a human can, other methods will need to be devised to ensure that the labour done by the NBI is constitutional. Assuming that the system will only require enough time in the work week to debug its software and update itself, niceties like breaks and limits on work hours will matter far less for it than they do for human workers. What the legal system will also need to decide is whether allowing an NBI system to work more than a human (because it does not have the same biological needs) is legal under market competition considerations. Ultimately, this type of decision will determine the speed at which the job market will shrink on a local and national scale. Should the courts determine that the impact of NBI workers far exceeds what they are legally capable of ruling upon, other branches of government will be required to make the determinations judiciaries cannot.

  65. With the exception of MI systems that come into existence through the bionic enhancement of humans.

  66. And though this is a flimsy simile, we cannot say that similar technological advancements have not come about due to our fear that another power would attain a given technology first. The Cold War between the Soviet Union and the USA is a prime example of this phenomenon.

  67. Based on the ideas of Amartya Sen’s Capabilities Approach. Though relatively free of regulations, the Approach argues that developing an individual’s various capacities is much more beneficial to a society than blindly throwing resources into it. By developing an individual’s capacities, one is necessarily increasing the capability of the individual to perform the actions they desire to perform. Done correctly, this can make a society more productive, meaning fulfilled desires, a lack of crippling poverty, and healthier citizens.

  68. For the most part, though other examples may differ from this one.

  69. In the view of the author.

  70. Initially, at least, to the degree that each person is accommodated according to their ability to perform a particular set of tasks (such as the ability to ride a bicycle) or achieve certain things (e.g., starting a family). Long-term expenses may be difficult to track logistically but are not impossible with the proper bookkeeping.

  71. Alternatively, to be able to research.

  72. While this may be farfetched, it is still a potential issue to consider.

  73. Whether this is due to the utter collapse of the USA economy or another nation’s.

  74. If we are to take the perspective that the Founders could have never imagined that we would develop machines sophisticated enough to act like a human.

  75. And on a more significant scale, how the distribution of AGI and similar NBI systems into the public will affect various economies and lifestyles.

References

  • Allen T, Widdison R (1996) Can computers make contracts? Harv J Law Technol 9:25–51

  • Ashrafian H (2015) AIonAI: a humanitarian law of artificial intelligence and robots. Sci Eng Ethics 21:29–40. https://doi.org/10.1007/s11948-013-9513-9

  • Baker LR (2008) The shrinking difference between artifacts and natural objects. In: Boltuc P (ed) Newsletter on philosophy and computers, vol 7, no. 2. American Philosophical Association Newsletter on Philosophy and Computers, pp 1–10

  • Barfield W (2006) Intellectual property rights in virtual environments: considering the rights of owners, programmers and virtual avatars. Akron Law Rev 39:649–700

  • Barrat J (2013) Our final invention: Artificial intelligence and the end of the human era. St. Martin’s, New York

  • Bayamlioglu E (2008) Intelligent agents and their legal status. Ankara B Rev 1:46–54

  • Bridy A (2012) Coding creativity: copyright and the artificially intelligent author. Stanf Technol Law Rev 5:1–28

  • Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law 25:273–291

  • Dehaene S (2014) Consciousness and the brain: deciphering how the brain codes our thoughts. Viking, New York

  • Dowell R (2018) Fundamental protections for non-biological intelligences (or: how we learn to stop worrying and love our robot brethren). Minn J Law Sci Technol 19:305–335

  • Forsyth R (1981) BEAGLE—a Darwinian approach to pattern recognition. Kybernetes 10:159–166. https://doi.org/10.1108/eb005587

  • Griffin A (2017) Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language. Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-. Accessed 10 May 2019

  • Hamman BA (2006) US patent no. US7120021B2. U.S. Pat. Trademark Office, pp 1–56

  • Harari YN (2017) Homo deus: a brief history of tomorrow. HarperCollins Publishers, New York

  • Hristov K (2017) Artificial intelligence and the copyright dilemma. IDEA J Frankl Pierce Cent Intellect Prop 57:431–454

  • Hubbard FP (2011) “Do androids dream?”: Personhood and intelligent artifacts. Temple Law Rev 83:405–474

  • Johansson L (2010) The functional morality of robots. Int J Technoethics 1:65–73. https://doi.org/10.4018/jte.2010100105

  • Locke J (1980) Chapter 5: property. In: McPherson CB (ed) Second treatise of government. Hackett Publishing, Indianapolis, pp 18–29

  • Miller LF (2015) Granting automata human rights: challenge to a basis of full-rights privilege. Hum Rights Rev 16:369–391. https://doi.org/10.1007/s12142-015-0387-x

  • Moses LB (2007) Recurring dilemmas: the law’s race to keep up with technological change. Univ Ill J Law Policy 2007:239–285

  • Omohundro S (2007) Standford computer systems colloquium talk: self-improving AI and the future of computing. Self-aware systems. https://selfawaresystems.com/2007/11/01/standford-computer-systems-colloquium-self-improving-ai-and-the-future-of-computing/. Accessed 10 May 2019

  • Polson N, Scott J (2018) AIQ: how people and machines are smarter together. St. Martin’s, New York

  • Ramachandran G (2009) Against the right to bodily integrity: of cyborgs and human rights. Denver Univ Law Rev 87:1–57

  • Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligence. Midwest Stud Philos 30:98–119

  • Snodgrass MM (1989) The measure of a man. In: Scheerer R (director) Star trek: the next generation, season two, episode 9 (13 February 1989)

  • Solum LB (1992) Legal personhood for artificial intelligences. N C Law Rev 70:1231–1287

  • Stone CD (1972) Should trees have standing? Toward legal rights for natural objects. South Calif Law Rev 45:450–501

  • Thompson MF (2009) Earnings of a lifetime: comparing women and men with college and graduate degrees. InContext 10:1–10

  • Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460

  • Tzafestas SG (2016) Artificial intelligence. In: Tzafestas SG (ed) Roboethics: a navigating overview. Springer International, Cham, pp 25–33

  • Wein LE (1992) The responsibility of intelligent artifacts: toward an automation jurisprudence. Harv J Law Technol 6:103–154

Author information

Correspondence to Tyler L. Jaynes.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jaynes, T.L. Legal personhood for artificial intelligence: citizenship as the exception to the rule. AI & Soc 35, 343–354 (2020). https://doi.org/10.1007/s00146-019-00897-9
