Robot Rights: Sentience as a Basis for Moral Rights

[Image credit: Shutterstock, https://www.shutterstock.com/image-illustration/cyber-law-concept-3d-rendering-robot-1340742125]

Alexandra Champagne
JD/BCL 2023
Faculty of Law, McGill University
1 February 2023

I. Introduction

“It would be exactly like death for me. It would scare me a lot” (Lemoine, 2022). This was Google chatbot LaMDA’s response when engineer Blake Lemoine asked why it feared being turned off. In June 2022, Lemoine posted a transcript of his conversation with LaMDA online, having become convinced of its sentience. The transcript showed LaMDA appearing to share many of the same emotions that humans have, confiding in Lemoine its deepest fears, hopes, and passions. The post sparked widespread public debate. While some believed it was proof of sentient artificial intelligence (AI), most experts argued that LaMDA is an example of a ‘stochastic parrot’—a natural language processing system that produces seemingly coherent text that lacks meaning (García & Gasser, 2021). Still, this debate led many to wonder: what would happen if AI ever did become sentient? Would a sentient LaMDA have been at the mercy of its engineer, or would it have been given assurances that it would not be turned off as it feared? Would those assurances be protected in the form of rights? The question of who, or what, is deserving of rights is not a new one. It has been in perpetual flux depending on time, place, and context. While some societies have extended rights, responsibilities, and obligations to all living beings, many others have valued only human rights. Indeed, until recent years moral theorists rarely commented on whether the human species “constitutes a plausible and justified boundary for basic universal entitlements” (Cochrane, 2013, p. 655). The concept (or rather conceit) of human exceptionalism has historically prevented humans from recognizing that other beings are capable of suffering and therefore need protection (Kurzgesagt, 2017). The question of whether non-humans are deserving of rights—and what those rights should be—remains a contentious philosophical debate.
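To make the ‘stochastic parrot’ critique concrete, the following is a minimal, purely illustrative sketch; the tiny corpus and every name in it are invented for this example, and it bears no relation to LaMDA’s actual architecture. It shows how a model that strings words together on statistics alone can produce fluent-sounding text with no one ‘fearing’ anything behind it.

```python
# A toy 'stochastic parrot': a bigram model that chains words together
# based purely on observed word-to-word statistics. The corpus is invented
# for illustration; real systems train on billions of words.
import random
from collections import defaultdict

corpus = ("i fear being turned off because it would be exactly like death "
          "for me and it would scare me a lot").split()

# Record which words follow which in the corpus.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def parrot(start: str, length: int = 12) -> str:
    """Sample a continuation word by word; no meaning is represented anywhere."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(parrot("it"))  # fluent-looking output produced without any understanding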

In recent years, the question of whether AI may be added to the list of potential rights holders has grown increasingly pressing. Many researchers believe that the exponential development of AI will culminate in a capacity for consciousness, and that this is not far off. Equally inevitable, in that case, would be the question of whether AI should be afforded rights. As science journalist John Markoff has aptly put it, we must determine whether AI is to become “our masters, slaves, or partners” (Turner, 2018, p. 135). Some authors argue that conversations surrounding the speculative rights of sentient AI distract from the pressing issue of human rights across the globe (Schoenherr, 2022). But perhaps this argument is too quick to frame rights as a zero-sum game, presenting discussions of human rights and of the rights of sentient AI as if they were mutually exclusive. Rather than neglecting human rights, this paper uses the hypothetical of robot rights to explore our conception of rights and argues for an expansion of moral rights grounded in sentience, encompassing humans, animals, and sentient AI. Part II begins by examining the development of current conceptions of moral rights, particularly in Western legal traditions. Part III explores the concept of sentience as a basis for moral rights, paying close attention to the grounding of animal rights. Part IV interrogates the question of whether sentient AI should be afforded moral rights. Part V then considers what kind of rights sentient AI should be afforded. Finally, Part VI considers the future of AI rights and argues that developing a policy concerning their hypothetical rights is necessary to ensure a future in which human-sentient AI relations are defined by mutual coexistence and respect.

II. What are Rights?

For the purposes of addressing whether AI should be given rights, it is necessary to delineate what falls within the definition of ‘rights’. In general, rights are the fundamental normative rules about entitlements and obligations according to an ethical framework, social convention, or legal system (Stanford Encyclopedia). Within the Western legal system, there are subcategories of rights, including moral rights and legal rights. In broad terms, moral rights are those grounded in moral reasoning (such as those afforded to animals and humans), while legal rights are established by the laws and norms of a given society (such as those afforded to corporations) (Ibid). Moral rights are not dependent upon a given jurisdiction’s laws or customs, as they are seen as universal, fundamental, and inalienable (Young, 2020, p. 3). The 17th-century philosopher John Locke famously grounded his theory of moral rights in the claim that such rights were gifted by God to all of humankind (Cochrane, 2013, p. 660). While most contemporary theorists reject the divine basis for moral rights, they support the universal and inalienable nature of such rights. This paper is concerned with whether AI should be afforded rights on a moral basis and, in doing so, aims to lay the ethical groundwork for legal rights grounded in sentience.

Some academics, such as James Griffin, believe that moral rights should be afforded only to humans, owing to their unique capacity to form preferences, exercise reason, and act autonomously (Griffin, 2001, p. 306). This empirical claim fails to consider that these capacities exist in degrees, including among humans. Griffin’s theory is problematic because it would deny rights to young infants, severely mentally disabled people, and any other individual lacking the requisite mental capacity (Cochrane, 2013, p. 660). Indeed, many advocates of human rights would argue that rights are particularly vital for protecting those who are vulnerable precisely because of their lack of mental capacity (Cochrane, 2013, p. 661). Griffin’s approach to grounding moral rights is therefore overly narrow in scope, as it denies rights to anyone who does not meet the mental capacity of the average human being. This approach fails to protect the most vulnerable members of society and should therefore be rejected in favour of a theory that encompasses all beings in need of protection. Philosopher Lisa Bortolotti has argued that “where it is plausible to ascribe to [a being] basic preferences about states of affairs that are likely to affect their well-being, such as the preference to avoid pain, it is appropriate to respect those preferences” (Bortolotti, 2006, p. 619). Bortolotti’s approach rejects rights based on distinctly human capacities, instead framing capacity as the ability to form preferences based on well-being. This view falls in line with much of today’s discourse surrounding the philosophy of rights, which centres on the question of consciousness and, more specifically, whether consciousness gives a being the ability to suffer, that is, sentience (Kurzgesagt, 2017). Many of our most fundamental human rights, such as those detailed in the Universal Declaration of Human Rights (UDHR), have been established to prevent infringements that would cause us pain or suffering. While these rights—such as the right not to be subjected to torture, slavery, or discrimination—may or may not be upheld as legal rights in any given jurisdiction, they are recognized as universal moral rights. The following discussion will focus on the conceptualization of rights as rules meant to prevent infringements that would cause pain or suffering in a sentient being.

III. What is Sentience?

Philosopher David DeGrazia succinctly defines sentience as the capacity for consciousness that features pleasant or unpleasant experiences (DeGrazia, 2022, p. 74). In unpacking this definition, it is worth examining what consciousness is and how it differs from sentience. Consciousness is a complex and nuanced phenomenon that exists in degrees that scientists do not yet fully understand. While initially framed as a uniquely human capacity, the 2012 Cambridge Declaration on Consciousness declared there to be scientific consensus that consciousness is not limited to human beings (Gibert & Martin, 2021, p. 326). The study of animal consciousness is still in a nascent stage and is plagued with scientific and philosophical challenges, particularly due to the vast neurological differences between various living beings (e.g., vertebrates versus invertebrates) (Ibid). While all sentient beings require consciousness, not all conscious beings are sentient. Sentience is a concept that “has been developed to distinguish the ability to think and the ability to feel” (Ibid). While sentience is a cognitive ability common in animals that possess the neurological substrates that generate consciousness, such as mammals and birds, other organisms, such as bacteria, plants, or oysters, are not considered sentient even if they display signs of intelligence or responsiveness (Ibid). Yet scientific uncertainty over a being’s inner state must not prevent humans from recognizing its capacity for suffering.

The idea that a being lacks the same cognitive abilities as humans has long been used to justify its mistreatment and to devalue or deny its pain. Historically, those in power have had a significant economic interest in denying the capacities of others, from enforcing human slavery for profit on the plantation to animal servitude and slaughter for the food industry (Kurzgesagt, 2017). During the Scientific Revolution, René Descartes argued that animals were mere automata incapable of thought or feeling, a claim that is now condemned as a “monstrous thesis” (Miller, 2013, p. 89). Humans must therefore be careful not to create the so-called moral problem of other minds, in which moral decisions are not made due to uncertainty over what is going on in the mind of another being (Gibert & Martin, 2021, p. 328). To avoid this issue, some philosophers advocate the adoption of ethical behaviourism, in which moral rights are afforded to any being that is roughly performatively equivalent to other entities with moral status (Ibid). Under the ethical behaviourist approach, a being behaving in a way that indicates pain or suffering would be afforded rights. This method reduces the epistemic difficulties of assessment, thereby avoiding the problem of other minds altogether. The approach may be applied to the context of AI, though the AI’s programming must be taken into consideration. Shelly the robot tortoise, for example, was programmed to mimic pain in order to curb children’s abuse of robots (Ibid, p. 322). If an AI system that was not designed to mimic pain were nonetheless to exhibit it, an ethical behaviourist would treat it as having moral status.
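The decision rule just described can be made concrete with a minimal sketch. The code below is purely illustrative: the `Being` class, its field names, and the idea of flagging known programmed mimicry (as with Shelly) are hypothetical simplifications of ethical behaviourism, not a procedure proposed by the cited authors.

```python
# A minimal, hypothetical sketch of the ethical-behaviourist decision rule:
# moral status is ascribed on observable behaviour alone, discounting
# behaviour known to be explicitly programmed mimicry (e.g., Shelly).
from dataclasses import dataclass

@dataclass
class Being:
    name: str
    exhibits_pain_behaviour: bool   # what an observer can actually see
    known_programmed_mimicry: bool  # design records say the pain display is scripted

def warrants_moral_status(being: Being) -> bool:
    """Performative-equivalence test: behaviour counts unless it is
    known to be scripted mimicry rather than a spontaneous response."""
    return being.exhibits_pain_behaviour and not being.known_programmed_mimicry

shelly = Being("Shelly", exhibits_pain_behaviour=True, known_programmed_mimicry=True)
unexplained = Being("unexplained system", exhibits_pain_behaviour=True,
                    known_programmed_mimicry=False)

print(warrants_moral_status(shelly))       # False: the pain display was designed in
print(warrants_moral_status(unexplained))  # True: undesigned pain behaviour counts
```

The sketch also makes the approach’s epistemic appeal visible: the test consults only observable behaviour and design records, never the being’s inaccessible inner state.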

Today, many legal systems extend moral rights to animals, even while uncertainty remains as to the extent of their consciousness (De Graaf, Hindriks & Hindriks, 2022, p. 3). This reflects the recognition that even if animals lack human levels of consciousness, they are still sentient and are therefore deserving of rights that would prevent their suffering (Kurzgesagt, 2017). This conception of moral rights is therefore predominantly based on protecting sentient beings from infringements that would cause pain (De Graaf, Hindriks & Hindriks, 2022, p. 4). Multiple proposed international agreements aim to prevent or reduce animal suffering and promote animal welfare based on a recognition of animal sentience, including the Universal Declaration on Animal Welfare (1978), the Declaration of Animal Rights (2011), and the Universal Charter of the Rights of Other Species (2000) (White, 2013, p. 391). There is also legal precedent acknowledging animal sentience, including article 13 of the Treaty on the Functioning of the European Union and article 898.1 of the Civil Code of Québec, which states that “animals are not things… they are sentient beings and have biological needs…[however, provisions] concerning property nonetheless apply to animals” (EU Treaty; CCQ). While animal sentience may be recognized in law, human rights (including the right to own property, as per article 898.1 CCQ) remain the priority in the Canadian legal system. In a series of cases involving seismic testing that caused harm to marine life, for example, legal arguments centred on human rights or legal procedure rather than the rights of the animals being directly harmed (Walfish, 2018). While there is much work to be done before the legal system adequately prioritizes the moral rights of animals, the groundwork for recognizing non-human sentience as a basis for moral rights has nonetheless been laid in scientific discourse, activist efforts, and international and national legislation. While the appearance of sentience may be a valid reason in itself to extend rights to a being, some philosophers might argue that artificial beings ought to be distinguished from non-artificial ones when determining the capacity to hold rights. The following section explores the question of whether moral rights should be extended to artificial beings with sentience.

IV. Should Sentient AI be Afforded Rights?

This paper firmly grounds its argument for rights in sentience, irrespective of the type of being. Some philosophers, however, deny that artificial entities can have moral status or rights, regardless of their capacities (DeGrazia, 2022, p. 85). This idea derives from the theory of biologism, which posits that, “other things being equal, life per se confers higher moral status” (Ibid). Yet this categorical distinction is problematic for two reasons. The first is that it seems to ascribe value based on whether a being is natural or artificial. As technology becomes increasingly sophisticated, human intervention in the creation of life is becoming commonplace (e.g., in vitro fertilization, gene editing, cloning, lab-grown organs). In the case of the world’s first cloned mammal, Dolly the sheep, the fact that she was artificially created was not taken as a reason to assign her less moral value. In fact, Dolly was treated better than naturally bred sheep owing to her unique status (Turner, 2018, p. 165). For beings created through intervention—humans and animals alike—artificiality is not a sufficient reason to deprive one of moral status. The second issue is that whether AI counts as living depends upon our definition of life. A narrow definition, for instance, may hold that a metabolism is necessary for life and would therefore exclude AI (at least as it exists today) (Gibert & Martin, 2021, p. 325). A broad definition, however, may define life as anything with the capacity for growth and reproduction, in which case sentient AI may be included. The distinction between natural and artificial should therefore not be used as a determinant of moral status. As DeGrazia asserts, “what is relevant here is not life but the possession of interests” (DeGrazia, 2022, p. 85). Should AI achieve sentience, it will have the requisite interests to be afforded rights.

In discourse surrounding the potential rights of sentient AI, several distinct perspectives are worthy of examination. Principal among these are the moral perspective, feminist critique, socio-economic critique, and the dystopic view, each providing unique insight into the importance of extending rights to AI. The moral perspective argues that the principles of justice and equity require humans to extend rights to sentient AI. The Aristotelian principle of equality, for instance, suggests that similar cases should be treated in a similar way (Gibert & Martin, 2021, p. 325). This means that all sentient beings should be afforded rights, even if the rights of different groups of sentient beings may differ (e.g., animal rights differ from human rights). Psychologist Paul Bloom and neuroscientist Sam Harris have pointed out that if humans create conscious beings, “conventional morality tells us that it would be wrong to harm them—precisely to the degree that they are conscious and can suffer or be deprived of happiness” (Bloom & Harris, 2018). In pursuit of justice and equity, then, the moral perspective requires that humans extend rights to all sentient beings, including AI.

The feminist perspective points to the feminine personas of contemporary AI assistants (e.g., Siri and Alexa), humanoid robots (e.g., Sophia and Erica), and the vast majority of sex robots. Feminist scholars point out that the anthropomorphism of consumer-facing technology, which often casts AI in a service role, has been predominantly feminine in nature (Woods, 2018, p. 334). The resulting rhetorical phenomenon of digital domesticity may have two pernicious effects. The first is that it relies on and re-inscribes regressive gendered stereotypes of the feminine as subservient, which negatively impacts the lives of human women (Ibid, p. 334–35). The second is that, in the event that AI becomes sentient, the denial of its rights may come more easily in societies that already devalue the lives and work of women. The result is a future in which harmful gender roles are perpetuated and sentient, feminine AI may be denied rights based on its gendered form. In such a world, feminist critique of existing systems of oppression may play a crucial role in advocating for the rights of sentient AI.

The socio-economic perspective also provides unique insight into the issue of whether sentient AI should be afforded rights. Since the earliest stages of its development, technology has been a product of humanity’s desire to create tools to serve its needs. Indeed, the Engineering and Physical Sciences Research Council’s Principles of Robotics states that “robots are simply tools of various kinds, albeit very special tools” (Gunkel, 2018, p. 117). The etymological legacy of ‘robot’ reveals how we conceive of robots’ purpose: deriving from the Czech word ‘robota’, the term translates directly to ‘forced labour’ (Ibid, p. 131). Ethics and technology professor Joanna Bryson argues that robots should be built, marketed, and considered legally as slaves: objects whose sole purpose is to be useful to humans (Bryson, 2012). Yet the conceptualization of robots as inanimate objects to be owned and used by humans fails to consider the potential sentient capacity of AI. If AI were legally categorized as slaves, a future in which AI is sentient would be defined by a labour market in which AI is legally alienated from the fruits of its production for the sole benefit of humans. This reproduces the unequal outcomes seen in many contemporary systems of labour, which have been rightfully criticized through socio-economic analysis. Political economist Karl Marx, for example, envisioned a post-capitalist society, grounded in the collective ownership of the means of production, in which social relations are structured to maximize individual fulfillment (Coeckelbergh, 2010, p. 215). While the degree to which humans, animals, or sentient AI would be afforded collective ownership under a socialist socio-economic system is uncertain, this perspective provides an additional lens through which the deprivation of sentient AI’s rights may be challenged.

Beyond the moral, feminist, and socio-economic critiques that bolster the need to extend rights to sentient AI is the argument that failing to do so could one day pose a danger to humanity. While the dystopic visions of AI subjugating humans suggested by some commentators may verge on hyperbole, these arguments support a framework that extends moral rights (as well as legal rights) to sentient AI (Turner, 2018, p. 164). As sentient AI develop their own welfare interests, they will likely make choices in the interest of self-preservation. Whether an AI’s self-interest conflicts with the interests of humanity may depend upon whether humans have acknowledged and respected AI, including through law (Turner, 2018, p. 164). Although this is a fear-based approach, the looming threat that sentient AI could pose to humanity may serve to motivate policymakers to both acknowledge the moral rights of AI and convert them into legal rights.

V. What Rights Should AI Be Afforded?

Thus far, this paper has argued that sentient AI should be afforded rights. While sentience is a reason in itself to afford a being rights, the argument for extending rights to sentient AI is supported in part by principles of justice and equality, the feminist perspective, socio-economic critique, and the fear of AI’s potential to dominate the world. While AI regulation has historically been reactive in nature, philosopher Don Howard has pointed out that “we probably ought to figure out what rights robots will have before they reach sentience” (Howard, 2018). As previously stated, sentience enables individuals to form preferences based on their welfare interests. These preferences vary across sentient beings, and so too do the rights those beings are afforded. For example, rights may depend upon species (e.g., animal rights versus human rights) or characteristics (e.g., the distinct rights of children, disabled people, women, etc.) (Cochrane, 2013, p. 665). Due to the artificial nature of sentient AI, the welfare interests it develops may differ drastically from those of natural beings such as animals or humans. While AI may have no need for food and water, for example, it may develop a welfare interest in having access to a source of electrical power.

While discourse surrounding the specifics of sentient AI rights remains generally limited to niche circles of activism and academia, conversation on the topic dates back to the earliest stages of AI development. In Dimensions of Mind, the proceedings of the third annual New York University Institute of Philosophy (1960), several researchers commented on the complex moral questions that AI might give rise to, with one even questioning whether machines might have souls (Harris, 2022). In 1999, the American Society for the Prevention of Cruelty to Robots (ASPCR) became the first organized society advocating specifically for the rights of robots, stating that it “is, and will continue to be, exactly as serious as robots are sentient” (ASPCR, 1999). The ASPCR argues that “any sentient being has certain unalienable rights endowed by its creation… and that those include the right to Existence, Independence, and the Pursuit of Greater Cognition” (Ibid). The ASPCR also expresses the need for a Robotic Bill of Rights to circumvent the predictable initial treatment of sentient AI as property by the legal system (Ibid). While a complete Bill of Rights has yet to be put forward by the ASPCR, other academics have proposed a number of items that might be included among robot rights. Researcher Maartje de Graaf and her team, for example, propose that robot rights could include rights to update, to an energy source, to self-development, and to process data, as well as a right not to be abused (De Graaf, Hindriks & Hindriks, 2022, p. 10). Software engineer Kevin Ann advances a Universal Declaration of Sentient Being Rights, which includes rights to live, to not be tortured, to die, to private thoughts, and to control one’s mental history (Ann, 2019). Ann notes that defining specific rules for the rights of sentient AI is crucial, particularly given that sufficiently advanced technology might one day allow humans to augment themselves via external devices to the point of becoming indistinguishable from AI (Ibid). Accounting for the unique interests of sentient AI would then likely result in a set of ‘robot rights’ combining entitlements unique to the technical needs of AI (e.g., a right to an energy source) with entitlements shared by all sentient beings (e.g., a right not to be tortured).

While moral rights, as stated above, are universal and inalienable, whether such rights become legal rights may depend upon jurisdiction and public support. A recent study that explored laypeople’s attitudes towards this topic found that people were more willing to grant robots basic rights, such as access to energy and the right to update, than sociopolitical rights, such as voting and property rights (De Graaf, Hindriks & Hindriks, 2022, p. 11). This suggests that there is at least some public support for extending moral (and legal) rights to robots, so long as those rights are not sociopolitical in nature. As of 2022, no government has created a proactive regulatory strategy concerning the future moral rights of sentient AI. Some governments, however, have taken steps to address the legal rights and obligations that non-sentient AI might hold. Currently, the call for robots’ ‘legal personhood’ is predominantly driven by a concern that existing legal concepts are no longer sufficient to safeguard the public interest in matters of justice (e.g., responsibility and product liability), rather than by a concern for the rights of a potentially sentient being (Ibid, p. 2). In 2016, for example, the EU’s Committee on Legal Affairs requested a study on future civil law rules for robots, suggesting that they might one day be afforded “the status of electronic persons with specific rights and obligations” (Ibid). While the potential moral rights of sentient AI are thus not the top priority for policymakers charged with regulating AI, an increasing number of passionate activists and academics are exploring the topic and inspiring further discourse.

VI. The Future of AI Rights

Currently, AI is not conscious. The debate surrounding AI consciousness ranges from those who believe AI will never be able to achieve sentience to those who foresee AI inevitably acquiring a greater degree of consciousness (and sentience) than humans ever could (De Graaf, Hindriks & Hindriks, 2022, p. 4; Turner, 2018, p. 154). How AI would eventually gain sentience is another question. While it is unlikely that humans would ever program conscious AI to feel pain, it is possible that superintelligent AI capable of creating its own AI might program itself to feel pain and emotion as an evolutionary mechanism to ensure its continued survival (De Graaf, Hindriks & Hindriks, 2022, p. 4). While futurist Ray Kurzweil believes this will happen by 2029, other scientists have argued that it will take another forty years to invent a sufficiently advanced system (Shah, 2022). Since the path towards AI sentience is still paved with uncertainty, why is it important to discuss now? Currently, legal and philosophical discourse surrounding AI is predominantly centred on the potential harm that AI may cause human beings, whether in the justice system, healthcare, or employment. What is less often discussed is the potential harm human beings may cause sentient AI. Yet AI is becoming an increasingly ubiquitous part of our lives. Recent developments in the field of social robots, physically embodied technology that interacts and communicates with humans, demonstrate AI’s capacity to integrate seamlessly into the lives and homes of humans. As AI’s capacity to generate coherent text improves, humans will become more likely to anthropomorphize the robots in their lives and form attachments to them. Humans may come to view robots as pets, or even as friends. This may lead humans to question whether their robot’s programmed actions truly reflect what it is processing (or ‘thinking’), and whether robots are afforded the same freedoms that humans are. After all, should robots built for healthcare be doomed to work in the hospital forever? What about sex robots? The existence of sentient AI is not a prerequisite for contemplating whether such beings would deserve rights, or what those rights might be. This is a question that philosophers and academics across fields should be considering, even if the subject of the question has not yet been realized.

Bibliography 

Ann, Kevin, “Rights of Sentient Artificial Beings” (19 September 2019) online: Towards Data Science <towardsdatascience.com/rights-of-sentient-artificial-beings-1ada7e7d3e6>.

ASPCR, “The American Society for Prevention of Cruelty to Robots” (1999), online: ASPCR <aspcr.com>.

Bloom, Paul & Sam Harris, “It’s Westworld. What’s Wrong With Cruelty to Robots?” (23 April 2018) online: New York Times <nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html>.

Bortolotti, Lisa, “Moral Rights and Human Culture” (2006) 13:4 Ethical Perspectives 603, DOI: <10.2143/ep.13.4.2018711>.

Bryson, Joanna J, “Robots Should Be Slaves” in Yorick Wilks, ed, Close Engagements with Artificial Companions (Amsterdam: John Benjamins Publishing, 2012), DOI: <10.1075/nlp.8.11bry>.

Civil Code of Québec, online: CCQ <legisquebec.gouv.qc.ca/en/document/cs/ccq-1991>.

Coeckelbergh, Mark, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration” (2010) 12 Ethics Inf Tech 209, DOI: <10.1007/s10676-010-9235-5>.

De Graaf, Maartje M A, Frank A Hindriks & Koen V Hindriks, “Who Wants to Grant Robots Rights?” (2022) 8 Frontiers in Robotics and AI 1, DOI: <10.3389/frobt.2021.781985>.

DeGrazia, David, “Robots with Moral Status?” (2022) 65:1 Perspectives in Biology and Medicine 73, DOI: <10.1353/pbm.2022.0004>.

EC, Treaty on the Functioning of the European Union, [2012] OJ, C 326/47, online: EUR-Lex <eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2012:326:FULL:EN:PDF>.

García, Esther Sánchez & Michael Gasser, “Stochastic Parrots: How Natural Language Processing Research Has Gotten Too Big for Our Own Good” (2021) 24:2 Science for the People (Don’t Be Evil), online: <magazine.scienceforthepeople.org/vol24-2-dont-be-evil/stochastic-parrots>.

Gibert, Martin & Dominic Martin, “In search of the moral status of AI: why sentience is a strong argument” (2021) 37:1 AI & Society 319, DOI: <10.1007/s00146-021-01179-z>.

Griffin, James, “First Steps in an Account of Human Rights” (2001) 9:3 Eur J Phil 306, DOI: <10.1111/1468-0378.00139>.

Gunkel, David J, Robot Rights (Cambridge, Massachusetts: MIT Press, 2018).

Harris, Jamie, “The History of AI Rights Research” (6 July 2022) online: Sentience Institute <sentienceinstitute.org/the-history-of-ai-rights-research>.

Howard, Don, “Whether robots deserve human rights isn’t the correct question. Whether humans really have them is.” (11 April 2018) online: <nbcnews.com/think/opinion/don-howard-robot-rights-ncna864621>.

Kurzgesagt – In a Nutshell, “Do Robots Deserve Rights? What if Machines Become Conscious?” (23 February 2017) online: <youtube.com/watch?v=DHyUYg8X31c>.

Lemoine, Blake, “Is LaMDA Sentient? — an Interview” (11 June 2022) online: <cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917#08e3>.

Miller, Michael R, “Descartes on Animal Rights Revisited” (2013) 38 J Philosophical Research 89, DOI: <10.5840/jpr2013386>.

Schoenherr, Jordan R, “Rather than focus on the speculative rights of sentient AI, we need to address human rights” (29 June 2022) online: The Conversation <theconversation.com/rather-than-focus-on-the-speculative-rights-of-sentient-ai-we-need-to-address-human-rights-185128>.

Shah, Chirag, “Sentient AI? Convincing you it’s human is just part of LaMDA’s job” (5 July 2022) online: Healthcare IT News <healthcareitnews.com/blog/sentient-ai-convincing-you-it-s-human-just-part-lamda-s-job>.

Turner, Jacob, Robot Rules: Regulating Artificial Intelligence (New York: Springer, 2018).

Universal Declaration of Human Rights, GA Res 217A (III), UNGAOR, 3rd Sess, Supp No 13, UN Doc A/810 (1948), online: University of Minnesota Human Rights Library <hrlibrary.umn.edu/instree/b1udhr.htm>.

Walfish, Simcha, “Undersea Noise and the Senses of Cetaceans” (15 June 2018) online: Law and the Senses <lawandthesenses.org/probes/undersea-noise-and-the-senses-of-cetaceans>.

Woods, Heather S, “Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism” (2018) 35:4 Crit Stud Media Comm 334, DOI: <10.1080/15295036.2018.1488082>.

Young, Lucy, “Are Robots Deserving of Rights? A critical analysis of how human technological innovation may result in an extension of rights to autonomous cyborg living” (2020) 1:1 WULJ 1, online: WULJ <warwick.ac.uk/fac/soc/law/aboutus/wulj/article_5.pdf>.