

One question that arises when considering the personhood of Artificial Intelligence (AI) is whether the technology has a moral status. Moral status is a measure of how much consideration the well-being of an entity deserves. For example, most people believe human life is more valuable than animal life. A utilitarian would hold that anything with the capacity to suffer has a corresponding moral status.

This article will begin by setting out certain definitions, following which it will briefly discuss the various models that describe the moral status of AI.

Such moral status might be a precursor to conferring legal personhood upon AI. Sentience is the ability to feel pain, while sapience pertains to certain human characteristics such as thinking and reasoning. AI may be sapient, but it is not sentient. The debate revolves around whether the reasoning capacity and the emotional quotient of AI should afford it consideration for its well-being. For example, animals have been granted rights and even personhood. Some people, however, believe that AI has no moral status.

Richard Kemp advocates against anthropomorphism with regard to AI. He argues that AI is merely a tool and must be treated as such. On his view, first principles must regulate AI, that is, the law of contracts, torts, and copyright.

Nick Bostrom explains the difference between sentience and sapience in his paper, The Ethics of Artificial Intelligence. Building on this distinction, he argues that AI is not sentient. He suggests a principle of subjective rate of time, arguing that the subjective duration of an event may differ depending on whether the brain is in human form or in digital form. He also draws a parallel between animal rights and the rights of AI. Such capacity-based reasoning is problematic in the context of the mentally ill as well as infants, who lack cognitive abilities. He therefore proposes the idea of Substrate Non-discrimination: essentially, the substrate of an entity must not decide its moral status, provided entities have the same functionality and conscious experience. Whether a brain is digital or biological, the two must be treated equally if they function the same and have the same conscious experience.

This matters especially since the general population does not accord any moral status to AI; that is, no value is placed on the feelings of a robot. Any policy move that contradicts this narrative would have several relevant repercussions and would have to take them into account.

Note: This post by Rithvik Mathar is a part of the TLF Editorial Board Test 2018
