
2009-11-28
Machine Rights
Suppose there were a machine (M) which could pass a very strong version of the Turing test.

The jury is still out as to whether that would mean the machine has some kind of sentience. And it might be natural to think that we can't decide whether that machine has rights (or which rights it has) until we know whether it is sentient (or what kind of sentience it has). For typically, we think humans have rights by virtue either of their having interests, or else, their having something one might call "dignity." Interests seem most naturally to involve sentience on the part of the interested. And whatever "dignity" is, again it seems fairly clear that it involves sentience.

(Not that every bearer of rights has to be sentient, but at least, it is typically thought that a bearer of rights needs to be of a kind which typically does have sentience.)

So if we don't know whether M is sentient--even if M can pass a strong Turing test--then it'd seem we don't know whether M is a bearer of rights. My view, however, is that if M can pass a sufficiently strong Turing test, then M is a bearer of rights regardless of whether M is sentient or not.

For suppose we are trying to decide whether to treat M as though M has certain rights or not. Our decision, in order to be practically rational, should turn on the difference it makes whether we treat M as though M has rights or not. And there is a Turing test sufficiently strong to ensure that M responds to our treatment in a way identical to the way a human would respond to our treatment. This means that the difference it would make whether we treat M as though M has rights or not is the same as the difference it would make whether we treated some human as though that human had rights. Since we ought to treat the human as though it has these rights, and since treating M in the same way makes the same difference, and since our decision should turn on the difference made in each case, it follows that we should treat M as though M has these rights.

But what does it mean to say that we should treat M as though M has these rights, except that M does in fact have these rights? "We should always, in every circumstance, treat Y as though it is Z", while it doesn't imply that Z has rights, is nevertheless (when adopted as a principle of action) practically indistinguishable from the statement "Y is Z."  If I am willing to say "In every circumstance, I shall treat Y as though it is Z," then I should also be willing to say "Y is Z." One fairly trivial argument for this point is just that if I were not willing to say "Y is Z" then that'd itself be a circumstance in which I'm not treating Y as though it were Z, meaning I'm not following the principle "I shall always treat Y as though it is Z" after all.

That's an incredibly brief summary of an argument that certain machines can have rights regardless of the question whether they are sentient. If I'm right, then it's nice that we don't have to solve metaphysical quandaries about consciousness and personhood and so on before we know what to do, politically and ethically, with artificial intelligences that act a lot like us.

I'm interested to hear what others have to say about this.

2009-12-01
Machine Rights
Reply to Kris Rhodes
Kris, it seems that we could plausibly say that we ought to cultivate dispositions to treat animals in certain ways, e.g. to treat them as though they had rights, for reasons that have nothing to do with the question of whether they have rights in the first place. That is, we may want to cultivate various fortunate dispositions which are not, in themselves, fitting or sensitive to the relevant reasons. So we could say that cultivating the disposition would in fact make it the case that we tended to do the right thing (e.g. maximise the animal's pleasure), even though the specific wrong-making feature of the action is not the fact that its "rights" were violated. In order to make the case that the animal, or the machine, has rights, you must modify your claim to say that if an agent who is fully informed and perfectly sensitive to all the reasons would in fact necessarily treat the machine or animal as if it had rights, then the machine or animal has rights.


2009-12-01
Machine Rights
Reply to Kris Rhodes
What constitutes 'machine'?
Is the State a machine? Is a corporation a machine? They both have either legal authority or rights. They are both treated as if they were sentient. Indeed, a great deal of resources is spent making sure the State is respected and protected from damage. Does this qualify?

2009-12-01
Machine Rights
Reply to Kris Rhodes

I do apologize if my coffee is not working yet. But this thought occurred to me while reading the post.

Do humans act as though they have rights? Is it 'being human', or is it a conditioning of environment and experience? And what would my treating one who has 'rights' look like in my behavior and language? Would it involve an attitude of consideration for the other's point of view (implying a sentience on their part), or is it more of a politeness on my part ("I'm sorry, but I'm dropping a bomb on you")? Is there a looked-for reaction by the sentient machine in the test? That is, why is it that in the Terminator movies the self-aware machine becomes hostile and launches nuclear weapons at humanity? Why doesn't the machine, created in total servitude to humans, have an attitude of complete masochism, with no other desire than to please humanity (much like a calculator, with 'desire' defined quite loosely)?

And as an aside, does a human in a coma have rights? Could that human pass the Turing test? I see people in bars who might not pass such a test at various stages of the day. But this will take us off onto a discussion of rights rather than machines. For I wonder whether rights aren't anything but socially constructed, a shared meaning, such that a being doesn't have rights as part of its existence but in its proximity to other beings.


2010-06-10
Machine Rights
Reply to Kris Rhodes
Are you thinking of the Turing test in the way that Turing did, as a "conversation game"? If so, then one problem with your suggestion is that a crucial part of what makes human beings rights-bearers is the fact that, in some circumstances, we can tell when a person foresees the possibility of either injury or benefit to him- or herself. And contrary to what Turing apparently believed, it seems dubious to me that this capacity can be detected via linguistic communication alone; we also need to be able to figure out whether a being is capable of evincing aversive or attractive non-linguistic behavior. So the machines that you describe would at the very least have to be robots, rather than just program scripts on a computing machine.

2010-06-19
Machine Rights
Reply to Kris Rhodes
I think part of the difficulty is that we assume that the juridical criteria for personhood as a bearer of rights should be co-extensive with biological or physical criteria, so that any creature that fits a certain physical profile would be considered a juridical subject. But how is such a link justified? In the case of the corporation, it seems to be totally flouted, since it is only in the most metaphorical sense that a corporation could be considered a "person". Perhaps instead of trying to show that an entity has rights by showing how similar it is to an organism, we should begin with a different question: how do (or did, at the time of their emergence) rights function in a given social setting, why did their existence seem intuitive and important, and would non-human rights maintain/extend or compromise this usefulness?