From the PhilPapers forum on Social and Political Philosophy:

2009-11-28
Machine Rights
Suppose there were a machine (M) which could pass a very strong version of the Turing test.

The jury is still out as to whether that would mean the machine has some kind of sentience. And it might seem natural to think that we can't decide whether that machine has rights (or which rights it has) until we know whether it is sentient (or what kind of sentience it has). For we typically think humans have rights by virtue either of their having interests or of their having something one might call "dignity." Interests seem most naturally to involve sentience on the part of the interested party. And whatever "dignity" is, it seems fairly clear that it too involves sentience.

(Not that every bearer of rights has to be sentient, but it is typically thought that a bearer of rights must at least belong to a kind whose members typically are sentient.)

So if we don't know whether M is sentient--even if M can pass a strong Turing test--then it'd seem we don't know whether M is a bearer of rights. My view, however, is that if M can pass a sufficiently strong Turing test, then M is a bearer of rights regardless of whether M is sentient or not.

For suppose we are trying to decide whether to treat M as though M has certain rights. Our decision, in order to be practically rational, should turn on the difference it makes whether or not we treat M that way. And there is a Turing test strong enough to ensure that M responds to our treatment exactly as a human would respond to the same treatment. This means that the difference made by treating (or not treating) M as though M has rights is the same as the difference made by treating (or not treating) some human as though that human had rights. Since we ought to treat the human as though the human has these rights, since treating M in the same way makes the same difference, and since our decision should turn on the difference made in each case, it follows that we should treat M as though M has these rights.

But what does it mean to say that we should treat M as though M has these rights, except that M does in fact have these rights? "We should always, in every circumstance, treat Y as though it is Z," while it doesn't strictly imply that Y is Z, is nevertheless (when adopted as a principle of action) practically indistinguishable from the statement "Y is Z." If I am willing to say "In every circumstance, I shall treat Y as though it is Z," then I should also be willing to say "Y is Z." One fairly trivial argument for this point: if I were not willing to say "Y is Z," that would itself be a circumstance in which I am not treating Y as though it were Z, which means I am not following the principle "I shall always treat Y as though it is Z" after all.

That's an incredibly brief summary of an argument that certain machines can have rights regardless of whether they are sentient. If I'm right, then it's nice that we don't have to solve metaphysical quandaries about consciousness and personhood and so on before we know what to do, politically and ethically, with artificial intelligences that act a lot like us.

I'm interested to hear what others have to say about this.