1 The uniqueness of human beings

In the Western hemisphere and beyond, people are quite convinced that they are unique living creatures. Unique not only in the way that any particular animal is unique because it belongs to a species of its own and is therefore (biologically) distinct from all other species, but also as something that stands out: the crown of creation, as the Abrahamic religions would put it. To be considered unique in this stronger sense, many say, one must possess a particular characteristic that no other living creature has, and one so different that it enables a distinctive way of living. One such trait is often thought to be reason.

2 Reason: A communicational construct or a construct of communication?

In the following, I want to focus on a premise that can be found in many philosophical concepts of reason: namely, the idea that reason is the same in every human being and has not been found in any other living entity. By ‘the same’ I mean that every human being is thought to use reason when assessing different situations, and that, although the rigour with which this is done may differ, the general steps, the way the reasoning unfolds and, more importantly, its results are assumed to be the same. We often suppose that, hypothetically, every human being can come to the same conclusion about the morality or immorality of an action. This conclusion can then be communicated, and the line of reasoning is thereby made publicly manifest in the world.

An immediate problem that arises from this is whether the other person (assuming a two-party communication) actually employed the same reasoning, or whether communication and a subsequent mutual agreement merely created the impression of equal reasoning.

3 Passed Turing Tests

Now let us assume we have an Artificial Intelligence that passes the Turing Test, i.e., when interacting with the program or machine, a human cannot tell whether they are interacting with a machine or with a human being (Turing 1950). Since John Searle's Chinese Room thought experiment (1980), there has been much debate about whether a machine that exhibits human intelligence can actually think or is merely simulating it.
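To make the setup concrete, the following is a minimal sketch (in Python) of the imitation game Turing describes: an interrogator exchanges text with a hidden respondent and must then guess whether it was a machine. The respondent functions and the canned machine reply are purely illustrative assumptions, not part of Turing's paper; the point is only that the interrogator never sees anything except textual output.

```python
import random

# Hypothetical respondents; the names and the canned reply are illustrative
# assumptions, not an actual conversational AI.
def human_reply(question: str) -> str:
    return input(f"{question}\n(human) > ")   # a real person types the answer

def machine_reply(question: str) -> str:
    return "That is a question I would need to think about."  # stand-in for any AI

def imitation_game(questions: list[str]) -> None:
    """The interrogator only ever sees text; the respondent stays hidden."""
    respondent = random.choice([human_reply, machine_reply])
    for q in questions:
        print(respondent(q))
    guess = input("Was that a machine? (y/n) > ").strip().lower() == "y"
    actual = respondent is machine_reply
    print("Correct." if guess == actual else "Wrong - you could not tell.")

if __name__ == "__main__":
    imitation_game(["What do you feel when you read a poem?"])
```

In this toy setting, the test is "passed" whenever the interrogator's guesses are no better than chance over many rounds; everything that follows concerns a system for which that holds.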

In light of the argument outlined in the previous section, I would argue that this question is irrelevant, since we can never know whether another human being possesses and employs the same reason that we do. The only thing that matters is the output we receive, and if this is the same, the difference between simulated and ‘real’ thinking simply vanishes. Intuitively, we might believe that reason is universal and unique to humanity, although the only evidence that this is true is a communicative agreement between us. And if we follow this evidence consistently, we might have to see the other person as a black box that decides which bits of information to transmit to us.

But if we now have an AI that communicates with us the way any other human would, what is the difference between us and this machine? Or, put even more succinctly: if we believe that reason and the communication of its workings make us humans unique, we must either accept this machine as equal to us or conclude that reason is not what makes us unique, which raises the question of whether we might not be unique at all.

4 Humans and machines: Just the same?

The assumption of a machine that passes the Turing Test and thereby takes on the supposed uniqueness-conferring trait of human beings has a variety of implications. One obvious conclusion can be drawn in the realm of ethics: if there is an entity which possesses the feature that underpins our ethics and which we take to make us human, then this entity should be treated as (ethically) equal to ourselves. But how can the ethical terms we use for ourselves be translated into an equivalent machine ethic? How would one define the machine’s dignity or the integrity of its body? Does it have a right to sufficient power? Is it attempted murder to try to cut off its power supply? Would it always be entitled to the best GPU and hardware available? These questions probably sound quite far-fetched and are even harder to answer, but they are quite legitimate if one considers humans and machines to be equal.

Another interesting problem arises if we try to follow our intuition and say: even if the machine has the same capabilities in terms of reason and so on, it is clearly not a human being, simply because it is made of circuits and not cells (see footnote 1). While this seems to be a valid line of thought, we would run the risk of undermining our own ethical standards: if one takes an act that is considered unethical when done to another human being and applies the same act to a machine which passes the Turing Test (see footnote 2) but is not regarded as an equal, this act could now be considered ethical, or at least less unethical. One might be tempted to draw a comparison to the field of animal law and ethics, where ethical standards still exist but are lower than those that apply to humans. But the situation is quite different: if we permit unethical behaviour towards an entity that appears to us exactly as if it were human, this will tear down any rational justification for why we implemented our ethical standards in the first place, since they are mostly founded on the uniqueness of humanity and the superiority of our reason. After all, what value do ethical rules still have if the consequences of violating them differ in at least seemingly identical scenarios? Could such an obvious display of the arbitrariness of ethical rules even lead to a decomposition of our ethical foundations? Perhaps, as AI evolves, we need to question and adjust our understanding of ethics thoroughly.