The Chinese room revisited: artificial intelligence and the nature of mind

Publication date: 2007-08-23

Author:

Gonzalez, Rodrigo

Abstract:

Charles Babbage began the quest to build an intelligent machine in the nineteenth century. Despite completing neither the Difference Engine nor the Analytical Engine, he was aware that the use of mental language to describe the functioning of such machines was figurative. Reversing this cautious stance, Alan Turing put forward two decisive ideas that helped give birth to Artificial Intelligence: the Turing machine and the Turing test. Nevertheless, a philosophical problem arises from regarding intelligence simulation and make-believe as sufficient to establish that programmed computers are intelligent and have mental states, especially given the nature of mind and its characteristic first-person viewpoint. The origin of Artificial Intelligence is undoubtedly linked to the accounts that inspired John Searle to coin the term strong AI, that is, the view that simply equates computers and minds. Emphasising in particular the divergence between algorithmic processes and intentional mental states, the Chinese Room thought experiment shows that, since the mind is embodied and able to realise when linguistic understanding takes place, mental states require material implementation, a point that directly conflicts with accounts that reduce the mind to the functioning of a programmed computer.

The experience of linguistic understanding, with its typical quale, leads to other important philosophical issues. Searle’s theory of intentionality holds that intentional mental states have conditions of satisfaction and appear in semantic networks; thus people know when they understand and what terms are about. In contrast, a number of philosophers maintain that consciousness is only an illusion and plays no substantial biological role. However, consciousness is a built-in feature of the system.
Moreover, neurological evidence suggests that conscious mental states, qualia and emotions enhance survival chances and form an important part of the phenomenal side of mental life and its causal underpinnings. This points to an important gap between simulating a mind and replicating the properties that allow a system to have mental states and consciousness. On this score, the Turing test and the evidence it offers clearly overestimate simulation and verisimilar make-believe, since such evidence is insufficient to establish that programmed computers have a mental life. In summary, this dissertation criticises views which hold that programmed computers are minds and that minds are nothing but computers. The arguments in favour of such an equation all fail to properly reduce the mind and its first-person viewpoint. Accordingly, the burden of proof still lies with the advocates of strong AI and with those willing to deny fundamental parts of the mind to make room for machine intelligence.