Turing Test, Chinese Room Argument, Symbol Grounding Problem: Meanings in Artificial Agents (2013)

In The American Philosophical Association (ed.), APA Newsletter on Philosophy and Computers, Fall 2013, ISSN 2155-9708. The American Philosophical Association (2013)
Author
Christophe Menant
École Nationale Supérieure d'Électronique, d'Électrotechnique, d'Informatique et d'Hydraulique de Toulouse (ENSEEIHT)
Abstract
The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question "can machines think?" We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into "can AAs generate meanings like humans do?" We correspondingly present the TT, the CRA, and the SGP as being about the generation of human-like meanings. We model and address such a possibility by using the Meaning Generator System (MGS), in which a system submitted to an internal constraint generates a meaning in order to satisfy the constraint. The systems approach of the MGS allows comparing meaning generation in animals, humans, and AAs. The comparison shows that in order to have AAs capable of generating human-like meanings, we need the AAs to carry human constraints. And transferring human constraints to AAs raises concerns stemming from the unknown natures of life and human mind, which are at the root of human constraints. Implications for the TT, the CRA, and the SGP are highlighted. It is shown that designing AAs capable of thinking like humans requires an understanding of the natures of life and human mind that we do not have today. Following an evolutionary approach, we propose as a first entry point an investigation of the possibility of extending a "stay alive" constraint into AAs. Ethical concerns are raised from the relations between human constraints and human values. Continuations are proposed. (This paper is an extended version of an AISB/IACAP 2012 presentation; see the proceedings at http://www.mrtc.mdh.se/~gdc/work/AISB-IACAP-2012/NaturalComputingProceedings-2012-06-22.pdf.)
Keywords: Turing test, Chinese room argument, symbol grounding problem, meaning generation, artificial intelligence, artificial life, constraint satisfaction, evolution, ethics, artificial agent
Similar books and articles

Minds, Machines and Searle. Stevan Harnad - 1989 - Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
Grounding Symbols in the Analog World with Neural Nets. Stevan Harnad - 1993 - Think 2 (1):12-78.
The Constructibility of Artificial Intelligence (as Defined by the Turing Test). B. Edmonds - 2000 - Journal of Logic, Language and Information 9 (4):419-424.
