arXiv (2020)

Authors
Jobst Landgrebe
State University of New York (SUNY)
Barry Smith
State University of New York, Buffalo
Abstract
The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability should be accepted also as a necessary condition of AGI, and we provide a description of the nature of human dialogue in particular and of human language in general against this background. We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue. If this is so, then we can conclude that a Turing machine also cannot possess AGI, because it fails to fulfil a necessary condition thereof. At the same time, however, we acknowledge the potential of Turing machines to master dialogue behaviour in highly restricted contexts, where what is called “narrow” AI can still be of considerable utility.
Keywords: linguistics of dialogue; variance; reinforcement learning; limits of artificial intelligence; General Artificial Intelligence




Similar books and articles

Undecidability in the Imitation Game. Y. Sato & T. Ikegami - 2004 - Minds and Machines 14 (2):133-143.
Peeking Behind the Screen: The Unsuspected Power of the Standard Turing Test. Robert M. French - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):331-340.
Making the Right Identification in the Turing Test. Saul Traiger - 2000 - Minds and Machines 10 (4):561-572.
Who's Afraid of the Turing Test? Dale Jacquette - 1993 - Behavior and Philosophy 20 (2):63-74.
Can a Machine Be Conscious? How? Stevan Harnad - 2003 - Journal of Consciousness Studies 10 (4-5):67-75.
The Constructibility of Artificial Intelligence (as Defined by the Turing Test). Bruce Edmonds - 2000 - Journal of Logic, Language and Information 9 (4):419-424.
Turing's Two Tests for Intelligence. Susan G. Sterrett - 1999 - Minds and Machines 10 (4):541-559.

Added to PhilPapers index: 2019-06-14
