Acta Analytica 18 (30-31):161-175 (2003)
Strong AI presupposes (1) that Super-Searle (henceforth 'Searle') comes to know that the symbols he manipulates are meaningful, and (2) that there cannot be two or more semantical interpretations for the system of symbols that Searle manipulates such that the set of rules constitutes a language comprehension program for each interpretation. In this paper, I show that Strong AI is false and that presupposition #1 is false, on the assumption that presupposition #2 is true. The main argument of the paper constructs a second program, isomorphic to Searle's, to show that if someone, say Dan, runs this isomorphic program, he cannot possibly come to know what its mentioned symbols mean, because they do not mean anything to anybody. Since Dan and Searle do exactly the same thing, except that the symbols they manipulate are different, neither Dan nor Searle can possibly know whether the symbols they manipulate are meaningful (let alone what they mean, if they are meaningful). The remainder of the paper responds to an anticipated Strong AI rejoinder, which, I believe, is a necessary extension of Strong AI.
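The isomorphic-program construction at the heart of the abstract can be sketched in code. The rule table, symbols, and relabeling below are hypothetical illustrations, not taken from the paper: a purely syntactic rule-following program is copied under an arbitrary bijection onto meaningless tokens, and the copy runs step for step in parallel with the original, which is the sense in which nothing in the rule-following itself could tell Searle (or Dan) whether his symbols mean anything.

```python
# A minimal sketch (assumed example, not the paper's actual program) of a
# rule-based symbol-manipulation program and an isomorphic copy obtained by
# relabeling every symbol via a bijection.

# Hypothetical Chinese-room-style rules: (state, input symbol) -> output symbol.
rules = {
    ("greeting", "ni-hao"): "ni-hao",
    ("question", "ma"): "shi-de",
}

# An arbitrary bijection onto made-up tokens that mean nothing to anybody.
relabel = {
    "greeting": "q1", "ni-hao": "t7", "question": "q2",
    "ma": "t3", "shi-de": "t9",
}

# Dan's program: the image of Searle's rules under the relabeling.
iso_rules = {(relabel[s], relabel[x]): relabel[y]
             for (s, x), y in rules.items()}

def run(program, state, symbol):
    """Apply one rule: a purely syntactic table lookup, no semantics involved."""
    return program[(state, symbol)]

# The two runs are isomorphic step for step: relabeling commutes with execution.
out = run(rules, "greeting", "ni-hao")
iso_out = run(iso_rules, relabel["greeting"], relabel["ni-hao"])
assert relabel[out] == iso_out
```

Since `iso_rules` is just `rules` seen through `relabel`, any behavioral test Searle could pass, Dan passes too; the difference in what (if anything) the symbols mean is invisible from inside the execution.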
Keywords: Functionalism, Language, Meaning, Semantics, Searle, J.
References found in this work
John R. Searle (1980). Minds, Brains and Programs. Behavioral and Brain Sciences 3 (3):417-57.
Ned Block (1980). What Intuitions About Homunculi Don't Show. Behavioral and Brain Sciences 3 (3):425.
Douglas R. Hofstadter & Daniel C. Dennett (1981). Reflections. In D. R. Hofstadter & D. C. Dennett (eds.), The Mind's I: Fantasies and Reflections on Self and Soul. New York: Basic Books.
Similar books and articles
Simone Gozzano (1997). The Chinese Room Argument: Consciousness and Understanding. In Matjaz Gams, M. Paprzycki & X. Wu (eds.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press 43-231.
Mark Sprevak (2007). Chinese Rooms and Program Portability. British Journal for the Philosophy of Science 58 (4):755-776.
Mikhail Kissine (2011). Misleading Appearances: Searle on Assertion and Meaning. [REVIEW] Erkenntnis 74 (1):115-129.
Neal Jahren (1990). Can Semantics Be Syntactic? Synthese 82 (3):309-28.
B. Jack Copeland (1993). The Curious Case of the Chinese Gym. Synthese 95 (2):173-86.
Steffen Borge (2007). A Modal Defence of Strong AI. In Dermot Moran & Stephen Voss (eds.), The Proceedings of the Twenty-First World Congress of Philosophy. The Philosophical Society of Turkey 127-131.
Larry Hauser, Searle's Chinese Room Argument. Field Guide to the Philosophy of Mind.
Lawrence Richard Carleton (1984). Programs, Language Understanding, and Searle. Synthese 59 (May):219-30.
Georges Rey (2003). Searle's Misunderstandings of Functionalism and Strong AI. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press 201-225.
Andrew Melnyk (1996). Searle's Abstract Argument Against Strong AI. Synthese 108 (3):391-419.
Added to index: 2009-01-28