Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence

Dissertation, University of Michigan (1993)
Abstract
The apparently intelligent doings of computers occasion philosophical debate about artificial intelligence (AI). Evidence of AI is not bad; arguments against AI are: such is the case argued for here. One argument against AI--currently, perhaps, the most influential--is considered in detail: John Searle's Chinese room argument (CRA). This argument and its attendant thought experiment (CRE) are shown to be unavailing against claims that computers can and even do think. CRA is formally invalid and informally fallacious. CRE's putative experimental result is not robust and fails to generalize from understanding to other mental attributes as claimed. Further, CRE depends for its credibility, in the first place, on a dubious tender of the epistemic privilege of overriding all "external" behavioral evidence to first-person disavowals of mental properties such as understanding. Advertised as effective against AI, Searle's argument is an ignoratio elenchi, feigning to refute AI by disputing a similar claim of "strong AI" or Turing machine functionalism (FUN), which metaphysically identifies minds with programs. AI, however, is warranted independently of FUN: even if CRA disproved FUN, this would still fail to refute or seriously disconfirm claims of AI. Searle's contention that everyday predications of mental terms of computers are discountable as equivocal "as-if" predications--which would impugn independent seeming evidence of AI if tenable--is unwarranted. Lacking intuitive basis, such accusations of ambiguity require theoretical support. The would-be theoretical differentiation of intrinsic from as-if intentionality that Searle propounds to buttress allegations of ambiguity against mental attributions to computers, however, depends either on dubious doctrines of objective intrinsicality, according to which meanings are physically in the head, or on even more dubious notions of subjective intrinsicality, according to which meanings are phenomenologically "in" consciousness. Nor would such would-be differentiae as these, if granted, unproblematically rule out seeming instances of AI. The dubiousness of this as-if dualistic identification of thought with consciousness also undermines the epistemic privileging of the "first person point of view" crucial to Searle's thought experiment.