Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence

Dissertation, University of Michigan (1993)

Abstract

The apparently intelligent doings of computers occasion philosophical debate about artificial intelligence (AI). The evidence of AI is not bad; the arguments against AI are: such is the case made here. One argument against AI, currently perhaps the most influential, is considered in detail: John Searle's Chinese room argument (CRA). This argument and its attendant thought experiment (the Chinese room experiment, CRE) are shown to be unavailing against claims that computers can and even do think. The CRA is formally invalid and informally fallacious. The CRE's putative experimental result is not robust and fails to generalize from understanding to other mental attributes as claimed. Further, the CRE depends for its credibility, in the first place, on a dubious tender of the epistemic privilege of overriding all "external" behavioral evidence to first-person disavowals of mental properties like understanding.

Advertised as effective against AI, Searle's argument is an ignoratio elenchi: it feigns to refute AI by disputing the similar claim of "strong AI" or Turing machine functionalism (FUN), which metaphysically identifies minds with programs. AI, however, is warranted independently of FUN: even if the CRA disproved FUN, this would still fail to refute or seriously disconfirm claims of AI. Searle's contention that everyday predications of mental terms of computers are discountable as equivocal "as-if" predications (a contention which, if tenable, would impugn independent seeming-evidence of AI) is unwarranted. Lacking intuitive basis, such accusations of ambiguity require theoretical support. The would-be theoretical differentiation of intrinsic from as-if intentionality that Searle propounds to buttress these allegations of ambiguity against mental attributions to computers depends either on dubious doctrines of objective intrinsicality, according to which meanings are physically in the head, or on even more dubious notions of subjective intrinsicality, according to which meanings are phenomenologically "in" consciousness. Nor would such would-be differentiae, if granted, unproblematically rule out seeming instances of AI. The dubiousness of such dualistic identification of thought with consciousness also undermines the epistemic privileging of the "first person point of view" crucial to Searle's thought experiment.

Links

PhilArchive



Similar books and articles

Nixin' Goes to China. Larry Hauser - 2002 - In John Mark Bishop & John Preston (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. London: Oxford University Press. pp. 123-143.
Chinese Room Argument. Larry Hauser - 2001 - Internet Encyclopedia of Philosophy.
Searle's Chinese Room Argument. Larry Hauser - unknown - Field Guide to the Philosophy of Mind.
A Modal Defence of Strong AI. Steffen Borge - 2007 - In Dermot Moran & Stephen Voss (eds.), Epistemology. The Proceedings of the Twenty-First World Congress of Philosophy. Vol. 6. The Philosophical Society of Turkey. pp. 127-131.
In Defense of Strong AI. Corey Baron - 2017 - Stance 10:15-25.
A Modal Defence of Strong AI. Steffen Borge - 2007 - The Proceedings of the Twenty-First World Congress of Philosophy 6:127-131.
In Defense of Strong AI. Corey Baron - 2020 - Stance 10 (1):38-49.
