John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence. Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers equal to those of brains." On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.
My pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion -- Cal thinks -- most would deny. I consider several ways to avoid this conclusion, and find them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards -- e.g., autonomy and self-consciousness -- make it impossible to verify whether anything or anyone (save myself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than generally appreciated.
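A minimal schematic rendering of the syllogism the abstract describes (the premise labels are mine, not the paper's):

\begin{align*}
&\textbf{P1.} && \text{Cal calculates.}\\
&\textbf{P2.} && \text{Calculating is thinking.}\\
&\textbf{C.} && \text{Therefore, Cal thinks.}
\end{align*}

The paper's question is which premise a denier of C can afford to reject, and at what cost.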
Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is no evidence of consciousness besides private experience. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts -- there is in fact no evidence for me that anyone else but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than give up all claim to know others think. It would be better to allow that (passing) Turing's Test evidences -- even strongly evidences -- thought.
Harnad's proposed "robotic upgrade" of Turing's Test, from a test of linguistic capacity alone to a Total Turing Test of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is "no evidence" [p.45] of consciousness besides "private experience" [p.52]. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from "as if" thought on the basis of (presence or lack of) consciousness (thus rejecting Turing testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts -- "there is in fact no evidence for me that anyone else but me has a mind" [p.45]. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian "faith" [p.52] in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room "Experiment" than give up all claim to know others think. It would be better to allow that Turing's Test evidences -- even strongly evidences -- thought.
The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since evidence of AI is not bad, arguments against it seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target.
The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.” According to weak AI, computers just simulate thought, their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
The abject failure of Turing's first prediction (of computer success in playing the Imitation Game) confirms the aptness of the Imitation Game test as a test of human level intelligence. It especially belies fears that the test is too easy. At the same time, this failure disconfirms expectations that human level artificial intelligence will be forthcoming any time soon. On the other hand, the success of Turing's second prediction (that acknowledgment of computer thought processes would become commonplace) in practice amply confirms the thought that computers think in some manner and are possessed of some level of intelligence already. This lends ever-growing support to the hypothesis that computers will think at a human level eventually, despite the abject failure of Turing's first prediction.
What Robots Can and Can't Be (hereinafter Robots) is, as Selmer Bringsjord says, "intended to be a collection of formal-arguments-that-border-on-proofs for the proposition that in all worlds, at all times, machines can't be minds" (Bringsjord, forthcoming). In his (1994) "Précis of What Robots Can and Can't Be," Bringsjord styles certain of these arguments as proceeding "repeatedly . . . through instantiations of" the "simple schema".
John Searle's (1980a) thought experiment and associated (1984a) argument is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (roughly, someday will) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just simulate thought: their seeming understanding isn't real understanding (just as-if), their seeming calculation only as-if calculation, etc.; nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
Zombies recently conjured by Searle and others threaten civilized philosophy of mind and scientific psychology as we know it. Humanoid beings that behave like us and may share our functional organizations and even, perhaps, our neurophysiological makeups without qualetative conscious experiences, zombies seem to meet every materialist condition for thought on offer and yet -- the wonted intuitions go -- are still disqualefied from being thinking things. I have a plan. Other zombies -- good zombies -- can battle their evil cousins to a standoff. Perhaps even defeat them. Familiar zombies and supersmart zombies resist disqualefication, making the world safe, again, for materialism. Behavioristic materialism. Alas for functionalism, good zombies still eat programs. Alas for identity theory, all zombies -- every B movie fan knows -- eat brains.
Hauser considers John Searle's attempt to distinguish acts from movements. On Searle's account, the difference between me raising my arm and my arm's just going up (e.g., if you forcibly raise it) is the causal involvement of my intention to raise my arm in the former, but not the latter, case. Yet we distinguish a similar difference between a robot's raising its arm and its robot arm just going up (e.g., if you manually raise it). Either robots are rightly credited with intentions or it's not intention that distinguishes action from mere movement. In either case acts are attributable to robots. Since the truth of such attributions depends not on the speaker's "intentional stance" but on "intrinsic" features of the things, they are not merely figurative "as if" attributions. Gunderson allows that internally propelled programmed devices (Hauser Robots) do act but denies that they have the mental properties such acts seem to indicate. Rather, given our intuitive conviction that these machines lack consciousness, such performances evidence the dementalizability of acts. Hauser replies that the performances in question provide prima facie warrant for attributions of mental properties that considerations of consciousness are insufficient to override.
George Lakoff (in his book Women, Fire, and Dangerous Things (1987) and the paper "Cognitive semantics" (1988)) champions some radical foundational views. Strikingly, Lakoff opposes realism as a metaphysical position, favoring instead some supposedly mild form of idealism such as that recently espoused by Hilary Putnam, going under the name "internal realism." For what he takes to be connected reasons, Lakoff also rejects truth conditional model-theoretic semantics (MTS) for natural language. This paper examines an argument, given by Lakoff, against realism and MTS. We claim that Lakoff's argument has very little, if any, impact on linguistic semantics.
From the fact that experiencing is in the head, nothing follows about the nature, location - or even the existence - of the experiencing's presumed object. It does not follow that direct realism "cannot possibly be true"; much less that "the experienced world is wholly locked up within one's brain"; much less still, that it must be "located" in some spiritual "place" outside of physical space or some "higher-dimensional space". Direct realism is not only consistent with all the known neurophysiological facts, it coheres far better with surrounding and grounding science - and the neuroscience itself - than the Smythian alternative towards which Crooks tends; and it may be had for a reasonable naïve phenomenological cost.
The apparently intelligent doings of computers occasion philosophical debate about artificial intelligence (AI). Evidence of AI is not bad; arguments against AI are: such is the case for AI. One argument against AI--currently, perhaps, the most influential--is considered in detail: John Searle's Chinese room argument (CRA). This argument and its attendant thought experiment (CRE) are shown to be unavailing against claims that computers can and even do think. CRA is formally invalid and informally fallacious. CRE's putative experimental result is not robust and fails to generalize from understanding to other mental attributes as claimed. Further, CRE depends for its credibility, in the first place, on a dubious tender of the epistemic privilege of overriding all "external" behavioral evidence to first-person disavowals of mental properties like understanding. Advertised as effective against AI, Searle's argument is an ignoratio elenchi, feigning to refute AI by disputing a similar claim of "strong AI" or Turing machine functionalism (FUN), metaphysically identifying minds with programs. AI, however, is warranted independently of FUN: even if CRA disproved FUN, this would still fail to refute or seriously disconfirm claims of AI. Searle's contention that everyday predications of mental terms of computers are discountable as equivocal "as-if" predications--impugning independent seeming-evidence of AI if tenable--is unwarranted. Lacking intuitive basis, such accusations of ambiguity require theoretical support. The would-be theoretical differentiation of intrinsic intentionality from as-if intentionality Searle propounds to buttress allegations of ambiguity against mental attributions to computers, however, depends either on dubious doctrines of objective intrinsicality, according to which meanings are physically in the head, or on even more dubious notions of subjective intrinsicality, according to which meanings are phenomenologically "in" consciousness. Neither would such would-be differentiae as these unproblematically rule out seeming instances of AI if granted. The dubiousness of as-if dualistic identification of thought with consciousness also undermines the epistemic privileging of the "first person point of view" crucial to Searle's thought experiment.
It will be found that the great majority, given the premiss that thought is not distinct from corporeal motion, take a much more rational line and maintain that thought is the same in the brutes as in us, since they observe all sorts of corporeal motions in them, just as in us. And they will add that the difference, which is merely one of degree, does not imply any essential difference; from this they will be quite justified in concluding that, although there may be a smaller degree of reason in the beasts than there is in us, the beasts possess minds which are of exactly the same type as ours. (Descartes 1642: 288–289).
Hauser defends the proposition that public languages are our languages of thought. One argument for this proposition is the coincidence of productive (i.e., novel, unbounded) cognitive competence with overt possession of recursive symbol systems. Another is phenomenological experience. A third is Occam's razor and the "streetlight principle."
Hauser defends the proposition that our languages of thought are public languages. One group of arguments points to the coincidence of clearly productive (novel, unbounded) cognitive competence with overt possession of recursive symbol systems. Another group relies on phenomenological experience. A third group cites practical and methodological considerations: Occam's razor and the "streetlight principle" (other things being equal, look under the lamp), which together motivate looking for instantiations of outer languages in thought first.
George Lakoff (in his book Women, Fire, and Dangerous Things (1987) and the paper "Cognitive semantics" (1988)) champions some radical foundational views. Strikingly, Lakoff opposes realism as a metaphysical position, favoring instead some supposedly mild form of idealism such as that recently espoused by Hilary Putnam, going under the name "internal realism." For what he takes to be connected reasons, Lakoff also rejects truth conditional model-theoretic semantics for natural language.
In connection with John Searle's denial that computers genuinely act, Hauser considers Searle's attempt to distinguish full-blooded acts of agents from mere physical movements on the basis of intent. The difference between me raising my arm and my arm's just going up, on Searle's account, is the causal involvement of my intention to raise my arm in the former, but not the latter, case. Yet we distinguish a similar difference between a robot's raising its arm and its robot arm just going up. Either robots are rightly credited with intentions, or it is not intention that distinguishes action from mere movement. In either case full-blooded acts under "aspects" are attributable to robots and computers. Since the truth of such attributions depends on "intrinsic" features of the things, not on the speaker's "intentional stance," they are not merely figurative "as if" attributions.
_The Chinese room argument_ - John Searle's (1980a) thought experiment and associated (1984) derivation - is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (someday might) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just simulate thought: their seeming understanding isn't real understanding (just as-if), their seeming calculation only as-if calculation, etc.; nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
[Grace as God’s Self-giving and Respect] Employing the methods of formal and transcendental analysis, the topic is introduced by starting from the experience of love between human beings. Love proves itself to be a unity of self-giving and respect. Both of these related elements of the notion of love are then put to the test in the light of that mode of divine loving called ‘grace’. By studying the history of dogma, this notion of grace is brought to a full definition.
Hauser replies that the performances in question provide prima facie warrant for attributions of mental properties, warrant that appeals to consciousness are empirically too vexed and theoretically too ill-connected to override.
Accident: A property or attribute that a (type of) thing or substance can either have or lack while still remaining the same (type of) thing or substance. For instance, I can either be sitting or standing, shod or unshod, and still be me (i.e., one and the same human being). Contrast: essence.
Against the claim that folk psychology is a theory, I contend that folk psychology is not empirically vulnerable in the same way theories are, and has evaluative functions that make it irreplaceable by a scientific theory. It is neither would-be nor has-been science.