John Searle's (1980a) thought experiment and associated (1984) argument is one of the best known and most widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (roughly, someday will) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" with "weak AI". According to weak AI, according to Searle, computers just simulate thought.
Accident: A property or attribute that a (type of) thing or substance can either have or lack while still remaining the same (type of) thing or substance. For instance, I can either be sitting or standing, shod or unshod, and still be me (i.e., one and the same human being). Contrast: essence.
George Lakoff (in his book Women, Fire, and Dangerous Things (1987) and the paper "Cognitive semantics" (1988)) champions some radical foundational views. Strikingly, Lakoff opposes realism as a metaphysical position, favoring instead some supposedly mild form of idealism such as that recently espoused by Hilary Putnam, going under the name "internal realism." For what he takes to be connected reasons, Lakoff also rejects truth-conditional model-theoretic semantics for natural language.
Zombies recently conjured by Searle and others threaten civilized (i.e., materialistic) philosophy of mind and scientific psychology as we know it. Humanoid beings that behave like us and may share our functional organizations and even, perhaps, our neurophysiological makeups without qualitative conscious experiences, zombies seem to meet every materialist condition for thought on offer and yet -- the wonted intuitions go -- are still disqualefied (disqualified for lack of qualia) from being thinking things. I have a plan. Other zombies -- good (qualia eating) zombies -- can battle their evil (behavior eating) cousins to a standoff. Perhaps even defeat them. Familiar zombies and supersmart zombies resist disqualefication, making the world safe, again, for materialism. Behavioristic materialism. Alas for functionalism, good zombies still eat programs. Alas for identity theory, all zombies -- every B movie fan knows -- eat brains.
The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since the evidence for AI is not bad, arguments against it seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target.
The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.” According to weak AI, computers just simulate thought, their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
The abject failure of Turing's first prediction (of computer success in playing the Imitation Game) confirms the aptness of the Imitation Game test as a test of human level intelligence. It especially belies fears that the test is too easy. At the same time, this failure disconfirms expectations that human level artificial intelligence will be forthcoming any time soon. On the other hand, the success of Turing's second prediction (that acknowledgment of computer thought processes would become commonplace) in practice amply confirms the thought that computers think in some manner and are possessed of some level of intelligence already. This lends ever-growing support to the hypothesis that computers will think at a human level eventually, despite the abject failure of Turing's first prediction.
_The Chinese room argument_ - John Searle's (1980a) thought experiment and associated (1984) derivation - is one of the best known and most widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (someday might) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" with "weak AI". According to weak AI, according to Searle, computers just simulate thought.
What Robots Can and Can't Be (hereinafter Robots) is, as Selmer Bringsjord says, "intended to be a collection of formal-arguments-that-border-on-proofs for the proposition that in all worlds, at all times, machines can't be minds" (Bringsjord, forthcoming). In his (1994) "Précis of What Robots Can and Can't Be," Bringsjord styles certain of these arguments as proceeding "repeatedly . . . through instantiations of" the "simple schema".
Hauser defends the proposition that public languages are our languages of thought. One argument for this proposition is the coincidence of productive (i.e., novel, unbounded) cognitive competence with overt possession of recursive symbol systems. Another is phenomenological experience. A third is Occam's razor and the "streetlight principle."
Hauser defends the proposition that our languages of thought are public languages. One group of arguments points to the coincidence of clearly productive (novel, unbounded) cognitive competence with overt possession of recursive symbol systems. Another group relies on phenomenological experience. A third group cites practical and methodological considerations: Occam's razor and the "streetlight principle" (other things being equal, look under the lamp) that motivate looking for instantiations of outer languages in thought first.
In connection with John Searle's denial that computers genuinely act, Hauser considers Searle's attempt to distinguish full-blooded acts of agents (e.g., my raising my arm) from mere physical movements (my arm going up) on the basis of intent. The difference between my raising my arm and my arm's just going up (e.g., if you forcibly raise it), on Searle's account, is the causal involvement of my intention to raise my arm in the former case but not the latter. Yet we mark a similar difference between a robot's raising its arm and its robot arm's just going up (e.g., if you manually raise it). Either robots are rightly credited with intentions, or it is not intention that distinguishes action from mere movement. In either case, full-blooded acts under "aspects" are attributable to robots and computers. Since the truth of such attributions depends on "intrinsic" features of the things, not on the speaker's "intentional stance," they are not merely figurative "as if" attributions.
Hauser replies that the performances in question provide prima facie warrant for attributions of mental properties that appeals to (lack of) consciousness are empirically too vexed and theoretically too ill-connected to override.
Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is no evidence of consciousness besides private experience. The intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts: there is in fact no evidence for me that anyone but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than to give up all claim to know that others think. It would be better to allow that (passing) Turing's Test evidences, even strongly evidences, thought.
It will be found that the great majority, given the premiss that thought is not distinct from corporeal motion, take a much more rational line and maintain that thought is the same in the brutes as in us, since they observe all sorts of corporeal motions in them, just as in us. And they will add that the difference, which is merely one of degree, does not imply any essential difference; from this they will be quite justified in concluding that, although there may be a smaller degree of reason in the beasts than there is in us, the beasts possess minds which are of exactly the same type as ours. (Descartes 1642: 288–289).
My pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion -- Cal thinks -- most would deny. I consider several ways to avoid this conclusion, and find them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards -- e.g., autonomy and self-consciousness -- make it impossible to verify whether anything or anyone (save myself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than generally appreciated.