Husserl Studies 8 (2):107-127 (1991)
For over a decade John Searle's ingenious argument against the possibility of artificial intelligence has held a prominent place in contemporary philosophy. This is not just because of its striking central example and the apparent simplicity of its argument. As its appearance in Scientific American testifies, it is also due to its importance to the wider scientific community. If Searle is right, artificial intelligence in the strict sense, the sense that would claim that mind can be instantiated through a formal program of symbol manipulation, is basically wrong. No set of formal conditions can provide us with the characteristic feature of mind, which is the intentionality of its mental contents. Formally regarded, such intentionality is an irreducible primitive. It cannot be analyzed into non-intentional (purely syntactic, symbolic) components. This paper will argue that this objection is based on a misunderstanding. Intentionality is not simply something given which is incapable of further analysis. It only appears so when we mistakenly abstract it from time. When we regard its temporal structure, it shows itself as a rule-governed, synthetic process, one capable of being instantiated both by machines and men.
Similar books and articles
James Mensch (2013). The Question of Naturalizing Phenomenology. Symposium 17 (1):210-228.
Kevin Warwick (2002). Alien Encounters. In John M. Preston & John Mark Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press. 308.
Larry Hauser (2003). Nixin' Goes to China. In John M. Preston & John Mark Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. 123--143.
Dairon Rodríguez, Jorge Hermosillo & Bruno Lara (2012). Meaning in Artificial Agents: The Symbol Grounding Problem Revisited. [REVIEW] Minds and Machines 22 (1):25-34.
Murat Aydede & Guven Guzeldere (2000). Consciousness, Intentionality, and Intelligence: Some Foundational Issues for Artificial Intelligence. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):263-277.
John Mark Bishop (2003). Dancing with Pixies: Strong Artificial Intelligence and Panpsychism. In John M. Preston & John Mark Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.
Stevan Harnad (1989). Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
John M. Preston & John Mark Bishop (eds.) (2002). Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.
Dale Jacquette (1990). Fear and Loathing (and Other Intentional States) in Searle's Chinese Room. Philosophical Psychology 3 (2 & 3):287-304.