David Chalmers (ANU, NYU)
In J. Dinsmore (ed.), _The Symbolic and Connectionist Paradigms: Closing the Gap_. Lawrence Erlbaum, pp. 25–48 (1992)
More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding.

The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity.

As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract causal structure that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power …