In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum (1992)
Abstract: More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that, despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no privileged status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accident, rather than to any absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract causal structure that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970), and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power …
Similar books and articles
Andrew Melnyk (1996). Searle's Abstract Argument Against Strong AI. Synthese 108 (3):391-419.
Steffen Borge (2007). A Modal Defence of Strong AI. In Dermot Moran & Stephen Voss (eds.), Epistemology. The Proceedings of the Twenty-First World Congress of Philosophy, Vol. 6. The Philosophical Society of Turkey.
Larry Hauser (2003). Nixin' Goes to China. In John M. Preston & John Mark Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.
David J. Chalmers (2011). A Computational Foundation for the Study of Cognition. Journal of Cognitive Science 12 (4):323-357.
Stevan Harnad (1989). Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
David J. Chalmers (1994). On Implementing a Computation. Minds and Machines 4 (4):391-402.
Dale Jacquette (1990). Fear and Loathing (and Other Intentional States) in Searle's Chinese Room. Philosophical Psychology 3 (2 & 3):287-304.
Koji Tanaka (2004). Minds, Programs, and Chinese Philosophers: A Chinese Perspective on the Chinese Room. Sophia 43 (1):61-72.
Mark Sprevak (2007). Chinese Rooms and Program Portability. British Journal for the Philosophy of Science 58 (4):755-776.