Computation and Cognition: Issues in the Foundations of Cognitive Science
Behavioral and Brain Sciences 3 (1):111-32 (1980)

Authors
Zenon Pylyshyn
Rutgers University - New Brunswick
Abstract
The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach is that there is a natural domain of human functioning that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis. Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the "cognitive impenetrability condition." Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the "degrees of freedom" problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations. This in turn requires that the fixed architectural function and the algorithms be independently validated.
It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.
Keywords: cognitive science; artificial intelligence; computational models; computer simulation; cognition; mental representation; mental process; imagery; philosophical foundations; functionalism; philosophy of mind
DOI 10.1017/s0140525x00002053
