Behavioral and Brain Sciences 3 (1):111-32 (1980)
Abstract: The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach (the "proprietary vocabulary hypothesis") is that there is a natural domain of human functioning (roughly what we intuitively associate with perceiving, reasoning, and acting) that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis. Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the "cognitive impenetrability condition." Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind (called its "functional architecture") from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the "degrees of freedom" problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations. This in turn requires that the fixed architectural function and the algorithms be independently validated. It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.
Similar books and articles
C. Glymour (1994). On the Methods of Cognitive Neuropsychology. British Journal for the Philosophy of Science 45 (3):815-35.
David J. Chalmers (2011). A Computational Foundation for the Study of Cognition. Journal of Cognitive Science 12 (4):323-357.
Selmer Bringsjord (1998). Cognition is Not Computation: The Argument From Irreversibility. Synthese 113 (2):285-320.
Gordana Dodig Crnkovic & Susan Stuart (eds.) (2007). Computation, Information, Cognition: The Nexus and the Liminal. Cambridge Scholars Press.
Gualtiero Piccinini & Andrea Scarantino (2011). Information Processing, Computation, and Cognition. Journal of Biological Physics 37 (1):1-38.
Paul R. Thagard (2002). How Molecules Matter to Mental Computation. Philosophy of Science 69 (3):497-518.
James H. Fetzer (1997). Thinking and Computing: Computers as Special Kinds of Signs. [REVIEW] Minds and Machines 7 (3):345-364.
Gualtiero Piccinini & Andrea Scarantino (2010). Computation Vs. Information Processing: Why Their Difference Matters to Cognitive Science. Studies in History and Philosophy of Science Part A 41 (3):237-246.
John Haugeland (1987). Book Review: Computation and Cognition: Toward a Foundation for Cognitive Science, Zenon W. Pylyshyn. [REVIEW] Philosophy of Science 54 (2):309.