We consider the symbol grounding problem and apply to it philosophical arguments against Cartesianism developed by Sellars and McDowell: the problematic issue is the dichotomy between inside and outside which the definition of a physical symbol system presupposes. Surprisingly, one can question this dichotomy and still do symbolic computation: a detailed examination of the hardware and software of serial ports shows this.
Classical cognitive science assumes that intelligently behaving systems must be symbol processors that are implemented in physical systems such as brains or digital computers. By contrast, connectionists suppose that symbol-manipulating systems could be approximations of neural network dynamics. Both classicists and connectionists argue that symbolic computation and subsymbolic dynamics are incompatible, though on different grounds. While classicists say that connectionist architectures and symbol processors are either incompatible or the former are mere implementations of the latter, connectionists reply that neural networks might be incompatible with symbol processors because the latter cannot be implementations of the former. In this contribution, the notions of 'incompatibility' and 'implementation' will be criticized to show that they must be revised in the context of the dynamical system approach to cognitive science. Examples of implementations of symbol processors that are incompatible with respect to contextual topologies will be discussed.
One holistic thesis about symbols is that a symbol cannot exist singly, but only as a part of a symbol system. There is also the plausible view that symbol systems emerge gradually in an individual, in a group, and in a species. The problem is that symbol holism makes it hard to see how a symbol system can emerge gradually, at least if we are considering the emergence of a first symbol system. The only way it seems possible is if being a symbol can be a matter of degree, which is initially problematic. This article explains how being a cognitive symbol can be a matter of degree after all. The contrary intuition arises from the way a process of interpretation forces an all-or-nothing character on symbols, leaving room for underlying material to realize symbols to different degrees in a way that Daniel Dennett's work can help illuminate. Holism applies to symbols as interpreted, while gradualism applies to how the underlying material realizes symbols.
I present the symbol grounding problem in the larger context of a materialist theory of content and then present two problems for causal, teleo-functional accounts of content. This leads to a distinction between two kinds of mental representations: presentations and symbols; only the latter are cognitive. Based on Milner and Goodale's dual route model of vision, I posit the existence of precise interfaces between cognitive systems that are activated during object recognition. Interfaces are constructed as a child learns, and is taught, how to interact with its environment; hence, interface structure has a social determinant essential for symbol grounding. Symbols are encoded in the brain to exploit these interfaces, by having projections to the interfaces that are activated by what they symbolise. I conclude by situating my proposal in the context of Harnad's (1990) solution to the symbol grounding problem and responding to three standard objections.
Vera & Simon (1993a) have argued that the theories and methods known as situated action or situativity theory are compatible with the assumptions and methodology of the physical symbol systems hypothesis and do not require a new approach to the study of cognition. When the central criterion of computational universality is added to the loose definition of a symbol system which Vera and Simon provide, it becomes apparent that there are important incompatibilities between the two approaches, such that situativity theory cannot be subsumed within the symbol systems approach. Symbol systems and situativity-theoretic approaches are, and should be seen to be, competing approaches to the study of cognition.
Because intelligent agents employ physically embodied cognitive systems to reason about the world, their cognitive abilities are constrained by the laws of physics. Scientists have used digital computers to develop and validate theories of physically embodied cognition. Computational theories of intelligence have advanced our understanding of the nature of intelligence and have yielded practically useful systems exhibiting some degree of intelligence. However, the view of cognition as algorithms running on digital computers rests on implicit assumptions about the physical world that are incorrect. Recently, the view is emerging of computing systems as goal-directed agents, evolving during problem solving toward improved world models and better task performance. A full realization of this vision requires a new logic for computing that incorporates learning from experience as an intrinsic part of the logic, and that permits full exploitation of the quantum nature of the physical world. This paper proposes a theory of physically embodied cognitive agents founded upon first-order logic, Bayesian decision theory, and quantum physics. An abstract architecture for a physically embodied cognitive agent is presented. The cognitive aspect is represented as a Bayesian decision theoretic agent; the physical aspect is represented as a quantum process; and these aspects are related through von Neumann's principle of psycho-physical parallelism. Alternative metaphysical positions regarding the meaning of quantum probabilities and the role of efficacious choices by agents are discussed in relation to the abstract agent architecture. The concepts are illustrated with an extended example from the domain of science fiction.
Artificial Intelligence, and the cognitivist view of mind on which it is based, represent the last stage of the rationalist tradition in philosophy. This tradition begins when Socrates assumes that intelligence is based on principles and when Plato adds the requirement that these principles must be strict rules, not based on taken-for-granted background understanding. This philosophical position, refined by Hobbes, Descartes and Leibniz, is finally converted into a research program by Herbert Simon and Allen Newell. That research program is now in trouble, so we must return to its source and question Socrates' assumption that intelligence consists in solving problems by following rules, and that one acquires the necessary rules by abstracting them from specific cases. A phenomenological description of skill acquisition suggests that the acquisition of expertise moves in just the opposite direction: from abstract rules to particular cases. This description of expertise accounts for the difficulties that have confronted AI for the last decade.
Cognitive science uses the notion of computational information processing to explain cognitive information processing. Some philosophers have argued that anything can be described as doing computational information processing; if so, it is a vacuous notion for explanatory purposes. An attempt is made to explicate the notions of cognitive information processing and computational information processing and to specify the relationship between them. It is demonstrated that the resulting notion of computational information processing can only be realized in a restrictive class of dynamical systems called physical notational systems (after Goodman's theory of notationality), and that the systems generally appealed to by cognitive science, physical symbol systems, are indeed such systems. Furthermore, it turns out that other alternative conceptions of computational information processing, Fodor's (1975) Language of Thought and Cummins' (1989) Interpretational Semantics, appeal to substantially the same restrictive class of systems.
Challenges for extending the mirror system hypothesis include mechanisms supporting planning, conversation, motivation, theory of mind, and prosody. Modeling remains relevant. Co-speech gestures show how manual gesture and speech intertwine, but more attention is needed to the auditory system and phonology. The holophrastic view of protolanguage is debated, along with semantics and the cultural basis of grammars. Anatomically separated regions may share an evolutionary history.
Since the early eighties, computationalism in the study of the mind has been "under attack" by several critics of the so-called "classic" or "symbolic" approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon's Physical Symbol System Hypothesis and Jerry Fodor's theory of the Language of Thought, usually without taking into account the fact that such approaches are very different as to their methods and aims. Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation. This probably added to the confusion, as many people still consider Pylyshyn's book as paradigmatic of the computational approach in the study of the mind. Since then, cognitive scientists, AI researchers and also philosophers of mind have been asked to take sides on different "paradigms" that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are: computationalism vs. connectionism, computationalism vs. dynamical systems, computationalism vs. situated and embodied cognition, computationalism vs. behavioural and evolutionary robotics. Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the "paradigm (based on the metaphor) of the computer" (in the following, PoC). PoC is the (rather vague) statement that the mind functions "as a digital computer". Actually, PoC is a restrictive version of computationalism, and nobody ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC's claims and argue that computationalism cannot be identified with PoC.
In section 2 we point out that certain anticomputationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation, similar to the notion of digital computation. A short conclusion follows in section 5.
Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes cognitive validity to them. Therefore, in the same way that language is considered the prototype of cognitive abilities, a symbol system has become the prototype for studying language and cognitive systems. Symbol systems are advantageous as they are easily studied through computer simulation (a computer program is a symbol system itself), and this is why language is often studied using computational models.
The necessary but not sufficient conditions for biological informational concepts like signs, symbols, memories, instructions, and messages are (1) an object or referent that the information is about, (2) a physical embodiment or vehicle that stands for what the information is about (the object), and (3) an interpreter or agent that separates the referent information from the vehicle's material structure, and that establishes the stands-for relation. This separation is named the epistemic cut, and explaining clearly how the stands-for relation is realized is named the symbol-matter problem. (4) A necessary physical condition is that all informational vehicles are material boundary conditions or constraints acting on the lawful dynamics of local systems. It is useful to define a dependency hierarchy of information types: (1) syntactic information (i.e., communication theory), (2) heritable information acquired by variation and natural selection, (3) non-heritable learned or creative information, and (4) measured physical information in the context of natural laws. High information storage capacity is most reliably implemented by discrete linear sequences of non-dynamic vehicles, while the execution of information for control and construction is a non-holonomic dynamic process. The first epistemic cut occurs in self-replication. The first interpretation of base sequence information is by protein folding; the last interpretation of base sequence information is by natural selection. Evolution has evolved senses and nervous systems that acquire non-heritable information, and only very recently, after billions of years, the competence for human language. Genetic and human languages are the only known complete general purpose languages. They have fundamental properties in common, but are entirely different in their acquisition, storage and interpretation.
The determination of the past and the future of a physical system are complementary aims of measurements. An optimal determination of the past of a system can be achieved by an informationally complete set of physical quantities. Such a set is always strongly noncommutative. An optimal determination of the future of a physical system can be obtained by a Boolean complete set of quantities. The two aims can be reconciled to a reasonable degree by using unsharp measurements.
Intentionality is characteristic of many psychological phenomena. It is commonly held by philosophers that intentionality cannot be ascribed to purely physical systems. This view does not merely deny that psychological language can be reduced to physiological language. It also claims that the appropriateness of some psychological explanation excludes the possibility of any underlying physiological or causal account adequate to explain intentional behavior. This is a thesis which I do not accept. I shall argue that physical systems of a specific sort will show the characteristic features of intentionality. Psychological subjects are, under an alternative description, purely physical systems of a certain sort. The intentional description and the physical description are logically distinct, and are not intertranslatable. Nevertheless, the features of intentionality may be explained by a purely causal account, in the sense that they may be shown to be totally dependent upon physical processes.
This paper deals with the question: what are the key requirements for a physical system to perform digital computation? Time and again cognitive scientists are quick to employ the notion of computation simpliciter when asserting basically that cognitive activities are computational. They employ this notion as if there were a consensus on just what it takes for a physical system to perform computation, and in particular digital computation. Some cognitive scientists, in referring to digital computation, simply adhere to Turing's notion of computability. Classical computability theory studies what functions on the natural numbers are computable and what mathematical problems are undecidable. Whilst a mathematical formalism of computability may perform a methodological function of evaluating computational theories of certain cognitive capacities, concrete computation in physical systems seems to be required for explaining cognition as an embodied phenomenon. There are many non-equivalent accounts of digital computation in physical systems. I examine only a handful of those in this paper: (1) Turing's account; (2) the triviality "account"; (3) a reconstruction of Smith's account of participatory computation; (4) the algorithm execution account. My goal in this paper is twofold. First, it is to identify and clarify some of the underlying key requirements mandated by these accounts. I argue that these differing requirements justify a demand that one commit to a particular account when employing the notion of computation in regard to physical systems. Second, it is to argue that despite the informative role that mathematical formalisms of computability may play in cognitive science, they do not specify the relationship between abstract and concrete computation.
Nunez's description of the brain as a medium capable of wave propagation has provided some fundamental insights into its dynamics. This approach soon reaches the descriptive limits of treating the brain as a physical system, however. We point out some biological constraints which differentiate the brain from physical systems, and we elaborate on their consequences for future research.
We present here a digital scenario to simulate the emergence of self-organized symbol-based communication among artificial creatures inhabiting a virtual world of predatory events. In order to design the environment and creatures, we seek theoretical and empirical constraints from C. S. Peirce's semiotics and an ethological case study of communication among animals. Our results show that the creatures, assuming the role of sign users and learners, behave collectively as a complex system, where self-organization of communicative interactions plays a major role in the emergence of symbol-based communication. We also strive for a careful use of the theoretical concepts involved, including the concepts of symbol, communication, and emergence, and we use a multi-level model as a basis for the interpretation of inter-level relationships in the semiotic processes we are studying.
The purpose of this paper is to present a bio-physical basis of mathematics. The essence of the theory is that function in the nervous system is mathematical. The mathematics arises as a result of the interaction of energy (a wave with a precise curvature in space and time) and matter (a molecular or ionic structure with a precise form in space and time). In this interaction, both energy and matter play an active role. That is, the interaction results in a change in form of both energy and matter. There are at least six mathematical operations in a simple synaptic region. It is believed that the forms of both energy and matter are specific, and their interaction is specific; that is, function in most of the nervous system is stereotyped. It is suggested that mathematics be taken out of the mind and placed where it belongs: in nature and the synaptic regions of the nervous system. It results in both places from a precise interaction between energy (in a precise form) and matter (in a precise structure).
The introduction of massive parallelism and the renewed interest in neural networks create a new need to evaluate the relationship of symbolic processing and artificial intelligence. The physical symbol hypothesis has encountered many difficulties coping with human concepts and common sense. Expert systems are showing more promise for the early stages of learning than for real expertise. There is a need to evaluate more fully the inherent limitations of symbol systems and the potential for programming compared with training. This can give more realistic goals for symbolic systems, particularly those based on logical foundations.
In his target article, Barsalou cites current work on emotion theory but does not explore its relevance for this project. The connection is worth pursuing, since there is a plausible case to be made that emotions form a distinct symbolic information-processing system of their own. On some views, that system is argued to be perceptual: a direct connection with Barsalou's perceptual symbol systems theory. Also relevant is the hypothesis that there may be different modular subsystems within emotion, and the perennial tension between cognitive and perceptual theories of emotion.
Van Gelder presents the dynamical hypothesis as a novel law of qualitative structure to compete with Newell and Simon's (1976) physical symbol systems hypothesis. Unlike Newell and Simon's hypothesis, the dynamical hypothesis fails to provide necessary and sufficient conditions for cognition. Furthermore, imprecision in the statement of the dynamical hypothesis renders it unfalsifiable.
The logic Kf of the modalities of the finite, devised to capture the notion of 'there exists a finite number of accessible worlds such that . . . is true', was introduced and axiomatized by Fattorosi. In this paper we enrich the logical framework of Kf: we give consistency properties and a tableau system (which yields decidability) explicitly designed for Kf, and we introduce a shorter and more natural axiomatization. Moreover, we show the strong and suggestive relationship between Kf and the much older logic of the physical modalities of Burks.
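On the informal reading given in this abstract, the finiteness modality admits a natural Kripke-style truth clause. The notation below is a sketch of ours, not necessarily Fattorosi's:

```latex
% Truth clause for a 'finitely many' modality (our notation, an illustrative
% reconstruction from the abstract's gloss, not Fattorosi's official definition):
% \Diamond_{f}\varphi holds at a world w iff the set of accessible worlds
% satisfying \varphi is finite.
\[
\mathcal{M}, w \models \Diamond_{f}\,\varphi
\quad\Longleftrightarrow\quad
\bigl|\{\, v \mid w R v \text{ and } \mathcal{M}, v \models \varphi \,\}\bigr| < \aleph_{0}.
\]
```

Written this way, it is visible why Kf escapes ordinary Kripke axiomatizations: finiteness of a set of successors is not expressible by any first-order frame condition, which is part of what makes a dedicated tableau system and axiomatization necessary.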
What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources.
Problem: The paper investigates some reasons why RC has not become a mainstream endeavor. Method: The central assumptions of RC are summarized. Analysis is made of how each of these assumptions corresponds to other views, especially to intuitive beliefs that are widely accepted. Is RC consistent with these beliefs, supported by them, or incompatible with them? Results: The construction hypothesis is supported by the results of cognitive science and neurophysiology. However, the closed-system hypothesis and antirealism are in conflict with deeply rooted convictions of most people. Some ethical and educational aspects claimed by RC are generally accepted, but they are not specifically implications of RC. Implications: In the near future, RC will probably not become the leading paradigm or a mainstream endeavor in the sciences or in philosophy.
Economic value additions to knowledge and demand provide practical, embedded and extensible meaning to philosophizing cognitive systems. Evaluation of a cognitive system is an empirical matter. Thinking of science in terms of distributed cognition (interactionism) enlarges the domain of cognition. Anything that actually contributes to the specific quality of output of a cognitive system is part of the system in time and/or space. Cognitive science studies the behaviour and knowledge structures of experts, and categorized structures based on underlying structures. Knowledge representation through understanding of 'epistemic cultures' is an evolutionary stage. But cognition goes beyond knowledge representation. Notwithstanding the importance of the epistemology of phenomena, the practical and philosophical aspects of machine learning need to be seen in dynamic behaviour in socio-economic-technical value additions, if human-machine interaction processes that are context specific are incorporated into strong artificial intelligent systems. Cognitive science is also studied from both computational and biological angles. Evolution of interactive forms of reasoning through understanding of the meta-language of computations or biological learning processes is possible. But the limitation of historical cultures predefines the role of interactive processes in user networks beyond technology networks. Despite this limitation, inclusive development notions of a heterogeneous national society such as India or Europe can be tested and incorporated.
In recent work on the foundations of statistical mechanics and the arrow of time, Barry Loewer and David Albert have developed a view that defends both a best system account of laws and a physicalist fundamentalism. I argue that there is a tension between their account of laws, which emphasizes the pragmatic element in assessing the relative strength of different deductive systems, and their reductivism or fundamentalism. If we take the pragmatic dimension in their account seriously, then the laws of the special sciences should be part of our best explanatory system of the world, as well.
Whenever an adequate theory is found in science, we will still be left with two questions: why this theory rather than some other theory, and how should this theory be interpreted? I argue that these questions can be answered by a theory of system relations. The basic idea is that fundamental characteristics of systems, viz. those arising from the general systemic nature of those systems, cannot be comprehended with the aid of discipline-specific methods. The systems theory required should commence with an analysis of the qualitatively different relations possible between systems, because it is precisely the nature of those relations that determines the basic structures of systems. That the theory of the fundamental system relations and their ontological and epistemological implications is indeed able to provide the answers sought is demonstrated in theoretical physics and in Plessner's analysis of the basic structures of plant, animal and human being.
Falk's hominin mother-infant model presupposes an emerging infant capacity to perceive and learn from afforded gestures and vocalizations. Unlike the back-riding offspring of other primates, who had no need to decenter their own body-centered perspective, hominin infants may have had a mirror neuron system adapted to subserve the kind of (m)other-centered mirroring we now see manifested by human infants soon after birth.
The concept that diversity propitiates robustness in information processing systems is explored and shown to be applicable in the neural and social areas, including the religious phenomenon. Evaluation of population statistics indicates that the latter is not essential to the human, in spite of being practically constant across cultures. Cognitive and affective aspects in science and religion are discussed under the proposition that adequate demarcation may help in reducing conflicts. Elaboration on the tensions between truths (determinisms) and freedoms may contribute in the same direction. These tensions may arise from the use of the concepts of strong and weak belief, that is, between beliefs and seemingly fruitful hypotheses deserving credits of trust. The possibility of considering a system of distinct and diverse cultures is indicated, in spite of the globalizing tendencies that may prevail over the more material aspects. Key words: Stability; System; Evolution; Biology; Society; Cognition; Affection; Essence; Transcendence; Hypothesis; Truth.
Concepts of space and time are widely developed in physics. However, there is a considerable lack of biologically plausible theoretical frameworks that can demonstrate how space and time dimensions are implemented in the activity of the most complex life system, the brain with a mind. Brain activity is organized both temporally and spatially, thus representing space-time in the brain. Critical analysis of recent research on the space-time organization of the brain's activity points to the existence of so-called operational space-time in the brain. This space-time is limited to the execution of brain operations of differing complexity. During each such brain operation a particular short-term spatio-temporal pattern of integrated activity of different brain areas emerges within the related operational space-time. At the same time, to have a fully functional human brain one needs to have a subjective mental experience. Current research on subjective mental experience offers detailed analysis of the space-time organization of the mind. According to this research, subjective mental experience (the subjective virtual world) has definite spatial and temporal properties similar to many physical phenomena. Based on a systematic review of the propositions and tenets of brain and mind space-time descriptions, our aim in this review essay is to explore the relations between the two. To be precise, we would like to discuss the hypothesis that via the brain's operational space-time the mind's subjective space-time is connected to otherwise distant physical space-time reality.
A symbol is a pattern (of physical marks, electromagnetic energy, etc.) which denotes, designates, or otherwise has meaning. The notion that intelligence requires the use and manipulation of symbols, and that humans are therefore symbol systems, has been extremely influential in artificial intelligence.
In Finnish poetry of the 1960s, the city, and above all the capital Helsinki, is the scene where the metamorphosis of Finland from an agrarian into an urban society is staged, analysed and commented on. It is also a symbol that serves to situate the country in the global context, with all the contradictions that were characteristic of the position of Finland in the cold-war system. Writing about the city was a means to reflect on the transformations of social and political reality and of the physical environment, a means to represent the confusion these transformations produced or to work towards understanding them. The article analyses the city in texts belonging to the “new poetry” of the 1960s, as well as in texts representing the modernist poetics of the 1950s, arguing that the very co-existence of two contrasting poetic discourses was crucial for the semiotic development of Finnish culture in the period in question.
Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components, not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations.
Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.
It is common in the literature on electrodynamics and relativity theory that the transformation rules for the basic electrodynamical quantities are derived from the hypothesis that the relativity principle (RP) applies to Maxwell's electrodynamics. As will turn out from our analysis, these derivations raise several problems, and certain steps are logically questionable. This is, however, not our main concern in this paper. Even if these derivations were completely correct, they leave open the following questions: (1) Is (RP) a true law of nature for electrodynamical phenomena? (2) Are, at least, the transformation rules of the fundamental electrodynamical quantities, derived from (RP), true? (3) Is (RP) consistent with the laws of electrodynamics in one single inertial frame of reference? (4) Are, at least, the derived transformation rules consistent with the laws of electrodynamics in one single frame of reference? Obviously, (1) and (2) are empirical questions. In this paper, we will investigate problems (3) and (4). First we will give a general mathematical formulation of (RP). In the second part, we will deal with the operational definitions of the fundamental electrodynamical quantities. As we will see, these semantic issues are not as trivial as one might think. In the third part of the paper, applying what J. S. Bell calls “Lorentzian pedagogy” (according to which the laws of physics in any one reference frame account for all physical phenomena), we will show that the transformation rules of the electrodynamical quantities are identical with the ones obtained by presuming the covariance of the coupled Maxwell–Lorentz equations, and that the covariance is indeed satisfied. As to problem (3), the situation is much more complex.
As we will see, the relativity principle is actually not a matter of the covariance of the physical equations, but a matter of the details of the solutions of the equations, which describe the behavior of moving objects. This raises conceptual problems concerning the meaning of the notion “the same system in a collective motion.” In the case of electrodynamics, there seems to be no satisfactory solution to this conceptual problem; thus, contrary to widespread views, the question we asked in the title has no obvious answer.
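For reference, the transformation rules at issue are, in their standard textbook form, the relativistic transformations of the electric and magnetic field components parallel and perpendicular to the relative velocity of the two frames. This is the received form whose derivation from (RP) the paper scrutinizes, stated here independently of the paper's own analysis:

```latex
% Standard transformation of the field components between inertial
% frames in relative motion with velocity \mathbf{v} (textbook form):
\begin{align*}
  \mathbf{E}'_{\parallel} &= \mathbf{E}_{\parallel}, &
  \mathbf{B}'_{\parallel} &= \mathbf{B}_{\parallel}, \\
  \mathbf{E}'_{\perp} &= \gamma \left( \mathbf{E} + \mathbf{v}\times\mathbf{B} \right)_{\perp}, &
  \mathbf{B}'_{\perp} &= \gamma \left( \mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}} \right)_{\perp},
\end{align*}
\[
  \text{where } \gamma = \left( 1 - v^{2}/c^{2} \right)^{-1/2}.
\]
```

Whether these rules, usually obtained by presuming covariance of the Maxwell–Lorentz equations, are also consequences of (RP) alone is precisely what questions (1)–(4) above separate out.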
After briefly discussing the relevance of the notions of computation and implementation for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation, together with a state-to-state correspondence view of implementation, cannot overcome the difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion of the realization of a function, developed out of physical theories, is then introduced as a replacement for the notional pair computation–implementation. After gradual refinement, taking practical constraints into account, this notion gives rise to the notion of a digital system, which singles out physical systems that could actually be used, and possibly even built.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it is computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
This paper challenges arguments that systematic patterns of intelligent behavior license the claim that representations must play a role in the cognitive system analogous to that played by syntactic structures in a computer program. In place of traditional computational models, I argue that research inspired by Dynamical Systems theory can support an alternative view of representations. My suggestion is that we treat linguistic and representational structures as providing complex multi-dimensional targets for the development of individual brains. This approach acknowledges the indispensability of the intentional or representational idiom in psychological explanation without locating representations in the brains of intelligent agents.
Gilbert Ryle accused Descartes of advancing what he called the “paramechanical hypothesis,” according to which the structure and operations of the mind can be understood on the model of the structure and operations of a physical system. The body is a complex machine, “a bit of clockwork,” that operates according to laws governing the mechanical interactions of material things. The mind, on the other hand, according to Descartes (according to Ryle), is an immaterial machine that operates according to formally analogous laws governing the paramechanical interactions of immaterial things, “a bit of not-clockwork.” In other words, mental processes are the same as physical processes, only you don’t have the matter. I don’t know whether Descartes actually thought this. But, surely, if he did, he was making some kind of logical or conceptual error. Mental processes can’t be the same as physical processes, minus the matter, since the matter matters. The properties of physical systems have physical explanations, which are explanations in terms of physical properties and physical laws. But it is absurd (a category mistake) to suppose that mechanical explanations could apply to immaterial things with no physical properties, subject to no physical laws. (If matters of mind…
We investigate the use of coalgebra to represent quantum systems, thus providing a basis for the use of coalgebraic methods in quantum information and computation. Coalgebras allow the dynamics of repeated measurement to be captured, and provide mathematical tools such as final coalgebras, bisimulation and coalgebraic logic. However, the standard coalgebraic framework does not accommodate contravariance, and is too rigid to allow physical symmetries to be represented. We introduce a fibrational structure on coalgebras in which contravariance is represented by indexing. We use this structure to give a universal semantics for quantum systems based on a final coalgebra construction. We characterize equality in this semantics as projective equivalence. We also define an analogous indexed structure for Chu spaces, and use this to obtain a novel categorical description of the category of Chu spaces. We use the indexed structures of Chu spaces and coalgebras over a common base to define a truncation functor from coalgebras to Chu spaces. This truncation functor is used to lift the full and faithful representation of the groupoid of physical symmetries on Hilbert spaces into Chu spaces, obtained in our previous work, to the coalgebraic semantics.
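The sense in which repeated measurement is coalgebraic dynamics can be illustrated with a deliberately simplified sketch, not the paper's construction: a coalgebra sends a state to its observable behaviour, here the outcome probabilities and post-measurement states given by the Born rule for a real-amplitude qubit measured in the computational basis. The names `measure` and `unfold` below are ours, introduced only for illustration.

```python
import math

def measure(state):
    """Coalgebra for computational-basis measurement.

    A state is a pair of real amplitudes (a, b) with a^2 + b^2 = 1.
    Returns {outcome: (probability, post-measurement state)}; the
    post-measurement state is the projected basis vector, so a second
    measurement repeats the first outcome with certainty.
    """
    a, b = state
    return {0: (a * a, (1.0, 0.0)),
            1: (b * b, (0.0, 1.0))}

def unfold(state, depth):
    """Unfold the coalgebra: the branching tree of repeated measurements."""
    if depth == 0:
        return state
    return {outcome: (prob, unfold(nxt, depth - 1))
            for outcome, (prob, nxt) in measure(state).items()}

plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # the |+> state
tree = unfold(plus, 2)  # two rounds of measurement from |+>
```

The point of the coalgebraic view is visible even in this toy: the behaviour of a state is the whole unfolded tree, and two states are behaviourally equivalent (bisimilar) exactly when their trees match, which is the intuition behind characterizing equality in the final-coalgebra semantics.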