This commentary focuses on the parts of psychological game theory dealing with preferences, as illustrated by team reasoning, and supports the conclusion that these theoretical notions do not contribute, above and beyond existing theory, to the understanding of social interaction. In particular, psychology and games are already bridged by a comprehensive, formal, and inherently psychological theory, interdependence theory (Kelley & Thibaut 1978; Kelley et al. 2003), which has been demonstrated to account for a wide variety of social interaction phenomena.
The problem of concept representation is relevant for many subfields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one of the causes of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In some ways, artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this state of affairs. Since it is our opinion that a mature methodology for knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies that take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology, and which are therefore more similar to the human way of organizing and processing information, may give better results in many applications (e.g. in the fields of information retrieval and the semantic web).
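To see the strain concretely, consider a toy sketch of the classic "pet fish" problem: on a simple prototype model, an exemplar's typicality in a composed concept is not a simple function of its typicality in the components. Everything below (feature names, weights, the averaging rule) is an invented illustration, not material from the paper.

```python
# Toy prototype model: concepts as weighted feature vectors.
# All feature names and weights are invented for illustration.
PET  = {'furry': 0.9, 'lives_indoors': 0.9, 'cuddly': 0.8, 'scaly': 0.1}
FISH = {'scaly': 0.9, 'swims': 1.0, 'lives_indoors': 0.1,
        'gray': 0.6, 'lives_in_wild': 0.8}

GUPPY = {'lives_indoors': 1.0, 'scaly': 1.0, 'swims': 1.0}

def typicality(exemplar, prototype):
    """Weighted feature overlap, normalized by the prototype's total weight."""
    score = sum(w * exemplar.get(feat, 0.0) for feat, w in prototype.items())
    return score / sum(prototype.values())

def compose(c1, c2):
    """A naive compositional prototype: average the feature weights."""
    feats = set(c1) | set(c2)
    return {f: (c1.get(f, 0.0) + c2.get(f, 0.0)) / 2 for f in feats}

print(typicality(GUPPY, PET))                 # ~0.37: an atypical pet
print(typicality(GUPPY, FISH))                # ~0.59: a middling fish
print(typicality(GUPPY, compose(PET, FISH)))  # ~0.49: yet speakers judge a
                                              # guppy a *highly* typical pet fish
```

The composed prototype predicts only middling typicality, while intuition rates a guppy a highly typical pet fish: this is the kind of failure of prototype compositionality at issue here.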
…tific sentences from non-scientific ones, the following failure of all trials to establish a scientific language of theory which would be coherent with the language of observations, or to define a scientific language which could be able to depict… …scientific areas, but try to get forward in discursive truthfulness “in the long run” (Peirce), principally ending with the human species in an “ultimate opinion” (Peirce) of the things which are discussed.
Though the formal coherence and empirical utility of Marcello Barbieri's concept of the organic code have begun to be established, a general conception of how the semantics of organic codes relates to the pragmatics of their use is still missing. Barbieri took a first step towards such a conception by distinguishing three types of semiosis in living systems: manufacturing, signalling, and interpretive semiosis. This paper integrates Barbieri's distinction into Roman Jakobson's systematization of the possible functions of messages in order to propose a general conception of the possible types of semiosis in living systems. As a result, Barbieri's thesis that manufacturing and signalling semiosis are the basic types of semiosis can be confirmed and completed communication-theoretically.
The genetic code appeared on Earth with the first cells. The codes of cultural evolution arrived almost four billion years later. These are the only codes that are recognized by modern biology. In this book, however, Marcello Barbieri explains that there are many more organic codes in nature, and their appearance not only took place throughout the history of life but marked the major steps of that history. A code establishes a correspondence between two independent 'worlds', and the codemaker is a third party between those 'worlds'. Therefore the cell can be thought of as a trinity of genotype, phenotype and ribotype. The ancestral ribotypes were the agents which gave rise to the first cells. The book goes on to explain how organic codes and organic memories can be used to shed new light on the problems encountered in cell signalling, epigenesis, embryonic development, and the evolution of language.
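As a concrete illustration of what "a correspondence between two independent worlds" means, here is a fragment of the genetic code rendered as a plain mapping (a sketch of mine, not from the book; the codon assignments are the standard ones). Nothing in the chemistry of a codon entails its amino acid; the correspondence is fixed by adaptors, the tRNAs of the ribotype, standing between the two worlds as the codemaker.

```python
# A fragment of the standard genetic code as an arbitrary mapping between
# two independent "worlds": RNA codons and amino acids. In the cell the
# mapping is implemented by adaptors (tRNAs), not by chemical necessity.
GENETIC_CODE_FRAGMENT = {
    'AUG': 'Met',                    # methionine; also the start signal
    'UUU': 'Phe',                    # phenylalanine
    'GGC': 'Gly',                    # glycine
    'UAA': 'Stop', 'UAG': 'Stop', 'UGA': 'Stop',
}

def translate(mrna):
    """Read an mRNA string three bases at a time and map each codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE_FRAGMENT.get(mrna[i:i + 3], '?')
        if residue == 'Stop':
            break
        peptide.append(residue)
    return peptide

print(translate('AUGUUUGGCUAA'))     # ['Met', 'Phe', 'Gly']
```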
Biosemiotics is the synthesis of biology and semiotics, and its main purpose is to show that semiosis is a fundamental component of life, i.e., that signs and meaning exist in all living systems. This idea started circulating in the 1960s and was proposed independently from enquiries taking place at both ends of the Scala Naturae. At the molecular end it was expressed by Howard Pattee's analysis of the genetic code, whereas at the human end it took the form of Thomas Sebeok's investigation into the biological roots of culture. Other proposals appeared in the years that followed and gave origin to different theoretical frameworks, or different schools, of biosemiotics. They are: (1) the physical biosemiotics of Howard Pattee and its extension into Darwinian biosemiotics by Howard Pattee and Terrence Deacon, (2) the zoosemiotics proposed by Thomas Sebeok and its extension into sign biosemiotics developed by Thomas Sebeok and Jesper Hoffmeyer, (3) the code biosemiotics of Marcello Barbieri, and (4) the hermeneutic biosemiotics of Anton Markoš. The differences that exist between the schools are a consequence of their different models of semiosis, but that is only the tip of the iceberg. In reality they go much deeper and concern the very nature of the new discipline. Is biosemiotics only a new way of looking at the known facts of biology, or does it predict new facts? Does biosemiotics consist of testable hypotheses? Does it add anything to the history of life and to our understanding of evolution? These are the major issues of the young discipline, and the purpose of the present paper is to illustrate them by describing the origin and historical development of its main schools.
Since the early eighties, computationalism in the study of the mind has been "under attack" by several critics of the so-called "classic" or "symbolic" approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon's Physical Symbol System Hypothesis and Jerry Fodor's theory of the Language of Thought, usually without taking into account the fact that such approaches are very different as to their methods and aims. Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation. This probably added to the confusion, as many people still consider Pylyshyn's book paradigmatic of the computational approach in the study of the mind. Since then, cognitive scientists, AI researchers and philosophers of mind have been asked to take sides on different "paradigms" that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are: computationalism vs. connectionism, computationalism vs. dynamical systems, computationalism vs. situated and embodied cognition, and computationalism vs. behavioural and evolutionary robotics.

Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the "paradigm (based on the metaphor) of the computer" (in the following, PoC). PoC is the (rather vague) statement that the mind functions "as a digital computer". Actually, PoC is a restrictive version of computationalism, and nobody has ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC's claims and argue that computationalism cannot be identified with PoC. In section 2 we point out that certain anticomputationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation similar to the notion of digital computation. A short conclusion follows in section 5.
There is quite a bit of disagreement in cognitive science regarding the roles that consciousness and control play in explanations of how people do what they do. The purpose of the present paper is to do the following: (1) examine the theoretical choice points that have led theorists to conflicting positions, (2) examine the philosophical and empirical problems different theories encounter as they address the issue of conscious agency, and (3) provide an integrative framework (Wild Systems Theory) that addresses these problems and potentially naturalizes conscious agency. It does so by grounding consciousness and control in the notion of self-sustaining energy-transformation systems (i.e., living systems), versus computational or self-organizing systems, as is the case in information processing theory and dynamical systems theory, respectively. Given its assertion that content (and consciousness) emerges in self-sustaining systems, Wild Systems Theory may also provide a sound theoretical basis for a science of consciousness in general.
The project of neuroaesthetics could be interpreted as an attempt to identify a "neural essence" of art, i.e., a set of necessary and sufficient conditions, formulated in the language of neuroscience, which define the concept art. Some proposals developed within this field can be read in this way. I shall argue that such attempts do not succeed in individuating a neural definition of art. Of course, the fact that the available proposals for defining art in neural terms do not work does not mean that such an enterprise is in principle doomed to failure. However, I maintain that there are good reasons to suspect that, in general, such a definition cannot be worked out. This does not mean, though, that the study of neural correlates of artwork production and fruition is a senseless project. Neuroaesthetics could succeed in individuating widespread mechanisms common to different forms of art from remote cultural contexts, which presumably rely on aspects of our mind's and/or brain's functioning that are innate and biologically determined, thus countering the idea that artistic phenomena are entirely dependent on cultural factors.
Deductive inference is usually regarded as being "tautological" or "analytical": the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem from both the semantic and the proof-theoretical points of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and that converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of "depth", or "informativeness", of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure "intelim logic", which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting a feasible (quadratic) decision procedure. We argue that this logic is "analytic" in a particularly strict sense, in that it rules out any use of "virtual information", which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled, and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
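To make deduction without "virtual information" concrete, here is a minimal sketch, not the authors' calculus, of a forward-chaining procedure in the spirit of intelim logic: signed formulas are saturated under a representative subset of introduction and elimination rules, introductions are restricted to the subformula space of the problem, and no hypothetical case-splitting is ever performed. The encoding and the selection of rules are my own assumptions.

```python
# A minimal sketch (not the authors' calculus) of deduction without
# "virtual information": saturate signed formulas under intelim-style
# rules, never assuming anything hypothetically. Formulas: atoms are
# strings; compounds are ('not', A), ('and', A, B), ('or', A, B),
# ('imp', A, B). Only a representative subset of the rules is included.

def subformulas(f, acc=None):
    """Collect the subformula closure of f."""
    acc = set() if acc is None else acc
    acc.add(f)
    if isinstance(f, tuple):
        for sub in f[1:]:
            subformulas(sub, acc)
    return acc

def intelim_entails(premises, conclusion):
    """True iff conclusion follows by intelim saturation alone (depth 0)."""
    space = set()
    for f in list(premises) + [conclusion]:
        space |= subformulas(f)
    signed = {(True, p) for p in premises}            # premises asserted true
    while True:
        if (True, conclusion) in signed:
            return True
        if any((True, f) in signed and (False, f) in signed for f in space):
            return True                               # inconsistent premises
        new = set()
        for (s, f) in signed:                         # elimination rules
            if not isinstance(f, tuple):
                continue
            op, a = f[0], f[1:]
            if op == 'not':
                new.add((not s, a[0]))                # T~A => FA ; F~A => TA
            elif op == 'and' and s:
                new |= {(True, x) for x in a}         # T(A&B) => TA, TB
            elif op == 'or' and not s:
                new |= {(False, x) for x in a}        # F(A|B) => FA, FB
            elif op == 'imp' and not s:
                new |= {(True, a[0]), (False, a[1])}  # F(A->B) => TA, FB
            elif op == 'imp' and s:
                if (True, a[0]) in signed:  new.add((True, a[1]))
                if (False, a[1]) in signed: new.add((False, a[0]))
            elif op == 'or' and s:
                if (False, a[0]) in signed: new.add((True, a[1]))
                if (False, a[1]) in signed: new.add((True, a[0]))
            elif op == 'and' and not s:
                if (True, a[0]) in signed:  new.add((False, a[1]))
                if (True, a[1]) in signed:  new.add((False, a[0]))
        for g in space:                               # introduction rules, kept
            if not isinstance(g, tuple):              # inside the subformula space
                continue
            op, a = g[0], g[1:]
            if op == 'and' and all((True, x) in signed for x in a):
                new.add((True, g))
            if op == 'or' and any((True, x) in signed for x in a):
                new.add((True, g))
            if op == 'imp' and (True, a[1]) in signed:
                new.add((True, g))
            if op == 'not' and (False, a[0]) in signed:
                new.add((True, g))
        if new <= signed:                             # no progress: saturated
            return False
        signed |= new

# Modus ponens needs no virtual information:
print(intelim_entails([('imp', 'p', 'q'), 'p'], 'q'))            # True
# Reasoning by cases does: (p|q), (p->r), (q->r) |- r classically,
# but only by assuming p and then q in turn, so saturation fails here.
print(intelim_entails([('or', 'p', 'q'), ('imp', 'p', 'r'),
                       ('imp', 'q', 'r')], 'r'))                 # False
```

Since saturation only ever adds signed subformulas of the problem, the procedure runs in polynomial time, in line with the tractability claimed for the most basic logic of the hierarchy.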
In this article we question the utility of the distinction between conceptual and nonconceptual content in cognitive science, and in particular in the empirical study of visual perception. First, we individuate some difficulties in characterizing the notion of "concept" itself, both in the philosophy of mind and in cognitive science. Then we stress the heterogeneous nature of the notion of nonconceptual content and outline the complex and ambiguous relations that exist between the conceptual/nonconceptual duality and other pairs of notions, such as top-down/bottom-up and modular/nonmodular. Finally, we look in greater detail at the proposal developed by Jacob and Jeannerod (Ways of Seeing: The Scope and Limits of Visual Cognition. Oxford, UK: Oxford University Press, 2003), who apply the notion of nonconceptual content to empirical research on visual perception. After reconstructing their view of concepts, we argue against their major arguments in support of the conceptual/nonconceptual distinction, i.e. the compositionality of thought and the fineness of grain of percepts.
In "Representations without Rules, Connectionism and the Syntactic Argument'', Kenneth Aizawa argues against the view that connectionist nets can be understood as processing representations without the use of representation-level rules, and he provides a positive characterization of how to interpret connectionist nets as following representation-level rules. He takes Terry Horgan and John Tienson to be the targets of his critique. The present paper marshals functional and methodological considerations, gleaned from the practice of cognitive modelling, to argue against Aizawa's characterization of (...) how connectionist nets may be understood as making use of representation-level rules. (shrink)