Humans are cognitive entities. Our ongoing interactions with the environment are threaded with creations and usages of meaningful information. Animal life is also populated with meaningful information related to survival constraints. Information managed by artificial agents can also be considered as having meanings, as derived from the designer. Such a perspective brings us to propose an evolutionary approach to cognition based on meaningful information management. We use a systemic tool, the Meaning Generator System (MGS), and apply it consecutively to animals, humans and artificial agents [1, 2]. The MGS receives information from its environment and compares it with its constraint. The generated meaning is the connection existing between the received information and the constraint. It triggers an action aimed at satisfying the constraint. The action modifies the environment and the generated meaning. Meaning generation links agents to their environments. The MGS is a system: a set of elements linked by a set of relations. Any system submitted to a constraint and capable of receiving information can lead to an MGS. Animals, humans and robots are agents containing MGSs dealing with different constraints. Similar MGSs carrying different constraints will generate different meanings. Cognition is system dependent. Contrary to approaches to meaning generation based on psychology or linguistics, the MGS approach is not based on the human mind. We want to avoid the circularity of taking the human mind as a starting point. Free will and self-consciousness participate in the management of human meanings. They do not exist for animals or robots. Staying alive is a constraint that we share with animals. Robots ignore that constraint. We first use the MGS for animals with “stay alive” and “group life” constraints. The analysis of meaning and cognition in animals is, however, limited by our incomplete understanding of the nature of life (the question of final causes).
Extending the analysis of meaning generation and cognition to humans is complex and has some real limitations, as the nature of the human mind is a mystery for today’s science and philosophy. The natures of our feelings, free will and self-consciousness are unknown. Approaches to identifying human constraints are nevertheless possible, and the MGS can highlight some openings [3, 4]. Modeling meaning management in artificial agents is rather straightforward with the MGS. We, the designers, know the agents and the constraints. The derived nature of constraints, meaning and cognition is, however, to be highlighted. We define a meaningful representation of an item for an agent as the network of meanings relative to the item for the agent, together with the action scenarios involving the item. Such meaningful representations embed the agents in their environments and are far from the GOFAI type of representations. Cognition, meanings and representations exist by and for the agents. We finish by summarizing the points presented here and highlighting possible continuations.
[1] “Information and Meaning”
[2] “Introduction to a systemic theory of meaning”
[3] “Computation on Information, Meaning and Representations. An Evolutionary Approach”
[4] “Proposal for a shared evolutionary nature of language and consciousness”
The contents of representations in non-human animals, human core cognition, and perception cannot precisely be characterized by sentences of a natural language. However, this fact does not stop us from giving imprecise characterizations of these contents through natural language. In this paper, I develop an account of the precision of content characterizations by appealing to possible-world semantics combined with set and measurement theory.
Picturing is a poorly understood element of Sellars’s philosophical project. We diagnose the problem with picturing as follows: on the one hand, it seems that it must be connected with action in order for it to do its job. On the other hand, the representational states of a picturing system are characterized in descriptive and seemingly static terms. How can static terms be connected with action? To solve this problem, we adopt a concept from recent work in Sellarsian metaethics: the idea of a material practical inference, which (we argue) features centrally in how we picture. The key distinction is that the picturing of nonhuman animals involves only Humean material practical inference, in which representational states are corrected only by feedback from the environment and not from discursive interactions. The resulting view shows that Sellars’s contributions to practical philosophy (especially theory of action and metaethics) cannot be separated from his contributions to philosophy of mind, language, and cognitive science. Further, the view makes it clear that picturing is neither a version of the Given, nor a fifth wheel to inferential role in explaining representation, but is essential to Sellars’s model of how animals—including humans—represent their environment.
Event concepts are unstructured atomic concepts that apply to event types. A paradigm example of such an event type would be that of diaper changing, and so a putative example of an atomic event concept would be DADDY'S-CHANGING-MY-DIAPER. I will defend two claims about such concepts. First, the conceptual claim that it is in principle possible to possess a concept such as DADDY'S-CHANGING-MY-DIAPER without possessing the concept DIAPER. Second, the empirical claim that we actually possess such concepts and that they play an important role in our cognitive lives. The argument for the empirical claim has the form of an inference to the best explanation and is aimed at those who are already willing to attribute concepts and beliefs to infants and nonhuman animals. Many animals and prelinguistic infants seem capable of re-identifying event-types in the world, and they seem to store information about things happening at particular times and places. My account offers a plausible model of how such organisms are able to do this without attributing linguistically structured mental states to them. And although language allows adults to form linguistically structured mental representations of the world, there is no good reason to think that such structured representations necessarily replace the unstructured ones. There is also no good reason for a philosopher who is willing to explain the behavior of an organism by appealing to atomic concepts of individuals or kinds not to use a similar form of explanation when explaining the organism's capacity to recognize events.

We can form empirical concepts of individuals, kinds, properties, event-types, and states of affairs, among other things, and I assume that such concepts function like what François Recanati calls ‘mental files’ or what Ruth Millikan calls ‘substance concepts’ (Recanati 2012; Millikan 1999, 2000, 2017).
To possess such a concept one must have a reliable capacity to re-identify the object in question, but this capacity of re-identification does not fix the reference of the concept. Such concepts allow us to collect and utilize useful information about things that we re-encounter in our environment. We can distinguish between a perception-action system and a perception-belief system, and I will argue that empirical concepts, including atomic event concepts, can play a role in both systems. The perception-action system involves the application of concepts in the service of (often skilled) action. We can think of the concept as a mental file containing motor-plans that can be activated once the individual recognizes that they are in a certain situation. In this way, recognizing something (whether an object or an event) as a token of a type plays a role in guiding immediate action. The perception-belief system, in contrast, allows for the formation of beliefs that can play a role in deliberation and planning and in the formation of expectations. I distinguish between two particular types of belief, which I call where-beliefs and when-beliefs, and I argue that we can model the formation of such perceptual beliefs in nonlinguistic animals and human infants in terms of the formation of a link between an empirical concept and a position on a cognitive map. According to the account offered, seemingly complex beliefs, such as a baby's belief that Daddy changed her diaper in the kitchen earlier, will not be linguistically structured. If we think that prelinguistic infants possess such concepts and are able to form such beliefs, it is likely that adults do too. The ability to form such beliefs does not require the capacity for public language, and we can model them in nonlinguistic terms; thus, we have no good reason to think of such beliefs as propositional attitudes.
Of course, we can use sentences to refer to such beliefs, and thus it is possible to think of such beliefs as, in some sense, relations to propositions. But it is not clear to me what is gained by this, as we have a perfectly good way to think about the structure of such beliefs that does not involve any appeal to language.
Temporal binding is the phenomenon in which events related as cause and effect are perceived by humans to be closer in time than they actually are. Despite the fact that temporal binding experiments with humans have relied on verbal instructions, we argue that they are adaptable to nonhuman animals, and that a finding of temporal binding from such experiments would provide evidence of causal reasoning that cannot be reduced to associative learning. Our argument depends on describing and theoretically motivating an intermediate level of representations between the lower level of associations of sensory features and the higher level of symbolic representations. This intermediate level of representations makes it possible to challenge arguments given by some comparative psychologists that animals lack higher-level abstract and explicit forms of causal reasoning because their cognitive capacities are limited to learning and reasoning at the basic level of perceptual associations. Our multi-level account connects time perception with causal reasoning and provides a philosophically defensible framework for experimental investigations that have not yet been pursued. We describe the structure of some possible experiments and consider the implications that would follow from a positive finding of temporal binding in nonhuman animals. Such a finding would provide evidence of explicit awareness of causal relationships and would warrant attribution of intermediate representations that are more abstract and sophisticated than the associations allowed by the lower level of the two-level account.
It is seemingly bad for animals to have their desires modified in at least some cases, for instance where brainwashing or neurological manipulation takes place. In humans, many argue that such modification interferes with our positive liberty or undermines our autonomy, but this explanation is inapplicable in the case of animals as they lack the capacity for autonomy in the relevant sense. As such, the standard view has been that, despite any intuitions to the contrary, the modification of animals’ desires is not harmful (at least not in itself). In this article, I offer a different perspective on this issue, laying the foundations of a novel argument in defence of the view that animals _can_ be harmed by desire modification directly. I suggest that the modification of an animal’s desires (under certain circumstances) is harmful for that animal because it undermines their agency.
It’s common to think that animals think. The cat thinks it is time to be fed; the monkey thinks the dominant is a threat. In order to make sense of what the other animals around us do, we ascribe mental states to them. The cat meows at the door because she wants to be let in. The monkey fails the test because he doesn’t remember the answer.

We explain animal actions in terms of their mental states, just as we do with humans. One of us has argued that our science of animal minds requires that animal behavior be explained in such terms, and this doesn’t lead to a problematic use of folk psychology or anthropomorphism (Andrews 2016, 2020). By “anthropomorphism” we mean the attribution of human psychological, social, or normative properties to non-human animals “usually with the implication it is done without sound justification” (Shettleworth 2010, 477). And by “folk psychology” we mean the commonsense practice of seeing action as caused or accompanied by mental states like belief and desire, emotions, and seeing people in terms of their moods or personality traits, as well as categorizing complex behaviors as examples of grieving, communicating, or teaching (Andrews 2012). Psychologists routinely describe human behaviors in folk psychological terms, so it’s not that the categories are unscientific. The issue with using folk psychology to describe animal behavior is whether observable similarities between human and nonhuman behavior warrant thinking they involve the same psychological kind. The use of folk psychology when talking about animals need not be problematically anthropomorphic, though we need some evidentiary basis for filing animal behavior under some folk psychological category.

Despite it being commonplace for humans to attribute thoughts to animals, and there being arguments in favor of doing so in science, the nature of these mental states so many are happy to see in other animals remains unclear.
To help bring some focus into the discussion, we will examine the attitude of belief. In this chapter we consider the various possible statuses of animal beliefs, and the implications of those views for our folk practice as well as for our scientific investigations.
Theory of mind, the attribution of mental states to others, is one form of social cognition. The aim of this paper is to highlight the importance of another, much simpler, form of social cognition, which I call vicarious representation. Vicarious representation is the attribution of other-centered properties to objects. This mental capacity is different from, and much simpler than, theory of mind, as it does not imply the understanding (or representation) of the mental (or even perceptual) states of other agents. I argue that the most convincing experiments that are supposed to show that non-human primates have theory of mind in fact demonstrate that they are capable of vicarious representation. The same is true for the experiments about the theory of mind of infants under 12 months.
In this essay we discuss recent attempts to analyse the notion of representation, as it is employed in cognitive science, in purely informational terms. In particular, we argue that recent informational theories cannot accommodate the existence of metarepresentations. Since metarepresentations play a central role in the explanation of many cognitive abilities, this is a serious shortcoming of these proposals.
Many species rely on the three-dimensional surface layout of an environment to find a desired goal following disorientation. They generally do so to the exclusion of other important spatial cues. Two influential frameworks for explaining that phenomenon are provided by geometric-module theories and view-matching theories of reorientation respectively. The former posit a module that operates only on representations of the global geometry of three-dimensional surfaces to guide behavior. The latter place snapshots, stored representations of the subject’s two-dimensional retinal stimulation at specific locations, at the heart of their accounts. In this paper, I take a fresh look at the debate between them. I begin by making a case that the empirical evidence we currently have does not clearly favor one framework over the other, and that the debate has reached something of an impasse. Then, I present a new explanatory problem—the representation selection problem—that offers the prospect of breaking the impasse by introducing a new type of explanatory consideration that both frameworks must address. The representation selection problem requires explaining how subjects can reliably select the relevant representation with which they initiate the reorientation process. I argue that the view-matching framework does not have the resources to address this problem, while a certain type of theory within the geometric-module framework can provide a natural response to it. In showing this, I develop a new geometric-module theory.
We focus on three main sets of topics emerging from the commentaries on our target article. First, we discuss several types of animal behavior that commentators cite as evidence against our claim that animals are restricted to temporal updating and cannot engage in temporal reasoning. In doing so, we illustrate further how explanations of behavior in terms of temporal updating work. Second, we respond to commentators’ queries about the developmental process through which children acquire a capacity for temporal reasoning and about the relation between our account and accounts drawing similar distinctions in other domains of cognition. Finally, we address some broader theoretical issues arising from the commentaries, concerning in particular the question as to how our account relates to the phenomenology of experience in time, and the question as to whether our dichotomy between temporal reasoning and temporal updating is exhaustive, or whether there might be other forms of cognition or representation related to time not captured by it.
We argue that animals are not cognitively stuck in time. Evidence pertaining to multisensory temporal order perception strongly suggests that animals can represent at least some temporal relations of perceived events.
Deception has recently received a significant amount of attention. One of the main reasons is that it lies at the intersection of various areas of research, such as the evolution of cooperation, animal communication, ethics or epistemology. This essay focuses on the biological approach to deception and argues that standard definitions put forward by most biologists and philosophers are inadequate. We provide a functional account of deception which solves the problems of extant accounts in virtue of two characteristics: deceptive states have the function of causing misinformative states, and they do not necessarily provide direct benefits to the deceivers and losses to the targets.
Anthropomorphism is the methodology of attributing human-like mental states to animals. Zoomorphism is the converse of this: it is the attribution of animal-like mental states to humans. Zoomorphism proceeds by first understanding what kind of mental states animals have and then attributing these mental states to humans. Zoomorphism has been widely used as a scientific methodology, especially in cognitive neuroscience. But it has not been taken seriously as a philosophical explanatory paradigm: as a way of explaining the building blocks of the human mind. The philosophical explanatory paradigm of zoomorphism may not explain all aspects of human behavior, but if we accept the zoomorphic way of thinking about the human mind, we should only posit new, different kinds of mental states if the zoomorphic attribution of animal mental states fails to explain our behavior.
That great apes are the only primates to recognise their reflections is often taken to show that they are self-aware—however, there has been much recent debate about whether the self-awareness in question is psychological or bodily self-awareness. This paper argues that whilst self-recognition does not require psychological self-awareness, to claim that it requires only bodily self-awareness would leave something out. That is, self-recognition requires ‘objective self-awareness’—the capacity for first-person thoughts like ‘that's me’, which involve self-identification and so are vulnerable to error through misidentification. This objective self-awareness is distinct from bodily or psychological self-awareness, requires cognitive sophistication, and provides the beginnings of a more conceptual self-representation which might play a role in planning, mental time travel and theory of mind.
The idea that only complex brains can possess genuine representations is an important element in mainstream philosophical thinking. An alternative view, which I label ‘liberal representationalism’, holds that we should accept the existence of many more full-blown representations, from activity in retinal ganglion cells to the neural states produced by innate releasing mechanisms in cognitively unsophisticated organisms. A promising way of supporting liberal representationalism is to show it to be a consequence of our best naturalistic theories of representation. However, several philosophers and scientists have recently argued against this strategy. In the paper I counter these objections in defense of liberal representationalism.
Intentionality is a central feature of our understanding of the world. We daily attribute intentional states (like beliefs, desires or perceptual states) to explain the behavior of other agents, and many theories appeal to them to understand more complex notions. Nonetheless, intentional states are puzzling entities. This article first explains what intentionality is and why it is so important and problematic at the same time. Second, it examines various naturalistic theories, which seek to show that intentionality is compatible with a scientific worldview. Finally, given that all extant proposals face significant difficulties, it explores the available options in case no naturalistic theory can succeed.
Ethological theories usually attribute semantic content to animal signals. To account for this fact, many biologists and philosophers appeal to some version of teleosemantics. However, this picture has recently come under attack: while mainstream teleosemantics assumes that representational systems must cooperate, some biologists and philosophers argue that in certain cases signaling can evolve within systems lacking common interest. In this paper I defend the standard view from this objection.
One of the main tenets of current teleosemantic theories is that simple representations are Pushmi-Pullyu states, i.e. they carry descriptive and imperative content at the same time. In the paper I present an argument that shows that if we add this claim to the core tenets of teleosemantics, then (1) it entails that, necessarily, all representations are Pushmi-Pullyu states and (2) it undermines one of the main motivations for the Pushmi-Pullyu account.
Do animals have minds? We have known at least since Aristotle that humans constitute one species of animal. And some benighted contemporaries apart, we also know that most humans have minds. To have any bite, therefore, the question must be restricted to non-human animals, to which I shall henceforth refer simply as "animals." I shall further assume that animals are bereft of linguistic faculties. So, do some animals have minds comparable to those of humans? As regards that question, there are two basic stances. Differentialists maintain that there are categorical differences separating us from animals; assimilationists hold that the differences are merely quantitative and gradual (see Brandom 2000, pp. 2–3). This paper only deals with one kind of mental phenomenon, namely intentional states such as believing, desiring, and intending. I shall also refer to these as having thoughts or thinking rather than as "propositional attitudes," since that terminology is misguided. My primary target is a variant of differentialism, namely lingualism. It maintains that animals lack intentional states such as beliefs, desires, and intentions, since, on a priori conceptual grounds, the latter require language. The term "language" is here confined to public languages, notably natural languages, and excludes inner symbolisms such as the language of thought postulated by many cognitive scientists.
In a number of articles, Hans-Johann Glock has argued against the »lingualist« view that higher mental capacities are a prerogative of language-users. He has defended the »assimilationist« claim that the mental capacities of humans and of non-human animals differ only in degree. In the paper under discussion, Glock argues that animals are capable of acting for reasons, provided that reasons are construed along the lines of the new »objectivist« theory of practical reasons.
In this paper I defend a teleological explanation of normativity, i.e., I argue that what an organism is supposed to do is determined by its etiological function. In particular, I present a teleological account of the normativity that arises in learning processes, and I defend it from some objections.
The received Cognitive Science paradigm holds that the brain manipulates mental representations of reality. This position is problematic. My alternative to representationalism is that each organism lives in its own "world" made up of objects defined by reference to the organism’s perceptual systems. These objects act as supervenient causes on organisms without the mediation of mental representations. (1992).