The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic-level) theories. Despite the expressive power of current Bayesian models, we argue that they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue that this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
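As a gloss on what the abstract means by calling the inference step itself trivial, Bayes' rule can be written out in one line; the sketch below is our rendering, not the authors' notation, and the hypothesis set H, prior, and likelihood are where any substantive psychological commitments would live.

```latex
% Bayes' rule over a hypothesis set H: the update itself is mechanical.
% Every substantive assumption sits in H, the prior P(h), and the
% likelihood P(d | h), not in the update rule.
\[
  P(h \mid d) \;=\; \frac{P(d \mid h)\, P(h)}
                         {\sum_{h' \in H} P(d \mid h')\, P(h')}
\]
```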
Cognitive function certainly poses the biggest challenge for computational neuroscience. As we argue, past efforts to build neural models of cognition (the target article included) had too narrow a focus on implementing rule-based language processing. The problem with these models is that they sacrifice the advantages of connectionism rather than building on them. Recent and more promising approaches for modeling cognition build on the mathematical properties of distributed neural representations. These approaches truly exploit the key advantages of connectionism, that is, the high representational power of distributed neural codes and similarity-based pattern recognition. The architectures for cognitive computing that emerge from these approaches are neural associative memories endowed with additional mapping operations to handle invariances and to form reduced representations of combinatorial structures.
Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example are provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in the psychological literature.
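To make the flavor of such models concrete, here is a minimal sketch (our illustration, not taken from any article in the topic) of how quantum probability produces question-order effects: judgments are modeled as projections of a belief state, and because the two projectors do not commute, the probability of answering yes to both questions depends on the order in which they are asked. The state and angles are arbitrary toy values.

```python
import numpy as np

# Toy belief state: a unit vector in R^2 (hypothetical values).
psi = np.array([np.cos(0.3), np.sin(0.3)])

# Projector for question A: the first basis axis.
P_a = np.array([[1.0, 0.0], [0.0, 0.0]])

# Projector for question B: an axis rotated by 45 degrees.
b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
P_b = np.outer(b, b)

def prob_yes_yes(first, second, state):
    """P(yes to `first`, then yes to `second`) via successive projection."""
    collapsed = first @ state          # unnormalized post-measurement state
    return np.linalg.norm(second @ collapsed) ** 2

print(prob_yes_yes(P_a, P_b, psi))  # A then B: ~0.46
print(prob_yes_yes(P_b, P_a, psi))  # B then A: ~0.39 -- an order effect
```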
We argue that dynamical and mathematical models in systems and cognitive neuroscience explain (rather than redescribe) a phenomenon only if there is a plausible mapping between elements in the model and elements in the mechanism for the phenomenon. We demonstrate how this model-to-mechanism-mapping constraint, when satisfied, endows a model with explanatory force with respect to the phenomenon to be explained. Several paradigmatic models, including the Haken-Kelso-Bunz model of bimanual coordination and the difference-of-Gaussians model of visual receptive fields, are explored.
Humans and other animals have an evolved ability to detect discrete magnitudes in their environment. Does this observation support evolutionary debunking arguments against mathematical realism, as has been recently argued by Clarke-Doane, or does it bolster mathematical realism, as authors such as Joyce and Sinnott-Armstrong have assumed? To find out, we need to pay closer attention to the features of evolved numerical cognition. I provide a detailed examination of the functional properties of evolved numerical cognition, and propose that they prima facie favor a realist account of numbers.
This paper supports the literature that argues that derivational robustness can have epistemic import in highly idealized economic models. The defense is based on a particular example from mathematical economic theory, the dynamic Walrasian general equilibrium model. It is argued that derivational robustness first increased and later decreased the credibility of the Walrasian model. The example demonstrates that derivational robustness correctly describes the practices of a particular group of influential economic theorists and provides support for the arguments of philosophers who have offered a general epistemic justification of such practices.
We describe recent developments in research on mathematical practice and cognition and outline the nine contributions in this special issue of topiCS. We divide these contributions into those that address (a) mathematical reasoning: patterns, levels, and evaluation; (b) mathematical concepts: evolution and meaning; and (c) the number concept: representation and processing.
Making Sense of Inner Sense. 'Terra cognita' is terra incognita. It is difficult to find someone not taken aback and fascinated by the incomprehensible but indisputable fact: there are material systems which are aware of themselves. Consciousness is self-cognizing code. During Homo sapiens's relentless and often frustrated search for self-understanding, various theories of consciousness have been and continue to be proposed. However, it remains unclear whether and at what level the problems of consciousness and intelligent thought can be resolved. Science's greatest challenge is to answer the fundamental question: what precisely does a cognitive state amount to in physical terms? Albert Einstein insisted that the fundamental ideas of science are essentially simple and can be expressed in a language comprehensible to everyone. When one thinks about the complexities which present themselves in modern physics, and even more so in the physics of life, one may wonder whether Einstein really meant what he said. Are we to consider the fundamental problem of the mind, whose understanding seems to lie outside the limits of the mind, to be essentially simple too? Knowledge is neither automatic nor universally deductive. Great new ideas are typically counterintuitive and outrageous, and connecting them by simple logical steps to existing knowledge is often a hard undertaking. The notion of a tensor was needed to formulate the general theory of relativity; the notion of entropy had to be developed before we could gain full insight into the laws of thermodynamics; the notion of the information bit is crucial for communication theory, just as the concept of a Turing machine is instrumental in the deep understanding of a computer. To understand something, consciousness must reach an adequate intellectual level, even more so in order to understand itself. Reality is full of unending mysteries, the true explanation of which requires very technical knowledge, often involving notions not given directly to intuition. Even though the entire content and the results of this study are contained in the eight pages of the mathematical abstract, it would be unrealistic and impractical to suggest that anyone can gain full insight into the theory presented here after just reading the abstract. In our quest for knowledge we are exploring the remotest areas of the macrocosm and probing the invisible particles of the microcosm, from tiny neutrinos and strange quarks to black holes and the Big Bang. But the greatest mystery is very close to home: the greatest mystery is human consciousness. The question before us is whether the logical brain has evolved to a conceptual level where it is able to understand itself.
Recently, a number of philosophers of biology have endorsed views about random drift that, we will argue, rest on an implicit assumption that the meaning of concepts such as drift can be understood through an examination of the mathematical models in which drift appears. They also seem to implicitly assume that ontological questions about the causality of terms appearing in the models can be gleaned from the models alone. We will question these general assumptions by showing how the same equation — the simple (p + q)² = p² + 2pq + q² — can be given radically different interpretations, one of which is a physical, causal process and one of which is not. This shows that mathematical models on their own yield neither interpretations nor ontological conclusions. Instead, we argue that these issues can only be resolved by considering the phenomena that the models were originally designed to represent and the phenomena to which the models are currently applied. When one does take those factors into account, starting with the motivation for Sewall Wright's and R.A. Fisher's early drift models and ending with contemporary applications, a very different picture of the concept of drift emerges. On this view, drift is a term for a set of physical processes, namely, indiscriminate sampling processes.
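To spell out the contrast between the two readings (a hedged reconstruction, on the assumption that the equation in question is the binomial expansion behind the Hardy-Weinberg principle):

```latex
\[
  (p + q)^2 \;=\; p^2 + 2pq + q^2
\]
% Reading 1 (purely mathematical): a binomial identity, true of any
% quantities p and q; no causal content.
% Reading 2 (causal, Hardy--Weinberg): p and q are the population
% frequencies of alleles A and a, and p^2, 2pq, q^2 are the expected
% frequencies of genotypes AA, Aa, aa produced by random mating,
% i.e., a claim about a physical sampling process.
```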
Mathematical models of tumour invasion appear as interesting tools for connecting the information extracted from medical imaging techniques and the large amount of data collected at the cellular and molecular levels. Most of the recent studies have used stochastic models of cell translocation for the comparison of computer simulations with histological solid tumour sections in order to discriminate and characterise expansive growth and active cell movements during host tissue invasion. This paper describes how a deterministic approach based on reaction-diffusion models and their generalisation in the mechano-chemical framework developed in the study of biological morphogenesis can be an alternative for analysing tumour morphological patterns. We support these considerations by reviewing two studies. In the first example, successful comparison of simulated brain tumour growth with a time sequence of computerised tomography (CT) scans leads to a quantification of the clinical parameters describing the invasion process and the therapy. The second example considers minimal hypotheses relating cell motility and cell traction forces. Using this model, we can simulate the bifurcation from a homogeneous distribution of cells at the tumour surface toward a nonhomogeneous density pattern which could characterise a pre-invasive stage at the tumour-host tissue interface.
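For readers unfamiliar with the deterministic approach, the sketch below shows the simplest member of the reaction-diffusion family, the 1-D Fisher-KPP equation du/dt = D d²u/dx² + r u(1 - u), in which diffusion plus logistic proliferation produces a travelling invasion front. It is a generic illustration under assumed parameter values, not a reproduction of either reviewed study.

```python
import numpy as np

# 1-D Fisher-KPP reaction-diffusion model (illustrative parameters only):
#   du/dt = D * d^2u/dx^2 + r * u * (1 - u)
D, r = 0.1, 1.0
dx, dt = 0.1, 0.01                # explicit Euler is stable for dt < dx^2/(2D)
x = np.arange(0.0, 10.0, dx)
u = np.where(x < 1.0, 1.0, 0.0)   # initial cell density: tumour mass on the left

for _ in range(1000):             # integrate to t = 10
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2        # no-flux boundary conditions
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    u += dt * (D * lap + r * u * (1.0 - u))

# The front advances at roughly the Fisher wave speed 2 * sqrt(r * D) ~ 0.63.
print(f"front position ~ {x[np.argmax(u < 0.5)]:.1f}")
```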
In this essay I argue against I. Bernard Cohen's influential account of Newton's methodology in the Principia: the 'Newtonian Style'. The crux of Cohen's account is the successive adaptation of 'mental constructs' through comparisons with nature. In Cohen's view there is a direct dynamic between the mental constructs and physical systems. I argue that his account is essentially hypothetico-deductive, which is at odds with Newton's rejection of the hypothetico-deductive method. An adequate account of Newton's methodology needs to show how Newton's method proceeds differently from the hypothetico-deductive method. In the constructive part I argue for my own account, which is model-based: it focuses on how Newton constructed his models in Book I of the Principia. I will show that Newton understood Book I as an exercise in determining the mathematical consequences of certain force functions. The growing complexity of Newton's models is a result of exploring increasingly complex force functions (intra-theoretical dynamics) rather than of successive comparison with nature (extra-theoretical dynamics). Nature did not enter the scene here. This intra-theoretical dynamics is related to the 'autonomy of the models'.
There are presently two leading foreign policy decision-making paradigms in vogue. The first is based on the classical or rational model originally posited by von Neumann and Morgenstern to explain microeconomic decisions. The second is based on the cybernetic perspective whose groundwork was laid by Herbert Simon in his early research on bounded rationality. In this paper we introduce a third perspective — the poliheuristic theory of decision-making — as an alternative to the rational actor and cybernetic paradigms in international relations. This theory is drawn in large part from research on heuristics done in experimental cognitive psychology. According to the poliheuristic theory, policy makers use poly (many) heuristics while focusing on a very narrow range of options and dimensions when making decisions. Among them, the political dimension is noncompensatory. The paper also delineates the mathematical formulations of the three decision-making models.
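As a concrete gloss on what a noncompensatory political dimension means, here is a minimal two-stage sketch in the spirit of poliheuristic theory; the options, scores, weights, and threshold are all hypothetical, and this is not the paper's own mathematical formulation.

```python
# Stage 1: noncompensatory screen -- any option below the political threshold
# is eliminated outright; no strength on other dimensions can compensate.
# Stage 2: a compensatory weighted rule chooses among the survivors.

options = {
    "use force":  {"political": 2, "economic": 8, "military": 9},
    "negotiate":  {"political": 7, "economic": 6, "military": 4},
    "do nothing": {"political": 5, "economic": 5, "military": 5},
}
weights = {"political": 0.5, "economic": 0.3, "military": 0.2}
POLITICAL_THRESHOLD = 4

survivors = {name: dims for name, dims in options.items()
             if dims["political"] >= POLITICAL_THRESHOLD}

best = max(survivors, key=lambda name: sum(weights[d] * survivors[name][d]
                                           for d in weights))
print(best)  # "negotiate": "use force" is screened out despite high scores
```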
Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
A central claim in Luiz Pessoa’s (2013) book is that the terms “emotion” and “cognition” can be useful in characterizing behaviors but will not be cleanly mapped onto brain regions. In order to be verified, this claim requires models for the integration and interfacing of emotion and cognition; yet, such models remain problematic.
The lack of conceptual analysis within cognitive science results in multiple models of the same phenomena. However, these models incorporate assumptions that contradict basic structural features of the domain they are describing. This is particularly true of the domain of mathematical cognition. In this paper we argue that foundational theoretic aspects of psychological models for language and arithmetic should be clarified before postulating such models. We propose a means to clarify these foundational concepts by analyzing the distinctions between metric and linguistic compositionality, which we use to assess current models of mathematical cognition. Our proposal is consistent with the scientific methodology that determines that careful conceptual analysis should precede theoretical descriptions of data.
Gene regulatory networks are intensively studied in biology. One of the main aims of these studies is to gain an understanding of how the structure of genetic networks relates to specific functions such as chemotaxis and the circadian clock. Scientists have examined this question by using model organisms such as Drosophila and mathematical models. In recent years, synthetic models—engineered genetic networks—have become more and more important in the exploration of gene regulation. What is the potential of this new approach in the investigation of gene network structures? How do synthetic models relate to model organisms and mathematical models?
The main problem discussed in this paper is: Why and how did animal cognition abilities arise? It is argued that investigations of the evolution of animal cognition abilities are very important from an epistemological point of view. A new direction for interdisciplinary research – the creation and development of a theory of the origin of human logic – is proposed. The approaches to the origination of such a theory (mathematical models of “intelligent invention” by biological evolution, the cybernetic schemes of evolutionary progress and purposeful adaptive behavior) as well as potential interdisciplinary links of the theory are described and analyzed.
Jonathan Waskan challenges cognitive science's dominant model of mental representation and proposes a novel, well-devised alternative. The traditional view in the cognitive sciences uses a linguistic model of mental representation. That logic-based model of cognition informs and constrains both the classical tradition of artificial intelligence and modeling in the connectionist tradition. It falls short, however, when confronted by the frame problem: the lack of a principled way to determine which features of a representation must be updated when new information becomes available. So far, proposed alternatives, including the imagistic model, have not resolved the problem. Waskan proposes the Intrinsic Cognitive Models (ICM) hypothesis, according to which representational states can be conceptualized as the cognitive equivalent of scale models. Waskan argues further that the proposal that humans harbor and manipulate cognitive counterparts to scale models offers the only viable explanation for what most clearly differentiates humans from other creatures: the capacity to engage in truth-preserving manipulation of representations. The ICM hypothesis, he claims, can be distinguished from sentence-based accounts of truth preservation in a way that is fully compatible with what is known about the brain.
We define a mathematical formalism based on the concept of an "open dynamical system" and show how it can be used to model embodied cognition. This formalism extends classical dynamical systems theory by distinguishing a "total system" (which models an agent in an environment) and an "agent system" (which models an agent by itself), and it includes tools for analyzing the collections of overlapping paths that occur in an embedded agent's state space. To illustrate the way this formalism can be applied, several neural network models are embedded in a simple model environment. Such phenomena as masking, perceptual ambiguity, and priming are then observed. We also use this formalism to reinterpret examples from the embodiment literature, arguing that it provides for a more thorough analysis of the relevant phenomena.
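A toy rendering of the total-system/agent-system distinction may help fix ideas; this is our own minimal sketch with made-up one-dimensional dynamics, not the formalism defined in the paper.

```python
def agent_step(a, sensory_input):
    """Agent system: agent dynamics with the sensory channel left open."""
    return 0.9 * a + 0.1 * sensory_input

def env_step(e, agent_output):
    """Environment dynamics, driven by the agent's output."""
    return 0.95 * e + 0.05 * agent_output

def total_step(state):
    """Total system: agent and environment coupled into one closed system."""
    a, e = state
    return (agent_step(a, sensory_input=e), env_step(e, agent_output=a))

# Closing the agent's open input channel with an environment model yields
# a trajectory of the total system through the joint state space.
state = (1.0, 0.0)
for _ in range(50):
    state = total_step(state)
print(state)
```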
Ever since the early decades of this century, there have emerged a number of competing schools of ecology that have attempted to weave the concepts underlying natural resource management and natural-historical traditions into a formal theoretical framework. It was widely believed that the discovery of the fundamental mechanisms underlying ecological phenomena would allow ecologists to articulate mathematically rigorous statements whose validity was not predicated on contingent factors. The formulation of such statements would elevate ecology to the standing of a rigorous scientific discipline on a par with physics. However, there was no agreement as to the fundamental units of ecology. Systems ecologists sought to identify the fundamental organization that tied the physical and biological components of ecosystems into an irreducible unit: the ecosystem was their fundamental unit. Population ecologists sought, instead, to identify the biological mechanisms regulating the abundance and distribution of plant and animal species: to these ecologists, the individual organism was the fundamental unit of ecology, and the physical environment was nothing more than a stage upon which the play of individuals in perennial competition took place. As Joel Hagen has pointed out, the two schools were thus divided by fundamentally different and irreconcilable assumptions about the nature of ecosystems. Notwithstanding these divisive efforts to elevate the image of ecology, the discipline remained in the shadows of American academia until the mid-1960s, when systems ecologists succeeded in projecting ecology onto the national scene. They did so by seeking closer involvement with practical problems: they argued before Congress that their approach to the theoretical problems of ecology was uniquely suited to the solution of the impending “environmental crisis.” With the establishment of the International Biological Program, they succeeded in attracting unprecedented levels of funding for systems ecology research. Theoretical population ecologists, on the other hand, found themselves consigned to the outer regions of this new institutional landscape. The systems ecologists' successful capture of the limelight and the purse brought the divisions between them and population ecologists into sharper relief, hence the hardening of the division of ecology observed by Hagen. I have argued that the population biologist Richard Levins, prompted by these institutional developments, sought to challenge the social position of systems ecology, and to assert the intellectual priority of theoretical population ecology. He attempted to do so by articulating a nontrivial and rather carefully thought out classification of ecological models that led to the disqualification of systems analysis as a legitimate approach to the study of ecological phenomena. I have suggested that, ultimately, Levins's case against systems analysis in ecology rested on the view that an aspiration to realism and prediction was incompatible with an interest in theoretical issues, a concern that he equated with the search for generality.
He sought to reinforce this argument by exploiting the fact that systems ecologists had staked their future on the provision of technical solutions to the problems of the “environmental crisis”: he associated systems ecologists' aspiration to realism and precision with a concern for practical issues, trading on the widely accepted view that practical imperatives are incompatible with the aims of scientific inquiry. These are plausible, but nonetheless questionable, claims which have now become an integral part of ecological knowledge. And finally, I hope to have shown how even the most abstract levels of scientific argument are shaped by political considerations, and how discussions of the conceptual development of modern ecology might benefit from a greater consideration of its historical and social dimensions.
Quantum cognition is an emerging field that uses mathematical principles of quantum theory to help formalize and understand cognitive systems and processes. The topic on the potential of using quantum theory to build models of cognition (Volume 5, Issue 4) introduces and synthesizes its new developments through an introduction and six core articles. The current issue presents 14 commentaries on the core articles. Five key issues surface, some of which are interestingly controversial and debatable, as expected for a newly emerging field.
This article takes off from Johan van Benthem’s ruminations on the interface between logic and cognitive science in his position paper “Logic and reasoning: Do the facts matter?”. When trying to answer Van Benthem’s question whether logic can be fruitfully combined with psychological experiments, this article focuses on a specific domain of reasoning, namely higher-order social cognition, including attributions such as “Bob knows that Alice knows that he wrote a novel under a pseudonym”. For intelligent interaction, it is important that the participants recursively model the mental states of other agents. Otherwise, an international negotiation may fail even when it has potential for a win-win solution, and in a time-critical rescue mission, a software agent may depend on a teammate’s action that never materializes. First, a survey is presented of past and current research on higher-order social cognition, from the various viewpoints of logic, artificial intelligence, and psychology. Do people actually reason about each other’s knowledge in the way prescribed by epistemic logic? And if not, how can logic and cognitive science productively work together to construct more realistic models of human reasoning about other minds? The paper ends with a delineation of possible avenues for future research, aiming to provide a better understanding of higher-order social reasoning. The methodology is based on a combination of experimental research, logic, computational cognitive models, and agent-based evolutionary models. Keywords: Epistemic logic; Cognitive science; Intelligent interaction; Cognitive modeling.
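For readers unused to the notation, the example attribution can be written in standard epistemic logic, where K_i φ reads "agent i knows that φ" (our rendering of the example, not a formula quoted from the article):

```latex
\[
  K_{\mathit{Bob}}\, K_{\mathit{Alice}}\, p,
  \quad\text{where } p = \text{``Bob wrote a novel under a pseudonym''}
\]
% A second-order attribution: Bob's knowledge operator scopes over Alice's.
% Higher orders nest further operators, e.g. K_Alice K_Bob K_Alice p.
```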
This second volume in the Counterpoints Series focuses on alternative models of visual-spatial processing in human cognition. The editors provide a historical and theoretical introduction and offer ideas about directions and new research designs.
Rips et al. appear to discuss, and then dismiss with counterexamples, the brain-based theory of mathematical cognition given in Lakoff and Núñez (2000). Instead, they present another theory of their own that they correctly dismiss. Our theory is based on neural learning. Rips et al. misrepresent our theory as being directly about real-world experience and mappings directly from that experience.
Bayesian Rationality (Oaksford & Chater 2007) illustrates the strengths of Bayesian models of cognition: the systematicity of rational explanations, transparent assumptions about human learners, and combining structured symbolic representation with statistics. However, the book also highlights some of the challenges this approach faces: of providing psychological mechanisms, explaining the origins of the knowledge that guides human learning, and accounting for how people make genuinely new discoveries.
Thomas & Karmiloff-Smith's argument that the Residual Normality assumption is not valid for developmental disorders has implications for models of cognition in schizophrenia, a disorder that may involve a neurodevelopmental pathogenesis. A limiting factor for such theories is the lack of understanding about the nature of the cognitive system. Moreover, it is unclear how the proposal that modularization emerges from developmental processes would change that fundamental question.
This paper explores the question of whether connectionist models of cognition should be considered to be scientific theories of the cognitive domain. It is argued that in traditional scientific theories, there is a fairly close connection between the theoretical (unobservable) entities postulated and the empirical observations accounted for. In connectionist models, however, hundreds of theoretical terms are postulated -- viz., nodes and connections -- that are far removed from the observable phenomena. As a result, many of the features of any given connectionist model are relatively optional. This leads to the question of what, exactly, is learned about a cognitive domain modelled by a connectionist network.