We describe recent developments in research on mathematical practice and cognition and outline the nine contributions in this special issue of topiCS. We divide these contributions into those that address (a) mathematical reasoning: patterns, levels, and evaluation; (b) mathematical concepts: evolution and meaning; and (c) the number concept: representation and processing.
To explore the relation between mathematical models and reality, four different domains of reality are distinguished: observer-independent reality (to which there is no direct access), personal reality, social reality, and mathematical/formal reality. The concepts of personal and social reality are strongly inspired by constructivist ideas. Mathematical reality is social as well, but constructed as an autonomous system in order to make absolute agreement possible. The essential problem of mathematical modelling is that within mathematics there is agreement about ‘truth’, but the assignment of mathematics to informal reality is not itself formally analysable, and it is dependent on social and personal construction processes. On these levels, absolute agreement cannot be expected. Starting from this point of view, the repercussions of mathematical reality on social and personal reality, the historical development of mathematical modelling, and the role, use and interpretation of mathematical models in scientific practice are discussed.
Making Sense of Inner Sense. 'Terra cognita' is terra incognita. It is difficult to find someone not taken aback and fascinated by the incomprehensible but indisputable fact: there are material systems which are aware of themselves. Consciousness is self-cognizing code. During Homo sapiens's relentless and often frustrated search for self-understanding, various theories of consciousness have been and continue to be proposed. However, it remains unclear whether and at what level the problems of consciousness and intelligent thought can be resolved. Science's greatest challenge is to answer the fundamental question: what precisely does a cognitive state amount to in physical terms? Albert Einstein insisted that the fundamental ideas of science are essentially simple and can be expressed in a language comprehensible to everyone. When one thinks about the complexities which present themselves in modern physics, and even more so in the physics of life, one may wonder whether Einstein really meant what he said. Are we to consider the fundamental problem of the mind, whose understanding seems to lie outside the limits of the mind, to be essentially simple too? Knowledge is neither automatic nor universally deductive. Great new ideas are typically counterintuitive and outrageous, and connecting them by simple logical steps to existing knowledge is often a hard undertaking. The notion of a tensor was needed to formulate the general theory of relativity; the notion of entropy had to be developed before we could gain full insight into the laws of thermodynamics; the notion of the information bit is crucial for communication theory, just as the concept of a Turing machine is instrumental in the deep understanding of a computer. To understand something, consciousness must reach an adequate intellectual level, even more so in order to understand itself. Reality is full of unending mysteries, the true explanation of which requires very technical knowledge, often involving notions not given directly to intuition. Even though the entire content and the results of this study are contained in the eight pages of the mathematical abstract, it would be unrealistic and impractical to suggest that anyone can gain full insight into the theory presented here after just reading the abstract. In our quest for knowledge we are exploring the remotest areas of the macrocosm and probing the invisible particles of the microcosm, from tiny neutrinos and strange quarks to black holes and the Big Bang. But the greatest mystery is very close to home: the greatest mystery is human consciousness. The question before us is whether the logical brain has evolved to a conceptual level where it is able to understand itself.
Cognitive function certainly poses the biggest challenge for computational neuroscience. As we argue, past efforts to build neural models of cognition (the target article included) had too narrow a focus on implementing rule-based language processing. The problem with these models is that they sacrifice the advantages of connectionism rather than building on them. Recent and more promising approaches for modeling cognition build on the mathematical properties of distributed neural representations. These approaches truly exploit the key advantages of connectionism, that is, the high representational power of distributed neural codes and similarity-based pattern recognition. The architectures for cognitive computing that emerge from these approaches are neural associative memories endowed with additional mapping operations to handle invariances and to form reduced representations of combinatorial structures.
There are presently two leading foreign policy decision-making paradigms in vogue. The first is based on the classical or rational model originally posited by von Neumann and Morgenstern to explain microeconomic decisions. The second is based on the cybernetic perspective whose groundwork was laid by Herbert Simon in his early research on bounded rationality. In this paper we introduce a third perspective — the poliheuristic theory of decision-making — as an alternative to the rational actor and cybernetic paradigms in international relations. This theory is drawn in large part from research on heuristics done in experimental cognitive psychology. According to the poliheuristic theory, policy makers use poly (many) heuristics while focusing on a very narrow range of options and dimensions when making decisions. Among them, the political dimension is noncompensatory. The paper also delineates the mathematical formulations of the three decision-making models.
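The formulations themselves are not reproduced in the abstract. As a rough sketch of the contrast at issue, under a standard decision-theoretic reading (the weights, utilities, and political threshold below are illustrative notation, not the authors'), a compensatory expected-utility rule aggregates across dimensions, whereas a noncompensatory political screen eliminates alternatives outright:

```latex
% Compensatory (rational-actor) evaluation of alternative a_i across dimensions j:
\[
  EU(a_i) \;=\; \sum_{j} w_j \, u(c_{ij})
\]
% Noncompensatory screening on the political dimension p: any alternative
% scoring below the threshold \tau_p is eliminated, whatever its other scores.
\[
  A' \;=\; \bigl\{\, a_i \in A \;\bigm|\; u(c_{ip}) \ge \tau_p \,\bigr\}
\]
```

On this reading, no economic or military advantage can compensate for a politically unacceptable option, which is the sense in which the political dimension is noncompensatory.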
The foundation of mathematics is both a logico-formal issue and an epistemological one. By the first, we mean the explicitation and analysis of formal proof principles, which, largely a posteriori, ground proof on general deduction rules and schemata. By the second, we mean the investigation of the constitutive genesis of concepts and structures, the aim of this paper. This “genealogy of concepts”, so dear to Riemann, Poincaré and Enriques among others, is necessary both in order to enrich the foundational analysis with an often disregarded aspect (the cognitive and historical constitution of mathematical structures) and because of the provable incompleteness of proof principles, even in the analysis of deduction. For the purposes of our investigation, we will point here to a philosophical frame as well as to some recent experimental studies on numerical cognition that support our claim about the cognitive origin and the constitutive role of mathematical intuition.
The present paper argues that ‘mature mathematical formalisms’ play a central role in achieving representation via scientific models. A close discussion of two contemporary accounts of how mathematical models apply—the DDI account (according to which representation depends on the successful interplay of denotation, demonstration and interpretation) and the ‘matching model’ account—reveals shortcomings of each, which, it is argued, suggests that scientific representation may be ineliminably heterogeneous in character. In order to achieve a degree of unification that is compatible with successful representation, scientists often rely on the existence of a ‘mature mathematical formalism’, where the latter refers to a—mathematically formulated and physically interpreted—notational system of locally applicable rules that derive from (but need not be reducible to) fundamental theory. As mathematical formalisms undergo a process of elaboration, enrichment, and entrenchment, they come to embody theoretical, ontological, and methodological commitments and assumptions. Since these are enshrined in the formalism itself, they are no longer readily obvious to either the novice or the proficient user. At the same time as formalisms constrain what may be represented, they also function as inferential and interpretative resources.
Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example are provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in the psychological literature.
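To make the contrast with classical probability concrete, the following minimal sketch (an invented example, not the one from the introduction; the initial state and rotation angle are arbitrary) shows how modeling two yes/no questions as non-commuting projectors produces question-order effects, a class of paradoxical findings often cited in this literature:

```python
import numpy as np

# Two binary questions modeled as projectors in a shared belief space.
# Because the projectors do not commute, the probability of answering
# "yes" to both questions depends on the order in which they are asked.

psi = np.array([1.0, 0.0])                  # initial belief state (unit vector)
P_a = np.array([[1.0, 0.0], [0.0, 0.0]])    # projector for "yes" to question A

theta = np.pi / 6                           # question B uses a rotated basis
b = np.array([np.cos(theta), np.sin(theta)])
P_b = np.outer(b, b)                        # projector for "yes" to question B

def p_yes_then_yes(first, second, state):
    """P("yes" to `first`, then "yes" to `second`) via sequential projection."""
    return np.linalg.norm(second @ (first @ state)) ** 2

p_ab = p_yes_then_yes(P_a, P_b, psi)
p_ba = p_yes_then_yes(P_b, P_a, psi)
print(f"P(A then B) = {p_ab:.3f}, P(B then A) = {p_ba:.3f}")  # the orders differ
```

A single classical joint distribution would make the two sequential probabilities equal; the non-commutativity of the projectors is what lets the model accommodate order effects.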
Mathematical models of tumour invasion appear as interesting tools for connecting the information extracted from medical imaging techniques and the large amount of data collected at the cellular and molecular levels. Most of the recent studies have used stochastic models of cell translocation for the comparison of computer simulations with histological solid tumour sections in order to discriminate and characterise expansive growth and active cell movements during host tissue invasion. This paper describes how a deterministic approach based on reaction-diffusion models and their generalisation in the mechano-chemical framework developed in the study of biological morphogenesis can be an alternative for analysing tumour morphological patterns. We support these considerations by reviewing two studies. In the first example, successful comparison of simulated brain tumour growth with a time sequence of computerised tomography (CT) scans leads to a quantification of the clinical parameters describing the invasion process and the therapy. The second example considers minimal hypotheses relating cell motility and cell traction forces. Using this model, we can simulate the bifurcation from a homogeneous distribution of cells at the tumour surface toward a nonhomogeneous density pattern which could characterise a pre-invasive stage at the tumour-host tissue interface.
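As a minimal sketch of the deterministic approach described here, the classic proliferation-diffusion (Fisher-KPP) equation is a standard starting point in the brain-tumour modelling literature; the coefficients, grid, and initial lesion below are illustrative, not the clinical parameters recovered in the study:

```python
import numpy as np

# Proliferation-diffusion (Fisher-KPP) model of tumour cell density u(x, t):
#   du/dt = D * d2u/dx2 + rho * u * (1 - u)
# solved here in 1D with explicit finite differences.

D, rho = 0.1, 1.0            # assumed diffusion and proliferation coefficients
dx, dt = 0.5, 0.1            # grid spacing and time step (dt < dx**2 / (2*D))
x = np.arange(0.0, 50.0, dx)
u = np.exp(-x**2)            # initial focal lesion near the left boundary

for _ in range(500):         # forward-Euler time stepping
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (D * lap + rho * u * (1 - u))
    u[0], u[-1] = u[1], u[-2]    # zero-flux (Neumann) boundary conditions

print(f"invasion front (u > 0.5) has reached x = {x[u > 0.5].max():.1f}")
```

The logistic reaction term saturates the tumour core while diffusion advances a travelling invasion front, which is the qualitative behaviour exploited when such models are fitted to serial CT scans.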
We propose an analysis of symbolic and sub-symbolic models for studying cognitive processes, centered on the notions of emergence and logical openness. The theory of logical openness connects the physics of system/environment relationships to the informational structure of the system. In this theory, cognitive models can be ordered according to a hierarchy of complexity depending on their degree of logical openness, and their descriptive limits are correlated to the Gödel-Turing theorems on formal systems. Symbolic models with low logical openness describe cognition by means of semantics which fix the system/environment relationship (cognition in vitro), while sub-symbolic models with high logical openness tend to capture its evolving dynamics (cognition in vivo). An observer is defined as a system with high logical openness. In conclusion, the characteristic processes of intrinsic emergence typical of “bio-logic” (the emergence of new codes) require an alternative model to Turing computation, the natural or bio-morphic computation, whose essential features we outline here.
In this essay I argue against I. Bernard Cohen's influential account of Newton's methodology in the Principia: the 'Newtonian Style'. The crux of Cohen's account is the successive adaptation of 'mental constructs' through comparisons with nature. In Cohen's view there is a direct dynamic between the mental constructs and physical systems. I argue that his account is essentially hypothetico-deductive, which is at odds with Newton's rejection of the hypothetico-deductive method. An adequate account of Newton's methodology needs to show how Newton's method proceeds differently from the hypothetico-deductive method. In the constructive part I argue for my own account, which is model based: it focuses on how Newton constructed his models in Book I of the Principia. I will show that Newton understood Book I as an exercise in determining the mathematical consequences of certain force functions. The growing complexity of Newton's models is a result of exploring increasingly complex force functions (intra-theoretical dynamics) rather than a successive comparison with nature (extra-theoretical dynamics). Nature did not enter the scene here. This intra-theoretical dynamics is related to the 'autonomy of the models'.
In this commentary on Napoletani et al. (Found Sci 16:1–20, 2011), we argue that the approach the authors adopt suggests that neural nets are mathematical techniques rather than models of cognitive processing, that the general approach dates as far back as Ptolemy, and that applied mathematics is more than simply applying results from pure mathematics.
We define a mathematical formalism based on the concept of an “open dynamical system” and show how it can be used to model embodied cognition. This formalism extends classical dynamical systems theory by distinguishing a “total system” (which models an agent in an environment) and an “agent system” (which models an agent by itself), and it includes tools for analyzing the collections of overlapping paths that occur in an embedded agent's state space. To illustrate the way this formalism can be applied, several neural network models are embedded in a simple model environment. Such phenomena as masking, perceptual ambiguity, and priming are then observed. We also use this formalism to reinterpret examples from the embodiment literature, arguing that it provides for a more thorough analysis of the relevant phenomena.
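The formalism itself is not reproduced in the abstract. As a rough sketch of the distinction (the notation is invented for illustration, not the authors'), the total system is a closed dynamical system over the joint agent-environment state, while the agent system is an open system driven by an input channel supplied by the environment:

```latex
% Total system: closed dynamics over the joint state s = (a, e).
\[
  \dot{s} = F(s), \qquad s = (a, e) \in A \times E
\]
% Agent system: open dynamics on the agent's own state space, driven by an
% input u that the environment supplies.
\[
  \dot{a} = f(a, u), \qquad u = g(e)
\]
```

On this reading, the "collections of overlapping paths" arise because the same agent state $a$ can evolve along different trajectories under different environment-supplied inputs $u$.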
An influential position in the philosophy of biology claims that there are no biological laws, since any apparently biological generalization is either too accidental, fact-like or contingent to be named a law, or is simply reducible to physical laws that regulate electrical and chemical interactions taking place between merely physical systems. In the following I will stress a neglected aspect of the debate that emerges directly from the growing importance of mathematical models of biological phenomena. My main aim is to defend, as well as reinforce, the view that there are indeed laws also in biology, and that their difference in stability, contingency or resilience with respect to physical laws is one of degree, and not of kind. In order to reach this goal, in the next sections I will advance the following two arguments in favor of the existence of biological laws, both of which are meant to stress the similarity between physical and biological laws.
The paper discusses how systems biology is working toward complex accounts that integrate explanation in terms of mechanisms and explanation by mathematical models—which some philosophers have viewed as rival models of explanation. Systems biology is an integrative approach, and it strongly relies on mathematical modeling. Philosophical accounts of mechanisms capture integration in the sense of multilevel and multifield explanations, yet accounts of mechanistic explanation (as the analysis of a whole in terms of its structural parts and their qualitative interactions) have failed to address how a mathematical model could contribute to such explanations. I discuss how mathematical equations can be explanatorily relevant. Several cases from systems biology are discussed to illustrate the interplay between mechanistic research and mathematical modeling, and I point to questions about qualitative phenomena (rather than the explanation of quantitative details) where quantitative models are still indispensable to the explanation. Systems biology shows that a broader philosophical conception of mechanisms is needed, which takes into account functional-dynamical aspects, interaction in complex networks with feedback loops, system-wide functional properties such as distributed functionality and robustness, and a mechanism's ability to respond to perturbations (beyond its actual operation). I offer general conclusions for philosophical accounts of explanation.
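A toy example of the kind of equation at issue (invented for illustration, not drawn from the paper): a negative-feedback circuit in which a protein $p$ represses the transcription of its own mRNA $m$,

```latex
\[
  \frac{dm}{dt} \;=\; \frac{\alpha}{1 + (p/K)^{n}} \;-\; \delta_m\, m,
  \qquad
  \frac{dp}{dt} \;=\; \beta\, m \;-\; \delta_p\, p
\]
```

Whether such a mechanism settles to a steady state or oscillates depends on quantitative features such as the Hill coefficient $n$ and the relative degradation rates $\delta_m, \delta_p$, which illustrates why even a qualitative question about a mechanism's behaviour can be unanswerable without the mathematics.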
The published works of scientists often conceal the cognitive processes that led to their results. Scholars of mathematical practice must therefore seek out less obvious sources. This article analyzes a widely circulated mathematical joke, comprising a list of spurious proof types. An account is proposed in terms of argumentation schemes: stereotypical patterns of reasoning, which may be accompanied by critical questions itemizing possible lines of defeat. It is argued that humor is associated with risky forms of inference, which are essential to creative mathematics. The components of the joke are explicated by argumentation schemes devised for application to topic-neutral reasoning. These in turn are classified under seven headings: retroduction, citation, intuition, meta-argument, closure, generalization, and definition. Finally, the wider significance of this account for the cognitive science of mathematics is discussed.
Interference resolution is improved for stimuli presented in contexts (e.g. locations) associated with frequent conflict. This phenomenon, the “context-specific proportion congruent” (CSPC) effect, has challenged the traditional juxtaposition of “automatic” and “controlled” processing because it suggests that contextual cues can prime top-down control settings in a bottom-up manner. We recently obtained support for this “priming of control” hypothesis with fMRI by showing that CSPC effects are mediated by contextually-cued adjustments in processing selectivity. However, an equally plausible explanation is that CSPC effects reflect adjustments in response caution triggered by expectancy violations (i.e. prediction errors) when encountering rare events as compared to common ones (e.g. high-conflict incongruent trials in a task context associated with infrequent conflict). Here, we applied a quantitative model of choice, the linear ballistic accumulator (LBA), to distil the reaction time and accuracy data from four independent samples that performed a modified flanker task into latent variables representing the psychological processes underlying task-related decision making. We contrasted models which differentially accounted for CSPC effects as arising either from contextually-cued shifts in the rate of sensory evidence accumulation (“drift” models) or in the amount of evidence required to reach a decision (“threshold” models). For the majority of the participants, the LBA ascribed CSPC effects to increases in response threshold for contextually-infrequent trial types (e.g. low-conflict congruent trials in the frequent conflict context), suggesting that the phenomenon may reflect more a prediction error-triggered shift in decision criterion rather than enhanced sensory evidence accumulation under conditions of frequent conflict.
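For readers unfamiliar with the LBA, the following minimal simulation sketch (illustrative parameter values, not the estimates fitted to the four samples) shows the two loci at which the competing accounts operate, the drift rates and the response threshold:

```python
import numpy as np

# Linear ballistic accumulator (LBA): on each trial, one accumulator per
# response rises linearly toward a threshold; the first to reach it wins.

rng = np.random.default_rng(0)

def lba_trial(drifts, b=1.2, A=0.5, s=0.3, t0=0.2):
    """Simulate one LBA trial; returns (choice index, reaction time)."""
    k = rng.uniform(0, A, size=len(drifts))   # random start points
    d = rng.normal(drifts, s)                 # trial-specific drift rates
    d = np.where(d > 0, d, 1e-6)              # keep rates positive
    finish = (b - k) / d                      # time to reach threshold b
    winner = int(np.argmin(finish))
    return winner, t0 + finish[winner]

# "Drift" accounts locate CSPC effects in the `drifts` argument (sharper
# evidence accumulation in frequent-conflict contexts); "threshold" accounts
# locate them in `b` (more caution on contextually infrequent trial types).
trials = [lba_trial(drifts=(1.0, 0.6)) for _ in range(10_000)]
accuracy = np.mean([c == 0 for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"accuracy = {accuracy:.3f}, mean RT = {mean_rt:.3f} s")
```

Fitting such a model separately for each context lets the data arbitrate between the two accounts, which is the analysis reported above.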
This paper explores the question of whether connectionist models of cognition should be considered to be scientific theories of the cognitive domain. It is argued that in traditional scientific theories, there is a fairly close connection between the theoretical (unobservable) entities postulated and the empirical observations accounted for. In connectionist models, however, hundreds of theoretical terms are postulated -- viz., nodes and connections -- that are far removed from the observable phenomena. As a result, many of the features of any given connectionist model are relatively optional. This leads to the question of what, exactly, is learned about a cognitive domain modelled by a connectionist network.
Thomas & Karmiloff-Smith’ (T&K-S’) argument that the Residual Normality assumption is not valid for developmental disorders has implications for models of cognition in schizophrenia, a disorder that may involve a neurodevelopmental pathogenesis. A limiting factor for such theories is the lack of understanding about the nature of the cognitive system (modular components versus global processes). Moreover, it is unclear how the proposal that modularization emerges from developmental processes would change that fundamental question.
This article takes off from Johan van Benthem's ruminations on the interface between logic and cognitive science in his position paper “Logic and reasoning: Do the facts matter?”. When trying to answer van Benthem's question whether logic can be fruitfully combined with psychological experiments, this article focuses on a specific domain of reasoning, namely higher-order social cognition, including attributions such as “Bob knows that Alice knows that he wrote a novel under a pseudonym”. For intelligent interaction, it is important that the participants recursively model the mental states of other agents. Otherwise, an international negotiation may fail, even when it has potential for a win-win solution, and in a time-critical rescue mission, a software agent may depend on a teammate's action that never materializes. First a survey is presented of past and current research on higher-order social cognition, from the various viewpoints of logic, artificial intelligence, and psychology. Do people actually reason about each other's knowledge in the way prescribed by epistemic logic? And if not, how can logic and cognitive science productively work together to construct more realistic models of human reasoning about other minds? The paper ends with a delineation of possible avenues for future research, aiming to provide a better understanding of higher-order social reasoning. The methodology is based on a combination of experimental research, logic, computational cognitive models, and agent-based evolutionary models. Keywords: epistemic logic, cognitive science, intelligent interaction, cognitive modeling.
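As a concrete illustration of the kind of second-order attribution at issue, here is a minimal sketch (the worlds, accessibility relations, and valuation are invented) of evaluating "Bob knows that Alice knows that p" on a toy Kripke model, in the style of epistemic logic:

```python
# Epistemic logic on a toy Kripke model: an agent knows a formula at world w
# iff the formula holds at every world the agent considers possible from w.

worlds = {"w1", "w2", "w3"}
R = {  # epistemic accessibility relations
    "Alice": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}},
    "Bob":   {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
}
V = {"p": {"w1", "w2"}}  # the proposition p is true in w1 and w2

def holds(formula, w):
    """Evaluate a proposition name or a tuple ('K', agent, subformula)."""
    if isinstance(formula, str):
        return w in V[formula]
    _, agent, sub = formula
    return all(holds(sub, v) for v in R[agent][w])

# K_Bob K_Alice p at w1: Bob considers w1 and w2 possible; Alice knows p
# at w1, but at w2 she cannot rule out w3, where p fails.
print(holds(("K", "Bob", ("K", "Alice", "p")), "w1"))  # -> False
```

Whether people actually compute anything like this recursion, and to what depth, is precisely the empirical question the article poses.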
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
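As a minimal illustration of a probabilistic model defined over structured representations rather than a flat event space (the grammar and its weights are invented), consider sampling from a tiny probabilistic context-free grammar:

```python
import random

# A probabilistic context-free grammar: each nonterminal expands by one of
# several weighted productions, so the model defines a distribution over trees.

PCFG = {
    "S":  [(0.8, ["NP", "VP"]), (0.2, ["VP"])],
    "NP": [(0.6, ["the", "N"]), (0.4, ["N"])],
    "VP": [(0.7, ["V", "NP"]), (0.3, ["V"])],
    "N":  [(0.5, ["model"]), (0.5, ["grammar"])],
    "V":  [(0.5, ["explains"]), (0.5, ["generates"])],
}

def sample(symbol="S"):
    """Recursively expand a nonterminal by sampling a weighted production."""
    if symbol not in PCFG:                 # terminal symbol: emit the word
        return [symbol]
    r, acc = random.random(), 0.0
    for prob, rhs in PCFG[symbol]:
        acc += prob
        if r <= acc:
            return [word for s in rhs for word in sample(s)]
    return [word for s in PCFG[symbol][-1][1] for word in sample(s)]

random.seed(1)
print(" ".join(sample()))
```

The same machinery of weighted productions and distributions over derivations scales up to the structured relational systems the issue discusses.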
Ever since the early decades of this century, there have emerged a number of competing schools of ecology that have attempted to weave the concepts underlying natural resource management and natural-historical traditions into a formal theoretical framework. It was widely believed that the discovery of the fundamental mechanisms underlying ecological phenomena would allow ecologists to articulate mathematically rigorous statements whose validity was not predicated on contingent factors. The formulation of such statements would elevate ecology to the standing of a rigorous scientific discipline on a par with physics. However, there was no agreement as to the fundamental units of ecology. Systems ecologists sought to identify the fundamental organization that tied the physical and biological components of ecosystems into an irreducible unit: the ecosystem was their fundamental unit. Population ecologists sought, instead, to identify the biological mechanisms regulating the abundance and distribution of plant and animal species: to these ecologists, the individual organism was the fundamental unit of ecology, and the physical environment was nothing more than a stage upon which the play of individuals in perennial competition took place. As Joel Hagen has pointed out, the two schools were thus divided by fundamentally different and irreconcilable assumptions about the nature of ecosystems. Notwithstanding these divisive efforts to elevate the image of ecology, the discipline remained in the shadows of American academia until the mid-1960s, when systems ecologists succeeded in projecting ecology onto the national scene. They did so by seeking closer involvement with practical problems: they argued before Congress that their approach to the theoretical problems of ecology was uniquely suited to the solution of the impending “environmental crisis.” With the establishment of the International Biological Program, they succeeded in attracting unprecedented levels of funding for systems ecology research. Theoretical population ecologists, on the other hand, found themselves consigned to the outer regions of this new institutional landscape. The systems ecologists' successful capture of the limelight and the purse brought the divisions between them and population ecologists into sharper relief — hence the hardening of the division of ecology observed by Hagen. I have argued that the population biologist Richard Levins, prompted by these institutional developments, sought to challenge the social position of systems ecology, and to assert the intellectual priority of theoretical population ecology. He attempted to do so by articulating a nontrivial and rather carefully thought out classification of ecological models that led to the disqualification of systems analysis as a legitimate approach to the study of ecological phenomena. I have suggested that — ultimately — Levins's case against systems analysis in ecology rested on the view that an aspiration to realism and prediction was incompatible with an interest in theoretical issues, a concern that he equated with the search for generality.
He sought to reinforce this argument by exploiting the fact that systems ecologists had staked their future on the provision of technical solutions to the problems of the “environmental crisis”: he associated systems ecologists' aspiration to realism and precision with a concern for practical issues, trading on the widely accepted view that practical imperatives are incompatible with the aims of scientific inquiry. These are plausible, but nonetheless questionable, claims which have now become an integral part of ecological knowledge. And finally, I hope to have shown how even the most abstract levels of scientific argument are shaped by political considerations, and how discussions of the conceptual development of modern ecology might benefit from a greater consideration of its historical and social dimensions.
Three visual habituation studies using abstract animations tested the claim that infants' attachment behavior in the Strange Situation procedure corresponds to their expectations about caregiver–infant interactions. Three unique patterns of expectations were revealed. Securely attached infants expected infants to seek comfort from caregivers and expected caregivers to provide comfort. Insecure-resistant infants not only expected infants to seek comfort from caregivers but also expected caregivers to withhold comfort. Insecure-avoidant infants expected infants to avoid seeking comfort from caregivers and expected caregivers to withhold comfort. These data support Bowlby's (1958) original claims—that infants form internal working models of attachment that are expressed in infants' own behavior.
Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.
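The core LSA computation is a truncated singular value decomposition of a term-document co-occurrence matrix. The following minimal sketch (with an invented four-document corpus) shows how similarity in the reduced space can capture word relatedness:

```python
import numpy as np

# LSA in miniature: build a term-document count matrix, factor it with a
# truncated SVD, and compare words as vectors in the reduced space.

docs = ["human computer interaction", "user interface design",
        "graph tree algorithm", "tree search algorithm"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                               # number of latent semantic dimensions
word_vecs = U[:, :k] * S[:k]        # rank-k word representations

def sim(w1, w2):
    """Cosine similarity of two words in the latent space."""
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(f"tree~algorithm: {sim('tree', 'algorithm'):.2f}, "
      f"tree~interface: {sim('tree', 'interface'):.2f}")
```

The methods collected in this topic go beyond this construction in various ways, but statistical computation over a large term-by-context matrix remains the shared point of departure.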
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
The aim of cognitive neuropsychology is to articulate the functional architecture underlying normal cognition, on the basis of cognitive performance data involving brain-damaged subjects. Throughout the history of the subject, questions have been raised as to whether the methods of neuropsychology are adequate to its goals. The question has been reopened by Glymour, who formulates a discovery problem for cognitive neuropsychology, in the sense of formal learning theory, concerning the existence of a reliable methodology. It appears that the discovery problem may be insoluble in principle! I propose a modified formulation of Glymour's discovery problem and argue that a sceptical conclusion about the possibility of cognitive neuropsychology as an empirical science is not warranted.
The main problem discussed in this paper is: Why and how did animal cognition abilities arise? It is argued that investigations of the evolution of animal cognition abilities are very important from an epistemological point of view. A new direction for interdisciplinary research – the creation and development of a theory of the origin of human logic – is proposed. The approaches to the origination of such a theory (mathematical models of the “intelligent inventions” of biological evolution, the cybernetic schemes of evolutionary progress and purposeful adaptive behavior), as well as potential interdisciplinary links of the theory, are described and analyzed.
The idea that formal geometry derives from intuitive notions of space has appeared in many guises, most notably in Kant's argument from geometry. Kant claimed that an a priori knowledge of spatial relationships both allows and constrains formal geometry: it serves as the actual source of our cognition of principles of geometry and as a basis for its further cultural development. The development of non-Euclidean geometries, however, seemed to definitively undermine the idea that there is some privileged relationship between our spatial intuitions and mathematical theory. This paper's aim is to look at this longstanding philosophical issue through the lens of cognitive science. Drawing on recent evidence from cognitive ethology, developmental psychology, neuroscience and anthropology, I argue for an enhanced, more informed version of the argument from geometry: humans share with other species evolved, innate intuitions of space which serve as a vital precondition for geometry as a formal science.
Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward; that is, when (the moment: end of trial, subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: with binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state–action pairs, where all intermediate states (i.e., after the initial state and prior to the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, there is little discussion in the literature of the effect of such choices. This absence is disappointing, as the choice of when, what, and how much needs to be made by a modeler for every learning model.
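To make the three knobs concrete, here is a minimal tabular Q-learning sketch on an invented 36-step chain task (the task, parameter values, and reward settings are illustrative, not the article's); the reward function exposes the moment and magnitude choices directly, with task completion standing in for the objective:

```python
import random

# Tabular Q-learning on a 36-step chain: action 1 advances one step,
# action 0 retreats one step. The reward function makes the modeler's
# choices of moment (when) and magnitude (how much) explicit parameters.

N, ALPHA, GAMMA = 36, 0.2, 0.95
random.seed(0)

def reward(s_next, done, moment="end", magnitude="binary"):
    """'When' and 'how much' to reward; completion serves as the objective."""
    if moment == "end":
        r = 1.0 if done else 0.0                      # reward at end of trial
    else:                                             # moment == "subtask"
        r = 1.0 if s_next > 0 and s_next % 6 == 0 else 0.0
    return r if magnitude == "binary" else r * s_next / N   # graded option

Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
for _ in range(300):                # episodes under a random behavior policy
    s, steps = 0, 0
    while s < N and steps < 4000:
        steps += 1
        a = random.choice((0, 1))
        s2 = s + 1 if a else max(s - 1, 0)
        done = s2 == N
        target = reward(s2, done) + \
            (0.0 if done else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

prefer = sum(Q[(s, 1)] > Q[(s, 0)] for s in range(N))
print(f"states preferring 'advance' under end-of-trial reward: {prefer}/{N}")
```

Rerunning with moment="subtask" or a graded magnitude changes the learned values, and in some settings the greedy policy itself, which is the sensitivity the article maps out.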
The process of constructing mathematical models is examined and a case is made that the construction process is an integral part of the justification for the model. The role of heuristics in testing and modifying models is described and some consequences for scientific methodology are drawn out. Three different ways of constructing the same model are detailed to demonstrate the claims made here.
The "dynamical systems" model of cognitive processing is not an alternative computational model. The proposals about "computation" that accompany it are either vacuous or do not distinguish it from a variety of standard computational models. I conclude that the real motivation for van Gelder's version of the account is not technical or computational, but is rather in the spirit of natur-philosophie.