The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
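The abstract's point that "Bayesian inference itself is conceptually trivial" while the substantive commitments hide in the hypothesis set can be sketched in a few lines. Below is a minimal toy illustration (the coin-bias hypotheses, priors, and data are all hypothetical, chosen only for exposition): the inference step is a one-line application of Bayes' rule, and everything of theoretical interest lives in the assumed hypothesis set, prior, and likelihood.

```python
# Toy illustration: Bayes' rule is trivial to compute; the substantive
# assumptions live in the hypothesis set, prior, and likelihood.
# All hypotheses and numbers here are hypothetical.

def posterior(hypotheses, prior, likelihood, data):
    """Return the normalized posterior P(h | data) over a hypothesis set."""
    unnorm = {h: prior[h] * likelihood(h, data) for h in hypotheses}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two coin-bias hypotheses as a stand-in for a richer hypothesis space.
hyps = ["fair", "biased"]
prior = {"fair": 0.5, "biased": 0.5}
heads_prob = {"fair": 0.5, "biased": 0.9}

def likelihood(h, data):
    # data: sequence of flips, 1 = heads, 0 = tails
    p = heads_prob[h]
    out = 1.0
    for flip in data:
        out *= p if flip else (1 - p)
    return out

data = [1, 1, 1, 0, 1]
post = posterior(hyps, prior, likelihood, data)
# The one-line normalization above is the whole of "Bayesian inference";
# the psychological commitments are entirely in `hyps`, `prior`, and
# `likelihood`.
```

Swapping in a different hypothesis set or prior changes the model's predictions entirely, while the inference machinery stays the same; this is the sense in which assumptions can remain implicit in a Bayesian model's setup.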
Every scientist chooses a preferred level of analysis, and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's three levels of analysis and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top–down in that it attempts to build a bridge from the computational to the algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computational level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view.
What mechanisms underlie children’s language production? Structural priming—the repetition of sentence structure across utterances—is an important measure of the developing production system. We propose its mechanism in children is the same as may underlie analogical reasoning: structure-mapping. Under this view, structural priming is the result of making an analogy between utterances, such that children map semantic and syntactic structure from previous to future utterances. Because the ability to map relationally complex structures develops with age, younger children are less successful than older children at mapping both semantic and syntactic relations. Consistent with this account, 4-year-old children showed priming only of semantic relations when surface similarity across utterances was limited, whereas 5-year-olds showed priming of both semantic and syntactic structure regardless of shared surface similarity. The priming of semantic structure without syntactic structure is uniquely predicted by the structure-mapping account; other accounts have interpreted structural priming solely as a reflection of developing syntactic knowledge.
Mathematical developments in probabilistic inference have led to optimism over the prospects for Bayesian models of cognition. Our target article calls for better differentiation of these technical developments from theoretical contributions. It distinguishes between Bayesian Fundamentalism, which is theoretically limited because of its neglect of psychological mechanism, and Bayesian Enlightenment, which integrates rational and mechanistic considerations and is thus better positioned to advance psychological theory. The commentaries almost uniformly agree that mechanistic grounding is critical to the success of the Bayesian program. Some commentaries raise additional challenges, which we address here. Other commentaries claim that all Bayesian models are mechanistically grounded, while at the same time holding that they should be evaluated only on a computational level. We argue this contradictory stance makes it difficult to evaluate a model's scientific contribution, and that the psychological commitments of Bayesian models need to be made more explicit.
Tenenbaum and Griffiths's article continues three disturbing trends that typify category learning modeling: (1) modelers tend to focus on a single induction task; (2) the drive to create models that are formally elegant has resulted in a gross simplification of the phenomena of interest; (3) related research is generally ignored when doing so is expedient. [Tenenbaum & Griffiths].
Penn et al. argue that the complexity of relational learning is beyond nonhuman animals. We discuss a model demonstrating that relational learning need not involve complex processes. Novel stimuli are compared to previous experiences stored in memory. As learning shifts attention from featural to relational cues, the comparison process becomes more analogical in nature, successfully accounting for performance across species and development.