Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
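The mixture idea can be sketched computationally. The Python sketch below assumes interval hypotheses on a bounded one-dimensional scale with a uniform prior, and a per-observation likelihood that mixes strong sampling (1/|h|) with weak sampling (1/range) by a weight theta; the grid, range, and exact mixture form are illustrative choices, not the paper's fitted model.

```python
import numpy as np

def generalization(obs, y, theta, lo=0.0, hi=10.0, grid=50):
    """P(y in concept | obs) under a mixture of strong and weak sampling.

    Hypotheses are intervals [a, b] on [lo, hi] with a uniform prior
    (an illustrative choice). Each observation's likelihood is
    theta * 1/|h| (strong: drawn from inside the concept) plus
    (1 - theta) * 1/(hi - lo) (weak: drawn without restriction).
    """
    edges = np.linspace(lo, hi, grid)
    n = len(obs)
    num = den = 0.0
    for i, a in enumerate(edges):
        for b in edges[i + 1:]:
            if a <= min(obs) and max(obs) <= b:   # hypothesis consistent with data
                lik = (theta / (b - a) + (1 - theta) / (hi - lo)) ** n
                den += lik
                if a <= y <= b:
                    num += lik
    return num / den
```

With theta = 1 (pure strong sampling) generalization beyond the observed range falls off sharply, with theta = 0 (pure weak sampling) it stays broad, and intermediate theta values interpolate between the two, which is the signature the experiments look for.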
In recent years quantum probability models have been used to explain many aspects of human decision making, and as such quantum models have been considered a viable alternative to Bayesian models based on classical probability. One criticism that is often leveled at both kinds of models is that they lack a clear interpretation in terms of psychological mechanisms. In this paper we discuss the mechanistic underpinnings of a quantum walk model of human decision making and response time. The quantum walk model is compared to standard sequential sampling models, and the architectural assumptions of both are considered. In particular, we show that the quantum model has a natural interpretation in terms of a cognitive architecture that is massively parallel and involves both co-operative and competitive interactions between units. Additionally, we introduce a family of models that includes aspects of the classical and quantum walk models.
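The architectural contrast can be illustrated with a toy simulation: a classical birth-death Markov chain evolving a probability vector next to a quantum walk evolving complex amplitudes over the same evidence states. The state count, coupling strength, and time step below are arbitrary illustrative choices, not the fitted model from the paper.

```python
import numpy as np

N = 21                 # evidence states (a hypothetical discretization of preference)
tau, steps = 0.5, 20   # arbitrary time step and horizon

# Classical sequential-sampling analogue: a birth-death Markov chain that
# shifts probability mass one state left or right per step.
T = np.zeros((N, N))
for i in range(N - 1):
    T[i + 1, i] = 0.5   # step right from state i
    T[i, i + 1] = 0.5   # step left from state i + 1
T[0, 0] = 0.5           # reflecting boundaries keep each column stochastic
T[N - 1, N - 1] = 0.5

# Quantum walk: the same nearest-neighbour coupling used as a Hamiltonian,
# evolving complex amplitudes in parallel across all states at once.
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = 1.0
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T   # U = exp(-i H tau)

p = np.zeros(N); p[N // 2] = 1.0                        # classical probability vector
psi = np.zeros(N, dtype=complex); psi[N // 2] = 1.0     # quantum amplitude vector

for _ in range(steps):
    p = T @ p        # incoherent diffusion of probability
    psi = U @ psi    # unitary evolution: amplitudes can interfere

p_quantum = np.abs(psi) ** 2   # choice distribution from the quantum walk
```

Both updates are massively parallel matrix operations, but only the quantum version carries signed (complex) amplitudes, so neighbouring states can reinforce or cancel each other, which is one way to read the co-operative/competitive interactions discussed in the paper.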
Human languages vary in many ways but also show striking cross-linguistic universals. Why do these universals exist? Recent theoretical results demonstrate that Bayesian learners transmitting language to each other through iterated learning will converge on a distribution of languages that depends only on their prior biases about language and the quantity of data transmitted at each point; the structure of the world being communicated about plays no role (Griffiths & Kalish). We revisit these findings and show that when certain assumptions about the relationship between language and the world are abandoned, learners will converge to languages that depend on the structure of the world as well as their prior biases. These theoretical results are supported with a series of experiments showing that when human learners acquire language through iterated learning, the ultimate structure of those languages is shaped by the structure of the meanings to be communicated.
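The convergence-to-the-prior result for Bayesian samplers can be illustrated with a minimal iterated-learning chain: each learner samples a language from the posterior given the previous learner's utterances, then produces data for the next learner. The two-language setup, prior bias, and noise level below are hypothetical, chosen only to make the result visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "languages": each maps a single meaning to utterance 0 or 1.
prior = np.array([0.7, 0.3])   # learner's prior bias over languages
noise = 0.1                     # production noise (illustrative)

def produce(lang, n=1):
    """Generate n utterances: the language's utterance, flipped with prob `noise`."""
    flips = rng.random(n) < noise
    return np.where(flips, 1 - lang, lang)

def learn(data):
    """Compute the posterior over languages given utterances, then sample one."""
    lik = np.array([np.prod(np.where(data == h, 1 - noise, noise)) for h in (0, 1)])
    post = prior * lik
    post /= post.sum()
    return rng.choice(2, p=post)

# Iterated learning: each generation hears one utterance from the previous one.
lang = 0
counts = np.zeros(2)
for _ in range(20000):
    data = produce(lang, n=1)
    lang = learn(data)
    counts[lang] += 1

stationary = counts / counts.sum()  # approaches the prior for sampler learners
```

The chain is a Gibbs sampler on the joint distribution of languages and data, so its stationary distribution over languages matches the prior regardless of the starting language, which is the Griffiths & Kalish result the paper revisits.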
Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond these data by reasoning about how they were sampled. This idea is investigated through an examination of premise non-monotonicity, in which adding premises to a category-based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category-based induction that takes premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: first, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption; and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
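The predicted reversal can be sketched with a toy interval-hypothesis model: under strong sampling, the size principle makes extra similar premises weaken generalization to a distant item (non-monotonicity), while under weak sampling they do not. The interval hypothesis space and the specific premise values below are illustrative, not the paper's stimuli.

```python
import numpy as np

def p_general(obs, y, strong, lo=0.0, hi=10.0, grid=60):
    """P(y in category | obs) over interval hypotheses on [lo, hi].

    strong=True  : size-principle likelihood |h|**(-n)  (strong sampling).
    strong=False : constant likelihood for consistent hypotheses (weak sampling).
    """
    edges = np.linspace(lo, hi, grid)
    num = den = 0.0
    for i, a in enumerate(edges):
        for b in edges[i + 1:]:
            if a <= min(obs) and max(obs) <= b:   # hypothesis covers all premises
                lik = (b - a) ** (-len(obs)) if strong else 1.0
                den += lik
                if a <= y <= b:
                    num += lik
    return num / den
```

Comparing a single premise near 5 against three tightly clustered premises, the strong-sampling model assigns the distant item a lower probability after the extra premises (the likelihood increasingly favours small intervals), whereas the weak-sampling model does not weaken; this is the qualitative pattern the experiment manipulates.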
Jones & Love (J&L) contend that the Bayesian approach should integrate process constraints with abstract computational analysis. We agree, but argue that the fundamentalist/enlightened dichotomy is a false one: Enlightened research is deeply intertwined with the basic, fundamental work upon which it is based.
Pothos & Busemeyer (P&B) argue that quantum probability (QP) provides a descriptive model of behavior and can also provide a rational analysis of a task. We discuss QP models using Marr's levels of analysis, arguing that they make most sense as algorithmic level theories. We also highlight the importance of having clear interpretations for basic mechanisms such as interference.