Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with a belief in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
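To make the mixture concrete, the sketch below implements Bayesian generalization over interval hypotheses on [0, 1], with a per-datum likelihood that mixes strong sampling (weight theta) and weak sampling (weight 1 - theta). This is an illustration under our own assumptions (grid discretization, uniform prior over intervals, function and parameter names), not the paper's implementation.

```python
import numpy as np

def generalization(train, probe, theta, grid=50):
    """P(probe belongs to the concept | positive examples), mixing
    strong sampling (weight theta) and weak sampling (weight 1 - theta)
    over interval hypotheses [lo, hi] within [0, 1]."""
    edges = np.linspace(0.0, 1.0, grid + 1)
    posts, hyps = [], []
    for i in range(grid + 1):
        for j in range(i + 1, grid + 1):
            lo, hi = edges[i], edges[j]
            if all(lo <= x <= hi for x in train):
                # Strong sampling draws uniformly from inside the hypothesis
                # (the "size principle"); weak sampling only checks consistency.
                like = (theta / (hi - lo) + (1.0 - theta)) ** len(train)
                posts.append(like)            # uniform prior over intervals
                hyps.append((lo, hi))
    posts = np.array(posts) / np.sum(posts)
    return sum(p for p, (lo, hi) in zip(posts, hyps) if lo <= probe <= hi)

# Strong sampling (theta = 1) tightens generalization around the examples;
# weak sampling (theta = 0) spreads it over every consistent interval.
print(generalization([0.4, 0.5, 0.6], probe=0.8, theta=1.0))
print(generalization([0.4, 0.5, 0.6], probe=0.8, theta=0.0))
```

Intermediate values of theta interpolate between the two generalization gradients, which is what lets the mixture weight be inferred from behavior.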
The “wisdom of the crowd” phenomenon refers to the finding that the aggregate of a set of proposed solutions from a group of individuals performs better than the majority of the individual solutions. Most often, wisdom of the crowd effects have been investigated for problems that require single numerical estimates. We investigate whether the effect can also be observed for problems whose answers require the coordination of multiple pieces of information. We focus on combinatorial problems such as the planar Euclidean traveling salesperson problem, the minimum spanning tree problem, and a spanning tree memory task. We develop aggregation methods that combine common solution fragments into a global solution and demonstrate that these aggregate solutions outperform the majority of individual solutions. These case studies suggest that the wisdom of the crowd phenomenon might be broadly applicable to problem-solving and decision-making situations that go beyond the estimation of single numbers.
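One way to read "combine common solution fragments" is edge-frequency aggregation. The sketch below is our illustration, not necessarily the paper's algorithm: it tallies how often each link between cities appears across individual traveling salesperson tours and greedily assembles the most frequent links into a single tour. The union-find bookkeeping and the 0..n-1 city labeling are assumptions.

```python
from collections import Counter

def aggregate_tour_edges(tours, n):
    """Greedy edge-frequency aggregation for the traveling salesperson
    problem: keep the edges that occur most often across individual tours,
    holding every city's degree at most 2 and never closing a subtour early.
    Cities are assumed to be labeled 0..n-1."""
    counts = Counter()
    for tour in tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):   # edges of a closed tour
            counts[frozenset((a, b))] += 1

    degree = [0] * n
    parent = list(range(n))                            # union-find over fragments
    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c

    chosen = []
    for edge, _ in counts.most_common():
        a, b = tuple(edge)
        if degree[a] < 2 and degree[b] < 2 and find(a) != find(b):
            chosen.append((a, b))
            degree[a] += 1
            degree[b] += 1
            parent[find(a)] = find(b)
    # `chosen` is a set of path fragments; any remaining gaps would be
    # closed with feasible (e.g., cheapest) edges to complete the tour.
    return chosen

tours = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 3], [1, 0, 2, 3, 4]]
print(aggregate_tour_edges(tours, n=5))
```

The appeal of this style of aggregation is that the crowd tour can contain links that no single individual's tour contains in combination, while still being built entirely from widely agreed-upon fragments.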
We apply the “wisdom of the crowd” idea to human category learning, using a simple approach that combines people's categorization decisions by taking the majority decision. We first show that the aggregated crowd category learning behavior found by this method performs well, learning categories more quickly than most or all individuals for 28 previously collected datasets. We then extend the approach so that it does not require people to categorize every stimulus. We do this using a model-based method that predicts the categorization behavior people would produce for new stimuli, based on their behavior with observed stimuli, and takes the majority of these predicted decisions. We demonstrate and evaluate the model-based approach in two case studies. In the first, we use the decision-bound model of categorization from general recognition theory to infer each person's decision boundary for two categories of perceptual stimuli, and we use these inferences to make aggregated predictions about new stimuli. In the second, we use the generalized context model, an exemplar model of categorization, to infer each person's selective attention for face stimuli, and we use these inferences to make aggregated predictions about withheld stimuli. In both case studies, we show that our method successfully predicts the categories of unobserved stimuli, and we emphasize that the aggregated crowd decisions arise from psychologically interpretable processes and parameters. We conclude by discussing extensions and potential real-world applications of the approach.
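The first, model-free step of the approach amounts to a majority vote per stimulus. A minimal sketch, assuming decisions arrive as a mapping from each stimulus to the labels different people chose (the data layout and names are ours):

```python
from collections import Counter

def crowd_categories(decisions):
    """Majority-vote aggregation: for each stimulus, the crowd's category
    is the label chosen by the most people (ties broken arbitrarily)."""
    return {stim: Counter(labels).most_common(1)[0][0]
            for stim, labels in decisions.items()}

# Three people categorize two stimuli; the crowd answer is the modal label.
print(crowd_categories({"s1": ["A", "A", "B"], "s2": ["B", "A", "B"]}))
```

The model-based extension described in the abstract replaces the observed labels with each fitted model's predicted labels for unobserved stimuli, then votes over those predictions in the same way.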
We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering problems. In these problems, people must order a set of items in terms of a given criterion (e.g., ordering American holidays through the calendar year). Using a cognitive model of behavior on this problem that allows for individual differences in knowledge, we are able to infer people's expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after completing the ordering of items, in terms of correlation with the actual accuracy of the answers. These results hold for six general knowledge tasks, like ordering American holidays, and two prediction tasks, involving sporting and television competitions. Based on these results, we discuss the potential and limitations of using cognitive models to assess expertise.
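The kind of individual-differences model the abstract describes can be illustrated with a Thurstonian sketch: each item has a latent criterion value, each person perceives those values with person-specific noise, and less noise means more accurate rankings. The simulation below shows only the generative side; the paper works in the other direction, inferring the noise (and hence expertise) from the rankings via Bayesian inference. All names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def thurstone_ranking(mu, sigma):
    """One person's ranking: each item's latent criterion value mu_i is
    perceived with person-level noise sigma, then the items are sorted."""
    return np.argsort(rng.normal(mu, sigma))

def inversions(ranking):
    """Pairwise disagreements with the true order 0 < 1 < ... < n-1."""
    pos = np.argsort(ranking)            # each item's position in the ranking
    n = len(pos)
    return sum(pos[i] > pos[j] for i in range(n) for j in range(i + 1, n))

mu = np.arange(10, dtype=float)          # true criterion values for 10 items
for sigma in (0.5, 2.0, 8.0):            # small sigma = high expertise
    mean_err = np.mean([inversions(thurstone_ranking(mu, sigma))
                        for _ in range(200)])
    print(f"sigma={sigma}: mean inversions = {mean_err:.1f}")
```

Because the person-level noise parameter is identified by how internally consistent and mutually consistent the rankings are, expertise can be estimated without knowing the true order in advance.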
In most decision-making situations, there is a plethora of information potentially available to people. Deciding what information to gather and what to ignore is no small feat. How do decision makers determine in what sequence to collect information and when to stop? In two experiments, we administered a version of the German cities task developed by Gigerenzer and Goldstein (1996), in which participants had to decide which of two cities had the larger population. Decision makers were not provided with the names of the cities, but they were able to collect different kinds of cues for both response alternatives (e.g., “Does this city have a university?”) before making a decision. Our experiments differed in whether participants were free to determine the number of cues they examined. We demonstrate that a novel model, using hierarchical latent mixtures and Bayesian inference (Lee & Newell), provides a more complete description of the data from both experiments than simple conventional strategies, such as the take-the-best or weighted additive heuristics.
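For reference, here are minimal implementations of the two conventional strategies the model is compared against. This is our sketch: the hierarchical latent-mixture model itself, which assigns participants to strategies probabilistically, is not reproduced, and the cue values and validities are invented for illustration.

```python
def take_the_best(cues_a, cues_b, validities):
    """Search cues from most to least valid; decide on the first cue that
    discriminates between the two options."""
    for i in sorted(range(len(validities)), key=lambda i: -validities[i]):
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] else "B"
    return "guess"

def weighted_additive(cues_a, cues_b, weights):
    """Weigh and add all cues for each option; choose the larger total."""
    score_a = sum(w * c for w, c in zip(weights, cues_a))
    score_b = sum(w * c for w, c in zip(weights, cues_b))
    return "A" if score_a > score_b else "B" if score_b > score_a else "guess"

# Two cities described by three binary cues (e.g., "has a university?").
validities = [0.9, 0.7, 0.6]
a, b = [0, 1, 1], [1, 0, 0]
print(take_the_best(a, b, validities))      # the single best cue favors B
print(weighted_additive(a, b, validities))  # the summed cues favor A
```

The two strategies disagree exactly when the most valid discriminating cue points one way and the weighted sum of all cues points the other, which is what makes search and stopping behavior diagnostic of the strategy in use.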
Jones & Love (J&L) should have given more attention to Agnostic uses of Bayesian methods for the statistical analysis of models and data. Reliance on the frequentist analysis of Bayesian models has retarded their development and prevented their full evaluation. The Ecumenical integration of Bayesian statistics to analyze Bayesian models offers a better way to test their inferential and predictive capabilities.
Glenberg's account falls short in several respects. Besides requiring clearer explication of basic concepts, his account fails to recognize the autonomous nature of perception. His account of what is remembered, and its description, is too static. His strictures against connectionist modeling might be overcome by combining the notions of psychological space and principled learning in an embodied and situated network.
While Tenenbaum and Griffiths impressively consolidate and extend Shepard's research in the areas of stimulus representation and generalization, complexity measures need to be developed to control the flexibility of their “hypothesis space” approach to representation. It may also be possible to extend their concept learning model to consider the fundamental issue of representational adaptation.