Cognitive Science 36 (2):187-223 (2012)
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
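The mixture of strong and weak sampling described in the abstract can be illustrated with a minimal sketch (not the authors' code; hypothesis space, grid, and parameter names are illustrative assumptions). Hypotheses are intervals on a one-dimensional grid, and a parameter `theta` mixes the strong-sampling likelihood (the size principle, 1/|h| per example) with a weak-sampling constant:

```python
import numpy as np

def generalization(observations, query, theta, grid_max=10, step=1):
    """Posterior probability that `query` belongs to the same concept as
    `observations`, under a theta-mixture of strong and weak sampling.
    Hypotheses are intervals [a, b] on a grid, with a uniform prior."""
    grid = np.arange(0, grid_max + step, step)
    hyps, post = [], []
    for i, a in enumerate(grid):
        for b in grid[i:]:
            size = (b - a) / step + 1  # number of grid cells in [a, b]
            if all(a <= x <= b for x in observations):
                # per-example likelihood: mixture of strong (1/size)
                # and weak (constant over the whole grid) sampling
                like = (theta / size + (1 - theta) / len(grid)) ** len(observations)
            else:
                like = 0.0  # interval inconsistent with the data
            hyps.append((a, b))
            post.append(like)
    post = np.array(post)
    post /= post.sum()  # normalize (uniform prior cancels)
    # generalization: total posterior mass on intervals containing the query
    return sum(p for (a, b), p in zip(hyps, post) if a <= query <= b)
```

With `theta = 1` (pure strong sampling) the size principle favors tight intervals, so generalization beyond the observed range falls off faster than with `theta = 0` (pure weak sampling), matching the qualitative contrast the abstract describes.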
Keywords: Bayesian modeling, inductive inference, generalization
Added to index: 2011-12-06