Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: the discovery of less-is-more effects; the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; an advancement from vague labels to computational models of heuristics; the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an “adaptive toolbox;” and the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people’s adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
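As a minimal illustration of a "cognitive process that ignores information" (a hedged sketch, not taken from the article; the cue names, validities, and data are hypothetical), consider the take-the-best heuristic from this research program: it consults cues one at a time in order of validity and bases its inference on the first cue that discriminates, ignoring all remaining cues.

```python
def take_the_best(obj_a, obj_b, cues):
    """Infer which of two objects scores higher on a criterion.

    cues: list of (name, validity) pairs; consulted in descending validity.
    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative),
    or missing/None (unknown). Returns 'a', 'b', or 'guess'.
    """
    for name, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        va, vb = obj_a.get(name), obj_b.get(name)
        if va == 1 and vb in (0, None):
            return "a"  # first discriminating cue decides; rest ignored
        if vb == 1 and va in (0, None):
            return "b"
        # cue does not discriminate: move on to the next cue
    return "guess"

# Hypothetical example: which of two cities is larger?
cues = [("has_airport", 0.9), ("is_capital", 0.8), ("has_university", 0.6)]
city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # airport ties, capital decides: b
```

The lexicographic stopping rule is what makes the heuristic frugal: once a cue discriminates, lower-validity cues cannot overturn the decision, which is precisely the non-compensatory processing the abstract contrasts with resource-intensive strategies.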
While theories of rationality and decision making typically adopt either a single-powertool perspective or a bag-of-tricks mentality, the research program of ecological rationality bridges these with a theoretically driven account of when different heuristic decision mechanisms will work well. Here we describe two ways to study how heuristics match their ecological setting: The bottom-up approach starts with psychologically plausible building blocks that are combined to create simple heuristics that fit specific environments. The top-down approach starts from the statistical problem facing the organism and a set of principles, such as the bias–variance tradeoff, that can explain when and why heuristics work in uncertain environments, and then shows how effective heuristics can be built by biasing and simplifying more complex models. We conclude with challenges these approaches face in developing a psychologically realistic perspective on human rationality.
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
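The recognition heuristic under discussion is simple enough to state as code, and so is the accordance measure at issue (an illustrative sketch with hypothetical trial data, not the materials of any study cited): when exactly one of two objects is recognized, infer that it has the higher criterion value.

```python
def recognition_heuristic(recognized_a, recognized_b):
    """Infer which of two objects has the higher criterion value.

    Returns 'a' or 'b' when exactly one object is recognized,
    otherwise None (the heuristic makes no prediction).
    """
    if recognized_a and not recognized_b:
        return "a"
    if recognized_b and not recognized_a:
        return "b"
    return None

# Accordance rate: fraction of applicable trials on which a participant's
# inference matched the heuristic's prediction (hypothetical data).
trials = [  # (recognized_a, recognized_b, participant_choice)
    (True, False, "a"),
    (False, True, "b"),
    (True, False, "b"),   # deviation from the heuristic
    (True, True, "a"),    # heuristic silent: trial excluded
]
applicable = [(ra, rb, c) for ra, rb, c in trials
              if recognition_heuristic(ra, rb) is not None]
rate = sum(recognition_heuristic(ra, rb) == c
           for ra, rb, c in applicable) / len(applicable)
print(f"accordance rate: {rate:.2f}")  # 2 of 3 applicable trials
```

The methodological point of the abstract is visible here: an accordance rate below some near-perfect threshold is not by itself evidence for a compensatory account; the compensatory model must be specified as a comparable predictive model and tested competitively on the same trials.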
One way of dealing with the proliferation of conjectures that accompany the diverse study of the evolution of language is to develop precise and testable models which reveal otherwise latent implications. We suggest how verbal theories of the role of individual development in language evolution can benefit from formal modeling, and vice versa.