Simple Heuristics That Make Us Smart invites readers to embark on a new journey into a land of rationality that differs from the familiar territory of cognitive science and economics. Traditional views of rationality tend to see decision makers as possessing superhuman powers of reason, limitless knowledge, and all of eternity in which to ponder choices. To understand decisions in the real world, we need a different, more psychologically plausible notion of rationality, and this book provides it. It is about fast and frugal heuristics--simple rules for making decisions when time is pressing and deep thought an unaffordable luxury. These heuristics can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality. But when and how can such fast and frugal heuristics work? Can judgments based simply on one good reason be as accurate as those based on many reasons? Could less knowledge even lead to systematically better predictions than more knowledge? Simple Heuristics explores these questions, developing computational models of heuristics and testing them through experiments and analyses. It shows how fast and frugal heuristics can produce adaptive decisions in situations as varied as choosing a mate, dividing resources among offspring, predicting high school dropout rates, and playing the stock market. As an interdisciplinary work that is both useful and engaging, this book will appeal to a wide audience. It is ideal for researchers in cognitive psychology, evolutionary psychology, and cognitive science, as well as in economics and artificial intelligence. It will also inspire anyone interested in simply making good decisions.
The Empire of Chance tells how quantitative ideas of chance transformed the natural and social sciences, as well as daily life over the last three centuries. A continuous narrative connects the earliest application of probability and statistics in gambling and insurance to the most recent forays into law, medicine, polling and baseball. Separate chapters explore the theoretical and methodological impact in biology, physics and psychology. Themes recur - determinism, inference, causality, free will, evidence, the shifting meaning of probability - but in dramatically different disciplinary and historical contexts. In contrast to the literature on the mathematical development of probability and statistics, this book centres on how these technical innovations remade our conceptions of nature, mind and society. Written by an interdisciplinary team of historians and philosophers, this readable, lucid account keeps technical material to an absolute minimum. It is aimed not only at specialists in the history and philosophy of science, but also at the general reader and scholars in other disciplines.
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: the discovery of less-is-more effects; the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; an advance from vague labels to computational models of heuristics; the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an “adaptive toolbox”; and the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people’s adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
This volume collects Gigerenzer's recent articles on the psychology of rationality. Like the earlier volumes, it should appeal to a broad mixture of cognitive psychologists, philosophers, economists, and others who study decision making.
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures. The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
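To make the one-reason decision making behind Take The Best concrete, here is a minimal Python sketch of the lexicographic step described above: cues are looked up in order of validity, and the first cue that discriminates between the two objects determines the inference. The data structures, cue names, and toy city example are illustrative assumptions, not the authors' original simulation code.

```python
# Minimal sketch of one-reason decision making in the spirit of Take The Best.
# The cue representation and the toy city example are illustrative.

def take_the_best(obj_a, obj_b, cues):
    """Infer which of two objects has the higher criterion value.

    `cues` is a list of (name, lookup) pairs ordered by cue validity;
    lookup(obj) returns 1 (positive), 0 (negative), or None (unknown).
    """
    for _name, lookup in cues:
        a, b = lookup(obj_a), lookup(obj_b)
        if a == 1 and b != 1:   # first discriminating cue decides
            return obj_a
        if b == 1 and a != 1:
            return obj_b
        # cue does not discriminate: look up the next cue
    return None                 # no cue discriminates: guess

# Toy example: which of two cities has the larger population?
city_cues = {
    "CityA": {"capital": 1, "has_team": 1},
    "CityB": {"capital": 0, "has_team": 1},
}
cues = [
    ("capital", lambda c: city_cues[c].get("capital")),
    ("has_team", lambda c: city_cues[c].get("has_team")),
]
print(take_the_best("CityA", "CityB", cues))  # -> CityA (decided by the first cue)
```

Note how frugal the search is: once the first cue discriminates, no further information is looked up or integrated.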
What counts as human rationality: reasoning processes that embody content-independent formal theories, such as propositional logic, or reasoning processes that are well designed for solving important adaptive problems? Most theories of human reasoning have been based on content-independent formal rationality, whereas adaptive reasoning, ecological or evolutionary, has been little explored. We elaborate and test an evolutionary approach, Cosmides' social contract theory, using the Wason selection task. In the first part, we disentangle the theoretical concept of a “social contract” from that of a “cheater-detection algorithm”. We demonstrate that the fact that a rule is perceived as a social contract — or a conditional permission or obligation, as Cheng and Holyoak proposed — is not sufficient to elicit Cosmides' striking results, which we replicated. The crucial issue is not semantic, but pragmatic: whether a person is cued into the perspective of a party who can be cheated. In the second part, we distinguish between social contracts with bilateral and unilateral cheating options. Perspective change in contracts with bilateral cheating options turns P & not-Q responses into not-P & Q responses. The results strongly support social contract theory, contradict availability theory, and cannot be accounted for by pragmatic reasoning schema theory, which lacks the pragmatic concepts of perspectives and cheating detection.
[Correction Notice: An erratum for this article was reported in Vol 109 of Psychological Review. Due to circumstances that were beyond the control of the authors, the studies reported in "Models of Ecological Rationality: The Recognition Heuristic," by Daniel G. Goldstein and Gerd Gigerenzer overlap with studies reported in "The Recognition Heuristic: How Ignorance Makes Us Smart," by the same authors and with studies reported in "Inference From Ignorance: The Recognition Heuristic". In addition, Figure 3 in the Psychological Review article was originally published in the book chapter and should have carried a note saying that it was used by permission of Oxford University Press.] One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
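The less-is-more effect mentioned above can be illustrated with a short calculation over the three kinds of object pairs: both unrecognized (guess), one recognized (apply the recognition heuristic), and both recognized (use further knowledge). The parameter names and numbers below are hypothetical; `alpha` stands for recognition validity and `beta` for knowledge validity.

```python
# Sketch of a less-is-more calculation for the recognition heuristic.
# N objects, n of which are recognized; expected accuracy over all pairs
# is a weighted mixture of the three pair types. Values are illustrative.

def expected_accuracy(n, N, alpha, beta):
    pairs = N * (N - 1) / 2
    both_unrecognized = (N - n) * (N - n - 1) / 2   # guessing: accuracy 0.5
    one_recognized = n * (N - n)                    # recognition heuristic: alpha
    both_recognized = n * (n - 1) / 2               # further knowledge: beta
    return (0.5 * both_unrecognized
            + alpha * one_recognized
            + beta * both_recognized) / pairs

# When recognition validity exceeds knowledge validity (alpha > beta),
# accuracy can peak before everything is recognized:
for n in (0, 25, 50, 75, 100):
    print(n, round(expected_accuracy(n, N=100, alpha=0.8, beta=0.6), 3))
```

With these illustrative values, expected accuracy when half or three quarters of the objects are recognized (roughly .68 and .67) exceeds the accuracy when all of them are recognized (.60): less knowledge yields better inferences.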
The paper shows why and how an empirical study of fast-and-frugal heuristics can provide norms of good reasoning, and thus how (and how far) rationality can be naturalized. We explain the heuristics that humans often rely on in solving problems, for example, choosing investment strategies or apartments, placing bets in sports, or making library searches. We then show that heuristics can lead to judgments that are as accurate as or even more accurate than strategies that use more information and computation, including optimization methods. A standard way to defend the use of heuristics is by reference to accuracy-effort trade-offs. We take a different route, emphasizing ecological rationality (the relationship between cognitive heuristics and environment), and argue that in uncertain environments, more information and computation are not always better (the “less-can-be-more” doctrine). The resulting naturalism about rationality is thus normative because it not only describes what heuristics people use, but also in which specific environments one should rely on a heuristic in order to make better inferences. While we desist from claiming that the scope of ecological rationality is unlimited, we think it is of wide practical use.
Can the general public learn to deal with risk and uncertainty, or do authorities need to steer people’s choices in the right direction? Libertarian paternalists argue that results from psychological research show that our reasoning is systematically flawed and that we are hardly educable because our cognitive biases resemble stable visual illusions. For that reason, they maintain, authorities who know what is best for us need to step in and steer our behavior with the help of “nudges.” Nudges are nothing new, but justifying them on the basis of a latent irrationality is. In this article, I analyze the scientific evidence presented for such a justification. It suffers from narrow logical norms, that is, a misunderstanding of the nature of rational thinking, and from a confirmation bias, that is, selective reporting of research. These two flaws focus the blame on individuals’ minds rather than on external causes, such as industries that spend billions to nudge people into unhealthy behavior. I conclude that the claim that we are hardly educable lacks evidence and forecloses the true alternative to nudging: teaching people to become risk savvy.
What is the nature of moral behavior? According to the study of bounded rationality, it results not from character traits or rational deliberation alone, but from the interplay between mind and environment. In this view, moral behavior is based on pragmatic social heuristics rather than moral rules or maximization principles. These social heuristics are not good or bad per se, but solely in relation to the environments in which they are used. This has methodological implications for the study of morality: Behavior needs to be studied in social groups as well as in isolation, in natural environments as well as in labs. It also has implications for moral policy: Only by accepting the fact that behavior is a function of both mind and environmental structures can realistic prescriptive means of achieving moral goals be developed.
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics – simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data – that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program. Key Words: adaptive toolbox; bounded rationality; decision making; elimination models; environment structure; heuristics; ignorance-based reasoning; limited information search; robustness; satisficing; simplicity.
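As one concrete illustration of the building blocks listed above (a rule for searching, a rule for stopping search, and a rule for deciding), here is a minimal sketch of a satisficing heuristic for sequential search: options are examined in the order encountered, and search stops at the first one that meets an aspiration level. The apartment example and the aspiration value are hypothetical.

```python
# Minimal sketch of a satisficing rule for sequential search.
# The option stream and aspiration level are illustrative assumptions.

def satisfice(options, aspiration):
    """Return the first option whose value meets the aspiration level.

    `options` is an iterable of (name, value) pairs examined in order;
    search stops as soon as an option is good enough, so later (possibly
    better) options are never inspected.
    """
    for name, value in options:
        if value >= aspiration:
            return name
    return None  # no option met the aspiration level

apartments = [("A", 0.55), ("B", 0.72), ("C", 0.91)]
print(satisfice(apartments, aspiration=0.7))  # -> "B" (search stops; C is never seen)
```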
Axiomatic rationality is defined in terms of conformity to abstract axioms. Savage limited axiomatic rationality to small worlds, that is, situations in which the exhaustive and mutually exclusive set of future states S and their consequences C are known. Others have interpreted axiomatic rationality as a categorical norm for how human beings should reason, arguing in addition that violations would lead to real costs such as money pumps. Yet a review of the literature shows little evidence that violations are actually associated with any measurable costs. Limiting axiomatic rationality to small worlds, I propose a naturalized version of rationality for situations of intractability and uncertainty, which lie outside such small worlds. In these situations, humans can achieve their goals by relying on heuristics that may violate axiomatic rationality. The study of ecological rationality requires formal models of heuristics and an analysis of the structures of environments these can exploit. It lays the foundation of a moderate naturalism in epistemology, providing statements about heuristics we should use in a given situation. Unlike axiomatic rationality, ecological rationality can explain less-is-more effects, formalize when one should move from ‘is’ to ‘ought,’ and be evaluated by goals beyond coherence, such as predictive accuracy, frugality, and efficiency. Ecological rationality can be seen as a formalization of means–end instrumentalist rationality, based on Herbert Simon’s insight that rational behavior is a function of the mind and its environment.
Traditional views of rationality posit general-purpose decision mechanisms based on logic or optimization. The study of ecological rationality focuses on uncovering the “adaptive toolbox” of domain-specific simple heuristics that real, computationally bounded minds employ, and explaining how these heuristics produce accurate decisions by exploiting the structures of information in the environments in which they are applied. Knowing when and how people use particular heuristics can facilitate the shaping of environments to engender better decisions.
The classical view that equates rationality with adherence to the laws of probability theory and logic has driven much research on inference. Recently, an increasing number of researchers have begun to espouse a view of rationality that takes account of organisms' adaptive goals, natural environments, and cognitive constraints. We argue that inference is carried out using boundedly rational heuristics, that is, heuristics that allow organisms to reach their goals under conditions of limited time, information, and computational capacity. These heuristics are ecologically rational in that they exploit aspects of both the physical and social environment in order to make adaptive inferences. We review recent work exploring this multifaceted conception of rationality.
The terms nested sets, partitive frequencies, inside-outside view, and dual processes add little but confusion to our original analysis (Gigerenzer & Hoffrage 1995; 1999). The idea of nested sets was introduced because of an oversight; it simply rephrases two of our equations. Representation in terms of chances, in contrast, is a novel contribution, yet one consistent with our computational analysis. The problem with invoking “System 1” and “dual-process theory” is that, unless the two processes are defined, the distinction can account post hoc for almost everything. In contrast, an ecological view of cognition helps to explain how insight is elicited from the outside (the external representation of information) and, more generally, how cognitive strategies match with environmental structures.
This paper presents an axiomatic framework for the priority heuristic, a model of bounded rationality in Selten’s spirit (Bounded Rationality: The Adaptive Toolbox, 2001) of using empirical evidence on heuristics. The priority heuristic predicts actual human choices between risky gambles well. It implies violations of expected utility theory such as common consequence effects, common ratio effects, the fourfold pattern of risk taking, and the reflection effect. We present an axiomatization of a parameterized version of the heuristic which generalizes the heuristic in order to account for individual differences and inconsistencies. The axiomatization uses semiorders, which have an intransitive indifference part and a transitive strict preference component. The axiomatization suggests new testable predictions of the priority heuristic and makes it easier for theorists to study the relation between heuristics and other axiomatic theories such as cumulative prospect theory.
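For readers unfamiliar with the heuristic being axiomatized, the sketch below reconstructs its lexicographic structure for simple gambles in the gain domain, as it is usually described: examine the minimum gains first, then the probabilities of the minimum gains, then the maximum gains, stopping at the first reason whose difference reaches the aspiration level (one tenth of the maximum gain, or 0.1 on the probability scale). This is an illustrative simplification (it omits rounding to prominent numbers, losses, and the parameterization studied in the paper), not the semiorder-based version presented there.

```python
# Sketch of an unparameterized priority heuristic for two gain gambles.
# Each gamble is a list of (outcome, probability) pairs. Illustrative only.

def priority_heuristic(g1, g2):
    def features(g):
        min_out, p_min = min(g, key=lambda op: op[0])   # worst outcome and its probability
        max_out, _ = max(g, key=lambda op: op[0])       # best outcome
        return min_out, p_min, max_out

    min1, pmin1, max1 = features(g1)
    min2, pmin2, max2 = features(g2)
    scale = 0.1 * max(max1, max2)          # aspiration level: one tenth of the maximum gain

    if abs(min1 - min2) >= scale:          # reason 1: minimum gains
        return g1 if min1 > min2 else g2
    if abs(pmin1 - pmin2) >= 0.1:          # reason 2: probability of the minimum gain
        return g1 if pmin1 < pmin2 else g2
    return g1 if max1 > max2 else g2       # reason 3: maximum gains


# Illustrative choice: a sure 3000 versus 4000 with probability 0.8 (else 0)
sure = [(3000, 1.0)]
risky = [(4000, 0.8), (0, 0.2)]
print(priority_heuristic(sure, risky))     # -> [(3000, 1.0)], the sure gamble
```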
In the study of judgmental errors, surprisingly little thought is spent on what constitutes good and bad judgment. I call this simultaneous focus on errors and lack of analysis of what constitutes an error the irrationality paradox. I illustrate the paradox by a dozen apparent fallacies; each can be logically deduced from the environmental structure and an unbiased mind.
Mind and environment evolve in tandem—almost a platitude. Much of judgment and decision making research, however, has compared cognition to standard statistical models, rather than to how well it is adapted to its environment. The author argues two points. First, cognitive algorithms are tuned to certain information formats, most likely to those that humans have encountered during their evolutionary history. In particular, Bayesian computations are simpler when the information is in a frequency format than when it is in a probability format. The author investigates whether frequency formats can make physicians reason more often the Bayesian way. Second, cognitive algorithms need to operate under constraints of limited time, knowledge, and computational power, and they need to exploit the structures of their environments. The author describes a fast and frugal algorithm, Take The Best, that violates standard principles of rational inference but can be as accurate as sophisticated "optimal" models for diagnostic inference. Key words: Bayes' theorem; bounded rationality; information format; probabilistic reasoning; satisficing; training; medical education.
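The claim that Bayesian computations are simpler in a frequency format can be made concrete with a small worked example. The diagnostic numbers below (a 1% base rate, an 80% hit rate, a 10% false-positive rate) are hypothetical; the point is that the probability format requires applying Bayes' rule, whereas the natural frequency format reduces the same computation to counting cases.

```python
# Worked illustration (hypothetical numbers): the same posterior computed in
# a probability format via Bayes' rule and in a natural frequency format by
# counting cases out of 1,000 people.

# Probability format
p_disease = 0.01
p_pos_given_disease = 0.80
p_pos_given_healthy = 0.10

posterior = (p_pos_given_disease * p_disease) / (
    p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
)

# Natural frequency format: the same information as counts
sick = 10                  # 1% of 1,000 people
sick_and_positive = 8      # 80% of the 10 sick people test positive
healthy_and_positive = 99  # 10% of the 990 healthy people test positive

posterior_freq = sick_and_positive / (sick_and_positive + healthy_and_positive)

print(round(posterior, 3), round(posterior_freq, 3))  # both ~0.075
```

In the frequency version the answer is simply 8 positive-and-sick cases out of 107 positive cases, which is the kind of simplification the article argues makes physicians more likely to reason the Bayesian way.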
What Chow calls NHSTP is an inconsistent hybrid of Fisherian and Neyman-Pearsonian ideas. In psychology it has been practiced like ritualistic handwashing and sustained by wishful thinking about its utility. Chow argues that NHSTP is an important tool for ruling out chance as an explanation for data. I disagree. This ritual discourages theory development by providing researchers with no incentive to specify hypotheses.
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
Shepard promotes the important view that evolution constructs cognitive mechanisms that work with internalized aspects of the structure of their environment. But what can this internalization mean? We contrast three views: Shepard's mirrors reflecting the world, Brunswik's lens inferring the world, and Simon's scissors exploiting the world. We argue that Simon's scissors metaphor is more appropriate for higher-order cognitive mechanisms and ask how far it can also be applied to perceptual tasks. [Barlow; Kubovy & Epstein; Shepard].