This volume collects Gigerenzer's recent articles on the psychology of rationality. Like the earlier volumes, it should appeal to a broad mixture of cognitive psychologists, philosophers, economists, and others who study decision making.
Simple Heuristics That Make Us Smart invites readers to embark on a new journey into a land of rationality that differs from the familiar territory of cognitive science and economics. Traditional views of rationality tend to see decision makers as possessing superhuman powers of reason, limitless knowledge, and all of eternity in which to ponder choices. To understand decisions in the real world, we need a different, more psychologically plausible notion of rationality, and this book provides it. It is about fast and frugal heuristics: simple rules for making decisions when time is pressing and deep thought an unaffordable luxury. These heuristics can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality. But when and how can such fast and frugal heuristics work? Can judgments based simply on one good reason be as accurate as those based on many reasons? Could less knowledge even lead to systematically better predictions than more knowledge? Simple Heuristics explores these questions, developing computational models of heuristics and testing them through experiments and analyses. It shows how fast and frugal heuristics can produce adaptive decisions in situations as varied as choosing a mate, dividing resources among offspring, predicting high school dropout rates, and playing the stock market. As an interdisciplinary work that is both useful and engaging, this book will appeal to a wide audience. It is ideal for researchers in cognitive psychology, evolutionary psychology, and cognitive science, as well as in economics and artificial intelligence. It will also inspire anyone interested in simply making good decisions.
The Empire of Chance tells how quantitative ideas of chance transformed the natural and social sciences, as well as daily life over the last three centuries. A continuous narrative connects the earliest application of probability and statistics in gambling and insurance to the most recent forays into law, medicine, polling and baseball. Separate chapters explore the theoretical and methodological impact in biology, physics and psychology. Themes recur - determinism, inference, causality, free will, evidence, the shifting meaning of probability - but in dramatically different disciplinary and historical contexts. In contrast to the literature on the mathematical development of probability and statistics, this book centres on how these technical innovations remade our conceptions of nature, mind and society. Written by an interdisciplinary team of historians and philosophers, this readable, lucid account keeps technical material to an absolute minimum. It is aimed not only at specialists in the history and philosophy of science, but also at the general reader and scholars in other disciplines.
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: the discovery of less-is-more effects; the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; an advancement from vague labels to computational models of heuristics; the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an “adaptive toolbox”; and the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people’s adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
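The claim that a biased mind can be more robust than an unbiased one follows the bias-variance logic behind this review. Here is a minimal, self-contained Python sketch of that logic; the data-generating process, sample sizes, and noise level are illustrative assumptions, not the review's own simulations. It compares a maximally simple model that ignores the predictor against a fitted line when training data are scarce and noisy:

```python
# A minimal sketch of the bias-variance logic behind "less is more":
# with scarce, noisy data, a simple biased model (predict the mean,
# ignoring the cue) can out-predict a more flexible fitted line on
# new cases. All numbers below are illustrative assumptions.

import random

random.seed(0)

def sample(n, slope=0.1, noise=1.0):
    """Draw n (x, y) pairs from a weak linear trend plus noise."""
    return [(x, slope * x + random.gauss(0, noise))
            for x in (random.uniform(0, 10) for _ in range(n))]

def fit_mean(train):
    """Biased but stable: always predict the training mean."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_line(train):
    """Unbiased but high-variance: ordinary least-squares line."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    sxx = sum((x - mx) ** 2 for x, _ in train)
    b = sum((x - mx) * (y - my) for x, y in train) / sxx
    a = my - b * mx
    return lambda x: a + b * x

def mse(model, test):
    return sum((model(x) - y) ** 2 for x, y in test) / len(test)

trials, err_mean, err_line = 2000, 0.0, 0.0
for _ in range(trials):
    train, test = sample(5), sample(100)
    err_mean += mse(fit_mean(train), test) / trials
    err_line += mse(fit_line(train), test) / trials
print(f"mean-only model: {err_mean:.3f}  fitted line: {err_line:.3f}")
```

With only five training points, the extra variance of the estimated slope typically costs the line more than the bias costs the mean-only model; with large samples the ranking reverses, which is the ecological part of the argument.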
Presenteeism, going to work while ill, is a widespread phenomenon worldwide. Previous research has concentrated mainly on its negative effects. This study investigates the positive consequences of presenteeism, derived from a comprehensive content model of presenteeism that was originally developed on the basis of negative effects. In a quantitative online survey, employees rated the degree of experienced or potential positive effects, depending on whether or not they had worked while ill during the previous year. Results revealed that all postulated positive effects described in the content model were relevant. Most positive effects were rated significantly higher by participants who had shown presenteeism than by those who had not. The positive effects significantly predicted presenteeism propensity for participants who had shown presenteeism. In addition, an overall rating of positive effects was significantly related to presenteeism, though to a lesser degree. Overall, the results demonstrate the applicability of the content model to positive effects of presenteeism. They point to the need to investigate these effects further and to consider them in the management of presenteeism.
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures. The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
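To make one-reason decision making concrete, here is a minimal Python sketch of the Take The Best logic, assuming binary cue values and a cue order already ranked by validity; the cue names and objects are hypothetical illustrations, not the paper's simulation materials:

```python
# A minimal sketch of Take The Best: search cues in order of validity,
# stop at the first cue that discriminates, and decide on that one
# reason alone. Cue names and values below are hypothetical.

def take_the_best(obj_a, obj_b, cue_order):
    """Infer which of two objects has the higher criterion value.

    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative),
                  or None (unknown).
    cue_order:    cue names sorted from highest to lowest validity.
    """
    for cue in cue_order:
        va, vb = obj_a.get(cue), obj_b.get(cue)
        if va == 1 and vb != 1:   # first discriminating cue decides
            return "a"
        if vb == 1 and va != 1:
            return "b"
    return "guess"                # no cue discriminates: guess

# Hypothetical example: which of two cities is larger?
cues = ["has_team", "is_capital", "on_river"]   # assumed validity order
city_a = {"has_team": 1, "is_capital": 0, "on_river": 1}
city_b = {"has_team": 1, "is_capital": 1, "on_river": None}
print(take_the_best(city_a, city_b, cues))      # -> b
```

Note the two violations of classical rationality named in the abstract: search is limited (cues after the first discriminating one are never looked up) and nothing is integrated (a single reason decides).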
What counts as human rationality: reasoning processes that embody content-independent formal theories, such as propositional logic, or reasoning processes that are well designed for solving important adaptive problems? Most theories of human reasoning have been based on content-independent formal rationality, whereas adaptive reasoning, ecological or evolutionary, has been little explored. We elaborate and test an evolutionary approach, Cosmides' social contract theory, using the Wason selection task. In the first part, we disentangle the theoretical concept of a “social contract” from that of a “cheater-detection algorithm”. We demonstrate that the fact that a rule is perceived as a social contract — or a conditional permission or obligation, as Cheng and Holyoak proposed — is not sufficient to elicit Cosmides' striking results, which we replicated. The crucial issue is not semantic, but pragmatic: whether a person is cued into the perspective of a party who can be cheated. In the second part, we distinguish between social contracts with bilateral and unilateral cheating options. Perspective change in contracts with bilateral cheating options turns P & not-Q responses into not-P & Q responses. The results strongly support social contract theory, contradict availability theory, and cannot be accounted for by pragmatic reasoning schema theory, which lacks the pragmatic concepts of perspectives and cheating detection.
The paper shows why and how an empirical study of fast-and-frugal heuristics can provide norms of good reasoning, and thus how (and how far) rationality can be naturalized. We explain the heuristics that humans often rely on in solving problems, for example, choosing investment strategies or apartments, placing bets in sports, or making library searches. We then show that heuristics can lead to judgments that are as accurate as or even more accurate than strategies that use more information and computation, including optimization methods. A standard way to defend the use of heuristics is by reference to accuracy-effort trade-offs. We take a different route, emphasizing ecological rationality (the relationship between cognitive heuristics and environment), and argue that in uncertain environments, more information and computation are not always better (the “less-can-be-more” doctrine). The resulting naturalism about rationality is thus normative because it not only describes what heuristics people use, but also in which specific environments one should rely on a heuristic in order to make better inferences. While we refrain from claiming that the scope of ecological rationality is unlimited, we think it is of wide practical use.
Can the general public learn to deal with risk and uncertainty, or do authorities need to steer people’s choices in the right direction? Libertarian paternalists argue that results from psychological research show that our reasoning is systematically flawed and that we are hardly educable because our cognitive biases resemble stable visual illusions. For that reason, they maintain, authorities who know what is best for us need to step in and steer our behavior with the help of “nudges.” Nudges are nothing new, but justifying them on the basis of a latent irrationality is. In this article, I analyze the scientific evidence presented for such a justification. It suffers from narrow logical norms, that is, a misunderstanding of the nature of rational thinking, and from a confirmation bias, that is, selective reporting of research. These two flaws focus the blame on individuals’ minds rather than on external causes, such as industries that spend billions to nudge people into unhealthy behavior. I conclude that the claim that we are hardly educable lacks evidence and forecloses the true alternative to nudging: teaching people to become risk savvy.
What is the nature of moral behavior? According to the study of bounded rationality, it results not from character traits or rational deliberation alone, but from the interplay between mind and environment. In this view, moral behavior is based on pragmatic social heuristics rather than moral rules or maximization principles. These social heuristics are not good or bad per se, but solely in relation to the environments in which they are used. This has methodological implications for the study of morality: Behavior needs to be studied in social groups as well as in isolation, in natural environments as well as in labs. It also has implications for moral policy: Only by accepting the fact that behavior is a function of both mind and environmental structures can realistic prescriptive means of achieving moral goals be developed.
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics – simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data – that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program. Key Words: adaptive toolbox; bounded rationality; decision making; elimination models; environment structure; heuristics; ignorance-based reasoning; limited information search; robustness; satisficing; simplicity.
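As one illustration of how the building blocks (search rule, stopping rule, decision rule) compose into a heuristic, here is a minimal Python sketch of satisficing in sequential search. The aspiration-setting rule (take the best of an initial sample) and the parameters are illustrative assumptions, not the book's specific models:

```python
# A minimal sketch of a satisficing heuristic for sequential search:
# inspect options one by one (search rule), set an aspiration level
# from an initial sample and stop at the first option meeting it
# (stopping rule), and take that option (decision rule).

import random

def satisficing_search(options, sample_size):
    """Return the first option at or above an aspiration level learned
    from the first `sample_size` options (or the last option if none
    later qualifies)."""
    aspiration = max(options[:sample_size])
    for value in options[sample_size:]:
        if value >= aspiration:       # good enough: stop searching
            return value
    return options[-1]                # forced choice at the end

random.seed(1)
candidates = [random.random() for _ in range(100)]  # hypothetical values
print(satisficing_search(candidates, sample_size=10))
```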
Axiomatic rationality is defined in terms of conformity to abstract axioms. Savage limited axiomatic rationality to small worlds, that is, situations in which the exhaustive and mutually exclusive set of future states S and their consequences C are known. Others have interpreted axiomatic rationality as a categorical norm for how human beings should reason, arguing in addition that violations would lead to real costs such as money pumps. Yet a review of the literature shows little evidence that violations are actually associated with any measurable costs. Limiting axiomatic rationality to small worlds, I propose a naturalized version of rationality for situations of intractability and uncertainty, which lie outside small worlds. In these situations, humans can achieve their goals by relying on heuristics that may violate axiomatic rationality. The study of ecological rationality requires formal models of heuristics and an analysis of the structures of environments these can exploit. It lays the foundation of a moderate naturalism in epistemology, providing statements about heuristics we should use in a given situation. Unlike axiomatic rationality, ecological rationality can explain less-is-more effects, formalize when one should move from ‘is’ to ‘ought,’ and be evaluated by goals beyond coherence, such as predictive accuracy, frugality, and efficiency. Ecological rationality can be seen as a formalization of means–end instrumentalist rationality, based on Herbert Simon’s insight that rational behavior is a function of the mind and its environment.
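The money-pump argument mentioned above can be stated in a few lines. A minimal Python sketch, with hypothetical items, fee, and trade count: an agent whose preferences cycle (prefers A to B, B to C, and C to A) will pay a small fee for each "upgrade" and can be led around the cycle indefinitely:

```python
# A minimal sketch of a money pump: an agent with cyclic (intransitive)
# preferences pays a small fee for every trade up to a preferred item
# and ends up where it started, poorer. All values are illustrative.

def money_pump(prefers_over_held, start, trades, fee):
    """prefers_over_held maps each held item to the item the agent
    prefers to it; the agent accepts every such swap and pays `fee`."""
    held, paid = start, 0.0
    for _ in range(trades):
        held = prefers_over_held[held]
        paid += fee
    return held, paid

# Cyclic preferences: A over B, B over C, C over A.
cycle = {"B": "A", "C": "B", "A": "C"}
print(money_pump(cycle, start="A", trades=9, fee=1.0))  # ('A', 9.0)
```

The abstract's point is that, whatever the argument's internal force, documented real-world costs of this kind are scarce.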
Traditional views of rationality posit general-purpose decision mechanisms based on logic or optimization. The study of ecological rationality focuses on uncovering the “adaptive toolbox” of domain-specific simple heuristics that real, computationally bounded minds employ, and explaining how these heuristics produce accurate decisions by exploiting the structures of information in the environments in which they are applied. Knowing when and how people use particular heuristics can facilitate the shaping of environments to engender better decisions.
In the six decades since the publication of Julian Huxley's Evolution: The Modern Synthesis, spectacular empirical advances in the biological sciences have been accompanied by equally significant developments within the core theoretical framework of the discipline. As a result, evolutionary theory today includes concepts and even entire new fields that were not part of the foundational structure of the Modern Synthesis. In this volume, sixteen leading evolutionary biologists and philosophers of science survey the conceptual changes that have emerged since Huxley's landmark publication, not only in such traditional domains of evolutionary biology as quantitative genetics and paleontology but also in such new fields of research as genomics and EvoDevo. Most of the contributors to Evolution—The Extended Synthesis accept many of the tenets of the classical framework but want to relax some of its assumptions and introduce significant conceptual augmentations of the basic Modern Synthesis structure—just as the architects of the Modern Synthesis themselves expanded and modulated previous versions of Darwinism. This continuing revision of a theoretical edifice the foundations of which were laid in the middle of the nineteenth century—the reexamination of old ideas, proposals of new ones, and the synthesis of the most suitable—shows us how science works, and how scientists have painstakingly built a solid set of explanations for what Darwin called the "grandeur" of life.
The article shows where the responsibility-gap argument regarding brain-computer interfaces acquires its plausibility, and suggests why the argument is not plausible. By way of explanation, a distinction between the descriptive third-person perspective and the interpretative first-person perspective is introduced. Several examples and metaphors are used to show that the ascription of agency and responsibility does not, even in simple cases, require that people be in causal control of every individual detail involved in an event. Taking up the current debate on liability in BCI use, the article provides and discusses some rules that should be followed when potentially harmful BCI-based devices are brought from the laboratory into everyday life.
[Correction Notice: An erratum for this article was reported in Vol 109 of Psychological Review. Due to circumstances that were beyond the control of the authors, the studies reported in "Models of Ecological Rationality: The Recognition Heuristic," by Daniel G. Goldstein and Gerd Gigerenzer overlap with studies reported in "The Recognition Heuristic: How Ignorance Makes Us Smart," by the same authors and with studies reported in "Inference From Ignorance: The Recognition Heuristic". In addition, Figure 3 in the Psychological Review article was originally published in the book chapter and should have carried a note saying that it was used by permission of Oxford University Press.] One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
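A minimal Python sketch of the recognition heuristic and of the less-is-more analysis the abstract describes; the recognition set and the validity parameters below are hypothetical, though the accuracy decomposition follows the structure of the authors' published analysis:

```python
# A minimal sketch of the recognition heuristic plus the less-is-more
# effect. The recognition set and validities below are hypothetical.

def recognition_heuristic(a, b, recognized):
    """If exactly one of two objects is recognized, infer that it has
    the higher criterion value; otherwise the heuristic is silent."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # both or neither recognized: use other knowledge

def expected_accuracy(n, N, alpha, beta):
    """Expected proportion correct when n of N objects are recognized:
    guess (0.5) on pairs of unrecognized objects, apply recognition
    (validity alpha) on mixed pairs, and apply further knowledge
    (validity beta) on pairs of recognized objects."""
    pairs = N * (N - 1) / 2
    p_unrec = (N - n) * (N - n - 1) / 2 / pairs
    p_mixed = n * (N - n) / pairs
    p_rec = n * (n - 1) / 2 / pairs
    return 0.5 * p_unrec + alpha * p_mixed + beta * p_rec

print(recognition_heuristic("Munich", "Herne", {"Munich", "Hamburg"}))
# Less is more: when alpha > beta, recognizing only half the objects
# yields higher expected accuracy than recognizing all of them.
print(expected_accuracy(50, 100, alpha=0.8, beta=0.6))   # ~0.68
print(expected_accuracy(100, 100, alpha=0.8, beta=0.6))  # 0.60
```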
In this paper, the results of a pilot interview study with 19 subjects participating in an EEG-based non-invasive brain–computer interface (BCI) research study on stroke rehabilitation and assistive technology and of a survey among 17 BCI professionals are presented and discussed in the light of ethical, legal, and social issues in research with human subjects. Most of the users were content with study participation and felt well informed. Negative aspects reported include the long and cumbersome preparation procedure, discomfort with the cap and the wet electrodes, problems concerning BCI control, and strains during the training sessions. In addition, some users reflected on issues concerning system security. When asked for morally problematic issues in this field of non-invasive BCI research, the BCI professionals stressed the need for correct information transfer, the obligation to avoid unrealistic expectations in study participants, the selection of study participants, benefits and strains of participation, BCI illiteracy, the possibility of detrimental brain modifications induced by BCI use, and problems that may arise at the end of the trials. Furthermore, privacy issues were raised. Based on the results obtained, psychosocial and ethical aspects of EEG-based non-invasive BCI research are discussed and possible implications for future research addressed.
The well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization is further developed. The major results include some new concepts, like the different grades of compatibility and the objective conditional probabilities, which are independent of the underlying state and stem from a certain purely algebraic relation between the events, as well as an axiomatic approach to quantum mechanics. The main axioms are certain postulates concerning the conditional probabilities, and they have intrinsic probabilistic interpretations from the very beginning. A Jordan product is derived for the observables, and the consideration of composite systems leads to operator algebras on the Hilbert space over the complex numbers, which is the standard model of quantum mechanics. The paper gives an expository overview of the results presented in a series of recent papers by the author. For the first time, the complete approach is presented as a whole in a single paper. Moreover, since the mathematical proofs are already available in the original papers, the present paper can dispense with the mathematical details and maximum generality, thus addressing a wider audience of physicists, philosophers and quantum computer scientists.
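For orientation, the Lüders-von Neumann measurement that the abstract treats as a non-classical extension of conditionalization can be written compactly; the following is the standard textbook form (state ρ, projections E and F on a Hilbert space), not the author's state-independent, purely algebraic reconstruction:

```latex
% Lüders update of a state \rho on observing event (projection) E,
% and the resulting conditional probability of a further event F:
\rho \;\longmapsto\; \rho_E = \frac{E \rho E}{\operatorname{tr}(\rho E)},
\qquad
p(F \mid E) = \operatorname{tr}(\rho_E F)
            = \frac{\operatorname{tr}(E \rho E F)}{\operatorname{tr}(\rho E)}.
```

When E and F commute, this reduces to the classical rule p(F | E) = p(E ∧ F)/p(E), which is the sense in which the quantum case extends probability conditionalization.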
The contrast between the two approaches alluded to in the title has gained a certain prominence in our own day. With the knowledge of hindsight it will therefore be of interest to study its incidence in an earlier period, in the writings of Whewell and Mill, which may thus yield added significance for a later generation. Right at the start there is a difficulty. Not all inductivists agree on their principles or their interpretation of the logic of scientific reasoning, and the same is true of deductivists; the differences can therefore be discussed only in connection with individual writers. When this is done, we moreover find some considerable variations in the treatment of their respective doctrines by the members of each of our two schools. To define these differences, we need a finer structure of elements of classification, indicative of criteria for the acceptance of scientific hypotheses.
A demarcation between Kant's general metaphysics (transcendental principles) and his special metaphysics is attempted, through a discussion of Kant's three accounts of lawlikeness: 'transcendental', 'empirical' and 'metaphysical'. The distinctions are defended via a number of 'indicators' in Kant's writings, and the 'looseness of fit' between the different types of lawlikeness is discussed.
The terms nested sets, partitive frequencies, inside-outside view, and dual processes add little but confusion to our original analysis (Gigerenzer & Hoffrage 1995; 1999). The idea of nested sets was introduced because of an oversight; it simply rephrases two of our equations. Representation in terms of chances, in contrast, is a novel contribution, yet one consistent with our computational analysis. The problem with "System 1/System 2" dual-process theory is this: unless the two processes are defined, the distinction can account post hoc for almost everything. In contrast, an ecological view of cognition helps to explain how insight is elicited from the outside (the external representation of information) and, more generally, how cognitive strategies match with environmental structures.
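The computational analysis referred to here is the natural-frequency result of Gigerenzer and Hoffrage (1995): when statistical information is represented as raw counts, Bayesian inference reduces to a comparison of two frequencies. With a cases showing both the datum D and the hypothesis H, and b cases showing D without H:

```latex
p(H \mid D) \;=\; \frac{a}{a + b}
```

A worked illustration with hypothetical numbers: of 1,000 patients, 10 have the disease and 8 of them test positive (a = 8), while 99 of the 990 patients without the disease also test positive (b = 99); hence p(H | D) = 8/107 ≈ 0.07, with no priors or likelihoods left to juggle.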