Most philosophical accounts of emergence are incompatible with reduction. Most scientists regard a system property as emergent relative to properties of the system's parts if it depends upon their mode of organization, a view consistent with reduction. Emergence can be analyzed as a failure of aggregativity, the state in which "the whole is nothing more than the sum of its parts." Aggregativity requires four conditions, giving tools for analyzing modes of organization. Met differently for different decompositions of the system, and to different degrees, these conditions provide powerful evaluation criteria for choosing decompositions, and heuristics for detecting the biases of vulgar reductionisms. This analysis of emergence is compatible with reduction.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
Methodological reductionists practice ‘wannabe reductionism’: they claim that one should pursue reductionism, but never propose how. I integrate two strains in prior work to do so. Three kinds of activities are pursued as “reductionist”. “Successional reduction” and inter-level mechanistic explanation are legitimate and powerful strategies; eliminativism is generally ill-conceived. Specific problem-solving heuristics for constructing inter-level mechanistic explanations show why and when they can provide powerful and fruitful tools and insights, but sometimes lead to erroneous results. I show how traditional metaphysical approaches fail to engage with how science is done. The methods used here do so, and support a pragmatic and non-eliminativist realism.
This paper evaluates the claim that it is possible to use nature’s variation in conjunction with retention and selection on the one hand, and the absence of ultimate groundedness of hypotheses generated by the knowing human mind on the other, to discard the ascription of ultimate certainty to the rationality of human conjectures in the cognitive realm. This leads to an evaluation of the further assumption that successful hypotheses with specific applications, in other words heuristics, seem to have a firm footing because they were useful in another context. I argue that usefulness evaluated through adaptation misconstrues the search for truth, and that it is possible to generate talk of randomness by neglecting aspects of a system’s insertion into a larger situation. The framing of the problem in terms of the elimination of unfit hypotheses is found to be unsatisfying. It is suggested that theories exist in a dimension where they can be kept alive rather than dying as phenotypes do. The proposal that the subconscious could suggest random variations is found to be a category mistake. A final appeal to phenomenology shows that this proposal is an orphan in the history of epistemology, not in virtue of its being a remarkable find, but rather because it is ill-conceived.
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics – simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data – that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program. Key Words: adaptive toolbox; bounded rationality; decision making; elimination models; environment structure; heuristics; ignorance-based reasoning; limited information search; robustness; satisficing; simplicity.
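The ignorance-based, one-reason decision making described in this précis can be illustrated with a minimal sketch of a recognition-style rule. The city names and recognition set below are invented for illustration and are not drawn from the book; the point is only the structure of the rule: a single cue (recognition) decides, and no further information is searched.

```python
import random

def recognition_heuristic(a, b, recognized):
    """Ignorance-based one-reason decision: if exactly one option is
    recognized, infer that it has the higher criterion value; if both
    or neither are recognized, the heuristic cannot decide and guesses."""
    known_a, known_b = a in recognized, b in recognized
    if known_a and not known_b:
        return a
    if known_b and not known_a:
        return b
    return random.choice([a, b])  # both or neither recognized: guess

# Hypothetical task: which city has the larger population?
recognized = {"Munich", "Hamburg"}
print(recognition_heuristic("Munich", "Dortmund", recognized))  # -> Munich
```

Note how the rule stops information search after a single cue: this is what makes it fast and frugal, and its accuracy depends on recognition actually correlating with the criterion in the environment.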
Psychological explanations of philosophical intuitions can help us assess their evidentiary value, and our warrant for accepting them. To explain and assess conceptual or classificatory intuitions about specific situations, some philosophers have suggested explanations which invoke heuristic rules proposed by cognitive psychologists. The present paper extends this approach of intuition assessment by heuristics-based explanation in two ways: it motivates the proposal of a new heuristic, and shows that this metaphor heuristic helps explain important but neglected intuitions: general factual intuitions which have been highly influential in the philosophies of mind and perception but neglected in on-going debates in the epistemology of philosophy. To do so, the paper integrates results from three philosophically pertinent but hitherto largely unconnected strands of psychological research: research on intuitive judgement, analogy and metaphor, and memory-based processing, respectively. The paper shows that the heuristics-based explanation thus obtained satisfies the key requirements cognitive psychologists impose on such explanations, that it can explain the philosophical intuitions targeted, and that this explanation supports normative assessment of the intuitions' evidentiary value: it reveals whether particular intuitions are due to proper exercise of cognitive competencies or constitute cognitive illusions.
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
Gigerenzer’s ‘external validity argument’ plays a pivotal role in his critique of the heuristics and biases research program (HB). The basic idea is that (a) the experimental contexts deployed by HB are not representative of the real environment and that (b) the differences between the experimental setting and the real environment are causally relevant, because they result in different performances by the subjects. However, by considering Gigerenzer’s work on frequencies in probability judgments, this essay attempts to show that there are fatal flaws in the argument. Specifically, each of the claims is controversial: whereas (b) is not adequately empirically justified, (a) is inconsistent with the ‘debiasing’ program of Gigerenzer’s ABC group. Therefore, whatever reason we might have for believing that the experimental findings of HB lack external validity, it should not be based on Gigerenzer’s version of the argument.
Many believe that values are crucially dependent on emotions. This paper focuses on epistemic aspects of the putative link between emotions and value by asking two related questions. First, how exactly are emotions supposed to latch onto or track values? And second, how well suited are emotions to detecting or learning about values? To answer the first question, the paper develops the heuristics-model of emotions. This approach models emotions as sui generis heuristics of value. The empirical plausibility of the heuristics-model is demonstrated using evidence from experimental psychology, evolutionary anthropology and neuroscience. The model is then used to answer the second question. If emotions are indeed heuristics of value, then it follows that emotions can be an important and useful source of information about value. However, emotions will not be epistemically superior in the sense of being the highest court of appeal for the justification of axiological beliefs (the latter view is referred to as the Epistemic Dependence Thesis, or EDT for short). The paper applies the heuristics-model to celebrated cases from the philosophy of emotions literature, arguing that while the heuristics-model offers a good explanation of typical patterns of emotional reactions in such cases, advocates of EDT will have a hard time accounting for these patterns. The paper also shows that the conclusions drawn from special cases generalize. The paper ends by arguing that skepticism about the metaethical significance of emotions is compatible with a commitment to the importance of emotions in first-order normative ethics.
Approaching science by considering the epistemological virtues which scientists see as constitutive of good science, and the way these virtues trade off against one another, makes it possible to capture action that may be lost by approaches which focus on either the theoretical or institutional level. Following Wimsatt (1984) I use the notion of heuristics and biases to help explore a case study from the history of biology. Early in the 20th century, mutation theorists and natural historians fought over the role that isolation plays in evolution. This debate was principally about whether replication was the central scientific virtue (and hence the ultimate goal of science was to replace non-experimental evidence with experimental evidence) or whether consilience of inductions was the central virtue (and hence as many kinds of evidence as possible should be pursued).
The study describes a method created for the analysis of persuasive strategies, called rhetorical heuristics, which can be applied in speeches where the argument focuses primarily on questions of fact. First, the author explains how the concept emerged from the study of classical oratory. Then the theoretical background of rhetorical heuristics is outlined through briefly discussing relevant aspects of the psychology of decision-making. Finally, an exposition of how one could find these persuasive strategies introduces rhetorical heuristics in more detail.
Heuristics can be regarded as justifying the actions and beliefs of problem-solving agents. I use an analysis of heuristics to argue that a symbiotic relationship exists between traditional epistemology and contemporary artificial intelligence. On one hand, the study of models of problem-solving agents using quantitative heuristics, for example computer programs, can reveal insight into the understanding of human patterns of epistemic justification by evaluating these models' performance against human problem-solving. On the other hand, qualitative heuristics embody the justifying ability of defeasible rules, the understanding of which is provided by traditional epistemology.
Intractability and optimality are two sides of one coin: optimal models are often intractable, that is, they tend to be excessively complex, or NP-hard. We explain the meaning of NP-hardness in detail and discuss how modern computer science circumvents intractability by introducing heuristics and shortcuts to optimality, often replacing optimality with sufficient sub-optimality. Since the principles of decision theory dictate balancing the cost of computation against gain in accuracy, statistical inference is currently being reshaped by a vigorous new trend: the science of simplicity. Simple models, as we show for specific cases, are not just tractable; they also tend to be robust. Robustness is the ability of a model to extract relevant information from data, disregarding noise. Recently, Gigerenzer, Todd and the ABC Research Group (1999) have put forward a collection of fast and frugal heuristics as simple, boundedly rational inference strategies used by the unaided mind in real-world inference problems. This collection of heuristics has suggestively been called the adaptive toolbox. In this paper we focus on a comparison task in order to illustrate the simplicity and robustness of some of the heuristics in the adaptive toolbox, in contrast to the intractability and the fragility of optimal solutions. We concentrate on three important classes of models for comparison-based inference and, in each of these classes, search for models to be used as benchmarks to evaluate the performance of fast and frugal heuristics: lexicographic trees, linear models and Bayesian networks. Lexicographic trees are interesting because they are particularly simple models that have been used by humans throughout the centuries. Linear models have traditionally been used by cognitive psychologists as models for human inference, while Bayesian networks have only recently been introduced in statistics and computer science. Yet it is the Bayesian networks that are the best possible benchmarks for evaluating the fast and frugal heuristics, as we will show in this paper.
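The lexicographic, one-reason inference contrasted here with linear models and Bayesian networks can be sketched in a few lines. The cue names, cue profiles, and validity ordering below are invented for illustration; the sketch shows only the structural point that the first discriminating cue decides and all remaining cues are ignored.

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Lexicographic one-reason inference in the style of Take the Best:
    check binary cues in descending order of validity; the first cue on
    which the two options differ decides, and search stops there."""
    for cue in cue_order:
        va, vb = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if va != vb:
            return "a" if va else "b"
    return None  # no cue discriminates between the options

# Hypothetical cue profiles (1 = cue present, 0 = absent),
# with cues ordered from highest to lowest validity.
a = {"capital": 0, "airport": 1, "university": 1}
b = {"capital": 0, "airport": 0, "university": 1}
print(take_the_best(a, b, ["capital", "airport", "university"]))  # -> a
```

A full linear model would instead weight and sum all three cues for each option; the comparison task in the paper pits exactly these two styles of model (plus Bayesian networks) against one another.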
Humans have a remarkable capacity for tuning their communicative behaviors to different addressees, a phenomenon also known as recipient design. It remains unclear how this tuning of communicative behavior is implemented during live human interactions. Classical theories of communication postulate that recipient design involves perspective taking, i.e., the communicator selects her behavior based on her hypotheses about beliefs and knowledge of the recipient. More recently, researchers have argued that perspective taking is computationally too costly to be a plausible mechanism in everyday human communication. These researchers propose that computationally simple mechanisms, or heuristics, are exploited to perform recipient design. Such heuristics may be able to adapt communicative behavior to an addressee with no consideration for the addressee's beliefs and knowledge. To test whether the simpler of the two mechanisms is sufficient for explaining the `how' of recipient design we studied communicators' behaviors in the context of a non-verbal communicative task (the Tacit Communication Game, TCG). We found that the specificity of the observed trial-by-trial adjustments made by communicators is parsimoniously explained by perspective taking, but not by simple heuristics. This finding is important as it suggests that humans do have a computationally efficient way of taking beliefs and knowledge of a recipient into account.
Balancing the pros and cons of two options is undoubtedly a very appealing decision procedure, but one that has received scarce scientific attention so far, either formally or empirically. We describe a formal framework for pros and cons decisions, where the arguments under consideration can be of varying importance, but whose importance cannot be precisely quantified. We then define eight heuristics for balancing these pros and cons, and compare the predictions of these to the choices made by 62 human participants on a selection of 33 situations. The Levelwise Tallying heuristic clearly emerges as a winner in this competition. Further refinements of this heuristic are considered in the discussion, as well as its relation to Take the Best and Cumulative Prospect Theory.
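One way to picture a levelwise approach to balancing pros and cons is the following simplified sketch. This is not the authors' exact specification of Levelwise Tallying, only an illustration of the general idea under the assumption that arguments are grouped into ordinal importance levels and compared level by level, from most to least important.

```python
def levelwise_balance(pros, cons):
    """Simplified sketch of a levelwise pros-and-cons rule (NOT the
    authors' exact model): `pros` and `cons` map an ordinal importance
    level (higher = more important) to the number of arguments at that
    level. Levels are compared from most to least important; the first
    level with an imbalance decides."""
    for level in sorted(set(pros) | set(cons), reverse=True):
        diff = pros.get(level, 0) - cons.get(level, 0)
        if diff != 0:
            return "accept" if diff > 0 else "reject"
    return "indifferent"

# Hypothetical case: two strong pros outweigh one strong con,
# so the weaker cons at level 2 are never consulted.
print(levelwise_balance({3: 2, 1: 1}, {3: 1, 2: 2}))  # -> accept
```

Note the family resemblance to Take the Best that the abstract mentions: the comparison is lexicographic over importance levels, with lower levels consulted only when higher ones tie.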
Surrogates’ decisions to withhold or withdraw life-sustaining treatments (LSTs) are pervasive. However, the factors influencing surrogates’ decisions to initiate LSTs are relatively unknown. We present evidence from two experiments indicating that some surrogates’ decisions about when to initiate LSTs can be predictably manipulated. Factors that influence surrogate decisions about LSTs include the patient’s cognitive state, the patient’s age, the percentage of doctors not recommending the initiation of LSTs, the percentage of patients in similar situations not wanting LSTs, and default treatment settings. These results suggest that some people may use heuristics when making these important life-and-death decisions. These findings may have important moral implications for improving surrogate decisions about LSTs and reconsidering paternalism.
The notion of ecological rationality implies that the accuracy of a decision strategy depends on features of the information environment in which it is tested. We demonstrate that the performance of a group may be strongly affected by the decision strategies used by its individual members and specify how this effect is moderated by environmental features. Specifically, in a set of simulation studies, we systematically compared four decision strategies used by the individual group members: two linear, compensatory decision strategies and two simple, noncompensatory heuristics. Individual decisions were aggregated by using a majority rule. To assess the ecological rationality of the strategies, we varied (a) the distribution of cue validities, (b) the quantity, and (c) the quality of shared information. Group performance strongly depended on the distribution of cue validities. When validities were linearly distributed, groups using a compensatory strategy achieved the highest accuracy. Conversely, when cue validities followed a J-shaped distribution, groups using a simple lexicographic heuristic performed best. While these effects were robust across different quantities of shared information, the quality of shared information exerted stronger effects on group performance. Consequences for prescriptive theories on group decision making are discussed.
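The aggregation step in these simulations, majority voting over individual members' decisions, is simple to state precisely. A minimal sketch of such a rule follows; the particular options and tie-breaking by random choice are illustrative assumptions, not details taken from the study.

```python
import random
from collections import Counter

def majority_decision(individual_choices):
    """Aggregate group members' individual decisions with a simple
    majority rule; ties are broken at random (an assumption made here
    for illustration)."""
    counts = Counter(individual_choices)
    top = max(counts.values())
    winners = [option for option, c in counts.items() if c == top]
    return random.choice(winners)

# Hypothetical group of five members choosing between options A and B.
print(majority_decision(["A", "A", "B", "A", "B"]))  # -> A
```

In the simulations described above, each member's vote would itself come from a compensatory strategy or a lexicographic heuristic, so the environment shapes group accuracy through the individual strategies before aggregation ever occurs.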
In its focus on heuristics as opposed to hierarchically structured general principles, expert systems technology suggests a pedagogic strategy with affinities to the approaches of some of the creative philosophers of East and West, and a challenge to the reliance on presentation of general principles found in academic tradition. A tutoring approach to classroom presentation may be seen to relate to the point that non-trivial general principles cannot be verbally expressed without substantial loss of meaning.
Richard Levins’ distinction between aggregate, composed and evolved systems acquires new significance as we recognize the importance of mechanistic explanation. Criteria for aggregativity provide limiting cases for the absence of organization, so through their failure they can provide rich detectors for organizational properties. I explore the use of failures of aggregativity for the analysis of mechanistic systems in diverse contexts. Aggregativity appears theoretically desirable, but we are easily fooled. It may be exaggerated through approximation, conditions of derivation, and extrapolating illegitimately from some conditions of decomposition to others. Evolved systems in particular may require analyses under alternative complementary decompositions. Exploring these conditions helps us to better understand the strengths and limits of reductionistic methods.
Germany is considered to be a pioneer of social security systems; nonetheless, globalization and demographic changes have put enormous pressure on them. A solution is not yet in sight as the debate on the future of the German social security systems still lacks consensus. We argue that ideas matter and that the debate can benefit from a deeper reflection on the concept of social security. This objective is pursued along two lines. First, we take a historical perspective and reconstruct the development of Germany's social security systems. Second, we scrutinize from a theoretical perspective how social security is conceptualized in public and theoretical debates. Behind the various positions, we identify four basic ideal types. We then analyze how these ideal types account for the benefits of social security systems and what role they assign to corporations in providing social security. While two ultimately reinforce potential conflicts between different groups in society, the other two ideal types reveal possible benefits for all. The last ideal type actually conceptualizes social security systems as insurance that fosters risky but overall productive investments in human and other forms of capital. Therefore, it can be shown that social security systems are not necessarily threatened by globalization and that incentives exist for corporations to invest in the provision of social security.
Simple Heuristics That Make Us Smart invites readers to embark on a new journey into a land of rationality that differs from the familiar territory of cognitive science and economics. Traditional views of rationality tend to see decision makers as possessing superhuman powers of reason, limitless knowledge, and all of eternity in which to ponder choices. To understand decisions in the real world, we need a different, more psychologically plausible notion of rationality, and this book provides it. It is about fast and frugal heuristics--simple rules for making decisions when time is pressing and deep thought an unaffordable luxury. These heuristics can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality. But when and how can such fast and frugal heuristics work? Can judgments based simply on one good reason be as accurate as those based on many reasons? Could less knowledge even lead to systematically better predictions than more knowledge? Simple Heuristics explores these questions, developing computational models of heuristics and testing them through experiments and analyses. It shows how fast and frugal heuristics can produce adaptive decisions in situations as varied as choosing a mate, dividing resources among offspring, predicting high school dropout rates, and playing the stock market. As an interdisciplinary work that is both useful and engaging, this book will appeal to a wide audience. It is ideal for researchers in cognitive psychology, evolutionary psychology, and cognitive science, as well as in economics and artificial intelligence. It will also inspire anyone interested in simply making good decisions.
With respect to questions of fact, people use heuristics – mental short-cuts, or rules of thumb, that generally work well, but that also lead to systematic errors. People use moral heuristics too – moral short-cuts, or rules of thumb, that lead to mistaken and even absurd moral judgments. These judgments are highly relevant not only to morality, but to law and politics as well. Examples are given from a number of domains, including risk regulation, punishment, reproduction and sexuality, and the act/omission distinction. In all of these contexts, rapid, intuitive judgments make a great deal of sense, but sometimes produce moral mistakes that are replicated in law and policy. One implication is that moral assessments ought not to be made by appealing to intuitions about exotic cases and problems; those intuitions are particularly unlikely to be reliable. Another implication is that some deeply held moral judgments are unsound if they are products of moral heuristics. The idea of error-prone heuristics is especially controversial in the moral domain, where agreement on the correct answer may be hard to elicit; but in many contexts, heuristics are at work and they do real damage. Moral framing effects, including those in the context of obligations to future generations, are also discussed.
A common objection to utilitarianism is that it clashes with our common moral intuitions. Understanding the role that heuristics play in moral judgments undermines this objection. It also indicates why we should not use John Rawls' model of reflective equilibrium as the basis for testing normative moral theories.
Sunstein is right that poorly informed heuristics can influence moral judgment. His case could be strengthened by tightening neurobiologically plausible working definitions regarding what a heuristic is, considering a background moral theory that has more strength in wide reflective equilibrium than “weak consequentialism,” and systematically examining what naturalized virtue theory has to say about the role of heuristics in moral reasoning.
Sunstein represents moral heuristics as rigid rules that lead us to jump to moral conclusions, and contrasts them with reflective moral deliberation, which he represents as independent of heuristics and capable of supplanting them. Following John Dewey's psychology of moral judgment, I argue that successful moral deliberation does not supplant moral heuristics but uses them flexibly as inputs to deliberation. Many of the flaws in moral judgment that Sunstein attributes to heuristics reflect instead the limitations of the deliberative context in which people are asked to render judgments.
Moral heuristics are pervasive, and they produce moral errors. We can identify those errors as such even if we do not endorse any contentious moral view. To accept this point, it is also unnecessary to make controversial claims about moral truth. But the notion of moral heuristics can be understood in diverse ways, and a great deal of work remains to be done in understanding the nature of moral intuitions, especially those that operate automatically and nonreflectively, and in exploring the possibility of altering such intuitions through modest changes in context and narrative.
This chapter investigates the extent to which claims of massive modular organization of the mind (espoused by some members of the evolutionary psychology research program) are consistent with the main elements of the simple heuristics research program. A number of potential sources of conflict between the two programs are investigated and defused. However, the simple heuristics program turns out to undermine one of the main arguments offered in support of massive modularity, at least as the latter is generally understood by philosophers. So one result of the argument will be to force us to re-examine the way in which the notion of modularity in cognitive science should best be characterized, if the thesis of massive modularity isn’t to be abandoned altogether. What is at stake in this discussion is whether there is a well-motivated notion of ‘module’ such that we have good reason to think that the human mind must be massively modular in its organization. I shall be arguing (in the end) that there is.
If, as is not implausible, the correct moral theory is indexed to human capacity for moral reasoning, then the thesis that moral heuristics exist faces a serious objection. This objection can be answered by embracing a wide reflective equilibrium account of the origins of our normative principles of morality.
Sunstein aims to provide a nonsectarian account of moral heuristics, yet the account rests on a controversial meta-ethical view. Further, moral theorists who reject act consequentialism may deny that Sunstein's examples involve moral mistakes. But so what? Within a theory that counts consequences as a morally weighty feature of actions, the moral judgments that Sunstein points to are indeed mistaken, and the fact that governmental action at odds with these judgments will be controversial doesn't bar such action.
Gigerenzer and his co-workers make some bold and striking claims about the relation between the fast and frugal heuristics discussed in their book and the traditional norms of rationality provided by deductive logic and probability theory. We are told, for example, that fast and frugal heuristics such as “Take the Best” replace “the multiple coherence criteria stemming from the laws of logic and probability with multiple correspondence criteria relating to real-world decision performance.” This commentary explores just how we should interpret this proposed replacement of logic and probability theory by fast and frugal heuristics.
Successful application of heuristics depends on how a problem is represented, mentally. Moral imagination is a good technique for reflecting on, and sharing, mental representations of ethical dilemmas, including those involving emerging technologies. Future research on moral heuristics should use more ecologically valid problems and combine quantitative and qualitative methods.
In his debates with Daniel Kahneman and Amos Tversky, Gerd Gigerenzer puts forward a stricter standard for the proper representation of judgment heuristics. I argue that Gigerenzer’s stricter standard contributes to naturalized epistemology in two ways. First, Gigerenzer’s standard can be used to winnow away cognitive processes that are inappropriately characterized and should not be used in the epistemic evaluation of belief. Second, Gigerenzer’s critique helps to recast the generality problem in naturalized epistemology and cognitive psychology as the methodological problem of identifying criteria for the appropriate specification and characterization of cognitive processes in psychological explanations. I conclude that naturalized epistemologists seeking to address the generality problem should turn their focus to methodological questions about the proper characterization of cognitive processes for the purposes of psychological explanation.
The notion of rationality is crucial to Computer Science and Artificial Intelligence, Economics, Law, Philosophy, Psychology, Anthropology, etc. Most if not all of these disciplines presuppose the agent's capacity to infer in a logical manner. Theories about rationality tend toward two extremes: either they presuppose an unattainable logical capacity, or they tend to minimize the role of logic, in light of vast data on fallacious inferential performance. We analyze some presuppositions in the classical view of logic, and suggest empirical and theoretical evidence for the place of inferential heuristics in a theory of rationality. We propose (1) to outline a new theory of rationality that includes the key notion of logical capacity as a necessary but realistic factor, (2) to expand the notion of inference to include non-deductive inference, specifically non-monotonic, and (3) to emphasize the logical role of inferential heuristics and constraints such as cognitive economy.
Models in decision theory and game theory assume that preferences are determinate: for any pair of possible outcomes, a and b, an agent either prefers a to b, prefers b to a, or is indifferent as between a and b. Preferences are also assumed to be stable: provided the agent is fully informed, trivial situational influences will not shift the order of her preferences. Research by behavioral economists suggests, however, that economic and hedonic preferences are to some degree indeterminate and unstable, which in turn suggests that other sorts of preferences may suffer the same problem. Even fully informed agents do not always determinately prefer a to b, prefer b to a, or feel indifferent as between a and b. Seemingly trivial situational influences rearrange the order of their preferences. One could respond that decision theory and game theory are not meant to describe actual behavior, and that they instead adumbrate an ideal of rationality from which human action diverges in various ways. When the divergences are small and systematic, they help us identify the heuristics that conspire to help people approximate rationality. One such heuristic, dubbed the Wilde heuristic, is explored. However, the divergences documented by behavioral economists threaten to be too large to handle through idealization. The Rum Tum Tugger Model, in which indifference is intransitive, is spelled out as one promising way for decision and game theory to retrench. Preferences may be locally unstable and indeterminate, but when the differences between options are sufficiently large, they approximate stability and determinacy.
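Intransitive indifference of the kind this abstract attributes to its model resembles preference with a discrimination threshold (a semiorder-style structure). The sketch below illustrates that general idea only; the function name, utilities, and threshold are invented for illustration and are not taken from the paper.

```python
def prefers(u_a, u_b, threshold=1.0):
    """Threshold-based preference in the spirit of a semiorder: one
    option is preferred only when its value exceeds the other's by more
    than the threshold; smaller differences register as indifference.
    The utilities and threshold here are purely illustrative."""
    if u_a > u_b + threshold:
        return "a"
    if u_b > u_a + threshold:
        return "b"
    return "indifferent"

# Indifference is intransitive: a ~ b and b ~ c, yet a is preferred to c.
print(prefers(2.0, 1.4))  # -> indifferent
print(prefers(1.4, 0.8))  # -> indifferent
print(prefers(2.0, 0.8))  # -> a
```

This matches the abstract's closing claim: small differences between options fall below the threshold and look indeterminate, while sufficiently large differences yield stable, determinate preference.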
Gigerenzer et al.'s book is extremely important. The ecological validity of the key heuristics is strengthened by their relation to ubiquitous Poisson processes. The recognition heuristic is also used in conspecific cueing processes in ecology. Three additional classes of problem-solving heuristics are proposed for further study: families based on near-decomposability analysis, exaptive construction of functional structures, and robustness.
I argue that an ecologically distributed conception of instrumental rationality can and should be extended to a socially distributed conception of instrumental rationality in social environments. The argument proceeds by showing that the assumption of exogenously fixed units of activity cannot be justified; different units of activity are possible and some are better means to independently given ends than others, in various circumstances. An important social heuristic, the mirror heuristic, enables the flexible formation of units of activity in game theoretic situations, including collective units where these are instrumentally effective. In effect, the mirror heuristic makes the formation of units of activity endogenous to instrumental rationality. Moreover, the mirror heuristic is a conditional metaheuristic, which depends on mind reading of the heuristics of other players rather than on predictions of their behavior. Such mind reading can be regarded as emerging from an arms race between behavioral mimicry and ever smarter behavior reading. Even though unilateral mind reading may have benefits, the mirror metaheuristic illustrates that mutual mind reading has distinctive functions in responding to the challenges of social complexity. If simple heuristics can make us smart in the right environments, then social heuristics can make us smarter still.
In the last decade, the study of moral heuristics has gained in importance. I argue that we can consider speciesism as a moral heuristic: an intuitive rule of thumb that substitutes a heuristic attribute that is easy to detect (e.g. "looking like a human being") for a target attribute that is difficult to detect (e.g. "having rationality"). This speciesism heuristic misfires when applied to some atypical humans such as the mentally disabled, giving them rights although they lack rationality. But I argue that it is not necessarily irrational or inconsistent to hold on to this heuristic rule, because we have to take time and knowledge constraints, uncertainty aversion and emotional costs into account. However, this "heuristic defense" of speciesism uses a target attribute (rationality) that has implications of disrespect towards some atypical humans. Therefore, based on notions of impartiality and compassion, I argue for a morally better target attribute: sentience ("having a sense of well-being"). "Being a vertebrate" is suitable as a corresponding heuristic attribute because it is easy to detect and has a strong correlation with the target attribute of sentience.
Tversky and Kahneman (1974) originally discussed three main heuristics: availability, representativeness, and anchoring-and-adjustment. Research on judgemental forecasting suggests that the type of information on which forecasts are based is the primary factor determining the type of heuristic that people use to make their predictions. Specifically, availability is used when forecasts are based on information held in memory; representativeness is important when the value of one variable is forecast from explicit information about the value of another variable; and anchoring-and-adjustment is employed when the value of a variable is forecast from explicit information about previous values of that same variable. Although there has been increased emphasis on the adaptiveness of heuristics and increased interest in specifying their use in terms of computational models, this way of structuring our knowledge about judgemental forecasting continues to be a useful one. I use it to frame discussion of some recent debates in the area.
A mental heuristic is a shortcut (means) to a desired end. In the moral (as opposed to factual) realm, the means/end distinction is not self-evident: How do we decide whether a given moral intuition is a mere heuristic to achieve some freestanding moral principle, or instead a freestanding moral principle in its own right? I discuss Sunstein's solution to that threshold difficulty in translating “heuristics” to the moral realm.
Simple heuristics are clearly powerful tools for making near optimal decisions, but evidence for their use in specific situations is weak. Gigerenzer et al. (1999) suggest a range of heuristics, but fail to address the question of which environmental or task cues might prompt the use of any specific heuristic. This failure compromises the falsifiability of the fast and frugal approach.
It is difficult to overestimate Paul Meehl's influence on judgment and decision-making research. His 'disturbing little book' (Meehl, 1986, p. 370) Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (1954) is known as an attack on human judgment and a call for replacing clinicians with actuarial methods. More than 40 years later, fast and frugal heuristics, proposed as models of human judgment, were formalized, tested, and found to be surprisingly accurate, often more so than the actuarial models that Meehl advocated. We ask three questions: Do the findings of the two programs contradict each other? More generally, how are the programs conceptually connected? Is there anything they can learn from each other? After demonstrating that there need not be a contradiction, we show that both programs converge in their concern to develop (a) domain-specific models of judgment and (b) nonlinear process models that arise from the bounded nature of judgment. We then elaborate the differences between the programs and discuss how these differences can be viewed as mutually instructive: First, we show that the fast and frugal…
Simple Heuristics That Make Us Smart offers an impressive compilation of work that demonstrates that fast and frugal (one-reason) heuristics can be simple, adaptive, and accurate. However, many decision environments differ from those explored in the book. We conducted a Monte Carlo simulation that shows one-reason strategies are accurate in "friendly" environments, but less accurate in "unfriendly" environments characterized by negative cue intercorrelations, that is, tradeoffs.
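The mechanism behind this result can be sketched in code. The take-the-best-style rule below, the cue weights, and the two toy environments are hypothetical illustrations of friendly versus unfriendly cue structure, not the commentary's actual Monte Carlo setup. The point it demonstrates is the one in the abstract: when cues trade off against one another, the first discriminating cue can point the wrong way.

```python
import random

def take_the_best(a, b, cue_order):
    """One-reason decision: search cues in order and decide by the first
    cue that discriminates between objects a and b.
    Returns 0 to pick a, 1 to pick b."""
    for i in cue_order:
        if a[i] != b[i]:
            return 0 if a[i] > b[i] else 1
    return random.choice([0, 1])  # no cue discriminates: guess

def accuracy(pairs, weights, cue_order):
    """Fraction of pairs where the heuristic picks the object with the
    higher criterion value (criterion = weighted sum of cue values)."""
    crit = lambda obj: sum(w * c for w, c in zip(weights, obj))
    correct = 0
    for a, b in pairs:
        pick = (a, b)[take_the_best(a, b, cue_order)]
        other = b if pick is a else a
        correct += crit(pick) > crit(other)
    return correct / len(pairs)

W = [3, 2, 2]  # compensatory weights: cue 0 alone can be outvoted

# "Friendly" pairs: cues point the same way (positive intercorrelation).
FRIENDLY = [((1, 1, 1), (0, 0, 0)), ((1, 1, 0), (0, 0, 0)),
            ((1, 0, 0), (0, 0, 0)), ((1, 1, 1), (0, 1, 0))]

# "Unfriendly" pairs: tradeoffs, so the best cue can point the wrong way.
UNFRIENDLY = [((1, 0, 0), (0, 1, 1)), ((0, 1, 1), (1, 0, 0)),
              ((1, 1, 0), (0, 1, 1)), ((1, 0, 1), (0, 1, 1))]

print(accuracy(FRIENDLY, W, [0, 1, 2]))    # 1.0
print(accuracy(UNFRIENDLY, W, [0, 1, 2]))  # 0.5
```

In the unfriendly set, the pair (1, 0, 0) vs. (0, 1, 1) is decided by cue 0 in favor of the first object, even though the second object's remaining cues outweigh it; this is exactly the tradeoff structure that negative cue intercorrelations produce.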
To evaluate the success of simple heuristics we need to know more about how a relevant heuristic is chosen and how we learn which cues are relevant. These meta-abilities are at the core of ecological rationality, rather than the individual heuristics.
Heuristics provide insight into the inconsistencies that characterize thinking related to the use of nonhuman animals. We examine paradoxes in judgments and policy related to the treatment of animals in science from a moral intuition perspective. Sunstein's ideas are consistent with a model of animal-related ethical evaluation that we developed twenty-five years ago and which appears readily formulable as a set of moral heuristics.
This commentary focuses on three issues raised by Gigerenzer, Todd, and the ABC Research Group (1999). First, I stress the need for further experimental evidence to determine which heuristics people use in cognitive judgment tasks. Second, I question the scope of cognitive models based on simple heuristics, arguing that many aspects of cognition are too sophisticated to be modeled in this way. Third, I note the complementary role that rational explanation can play to Gigerenzer et al.'s "ecological" analysis of why heuristics succeed.
I investigate whether heuristics similar to those studied by Gigerenzer and his co-authors can apply to the problem of finding a suitable heuristic for a given problem. I argue that not only can heuristics of a very similar kind apply but they have the added advantage that they need not incorporate specific trade-off parameters for balancing the different desiderata of a good decision-procedure.
The Adaptive Toolbox framework specifies heuristics for choice and categorisation that search through cues in previously learned orders (Gigerenzer & Todd, 1999). We examined the learning of three cue parameters defining different orders: discrimination rate (DR) (the probability that a cue points to a unique choice), validity (the probability of correct choice given that a cue discriminates), and success (the probability of correct choice). Success orderings are identical to those by expected information gain (Klayman & Ha, 1987). In two experiments, participants made choices in real-world environments with objective outcome criteria. Participant ratings indicated some appropriate parameter learning when the relevant cue parameter values were highly dispersed. Rated orders of cue validity and DR were less distinct than the objective orders; learning one parameter may be biased towards success by variation in the other. Success ratings capture the variation in validity and DR as well as participants' perception of these parameters.
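The three cue parameters can be made concrete with a short sketch. The data are hypothetical, and the convention that a nondiscriminating cue leads to a fair 50/50 guess (so that success = DR × validity + (1 − DR) × 0.5) is a common modeling assumption, not something the abstract itself commits to.

```python
def cue_parameters(pairs):
    """Estimate cue parameters from paired comparisons.
    Each pair is (cue_a, cue_b, a_is_correct): the cue values of two
    objects and whether object a is the correct choice."""
    disc = [(ca, cb, a_ok) for ca, cb, a_ok in pairs if ca != cb]
    dr = len(disc) / len(pairs)  # discrimination rate
    if disc:  # validity: P(correct | cue discriminates)
        validity = sum((ca > cb) == a_ok for ca, cb, a_ok in disc) / len(disc)
    else:
        validity = 0.5
    # success: P(correct), assuming a fair guess when the cue is silent
    success = dr * validity + (1 - dr) * 0.5
    return dr, validity, success

# Hypothetical data: the cue discriminates in 3 of 5 pairs
# and points to the correct object in 2 of those 3.
pairs = [(1, 0, True), (0, 1, False), (1, 0, False),
         (1, 1, True), (0, 0, False)]
dr, validity, success = cue_parameters(pairs)
print(dr, validity, success)  # 0.6, ~0.667, ~0.6
```

This also illustrates the dependence noted in the abstract: success mixes DR and validity, so a rating of one parameter can absorb variation in the other.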