The Ultimatum Game is commonly interpreted as a two-person bargaining game. The third person who donates and may withdraw the money is not included in the theoretical equations, but treated like a neutral measurement instrument. Yet in a cross-cultural analysis it seems necessary to consider the possibility that the thoughts of a player – strategic, altruistic, selfish, or concerned about reputation – are influenced by both an anonymous second player and the non-anonymous experimenter.
The terms nested sets, partitive frequencies, inside-outside view, and dual processes add little but confusion to our original analysis (Gigerenzer & Hoffrage 1995; 1999). The idea of nested sets was introduced because of an oversight; it simply rephrases two of our equations. Representation in terms of chances, in contrast, is a novel contribution yet consistent with our computational analysis. The problem with vague labels such as "System 1" and "dual process theory" is: unless the two processes are defined, this distinction can account post hoc for almost everything. In contrast, an ecological view of cognition helps to explain how insight is elicited from the outside (the external representation of information) and, more generally, how cognitive strategies match with environmental structures.
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics – simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data – that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program. Key Words: adaptive toolbox; bounded rationality; decision making; elimination models; environment structure; heuristics; ignorance-based reasoning; limited information search; robustness; satisficing; simplicity.
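The three building blocks named in this précis – a search rule, a stopping rule, and a decision rule – can be combined into a one-reason decision heuristic. The following is an illustrative sketch, not code from the book: a minimal take-the-best-style comparison in which cue names, cue values, and the validity ordering are all invented examples.

```python
def take_the_best(obj_a, obj_b, cue_order):
    """Compare two objects on binary cues (1 = cue present, 0 = absent).

    obj_a, obj_b: dicts mapping cue name -> 0/1.
    cue_order: cue names sorted by (assumed) validity, best first.
    """
    for cue in cue_order:                     # search rule: look up cues in order of validity
        a, b = obj_a.get(cue, 0), obj_b.get(cue, 0)
        if a != b:                            # stopping rule: stop at the first discriminating cue
            return "a" if a > b else "b"      # decision rule: that single reason decides
    return None                               # no cue discriminates: the heuristic must guess

# Which of two (hypothetical) cities is larger?
city_a = {"capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"capital": 0, "has_airport": 0, "has_university": 1}
choice = take_the_best(city_a, city_b, ["capital", "has_airport", "has_university"])
# The "capital" cue ties, so search stops at "has_airport", which favors city a.
```

The point of the sketch is the frugality: search is limited, stops at the first discriminating cue, and all remaining cues are ignored rather than weighted and summed.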
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
Gerd Gigerenzer's influential work examines the rationality of individuals not from the perspective of logic or probability, but from the point of view of adaptation to the real world of human behavior and interaction with the environment. Seen from this perspective, human behavior is more rational than it might otherwise appear. This work is extremely influential and has spawned an entire research program. This volume (which follows on a previous collection, Adaptive Thinking, also published by OUP) collects his most recent articles, looking at how people use "fast and frugal heuristics" to calculate probability and risk and make decisions. It includes a newly written, substantial introduction, and the articles have been revised and updated where appropriate. This volume should appeal, like the earlier volumes, to a broad mixture of cognitive psychologists, philosophers, economists, and others who study decision making.
The paper shows why and how an empirical study of fast-and-frugal heuristics can provide norms of good reasoning, and thus how (and how far) rationality can be naturalized. We explain the heuristics that humans often rely on in solving problems, for example, choosing investment strategies or apartments, placing bets in sports, or making library searches. We then show that heuristics can lead to judgments that are as accurate as or even more accurate than strategies that use more information and computation, including optimization methods. A standard way to defend the use of heuristics is by reference to accuracy-effort trade-offs. We take a different route, emphasizing ecological rationality (the relationship between cognitive heuristics and environment), and argue that in uncertain environments, more information and computation are not always better (the “less-can-be-more” doctrine). The resulting naturalism about rationality is thus normative because it not only describes what heuristics people use, but also in which specific environments one should rely on a heuristic in order to make better inferences. While we desist from claiming that the scope of ecological rationality is unlimited, we think it is of wide practical use.
What is the nature of moral behavior? According to the study of bounded rationality, it results not from character traits or rational deliberation alone, but from the interplay between mind and environment. In this view, moral behavior is based on pragmatic social heuristics rather than moral rules or maximization principles. These social heuristics are not good or bad per se, but solely in relation to the environments in which they are used. This has methodological implications for the study of morality: Behavior needs to be studied in social groups as well as in isolation, in natural environments as well as in labs. It also has implications for moral policy: Only by accepting the fact that behavior is a function of both mind and environmental structures can realistic prescriptive means of achieving moral goals be developed.
Humans hunt and kill many different species of animals, but whales are our biggest prey. In the North Atlantic, a male long-finned pilot whale (Globicephala melaena), a large relative of the dolphins, can grow as large as 6.5 meters and weigh as much as 2.5 tons. As whales go, these are not particularly large, but there are more than 750,000 pilot whales in the North Atlantic, traveling in groups, “pods,” that range from just a few individuals to a thousand or more. Each pod is led by an individual known as the “pilot,” who appears to set the course of travel for the rest of the group. This pilot is both an asset and a weakness to the pod. The average pilot whale will yield about a half ton of meat and blubber, and North Atlantic societies including Ireland, Iceland, and the Shetlands used to manipulate the pilot to drive the entire pod ashore. In the Faroe Islands, a group of 18 grassy rocks due north of Scotland, pilot whale hunts have continued for the last 1200 years, at least. The permanent residents of these islands, the Faroese, previously killed an average of 900 whales each year, yielding about 500 tons of meat and fat that was consumed by local residents. Hunts have declined in recent years. From 2001 to 2005, about 3400 whales were killed, yielding about 890 metric tons of blubber and 990 metric tons of meat. The whale kill, or grindadráp in the Faroese language, begins when a fishing boat spots a pod close enough to a suitable shore, on a suitably clear day. A single boat, or even a small group of fishermen, is not sufficient to trap a…
In the study of judgmental errors, surprisingly little thought is spent on what constitutes good and bad judgment. I call this simultaneous focus on errors and lack of analysis of what constitutes an error, the irrationality paradox. I illustrate the paradox by a dozen apparent fallacies; each can be logically deduced from the environmental structure and an unbiased mind.
Shepard promotes the important view that evolution constructs cognitive mechanisms that work with internalized aspects of the structure of their environment. But what can this internalization mean? We contrast three views: Shepard's mirrors reflecting the world, Brunswik's lens inferring the world, and Simon's scissors exploiting the world. We argue that Simon's scissors metaphor is more appropriate for higher-order cognitive mechanisms and ask how far it can also be applied to perceptual tasks. [Barlow; Kubovy & Epstein; Shepard].
We attack the SSK's rejection of the distinction between discovery and justification (the DJ distinction), famously introduced by Hans Reichenbach and here defended in a "lean" version. Some critics claim that the DJ distinction cannot be drawn precisely, or that it cannot be drawn prior to the actual analysis of scientific knowledge. Others, instead of trying to blur or to reject the distinction, claim that we need an even more fine-grained distinction (e.g. between discovery, invention, prior assessment, test and justification). Adherents of the SSK, however, maintain that the distinction is useless and perhaps nonexistent. We first argue against the assumption that the SSK's objection to the DJ distinction is just the same as Thomas Kuhn's. Second, we point out general weaknesses of the SSK's arguments against the DJ distinction. Finally, we argue that the distinction is useful not only in order to explicate what is meant by an evaluation but even for the empirical explanation of knowledge. We use two case studies from the history of cognitive science to support this point.
Most students are trained in using but not in actively choosing a research methodology. I support Hertwig and Ortmann's call for more rationality in the use of methodology. I comment on additional practices that sacrifice experimental control to the experimenter's convenience, and on the strange fact that such laissez-faire attitudes and rigid intolerance actually co-exist in psychological research programs.
Simple Heuristics That Make Us Smart invites readers to embark on a new journey into a land of rationality that differs from the familiar territory of cognitive science and economics. Traditional views of rationality tend to see decision makers as possessing superhuman powers of reason, limitless knowledge, and all of eternity in which to ponder choices. To understand decisions in the real world, we need a different, more psychologically plausible notion of rationality, and this book provides it. It is about fast and frugal heuristics--simple rules for making decisions when time is pressing and deep thought an unaffordable luxury. These heuristics can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality. But when and how can such fast and frugal heuristics work? Can judgments based simply on one good reason be as accurate as those based on many reasons? Could less knowledge even lead to systematically better predictions than more knowledge? Simple Heuristics explores these questions, developing computational models of heuristics and testing them through experiments and analyses. It shows how fast and frugal heuristics can produce adaptive decisions in situations as varied as choosing a mate, dividing resources among offspring, predicting high school dropout rates, and playing the stock market. As an interdisciplinary work that is both useful and engaging, this book will appeal to a wide audience. It is ideal for researchers in cognitive psychology, evolutionary psychology, and cognitive science, as well as in economics and artificial intelligence. It will also inspire anyone interested in simply making good decisions.
Gigerenzer’s ‘external validity argument’ plays a pivotal role in his critique of the heuristics and biases research program (HB). The basic idea is that (a) the experimental contexts deployed by HB are not representative of the real environment and that (b) the differences between the setting and the real environment are causally relevant, because they result in different performances by the subjects. However, by considering Gigerenzer’s work on frequencies in probability judgments, this essay attempts to show that there are fatal flaws in the argument. Specifically, each of the claims is controversial: whereas (b) is not adequately empirically justified, (a) is inconsistent with the ‘debiasing’ program of Gigerenzer’s ABC group. Therefore, whatever reason we might have for believing that the experimental findings of HB lack experimental validity, this should not be based on Gigerenzer’s version of the argument.
Gigerenzer and Brighton (2009) have argued for a “Homo heuristicus” view of judgment and decision making, claiming that there is evidence for a majority of individuals using fast and frugal heuristics. In this vein, they criticize previous studies that tested the descriptive adequacy of some of these heuristics. In addition, they provide a reanalysis of experimental data on the recognition heuristic that allegedly supports Gigerenzer and Brighton’s view of pervasive reliance on heuristics. However, their arguments and reanalyses are both conceptually and methodologically problematic. We provide counterarguments and a reanalysis of the data considered by Gigerenzer and Brighton. Results clearly replicate previous findings, which are at odds with the claim that simple heuristics provide a general description of inferences for a majority of decision makers.
There are two kinds of beliefs. If the ultimate objective is wellbeing (utility), the generated beliefs are “practical.” If the ultimate objective is truth, the generated beliefs are “scientific.” This article defends the practical/scientific belief distinction. The proposed distinction has been ignored by standard rational choice theory—as well as by its two major critics, viz., the Tversky/Kahneman program and the Simon/Gigerenzer program. One ramification of the proposed distinction is clear: agents who make errors with regard to scientific beliefs (e.g., the conjunction fallacy) should not be taken as committing irrationality—because they are most probably engaging the other kind of maximization, the pursuit of wellbeing.
Within the Cognitive Science of Religion, Justin Barrett has proposed that humans possess a hyperactive agency detection device that was selected for in our evolutionary past because ‘over detecting’ (as opposed to ‘under detecting’) the existence of a predator conferred a survival advantage. Within the Intelligent Design debate, William Dembski has proposed the law of small probability, which states that specified events of small probability do not occur by chance. Within the Fine-Tuning debate, John Leslie has asserted a tidiness principle such that, if we can think of a good explanation for some state of affairs, then an explanation is needed for that state of affairs. In this paper I examine similarities between these three proposals and suggest that they can all be explained with reference to the existence of an explanation attribution module in the human mind. The foregoing analysis is considered with reference to a contrast between classical rationality and what Gerd Gigerenzer and others have called ecological rationality.
Much recent research has sought to uncover the neural basis of moral judgment. However, it has remained unclear whether "moral judgments" are sufficiently homogenous to be studied scientifically as a unified category. We tested this assumption by using fMRI to examine the neural correlates of moral judgments within three moral areas: (physical) harm, dishonesty, and (sexual) disgust. We found that the judgment of moral wrongness was subserved by distinct neural systems for each of the different moral areas and that these differences were much more robust than differences in wrongness judgments within a moral area. Dishonest, disgusting, and harmful moral transgressions recruited networks of brain regions associated with mentalizing, affective processing, and action understanding, respectively. Dorsal medial pFC was the only region activated by all scenarios judged to be morally wrong in comparison with neutral scenarios. However, this region was also activated by dishonest and harmful scenarios judged not to be morally wrong, suggestive of a domain-general role that is neither peculiar to nor predictive of moral decisions. These results suggest that moral judgment is not a wholly unified faculty in the human brain, but rather, instantiated in dissociable neural systems that are engaged differentially depending on the type of transgression being judged.
Contemporary moral psychology has been enormously enriched by recent theoretical developments and empirical findings in evolutionary biology, cognitive psychology and neuroscience, and social psychology and psychopathology. Yet despite the fact that some theorists have developed specifically “social heuristic” (Gigerenzer, 2008) and “social intuitionist” (Haidt, 2007) theories of moral judgment and behavior, and despite regular appeals to the findings of experimental social psychology, contemporary moral psychology has largely neglected the social dimensions of moral judgment and behavior. I provide a brief sketch of these dimensions, and consider the implications for contemporary theory and research in moral psychology.
Benjamin Libet, Do we have free will? -- Adina L. Roskies, Why Libet's studies don't pose a threat to free will -- Alfred R. Mele, Libet on free will : readiness potentials, decisions, and awareness -- Susan Pockett and Suzanne Purdy, Are voluntary movements initiated preconsciously? : the relationships between readiness potentials, urges, and decisions -- William P. Banks and Eve A. Isham, Do we really know what we are doing? : implications of reported time of decision for theories of volition -- Elisabeth Pacherie and Patrick Haggard, What are intentions? -- Mark Hallett, Volition : how physiology speaks to the issue of responsibility -- John-Dylan Haynes, Beyond Libet : long-term prediction of free choices from neuroimaging signals -- F. Carota, M. Desmurget, and A. Sirigu, Forward modeling mediates motor awareness -- Tashina Graves, Brian Maniscalco, and Hakwan Lau, Volition and the function of consciousness -- Deborah Talmi and Chris D. Frith, Neuroscience, free will, and responsibility -- Jeffrey P. Ebert and Daniel M. Wegner, Bending time to one's will -- Thalia Wheatley and Christine Looser, Prospective codes fulfilled : a potential neural mechanism of the will -- Terry Horgan, The phenomenology of agency and the Libet results -- Thomas Nadelhoffer, The threat of shrinking agency and free will disillusionism -- Gideon Yaffe, Libet and the criminal law's voluntary act requirement -- Larry Alexander, Criminal and moral responsibility and the Libet experiments -- Michael S. Moore, Libet's challenge(s) to responsible agency -- Walter Sinnott-Armstrong, Lessons from Libet.
While the situationist challenge has been prominent in philosophical literature in ethics for over a decade, only recently has it been extended to virtue epistemology. Alfano argues that virtue epistemology is shown to be empirically inadequate in light of a wide range of results in social psychology, essentially succumbing to the same argument as virtue ethics. We argue that this meeting of the twain between virtue epistemology and social psychology in no way signals the end of virtue epistemology, but is rather a boon to naturalized virtue epistemology. We use Gerd Gigerenzer’s models for bounded rationality (2011) to present a persuasive line of defense for virtue epistemology, and consider prospects for a naturalized virtue epistemology that is supported by current research in psychology.
Experimental evidence on reasoning and decision making has been used to argue both that human rationality is adequate and that it is defective. The idea that reasoning involves not one but two mental systems (see Evans and Over, 1996; Sloman, 1996; Stanovich, 2004 for reasoning, and Kahneman and Frederick, 2005 for decision making) makes better sense of this evidence. ‘System 1’ reasoning is fast, automatic, and mostly unconscious; it relies on ‘fast and frugal’ heuristics (to use Gigerenzer’s expression; Gigerenzer et al., 1999) offering seemingly effortless conclusions that are generally appropriate in most settings, but may be faulty, for instance in experimental situations devised to test the limits of human reasoning abilities. ‘System 2’ reasoning is slow, consciously controlled and effortful, but makes it possible to follow normative rules and to overcome the shortcomings of system 1 (Evans and Over, 1996). The occurrence of both sound and unsound inferences in reasoning experiments and more generally in everyday human thinking can be explained by the roles played by these two kinds of processes. Depending on the problem, the context, and the person (the ability for system 2 reasoning is usually seen as varying widely between individuals, see Stanovich and West (2000)) either system 1 or system 2 reasoning is more likely to be activated, with different consequences for people’s ability to reach the normatively correct solution (Evans, 2006). The two systems can even compete: system 1 suggests an intuitively appealing response while system 2 tries to inhibit this response and to impose its own norm-guided one. Much evidence has accumulated in favour of such a dual view of reasoning (Evans, 2003, in press; for arguments against, see Osman, 2004). There is, however, some vagueness in the way the two systems are characterized.
Instead of a principled distinction, we are presented with a bundle of contrasting features—slow/fast, automatic/controlled, explicit/implicit, associationist/rule based, modular/central—which, depending on the specific dual process theory, are attributed more or less exclusively to one of the two systems.
Gigerenzer and his co-workers make some bold and striking claims about the relation between the fast and frugal heuristics discussed in their book and the traditional norms of rationality provided by deductive logic and probability theory. We are told, for example, that fast and frugal heuristics such as “Take the Best” replace “the multiple coherence criteria stemming from the laws of logic and probability with multiple correspondence criteria relating to real-world decision performance.” This commentary explores just how we should interpret this proposed replacement of logic and probability theory by fast and frugal heuristics.
Kleinberg (1999) describes a novel procedure for efficient search in a dense hyper-linked environment, such as the world wide web. The procedure exploits information implicit in the links between pages so as to identify patterns of connectivity indicative of “authoritative sources”. At a more general level, the trick is to use this second-order link-structure information to rapidly and cheaply identify the knowledge-structures most likely to be relevant given a specific input. I shall argue that Kleinberg’s procedure is suggestive of a new, viable, and neuroscientifically plausible solution to at least (one incarnation of) the so-called “Frame Problem” in cognitive science, viz. the problem of explaining global abductive inference. More accurately, I shall argue that Kleinberg’s procedure suggests a new variety of “fast and frugal heuristic” (Gigerenzer and Todd (1999)) capable of pressing maximum utility from the vast bodies of information and associations commanded by the biological brain. The paper thus takes up the challenge laid down by Fodor (1983; Ms). Fodor depicts the problem of global knowledge-based reason as the point source of many paradigmatic failings of contemporary computational theories of mind. These failings, Fodor goes on to argue, cannot be remedied by any simple appeal to alternative (e.g. connectionist) modes of encoding and processing. I shall show, however, that connectionist models can provide for one neurologically plausible incarnation of Kleinberg’s procedure. The paper ends by noting that current commercial applications increasingly confront the kinds of challenge (such as managing complexity and making efficient use of vast data-bases) initially posed to biological thought and reason.
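The link-analysis procedure the abstract refers to is Kleinberg's hub/authority iteration (HITS). The following is an illustrative sketch of that iteration on a tiny made-up link graph, not code from the paper: each page's authority score is the sum of the hub scores of pages linking to it, each hub score is the sum of the authority scores of pages it links to, and both are renormalized each round.

```python
def hits(links, iterations=50):
    """Run the HITS hub/authority iteration.

    links: dict mapping each page to the list of pages it links to.
    Returns (authority_scores, hub_scores) as dicts.
    """
    nodes = set(links) | {t for targets in links.values() for t in targets}
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority update: sum hub scores of the pages that link to n.
        auth = {n: sum(hub[s] for s, ts in links.items() if n in ts) for n in nodes}
        # Hub update: sum authority scores of the pages n links to.
        hub = {n: sum(auth[t] for t in links.get(n, ())) for n in nodes}
        # Normalize (Euclidean norm) so the scores stay bounded.
        a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {n: v / a_norm for n, v in auth.items()}
        hub = {n: v / h_norm for n, v in hub.items()}
    return auth, hub

# Hypothetical mini-web: pages "a" and "b" both point at "c"; "c" points at "d".
links = {"a": ["c"], "b": ["c"], "c": ["d"]}
auth, hub = hits(links)
# "c" is endorsed by two hubs, so it emerges with the highest authority score.
```

The connection to the abstract's argument is that relevance here is computed cheaply from second-order connectivity alone, with no inspection of page content.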
In his debates with Daniel Kahneman and Amos Tversky, Gerd Gigerenzer puts forward a stricter standard for the proper representation of judgment heuristics. I argue that Gigerenzer’s stricter standard contributes to naturalized epistemology in two ways. First, Gigerenzer’s standard can be used to winnow away cognitive processes that are inappropriately characterized and should not be used in the epistemic evaluation of belief. Second, Gigerenzer’s critique helps to recast the generality problem in naturalized epistemology and cognitive psychology as the methodological problem of identifying criteria for the appropriate specification and characterization of cognitive processes in psychological explanations. I conclude that naturalized epistemologists seeking to address the generality problem should turn their focus to methodological questions about the proper characterization of cognitive processes for the purposes of psychological explanation.
The importance of unconscious cognition is seeping into popular consciousness. A number of recent books bridging the academic world and the reading public stress that at least a portion of decision-making depends not on conscious reasoning, but instead on cognition that occurs below awareness. However, these books provide a limited perspective on how the unconscious mind works and the potential power of intuition. This essay is an effort to expand the picture. It is structured around the book that has garnered the most attention, Malcolm Gladwell’s Blink (2005), but it also considers Gut Feelings by Gerd Gigerenzer (2007) and How Doctors Think by Jerome Groopman (2007). These books help deepen the…
This paper discusses the ways in which a person’s character (ethos) and a hearer’s emotional response (pathos) are part of the complex judgments made about experts’ claims, along with an actual assessment of those claims (logos). The analysis is rooted in the work of Aristotle, but expands to consider work on emotion and cognition conducted by Thagard and Gigerenzer. It also draws on some conclusions of the general epistemology of testimony (of which expert testimony is a special subset), where it is argued that we learn not just from the transmission of another’s beliefs, but from the words they speak. This shifts the onus in testimony away from the intentions of a speaker onto the judgments of an audience, capturing better its social character and reflecting our experience of receiving testimony. I conclude, however, that accepting the arguments of experts involves much more than simply believing what they say.
Gigerenzer et al.'s is an extremely important book. The ecological validity of the key heuristics is strengthened by their relation to ubiquitous Poisson processes. The recognition heuristic is also used in conspecific cueing processes in ecology. Three additional classes of problem-solving heuristics are proposed for further study: families based on near-decomposability analysis, exaptive construction of functional structures, and robustness.
A physician's lack of humanity is a general complaint in public surveys. The physician-patient relationship is viewed by the public as being reduced to a business relationship where the patient feels that she is merely a 'client' and the physician a healthcare 'practitioner' instead of a 'care giver'. This public perception is not a phenomenon that is peculiar to Lebanon. Yet, the problem has been increasing over the years to the extent that patients feel that physicians are becoming inhumane and business oriented. While this might not characterize all physicians of the 21st century, this might be true of at least some. Responses were collected from a study that was undertaken based on a questionnaire distributed to a pool of 650 participants from different geographical areas and different social and educational backgrounds in Lebanon. Participants were all older than 18 years and mentally competent. None were physicians. The questionnaire was open-ended and initially piloted among a random sample. The physician traits most desired by the public were found to be: moral traits (41%), interpersonal traits (36%), scientific traits (19%) and other (4%). The most unwanted traits/behaviours were a lack of interpersonal traits (57%), a lack of moral traits (40%) and a lack of scientific skills (3%). The physician-patient relationship was perceived, in general, as being a flawed one. What can be done to remedy the image of the Lebanese physician that has been projected in the minds of the patients and the public at large? Nine major recommendations are presented.
Neuromoral theorists are those who claim that a scientific understanding of moral judgment through the methods of psychology, neuroscience and related disciplines can have normative implications and can be used to improve the human ability to make moral judgments. We consider three neuromoral theories: one suggested by Gazzaniga, one put forward by Gigerenzer, and one developed by Greene. By contrasting these theories we reveal some of the fundamental issues that neuromoral theories in general have to address. One important issue concerns whether the normative claims that neuromoral theorists would like to make are to be understood in moral terms or in non-moral terms. We argue that, on either a moral or a non-moral interpretation of these claims, neuromoral theories face serious problems. Therefore, neither the moral nor the non-moral reading of the normative claims makes them philosophically viable.
The theory of fast and frugal heuristics, developed in a new book called Simple Heuristics that Make Us Smart (Gigerenzer, Todd, and the ABC Research Group, in press), includes two requirements for rational decision making. One is that decision rules are bounded in their rationality – that rules are frugal in what they take into account, and therefore fast in their operation. The second is that the rules are ecologically adapted to the environment, which means that they 'fit to reality.' The main purpose of this article is to apply these ideas to learning rules – methods for constructing, selecting, or evaluating competing hypotheses in science – and to the methodology of machine learning, of which connectionist learning is a special case. The bad news is that ecological validity is particularly difficult to implement and difficult to understand. The good news is that it builds an important bridge from normative psychology and machine learning to recent work in the philosophy of science, which considers predictive accuracy to be a primary goal of science.