Synthese
DOI 10.1007/s11229-013-0292-2
Philosophical intuitions, heuristics, and metaphors
Eugen Fischer
Received: 24 August 2012 / Accepted: 26 April 2013
© Springer Science+Business Media Dordrecht 2013
Abstract Psychological explanations of philosophical intuitions can help us assess
their evidentiary value, and our warrant for accepting them. To explain and assess conceptual or classificatory intuitions about specific situations, some philosophers have
suggested explanations which invoke heuristic rules proposed by cognitive psychologists. The present paper extends this approach of intuition assessment by heuristics-based explanation, in two ways: It motivates the proposal of a new heuristic, and shows
that this metaphor heuristic helps explain important but neglected intuitions: general
factual intuitions which have been highly influential in the philosophies of mind and
perception but neglected in ongoing debates in the epistemology of philosophy. To do
so, the paper integrates results from three philosophically pertinent but hitherto largely
unconnected strands of psychological research: research on intuitive judgement, analogy and metaphor, and memory-based processing, respectively. The paper shows that
the heuristics-based explanation thus obtained satisfies the key requirements cognitive psychologists impose on such explanations, that it can explain the philosophical
intuitions targeted, and that this explanation supports normative assessment of the
intuitions’ evidentiary value: It reveals whether particular intuitions are due to proper
exercise of cognitive competencies or constitute cognitive illusions.
Keywords Philosophical intuition · Heuristic rules · Conceptual metaphor ·
Analogical inference · Experimental philosophy · Cognitive epistemology ·
Epistemology of philosophy
E. Fischer (B)
School of Philosophy, University of East Anglia, Norwich NR4 7TJ, UK
e-mail: e.fischer@uea.ac.uk
URL: http://eastanglia.academia.edu/EugenFischer
Intuitions play a central role in philosophy. They are frequently regarded as evidence
for or against philosophical claims (e.g. Bealer 1996; Pust 2000, general review:
Cappelen 2012, pp. 1–23). Philosophical theories are typically required to be consistent
with the intuitions of their proponents and to accommodate them in the light of tensions
with background beliefs or among each other (Fischer 2011). A central ambition of
the currently much discussed movement of experimental philosophy is to develop
psychological explanations of relevant intuitions that help us assess philosophers’
warrant for accepting them. As a recent manifesto puts it: ‘First we use ... experimental
[or survey] results to develop a theory about the underlying psychological processes
that generate people’s intuitions; then we use our theory about the psychological
processes to determine whether or not those intuitions are warranted’ (Knobe and
Nichols 2008, p. 8).
The currently most prominent strand of experimental philosophy proceeds by eliciting philosophically relevant intuitions through surveys, and seeks either to expose
conflicts between the intuitions of academic philosophers and plain men (e.g. Nahmias et al. 2006), or to determine the sensitivity of the elicited intuitions to parameters
including cultural, linguistic, socio-economic, or educational background (Alexander
and Weinberg 2007)—or to establish their instability within the same subject (Swain
et al. 2008, p. 335). But instead of first constructing psychological explanations of the
intuitions thus studied, these philosophers typically seek to derive normative assessments of these intuitions directly from the conflict, sensitivity, or instability results of
their surveys. Critics have called into question the normative or philosophical relevance of each of these kinds of results (e.g. Cole Wright 2010; Shieber 2010; Pinillos
et al. 2011; Williamson 2011).
A less prominent strand of work, which might usefully be dubbed ‘cognitive epistemology’, pursues the same central ambition by drawing on experiments and theories already available from cognitive and social psychology. This approach seeks
to explain—and assess—philosophically relevant intuitions as the results of cognitive processes for which psychologists have already provided experimental evidence
(e.g. Nagel 2010, 2011; Spicer 2007). The focus of this work has been on intuitions
about specific situations of the kind frequently considered in conceptual analysis and
thought experiments. One approach, tried tentatively by, e.g., Hawthorne (2004)
and Williamson (2005), is to explain such intuitions by reference to heuristic rules
posited by cognitive psychologists working within the ‘heuristics and biases’ research
programme.
The present paper extends this approach of assessment through heuristics-based
explanation, in two ways: It applies the approach to more general intuitive judgments
which are philosophically at least as important but have up to now been neglected in
the discussion of philosophical intuitions (Sect. 1); and it develops a fresh heuristic,
to do so. We will integrate concepts and findings from three hitherto disparate strands
of literature, viz. from work on intuitive judgments (Sect. 2), on analogical reasoning
and metaphor (Sect. 3), and on memory-based processing (Sect. 4). This integration
motivates the proposal of a fresh heuristic. The paper develops this ‘metaphor heuristic’
(Sect. 3) and shows how it helps account for general intuitive judgments that have
been highly influential in the philosophy of mind (Sect. 4). The paper establishes that
this heuristics-based explanation satisfies cognitive psychologists’ desiderata for such
explanations and facilitates the assessment of the intuitions explained (Sect. 5)—with
intriguing results.
1 Explaining and assessing intuitions: an approach and its extension
In philosophy as in psychology, intuition is contrasted with deliberate reflection.1
Practically all psychological research on intuition conceptualises intuitions as a kind of
judgments—which we may, but need not, be entitled to accept and which may, but need
not, provide evidential support for other judgments. We (i) do not control the processes
that give rise to intuitive judgments (Mercier and Sperber 2009) and (ii) are not even
conscious of those processes but only of the judgments in which they issue (Sloman
1996). In these two respects, intuitive judgments are like perceptual judgments, though
they do not involve the use of our five senses in anything like the same way. Many
cognitive psychologists define ‘intuitive judgments’ in terms of the kinds of processes
that generate them (e.g. Kahneman and Frederick 2005, p. 268; Evans 2010, p. 314);
others employ phenomenological definitions to pick out the intuitions they seek to
explain (e.g. Gigerenzer 2007, p. 16). To determine the explanandum independently
from the explanation, we shall seize on phenomenological properties that reflect the
two core features of lack of (i) control and (ii) conscious awareness of underlying
processes, and focus (like Gigerenzer, loc. cit.) on those judgments that are ‘strong’
or compelling ‘enough’ for thinkers ‘to act upon’, in deed, word, or thought, which
tend to go with high levels of subjective confidence (Thompson et al. 2011). In first
approximation:
Intuitions are non-perceptual judgments which thinkers make (i) spontaneously,
(ii) without being aware of making any inference or rehearsing any reasoning,
and (iii) find plausible or compelling.
Even when we find them so intuitively compelling that we unwittingly presuppose
them in further reasoning, we need not reflectively accept the judgments we spontaneously make. Accordingly, many philosophers who apply the term ‘judgment’ only
to reflective or controlled judgments prefer to characterise intuitions as ‘attractions to
assent’ (e.g. Sosa 2007b). But they are talking about the same kind of thing (as will
become clear upon introduction, in Sect. 2, of the ‘dual process accounts of cognition’ that increasingly inform the relevant psychological work). Proper explication of
the notions ‘non-perceptual’ and ‘judgment’ (which is beyond this paper’s scope) can
exclude, respectively, introspective reports and statements of recalled facts (perceptual
or testimonial knowledge), which prima facie appear to satisfy most brief explanations of ‘intuitions’, including the one above. This leaves us with the kind of ‘intuitive
insights’ philosophers seek to honour and psychologists try to explain.
Central contributors to current debates about philosophical intuition include,
among others, Goldman (2007), Jackson (2011), Hilary Kornblith (2007), Ernest Sosa
(2007a,b), Williamson (2007), and the experimental philosophers engaging with or
1 See Cappelen (2012, Chaps. 2–3), for comprehensive review of the various different uses of the term in
ordinary English and philosophy, respectively. Cp. also op. cit. 98–114.
engaged by them. All these philosophers have focused on conceptual, classificatory,
or modal intuitions, mainly about specific situations described and considered in the
context of a thought experiment. This focus is largely due to a desire, shared (e.g. Sosa
2007) or attacked (e.g. Kornblith 2007), to practise philosophy as an armchair science
that discovers truths through a priori reflection:2 Such reflection seems, by and large,
more apt to reveal truths about concepts and about what is necessary or possible than
about matters of contingent fact.3 Thus, Ernest Sosa restricts the scope of the kind of
‘rational intuitions’ whose use in philosophy he seeks to defend, to intuitions whose
‘content is explicitly or implicitly modal’ and then writes with commendable frankness: ‘One might quite properly wonder why we should restrict ourselves to modal
propositions. And there is no very deep reason. It’s just that this seems the proper
domain for philosophical uses of intuition’ (Sosa 2007a, p. 101).
Proper or not, the use of non-modal intuitions has, however, been at least as widespread and influential in philosophy. Classical texts in the philosophies of mind and
perception, for instance, early modern as well as twentieth century, abound with contingent and apparently non-conceptual claims about the workings of the mind and
perception, and precisely some of the most fundamental of these claims have been
maintained by several philosophers either without or independently from any argument, and seem to express intuitive judgments in the sense explained: non-perceptual
judgments their protagonists make spontaneously, without being aware of any underlying reasoning, and find so intuitively compelling that they rely on them in further
reasoning. To take particularly influential examples, historically, most philosophers
who maintained the merely apparently perceptual claim (see below, Sect. 4.2) that in
thinking we perceive ideas in the mind, or that attentive subjects know what they know,
understand, think, and believe, seem to have maintained them in this way (Fischer
2011, Chap. 2). As advanced by most of their proponents, these and related intuitions
about the mind are neither conceptual nor modal but thoroughly factual (about how
things actually are, rather than must or may be). They are also quite general in scope
(rather than about a specific situation like a Gettier case).
The intuitive nature of such general factual claims has attracted rather little scholarly
attention, almost exclusively in philosophical history and when an earlier philosopher
regarded claims as intuitively obvious which struck later scholars as rather wild, as with
the debate about the ‘intuitive basis’ of Berkeley’s idealism, initiated by Smith (1985)
and Fogelin (2001). However, the intuitive basis of philosophical beliefs arguably calls
for even closer scrutiny where the beliefs at issue are not idiosyncratic but widely influential. While their intuitive nature was not necessarily adduced as evidence, various
claims were confidently derived from the above and related intuitions about the mind,
and philosophical theories were required to honour or account for these general factual intuitions no less (if not more) than (for) conceptual and modal intuitions, and
other intuitions about specific scenarios (Fischer 2011). Since the philosophical relevance of the latter intuitions has been forcefully called into question (Cappelen 2012;
2 In some defenders of armchair philosophy, this desire is tempered by a co-operative naturalism
(e.g. Goldman) or transformed by scepticism about the viability of the a priori/a posteriori distinction
(e.g. Williamson).
3 For exceptions see e.g. Hawthorne (2002).
Williamson 2007), increasing attention to the neglected former kind may be timely.
Let’s extend the investigation of philosophical intuitions beyond the usual suspects
that dominate current debate in the epistemology of philosophy, and try to explain and
assess general factual intuitions about contingent but general facts.
The move from psychological explanation to the rather different and frequently
unrelated task of epistemological assessment is viable, however, only in quite specific
cases, to which we will now build up. The general factual intuitions about the workings
of the mind just mentioned have been reflectively endorsed by many thinkers who had
them. These intuitive judgments have then been ‘bedrock’ for those philosophers, i.e.,
these thinkers either refused to dig deeper or their spades turned the moment they tried
to do so (Fischer 2011):
The judgment that p is bedrock for a thinker iff he relies on it in argument and either
(i) regards it as being itself in no need of, or as incapable of, argumentative or evidentiary support, or
(ii) presupposes it directly or indirectly in what argument he subsequently adduces in
its support. (All his arguments to support the conclusion that p presuppose either
p itself or a claim q that he bases in turn on the assumption that p.)
When a thinker subsequently adduces arguments to support his judgments, intuitive
and other, his warrant for accepting them may derive from these arguments. But when
judgments are bedrock for a thinker, in the—subject-relative—sense explained, he has
no non-circular argument to positively support their acceptance. Whenever they are
contentious or at odds with other convictions of his, or of common sense, or otherwise
come to be in need of justification or support, a thinker therefore has warrant for
accepting bedrock intuitions to the extent, and only to the extent, to which the mere
fact that he has these intuitions speaks for their truth. So in case a controversial or
paradoxical intuition is bedrock for a thinker, his warrant for accepting it will depend
entirely upon whether his intuition has such probative force.
When a thinker’s intuition has such force is a matter of ongoing philosophical
debate. One of the least controversial sufficient conditions is that an intuition has
such force if it is ‘virtuous’: if it issues from the exercise of an ‘epistemic virtue’ or
cognitive competence which reliably has the thinker get things right about a certain
subject matter—a competence ‘to discriminate the true from the false reliably (enough)
in some subfield of. . .propositional contents’ (Sosa 2007b, p. 58). Many philosophers
accept, for example, competent speakers’ intuitions about whether protagonists of
Gettier cases qualify as ‘knowing’ a given proposition, and take for granted that these
intuitions issue from some such competencies of ordinary speakers. One can try to
defend this usually tacit assumption, e.g. by showing that Gettier intuitions are due
to the exercise, under suitable conditions, of natural ‘mindreading’ competencies of
attributing mental states, which are generally reliable despite predictable occasional
‘cognitive illusions’ (Nagel 2012; criticised by Stich 2012).
On the other hand, our intuitions have no probative force, e.g., in case they are
constitutive of such an illusion. Illusions are predictable and reproducible deviations
of perceptions, judgments, or memories, from relevant facts or normative standards.
In optical illusions, (i) things look different than they actually are (in the Müller–Lyer
illusion, one line looks longer than another of measurably equal length); (ii) it is not
random but predictable how people will misperceive relevant objects under relevant
conditions (which line will look longer); (iii) these misperceptions occur involuntarily
and (iv) are not influenced by better knowledge (one line looks longer than the other
to you, even when you know they are the same length). In cognitive illusions, (i)
thinkers make spontaneous judgments violating relevant normative rules (e.g. of logic
or probability theory) which define, determine, or constrain what is right or reasonable
to believe; (ii) thinkers do so in a predictable, rather than random fashion; while these
misjudgments can be modified and even completely corrected by conscious reflection,
(iii) they are automatic or involuntary in origin and (iv) subjects typically find them
intuitively compelling even once they have realised they cannot be right (Pohl 2004,
pp. 2–3).
The heuristics-and-biases programme in cognitive psychology seeks to explain
intuitive judgments by positing largely automatic cognitive processes governed by
heuristic rules that typically deliver accurate verdicts but lead to cognitive illusions, in
specific cases (Tversky and Kahneman 1974, p. 1124). Philosophers have invoked such
heuristics to explain away an apparently inconsistent pattern in our intuitive knowledge attributions: The moment non-actualised possibilities of error are mentioned, we
take back previously confident knowledge-attributions. To show that such retractions
are incorrect and support no conclusions about the concept or nature of knowledge,
Hawthorne (2004) and Williamson (2005) tentatively proposed an explanation that
attributes these intuitive judgments to a heuristic whose philosophical relevance had
already been noted by Vogel (1990): the ‘availability heuristic’ which has us assess
the probabilities of events according to the ease with which events of that type come
to mind (Tversky and Kahneman 1973). Jennifer Nagel has recently criticised this
explanation and proposed an alternative psychological explanation which invokes a
bias known as ‘epistemic egocentrism’ (Royzman et al. 2003) and also allows us to
dismiss the common shift in epistemic intuitions as a cognitive illusion engendered
by an otherwise generally reliable competence (Nagel 2010).
Hence, when particular intuitions are in need of justification but bedrock for a given thinker, we can sometimes assess whether she has warrant for accepting them,
by explaining why she has them: We can show them ‘virtuous’ by tracing them back to
the exercise of cognitive competencies in propitious circumstances; and we can show
them ‘treacherous’ by exposing them as cognitive illusions engendered under particular conditions, or in particular ranges of cases, by cognitive processes governed by
specific heuristic rules.4 This approach is particularly pertinent for bedrock intuitions
constitutive of ‘hidden paradoxes’: for intuitions which appear too obviously true to
merit further support but actually are inconsistent with common-sense beliefs (or with
other intuitions), and therefore need further justification which their champions either
regard as superfluous or fail to provide, as their possibly half-hearted efforts yield only
circular arguments.
With a focus on such intuitions, the present paper will extend the approach of epistemological assessment by psychological explanation. It will develop a fresh heuristic
that helps to explain general factual intuitions whose hidden or overt clash with common sense (or science) has given rise to important philosophical problems—and which therefore deserve at least as much attention as the situation-specific, modal and ‘conceptual’ intuitions on which the debate in the epistemology of philosophy has focused so far. To develop the fresh ‘metaphor heuristic’, the paper will integrate concepts and findings from two influential but hitherto largely distinct strands of psychological literature (whose defence against philosophical objections will have to be left for another occasion): from work on intuitive judgment which has recently come to attract significant philosophical attention (Sect. 2), and from work on metaphor processing and analogical reasoning (Sect. 3)—that driving engine of creative thought which, perhaps surprisingly, has been studied far more extensively in psychology than in philosophy.
4 This is emphatically not intended as an exhaustive dichotomy.
2 Explaining intuitions: heuristics
In cognitive psychology, intuition research is dominated by two partially competing
but largely complementary (Read and Grushka-Cockayne 2011) research programmes:
the heuristics and biases programme (Tversky and Kahneman 1974, 1983; Kahneman
and Frederick 2002, 2005; Kahneman 2011) and the adaptive behaviour and cognition (ABC) programme (aka ‘fast and frugal heuristics’, Gigerenzer and Todd 1999;
Gigerenzer 2007, 2008). Both seek to explain intuitive judgments as resulting from the
largely automatic application of heuristic rules which can be used to answer many different kinds of questions (multi-purpose heuristics). While normative rules (of logic,
probability theory, morality) define, determine, or constrain which answer, solution,
decision, or action is right, heuristic rules are rules of thumb which typically yield
reasonably accurate results, but do not define or constrain what is right. While we
employ many such rules in explicit reasoning (Gigerenzer 2008), such rules also govern automatic cognition.
Automaticity is a complex notion which has been tied to several distinct, partially
linked features (Bargh 1994; Moors and De Houwer 2006). The first and most basic
of these is widely used for an operational definition of automatic processes (Evans
2008, p. 259): A cognitive process is
• effortless iff its execution requires no attention or other limited cognitive
resource—and is hence not impaired when the subject is distracted by tasks requiring such resources (say, keeping in mind long numbers),
• unconscious to the extent to which the subject is unable to report the course of the
process, as opposed to articulating its outcome (judgment, decision, etc.),
• non-intentional iff the process is initiated regardless of whether or not the subject
wants to, namely regardless of what aims or goals she pursues,
• autonomous or ‘uncontrolled’ iff once the process has started, the subject cannot
alter its course or terminate it before it has run its course.
Effortless processes also tend to be executed more rapidly than effortful processes.
Since some processes possess some but not all of these features, and different
processes possess them to different extent (e.g., are more or less strongly impaired by
multi-tasking), the distinction between automatic and controlled cognition does not
constitute a neat dichotomy.5 However, many of these features co-occur or co-vary and
deserve a joint label: cognitive processes that are relatively effortless, unconscious,
and non-intentional are called spontaneous (in particular in social psychology, Uleman
et al. 2008).6 When also autonomous, they are ‘fully automatic’.
Many fully automatic processes get routinely or continually carried out in sense-perception or language-comprehension, even in the absence of specific tasks set. Frequently, the outcomes of such natural processes are positively correlated with answers
to various questions. Take a straightforward example: In a modest sense of the term,
‘recognition’ is automatically assessed by a natural process—when hearing a name or
seeing a person we cannot help having a gut feeling about whether we have encountered this name or person before (regardless of whether we recall where and when).
By and large, bigger cities and more successful athletes receive more media attention, so we are more likely to have encountered their names before. Hence, in game
shows and elsewhere, it makes good sense to employ this ‘recognition heuristic’: If
you recognise one of two objects (cities, athletes, etc.) but not the other, judge that
the recognised object has the higher value (is bigger, more successful, etc.) (Goldstein
and Gigerenzer 2002). Such judgment heuristics are simple strategies for obtaining
answers to a variety of questions (Which of two cities is bigger? Which of two players
is more successful? etc.) from the ‘natural’ (Tversky and Kahneman 1983) or ‘basic
assessments’ (Kahneman 2011) that are continually generated by natural processes.
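For readers who find a schematic rendering helpful, the recognition heuristic can be sketched as a simple decision rule. The `recognised` predicate below stands in for the automatic gut feeling of familiarity; the city names and the set of recognised names are purely hypothetical:

```python
def recognition_heuristic(a, b, recognised):
    """Sketch of the rule reported by Goldstein and Gigerenzer (2002):
    if exactly one of two objects is recognised, judge that the
    recognised object has the higher criterion value."""
    if recognised(a) and not recognised(b):
        return a
    if recognised(b) and not recognised(a):
        return b
    # The heuristic is silent when both or neither object is recognised.
    return None

# Hypothetical example: which of two cities is bigger?
known = {"Munich"}  # names we have the feeling of having encountered
guess = recognition_heuristic("Munich", "Gersthofen", lambda x: x in known)
print(guess)  # → Munich
```

Note that the rule itself performs no effortful computation: it merely reads off the output of the automatic familiarity assessment, which is what makes its application plausible as a spontaneous process.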
Several experiments suggest the application of such strategies is effortless (De
Neys 2006; Pachur and Hertwig 2006; Volz et al. 2006): Regardless of whether they
are right or wrong, we give more responses consistent with the relevant heuristic when
put under time-pressure or required to multi-task; and when and where the heuristic
yields a wrong response, the subjects who avoid this mistake require significantly
more time for their—correct—response. Apparently, all participants in these experiments first apply the pertinent heuristic in an effortless and rapid process and some
of them then correct the outcome, where necessary, in a further, effortful step which
takes a few seconds and falls by the wayside under the pressures of multi-tasking.7
Subjects cannot report application of the rule when first responding, but frequently
give reasons consistent with a heuristic rule (‘At least I’d heard of that one before’)
when subsequently asked to justify or explain their answers to previous forced-choice
5 One hence needs to beware of simplistic interpretations of the increasingly influential ‘dual process
accounts of cognition’ (Evans 2008; Evans and Frankish 2010) in which Kahneman and Frederick (2002,
2005) embed earlier work on heuristics and biases and which some philosophers take up to explain philosophical intuitions (e.g. Nagel 2011; Pinillos et al. 2011), while leading proponents of the ABC programme
reject these accounts (Gigerenzer and Regier 1996; Gigerenzer 2009; further criticism: Keren and Schul
2009; Osman 2004). See Evans and Stanovich (2013) for the current state of the debate which is beyond this
paper’s scope. We will develop an explanation largely consistent with but not reliant upon such accounts.
6 When found compelling, non-perceptual judgments generated by spontaneous processes satisfy our above
definition of ‘intuition’.
7 In these experiments subjects were not provided with information that would have allowed them to
make judgments based on normative rules—which could hence only be applied to outputs of prior heuristic reasoning or processing. Where such information (e.g. base rates) is provided, heuristic and analytic
processes may generate competing responses in parallel (De Neys and Glumicic 2008). The present paper is
consistent with but does not rely on the purely ‘default-interventionist’ architecture (Evans 2007) endorsed
by Kahneman and Frederick (2002, 2005).
questions. We thus appear to face a sequence of two to three processes of decreasing
automaticity: a fully automatic natural process produces outputs used by a heuristic rule which is applied largely spontaneously and produces an intuitive judgment
that may, where necessary, be subsequently corrected or modified through effortful,
typically conscious, reflection.
While frequently useful, some of the proposed heuristics are under some conditions
bound to lead to results that violate pertinent normative rules and cannot be right. Their
application will result in cognitive illusions or predictable biases. Within the heuristics
and biases programme, the spontaneous application of heuristic rules is attributed to
subjects to predict and explain such biases or non-random, systematic fallacies. The
reproduction of such fallacies in behavioural experiments like the famous ‘Bill and
Linda study’ (Tversky and Kahneman 1983) is taken to support the hypothesis that
the proposed heuristic is indeed being applied unconsciously. Such experiments are
taken to support such hypotheses the more strongly, the more surprising the fallacies
are that the hypotheses predict and the experiments reproduce.8 Since the effortless
rapid application of heuristic rules may be followed by effortful conscious correction
or modification of the outputs thus obtained, heuristics-based explanations of intuitive
judgments should not only predict otherwise surprising fallacies and explain how the
heuristic led to such wrong judgments, but should also predict when, and explain
why, this error was either not detected or only insufficiently corrected (Kahneman and
Frederick 2002, p. 52).
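The normative rule violated in the Bill and Linda study is the conjunction rule of probability theory: a conjunction can never be more probable than either of its conjuncts. A minimal check, with made-up probability values for illustration only:

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)).
# Hypothetical probabilities for the Linda case:
p_teller = 0.05          # P(Linda is a bank teller)
p_teller_and_fem = 0.02  # P(Linda is a bank teller AND an active feminist)

def violates_conjunction_rule(p_conjunct, p_conjunction):
    """A ranking violates the rule if it places the conjunction
    above one of its conjuncts."""
    return p_conjunction > p_conjunct

# Any probabilistically coherent assignment respects the rule:
print(violates_conjunction_rule(p_teller, p_teller_and_fem))  # False
# The typical intuitive ranking in the Linda study does not:
print(violates_conjunction_rule(0.05, 0.10))  # True
```

The point of the sketch is that the violation is detectable purely formally, regardless of what Linda is actually like; this is what makes the fallacy systematic and reproducible rather than a matter of disputed fact.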
Where heuristics lead to judgments constrained by normative rules, people with a
higher IQ are more likely than people with a lower IQ to check and correct their intuitive
judgments, when cued to do so (Stanovich 2011). But they are not more likely to engage
in controlled rather than automatic cognition in the absence of such cues (Stanovich and
West 2008). Philosophers should hence beware of assuming that superior intelligence
might inure them against the pull of the conclusions they spontaneously leap to in line
with heuristic rules. Nor should we assume that logical reasoning will prevent us from
unwittingly falling back on heuristics: Hypothetical and counter-factual reasoning do
indeed seem to require effortful and conscious cognition, which may pre-empt, reduce,
or override the spontaneous application of heuristic rules (Evans 2008; Sloman 1996).
But much philosophical reflection involves only simpler logical operations, which
are demonstrably capable of non-intentional and unconscious execution: negation
(Deutsch et al. 2009), universal instantiation, and modus ponens inferences (Reverberi
et al. 2012), in particular from assumptions about familiar cases or topics, believed
(rather than supposed) true, etc. In view of extant work on intuitive judgment, it would
be outright surprising if heuristics were not used in philosophy.
This paper seeks to develop a heuristic that can explain philosophically particularly
relevant intuitions in a way that meets the several requirements imposed by cognitive
psychologists pursuing the heuristics and biases programme: We will build up to a
fresh multi-purpose judgment heuristic which seizes on natural processes to generate
answers to many different questions (Sect. 3), and use this heuristic to develop an
8 Within the ABC programme, researchers seek to predict and explain different but no less surprising
effects, e.g. ‘less-is-more effects’ where subjects perform better when possessing less relevant information
(Goldstein and Gigerenzer 2002).
explanation of the philosophical intuitions targeted, which lets us successfully predict
surprising fallacies in the reasoning of highly capable thinkers (Sect. 4) and understand
thinkers’ typical failure to notice or properly correct the mistakes identified (Sect.
5). Indeed, the proposed heuristic will also meet the requirements imposed by the
competing ABC programme (Sect. 3.2).
3 The metaphor heuristic
The fresh heuristic to which we will build up is the ‘metaphor heuristic’. To build
up to it, we will use some recent findings from another area of cognitive psychology, to develop into a respectable hypothesis an iconoclastic idea mooted by Ludwig
Wittgenstein and subsequently taken up to varying extent by Rorty (1980) and Lakoff
and Johnson (1999), cp. Johnson (2008), among others: In philosophical reflection, we
are frequently guided by metaphors and ‘analogies in language’, without being aware
of being guided by anything of the sort (Wittgenstein 2005, pp. 409, 427). With the help
of concepts and findings from metaphor and analogy research in cognitive psychology and linguistics, we will now develop this hunch into an empirical hypothesis with
proper theoretical motivation and empirical support. To do so, we will identify unconscious inferences that may have us employ analogies without being aware of doing so
(Sect. 3.1) and consider how such inferences may forge ‘analogies in language’ which
encourage us to make further such inferences (Sect. 3.2).
3.1 Unconscious analogical inference
The familiar solar-system model of the atom illustrates the notion of ‘analogy’: In this
model, electrons correspond to planets and the nucleus to the sun. Electrons do not
share any object attributes with planets, i.e., do not satisfy the same 1-place predicates
(like “x is round”). The same applies to the sun and the nucleus (which is neither round
nor yellow). But, according to the model, the electrons x stand in some of the same
relations to the nucleus y as the planets to the sun: x revolves around y, x is attracted by
y, and y has a greater mass than x. These first-order relations obtain between individuals
and stand, in turn, in higher-order causal relations: x revolves around y, because x is
attracted by y, and y attracts x because it has a greater mass. Such a first-order analogy
obtains between two domains, the source domain A (here: the solar system) and the
target domain B (the atom), just in case elements of A (planets and sun) can be mapped
onto elements of B (electrons and nucleus) in such a way that they stand in some of
the same relations, but share few or no object attributes or properties (Gentner 1983,
p. 159). Below (in Sect. 3.2), we will encounter second-order analogies which obtain
between domains when their elements stand in different first-order relations, but these
can be mapped onto each other in such a way that they stand in some of the same
second-order relations of a causal or logical nature.
Analogies can be used for inferences to conclusions about the target domain.
According to all major psychological theories of analogical reasoning, such reasoning about a target domain (say, atoms) involves at least three steps: First, a suitable
source-model (e.g. the solar system) is identified and knowledge about it is retrieved
from memory. Second, model and target are aligned and elements of the source-model
(planets, sun, relations between them) are mapped onto elements of the target domain
(electrons, nucleus, etc.). Third, the actual inferences are made (Holyoak 2005).
The first two steps are subject to the two constraints of semantic and structural
similarity (Gentner et al. 1993; Holyoak and Koh 1987; Ross 1989). In thinking about
a target domain, we are more likely to seize on a potential source domain when we
apply the same or semantically related (one- or more-place) predicates in both. In
particular when several different source models are semantically sufficiently similar
to the target to be potentially pertinent, subjects choose the one most structurally similar to the target (Wharton et al. 1994, 1996). The same constraints guide alignment
and mapping: According to influential computational models of analogical inference
(including SME, Falkenhainer et al. 1989; Forbus et al. 1995), we first correlate source- and target-domain elements to which the same concepts apply, and then prune these
correlations and add new ones by imposing a two-fold requirement of structural consistency.9 Finally, mappings are preferred when they systematically map a whole
system of lower-order relations as well as several higher-order (causal or logical, etc.)
relations that link or constrain them.10
By establishing correspondences that satisfy these constraints, we identify relational structures which source and target domain share. This elicits a joint structure
or pattern in them. The actual inference then proceeds by spontaneous completion of
this pattern: All major computational models of analogical inference use some version
of the process of copying with substitution and generation (CSG, Falkenhainer et al.
1989; Hummel and Holyoak 2003):
CSG
1. When a source-domain element A has been mapped onto a target-domain element
B, all the relations in which A stands in the source domain, plus relevant relata,
are transferred onto B, into the target-domain (‘copying’).
2. As far as possible, relations and relata from the source domain are replaced by those
elements of the target-domain onto which they have been mapped (‘substitution’).
3. Where an element of the source domain could not be mapped onto any element
of the target domain, however, it is simply carried over identically, and new elements are postulated in the target domain, to the extent necessary to complete the
transferred structure (‘generation’).
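The three CSG steps might be sketched, very roughly, in code. The relational representation and the solar-system facts below are illustrative simplifications of my own, not part of any published model (such as SME or LISA):

```python
def csg(source_facts, mapping):
    """Transfer source-domain relational facts into the target domain.

    source_facts: set of tuples like ('revolves_around', 'planet', 'sun')
    mapping: dict sending source elements onto target elements
    Returns the inferred target-domain facts.
    """
    inferred = set()
    for relation, *args in source_facts:       # 1. copying
        new_args = []
        for a in args:
            if a in mapping:                   # 2. substitution
                new_args.append(mapping[a])
            else:                              # 3. generation: posit a
                new_args.append('posited_' + a)  # counterpart in the target
        inferred.add((relation, *new_args))
    return inferred

solar_system = {
    ('revolves_around', 'planet', 'sun'),
    ('attracts', 'sun', 'planet'),
    ('greater_mass', 'sun', 'planet'),
    ('made_of', 'sun', 'hydrogen'),  # 'hydrogen' has no mapped image
}
atom = csg(solar_system, {'planet': 'electron', 'sun': 'nucleus'})
# atom now contains, e.g., ('attracts', 'nucleus', 'electron')
```

The unmapped element ‘hydrogen’ illustrates generation: since no target-domain counterpart is available, a new element is postulated to complete the transferred structure.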
Experimental findings on analogical reasoning (incl. Markman 1997; Gentner and
Kurtz 2006; Gentner and Bowdle 2008) and metaphor processing (next section) are
consistent with the assumption that our analogical reasoning is actually governed by
these rules and the above constraints: Subjects have a pronounced tendency to map
9 When a property or relation in the source domain (say, ‘x orbits y’) is mapped onto a property or relation in
the target domain (itself, if available), their arguments or relata (be they individuals—like planets and sun—
or lower-level properties or relations) have to be placed in correspondence too (‘parallel connectivity’); and
each element (individual, property, relation) in one domain must be placed in correspondence with at most
one element in the other domain (‘one-to-one mapping’).
10 In the process, both source and target may get ‘re-represented’: Where the two domains do not share
the same but similar relations, these may either get subsumed under more generic concepts or analysed into
simpler concepts, which apply in both domains (Falkenhainer et al. 1989; Forbus et al. 1995).
entire systems of relations and make inferences which complete the common system of relations (Clement and Gentner 1991; Spellman and Holyoak 1996). When the first-order relations involved stand in some of the same higher-order relations, we are readier to accept claims about the model as premises of analogical inferences about the target (Gentner et al. 1993) and judge their conclusions more credible (Lassaline 1996). Where mappings connect semantically similar elements and meet the consistency requirements, we are particularly likely to draw analogical inferences (Gentner and Markman 2005).

Table 1 Prompts for analogical inferences

Source text [extracts]:
When the widow-heiress Margaret Haverty died suddenly, there were some who suspected foul play…
[A] As soon as her death was announced, her niece Helena mysteriously booked a flight from Amsterdam, where she also lived, off to Naples
[or]
[B] As soon as her death was announced, her niece Helena respectfully booked a flight from Naples, where she lived, to Amsterdam for the funeral

Target text [extracts]:
The death of wealthy old Alexander van Houten was abrupt and unexpected, and there were rumours that it might not be of natural causes….
Surprisingly, when his death was announced, Mr. van Houten’s nephew George immediately bought a ticket and flew to Rio de Janeiro
Until about 15 years ago, cognitive psychologists generally assumed that such analogical reasoning is always conscious and controlled. But recent text-comprehension
and problem-solving experiments suggest we also make analogical inferences non-intentionally and without being aware of making any inference or relying on any
analogy (Day 2007; Day and Gentner 2007; Thibodeau and Boroditsky 2011): Without being set a specific task, participants in one experiment were asked to read first a
source text and then a target text with similar content and structure, e.g. two stories
about suspicious deaths (Day and Gentner 2007, exp. 1). The target text contained
some vague or ambiguous passages which could be reasonably interpreted in the light
of analogous passages in the source text, which were less vague or even unambiguous. As
a source text, two groups were each given a different version of the same story (which
included passages A and B below, respectively), which invite different interpretations
of analogous passages in the target text (Table 1).
Subsequently participants were given the recognition task of telling which of several
‘facts’ on a list had been stated by a text they had read. The list included ‘new’ items that
were not stated by any text but could be obtained by interpreting vague or ambiguous
passages in the target text, in the light of corresponding passages in the two different
source texts. Thus, the list included two statements that can be obtained by primarily
relational matching and CSG inference from the source passages A and B, respectively
(Table 2).11
Table 2 Analogical conclusions

From A: George had been in the same city as his uncle but left town when the death was announced.
From B: George was among those who came into town for Alexander van Houten’s funeral.

11 In similar studies, including the one mentioned below (Blanchette and Dunbar 2002, p. 675), experimenters explicitly used CSG to generate the test sentences.
In 72.5 % of the cases where the new statement could be inferred in this way from
the source text they had read, subjects erroneously attributed it to the target text. By
contrast, only 25 % of the new statements not thus obtainable were misattributed. The
experimenters inferred that subjects had drawn analogical inferences from the source
texts they had previously read. These inferences were not conscious: When asked ‘Did
you feel that each of the passages was completely understandable on its own, or did
you find yourself referring to previous stories in order to understand later ones?’ 80 %
of participants stated that all passages were completely understandable on their own.
Reading speed measures suggest that these typically unconscious inferences are already made when subjects first read the target text. In another experiment (Day and Gentner 2007, exp. 3), an additional sentence was inserted further down in the target text:
George’s absence from the service was conspicuous, especially since he had been seen around his
uncle’s estate prior to his death, and the police soon found out about his flight to Rio.
While readily intelligible as part of the story of a get-away (in analogy to source text
A), this sentence makes little sense in the context of a story about a respectful relative
flying in for the funeral (in analogy to source text B). Readers of the incompatible
source text B needed significantly longer to read this sentence in the target text—
presumably because, already on first reading the earlier target passage reproduced above (“Surprisingly…”), they had made analogical inferences that render this sentence difficult to understand. In a similar study (Blanchette and Dunbar 2002), in
which the analogous nature of source and target was, however, explicitly pointed out,
participants who had previously read an analogous source text needed longer to read, in
a target text, the sentences that could serve as premises of analogical inferences: Probably, analogical inferences (with relational matching and CSG) were already made
unconsciously when subjects first read those potential premises (op. cit. 680–681). As another study revealed, such inferences are made irrespective of whether their conclusions are consistent with, or at odds with, subjects’ (previously assessed) attitudes (Perrott et al. 2005).
It seems these typically unconscious inferences are not under control. Arguably, they
are automatic.
At any rate, when people, like the participants in these experiments, are not given any specific practical task, they seem, as a default, to be open to fresh models and analogies and to automatically deploy suitable analogies they come across, to make sense of texts and statements. Indeed, the experimenters concluded that what they dubbed ‘non-intentional analogical reasoning’ is involved in the ‘routine task of organising and
interpreting our daily experiences’ (Day and Gentner 2007, p. 39). Some analogical
reasoning seems to be constitutive of ‘natural processes’ which are automatically and
routinely executed as part of language comprehension or sense-perception (ibid.), and
may be involved as sub-component in a variety of other cognitive processes, such
as categorisation (Gentner and Markman 2005, p. 8). It is, it seems, just the kind of
process that can be recruited for heuristic inferences (Sect. 2).
3.2 Conceptual metaphor and a fresh heuristic
In the above experiments subjects unwittingly seized on models provided by experimenters for the purpose—rather ad hoc affairs unlikely to influence subjects’ thought
outside the specific situation. We will now identify a class of more widely influential models: models which are ‘written into’ language, are demonstrably employed in
non-intentional analogical reasoning in the process, and are then liable to guide such
reasoning more widely.
In many natural languages, the use of entire families of related expressions is
extended from concrete source- to more abstract target-domains, and is extended so
systematically that inferential relations between the family-members are preserved
(Sweetser 1990; Sullivan 2007). Like many other languages, English seems to owe a
large part of its mental vocabulary to such metaphorical extension of terms initially
applied to visual perception and to manual operation (as you may, e.g., ‘see’ or ‘grasp’
my point, Jäkel 1995). Regardless of the extent to which such processes of extension are
historically real and replicated for one language after another, the fact potentially due
to them is evident: The same sets of related terms are applied in different domains. For
instance, terms used to talk about visual search are also employed, wholesale, in talk
about goal-directed intellectual efforts: efforts to solve problems, answer questions,
explain or understand facts, events, or actions. About the latter we can say such things
as:
It is clear to me why you acted that way, when I manage to see your reasons—
and obscure when I fail. I may look for reasons where these are hidden or be
blind to reasons in plain view. An illuminating explanation throws new light on
your action and lets me discern crucial reasons I had previously overlooked, get
a fuller picture, or at least catch some glimpse, of reasons about which I was
previously completely in the dark. A fresh look at your situation may, e.g., reveal
threats in whose light your action no longer looks as odd as it did at first sight.
The same inferential relations between the terms involved then obtain both in the
concrete (source) and in the more abstract (target) domain. E.g., regardless of whether
I am looking for my keys or your reasons, I cannot see what is lying in the dark, and
something that sheds light may help me to see it.
The use of these terms in the abstract domain is taken to be ‘motivated’ by conceptual metaphors: cross-domain mappings that preserve relations (Lakoff and Johnson
1999; Kövecses 2012). They map elements (individuals, properties, and, especially,
relations) of a usually more concrete source domain (like visual search) onto elements
of a usually more abstract target domain (here: intellectual effort), and do so in such a
way that the first-order relations mapped stand among each other in some of the same
higher-order relations of a causal or logical nature.
Such comprehensive mappings are systematic affairs. The most elementary analogical inferences generate further mappings from such ‘basic mappings’ as:
(1) S sees x → S knows x
The first mappings made in analogical reasoning correlate elements of the different domains to which the same concepts apply (Sect. 3.1). Prior to metaphorical
extension, they hence map attributes and relations that obtain in both source- and
target-domain onto themselves; according to the rules of CSG, these attributes and
relations get ‘substituted’ by themselves (and are, in effect, simply copied). The most
elementary CSG inferences involve only copying with substitution (CS), no generation, and employ only such ‘mappings onto self’ plus a basic mapping (like (1)
above), while (non-relational) logical and modal operators, which also pertain to both
domains, simply get copied. Such elementary CS inferences can proceed from either
closed or open sentences. In the latter case, attributes and relations designated by the
premises can be mapped onto those designated by the conclusions, yielding further
mappings:
(2) S does not see x → S does not know x
(3) It is possible for S to see x, i.e., x is visible for S → It is possible for S to know x
(4) It is not possible for S to see x, i.e., x is invisible for S → It is not possible (for
S) to know x
(5) X makes it possible for S to see y, i.e., x makes y visible for S → X makes it
possible for S to know y
(6) X makes it impossible for S to see y, i.e., x makes y invisible to S → X makes it
impossible for S to know y
(7) S tries to get to see x, i.e., S looks for x → S tries to get to know x
Elementary CS also delivers conclusions where domain-neutral operators, or relations
obtaining in both domains, are capable of adverbial qualification. This yields further
mappings, such as:
(3*) It is readily possible for S to see x, i.e., x is readily visible to S → It is readily possible for S to know x
The basic mapping and the mappings obtainable from it through elementary CS are
jointly constitutive of conceptual metaphors like the present metaphor Intellectual
Effort as Visual Search.12
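Under a toy string representation (my own shorthand, not a published formalism), elementary CS can be sketched as word-by-word substitution under the basic mapping (1), with negation, modal operators, and variables copied identically; applied to source-domain schemas, it yields mappings like (2), (3*), and (7):

```python
def cs(schema, basic=None):
    # Substitute source-domain terms by their images under the basic
    # mapping (1); everything else (negation, modal operators,
    # variables) is copied identically.
    basic = basic or {'see': 'know'}
    return ' '.join(basic.get(word, word) for word in schema.split())

print(cs('S does not see x'))                       # mapping (2)
print(cs('It is readily possible for S to see x'))  # mapping (3*)
print(cs('S tries to get to see x'))                # mapping (7)
```

Here ‘does’, ‘not’, ‘possible’, and the variables S and x have no image under the basic mapping and so are simply copied, exactly as the text describes.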
Conceptual metaphors are linguistically realised to the extent to which they motivate the metaphorical extension of terms from their source to their target domains.
Such extension is motivated by implications:13 We frequently say things by implying
them without stating them, relying on our interlocutor to draw the requisite inference.
12 Despite its frequently noted importance, this metaphor has not yet been reconstructed in any detail. See Fischer (2011, pp. 22–28, 41–49), for a more detailed discussion, also of its relation to other ‘mind metaphors’.

13 The idea outlined in this paragraph is developed, e.g., by the theory of ‘information-based processing’ (Budiu and Anderson 2004, 2008), see Sect. 4.1. This ‘unified account’ of metaphor and literal understanding is consistent with the ‘career of metaphor’ hypothesis below.

E.g. most people associate attributes like strength, courage, and nobility with lions, and reliably infer from ‘x is a lion’ that x is strong, courageous, and noble. Hence we may say ‘Achilles is a lion’ to say that Achilles is strong, courageous, and noble, and thus metaphorically extend the application of ‘x is a lion’ to heroes and humans whom we take to share some or all of these properties with those animals. Metaphorical extension of relational terms to their salient implications can motivate the basic mappings of conceptual metaphors: Typically, when you see something (happening), you know it (happens).

Conceptual metaphors facilitate a plethora of CSG inferences which forge fresh implications that motivate (further) such metaphorical extension. Take a familiar fact about the present source domain of visual search: When things stand right in the way, in front of us, they are, by and large, easily visible. That things are ‘right before your eyes’—or ‘obvious’, used until the eighteenth century to mean ‘standing in the way, positioned in front of, opposite to, facing’ (OED)—hence implies that they are easily visible. Straightforward CSG inference takes us from this premise to the conclusion that when things are ‘right before your eyes’ or ‘obvious’, they are easily knowable. Let’s represent such CSG inferences through ‘grids’ (as in Table 3).

Table 3 Analogical inference grid

Source-domain premise | Operation                 | Target-domain conclusion
X is obvious          | Generation                | X is obvious
Implies (1;3)         | Substitution: identical   | Implies (1;3)
X is easily visible   | Substitution: mapping 3*  | X is easily knowable
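In the same spirit, the grid in Table 3 can be sketched as a CSG inference over a conditional: the antecedent ‘X is obvious’ has no mapped counterpart and is carried over identically (‘generation’), while the consequent is substituted via mapping (3*). The representation is my own illustrative shorthand:

```python
def csg_conditional(antecedent, consequent, mapping):
    # Antecedent: no constitutive mapping applies, so it is carried
    # over identically ('generation'); the consequent is substituted
    # phrase by phrase under the given mapping.
    for source_phrase, target_phrase in mapping.items():
        consequent = consequent.replace(source_phrase, target_phrase)
    return (antecedent, 'implies', consequent)

mapping_3_star = {'easily visible': 'easily knowable'}
conclusion = csg_conditional('X is obvious', 'X is easily visible',
                             mapping_3_star)
# → ('X is obvious', 'implies', 'X is easily knowable')
```

The source-domain term ‘obvious’ survives in the antecedent of the conclusion, which is how, on the account in the text, it acquires a metaphorical implication about the target domain.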
When, as here, CSG inferences proceed from conditional open sentences about the
source domain, ‘generate’ or carry over the antecedent, and substitute the consequent,
they endow terms employed in talk about the source domain (which remain in the
antecedent of the conclusion) with metaphorical implications about the target domain
(specified by the substituting terms in the consequent).14 To interpret the fresh use of
the former ‘source-domain terms’ in talk about the target domain, competent speakers
spontaneously seize on such implications and infer, e.g., from ‘the solution is obvious’
that it is easy to get to know. Hence speakers could unproblematically start to say the
former to state the latter. In these ways, conventional and metaphorical implications
motivate the metaphorical extension of terms, in talk of heroes and solutions, respectively. Such extension ‘writes into language’ similarities between different kinds of
things and analogies between different domains.
Crucially, conceptual metaphors and analogical reasoning mutually reinforce each
other: Recall (from Sect. 3.1) that we are the more likely to draw analogical inferences
based on a particular mapping, the more semantically similar the elements connected
by the mapping are, the more the mapping meets the consistency requirements, and
the more relations it maps together with higher-order relations connecting them. CS
14 p has the metaphorical implication q* iff p→q* can be obtained through CSG from a truth p→q about
the source-domain of a conceptual metaphor whose constitutive mappings license substitution of or in q
yielding q*, but license no substitution of or in p. ‘→’ designates deductive or inductive inference. (My definition).
inferences expand conceptual metaphors in a way that includes both first-order relations and (logical) higher-order relations between them, and satisfies the consistency
requirements.15 The mappings obtained facilitate ever more CSG inferences; many
of these forge metaphorical implications which motivate the metaphorical extension
of again more terms, thus increasing the semantic similarity between the domains
involved. The more suitable CS and CSG inferences we make, the more extensive,
and extensively realised, respectively, a conceptual metaphor becomes; and the more
this happens, the more likely we are to make CSG inferences about the metaphor’s
target from premises about its source domain. The source domains of extensively
developed and realised conceptual metaphors are particularly likely to be employed
as models in analogical reasoning—which may in turn extend them.
Experimental findings on so-called ‘metaphor consistency effects’ (Boroditsky
2000; Gentner et al. 2002; Boroditsky and Ramscar 2002; Gentner and Bowdle 2008,
pp. 113–115) contributed to motivating the career of metaphor hypothesis (Bowdle and
Gentner 2005; Gentner and Bowdle 2008): When freshly coined, metaphorical expressions are initially understood on the basis of the conceptual metaphor that motivates
their introduction, and of automatic analogical (CSG) inferences employing its constituent mappings. With some as yet little understood exceptions, which include well-established spatial time metaphors, these metaphorical expressions then typically come
to be independently lexicalised and understood in the same way as literally used expressions, once their metaphorical use becomes conventional. If this is correct, CSG inferences which proceed from premises about the source-domain of conceptual metaphors
and employ their constitutive mappings are routinely made automatically, in language
comprehension, at least whenever metaphorical expressions are fresh. I.e.: If this is
correct, not only the non-intentional analogical inferences we considered above but
also this specific kind of CSG inferences is constitutive of a ‘natural process’ (Sect. 2).
Inferences of this kind are not only involved in language development and comprehension, however. Rather, such language development further facilitates the use
of conceptual metaphors in analogical problem-solving. In the most thorough experimental study to date (Thibodeau and Boroditsky 2011), participants were asked how
to reduce crime in an imaginary city, after reading texts outlining its crime statistics
and recent developments. These texts were couched either in the metaphor crime as
(infectious) virus or the metaphor crime as (raging) beast. The metaphor frame was
found to have an even stronger effect than subsequently self-reported political affiliation on whether participants proposed ‘reform’ measures consistent with diagnosing,
treating, and inoculating (looking for the root cause of crime, alleviating poverty,
improving education, etc.) or ‘enforcement’ measures consistent with capturing and
confining (increasing the police force, prison space, etc.). The responses offered were
analogous to suggestions other subjects previously made in an independent survey on
how communities should respond to literal viruses and beasts (looking for the source
of the virus, etc. vs. organising hunting parties to capture the beast, etc.). This suggests that participants in the main studies retrieved from memory knowledge about
the source-domain of the relevant conceptual metaphor and inferred responses to the target-domain problem by CSG.

15 E.g.: Just as the ‘antecedents’ of (4) and (7) above imply that of (2), so their ‘consequents’ imply the ‘consequent’ of (2). Ditto for (1) and (3), (6) and (4), etc.
Indeed, the metaphor frame significantly influenced participants’ choices even
when they were presented with a comprehensive list of options (op. cit., exp. 4).
Arguably, metaphor-based analogical inferences did not merely influence which
answers occurred to subjects first but also which they found most compelling. Similar effects were observed when the texts contained only a single word instantiating
the conceptual metaphor (exp. 2) but not when such words were used as independent
primes (exp. 3). In all experiments, most subjects were unaware of relying on the
metaphor: When asked to indicate which part of the text was most influential in their
decision, subjects typically referred to the crime statistics, only 3–15 % mentioned the
metaphor used, and the observed effects remained significant even once the responses
of this small minority were disregarded. The experimenters inferred that subjects’ use
of metaphors in analogical problem-solving was unconscious.
In science, including physics (Hesse 1966; Hesse 2000) and psychology (Gentner and Grudin 1985), analogical reasoning issuing in, and proceeding from, metaphors
is also used consciously and deliberately. I would therefore like to conceptualise their
unconscious use in problem-solving and judgment-making as driven by the largely
automatic application of a heuristic rule that can also be deliberately employed in
conscious reasoning:
Metaphor Heuristic: To obtain conclusions about a domain X, choose a linguistically realised conceptual metaphor that takes X as target domain, retrieve from
memory knowledge about its source-domain Y, and infer conclusions about X
from known facts about Y, through copying with substitution and generation,
employing mappings constitutive of the metaphor.
If our reasoning so far is correct, this heuristic performs the ‘constitutive trick’ of
a multi-purpose judgment heuristic: It seizes on CSG inferences constitutive of a
natural process that is involved in language comprehension and sense-perception; and
it transforms the output of this natural process into answers to a wide range of questions
(indeed, about different domains X).
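Read procedurally, the heuristic might be sketched end-to-end as follows; the metaphor registry, the stored source-domain facts, and the simple phrase substitution are all illustrative assumptions of mine, not an implementation of the retrieval and mapping models (MAC/FAC, SME) discussed in the psychological literature:

```python
METAPHORS = {
    # target domain X -> a linguistically realised conceptual metaphor
    'intellectual effort': {
        'source': 'visual search',
        'mapping': {'see': 'know'},  # basic mapping (1)
    },
}

SOURCE_KNOWLEDGE = {
    'visual search': [
        'what is in the dark, S cannot see',
        'what is obvious, S can easily see',
    ],
}

def metaphor_heuristic(target_domain):
    metaphor = METAPHORS[target_domain]           # choose a metaphor for X
    facts = SOURCE_KNOWLEDGE[metaphor['source']]  # retrieve Y-knowledge
    conclusions = []
    for fact in facts:                            # infer by CSG: substitute
        for src, tgt in metaphor['mapping'].items():  # mapped terms,
            fact = fact.replace(src, tgt)             # copy the rest
        conclusions.append(fact)
    return conclusions

# metaphor_heuristic('intellectual effort')
# → ['what is in the dark, S cannot know', 'what is obvious, S can easily know']
```

The point of the sketch is only to make the ‘constitutive trick’ vivid: one and the same transfer procedure yields answers to questions about whatever target domain X a realised metaphor is registered for.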
More generally, the heuristic satisfies the requirements imposed by the two dominant research programmes on intuitive judgment in cognitive psychology (cp. Sect.
2). The ABC programme requires that heuristics be ‘(a) ecologically rational, i.e.
[exploit] structures of information in the environment, (b) founded in evolved psychological capacities such as memory and the perceptual system, (c) fast, frugal, and
simple enough to operate effectively when time, knowledge and computational might
are limited, (d) precise enough to be modelled computationally, and (e) powerful
enough to model both good and poor reasoning’ (Goldstein and Gigerenzer 2002, p.
75).
(a) The metaphor heuristic exploits structural information embedded in our publicly
shared language, namely information about structural or higher-order analogies
between different domains, which is both acquired and deployed in coming to
understand (some) metaphorical expressions.
(b) It is founded on exemplary ‘evolved psychological capacities’ (cp. Gigerenzer
2007, p. 58): language comprehension and memory.
(c) The heuristic is fast enough to operate in real time:16 It draws on processes demonstrably executed in little over a second in everyday conversational settings.17
(d) Several computational models of the three processes invoked by the metaphor
heuristic are currently available. These include MAC/FAC (Forbus et al. 1995)
for retrieval and the structure mapping engine SME (Falkenhainer et al. 1989) for
mapping and inference.18
(e) While contributing to an explanation of poor reasoning (below), the heuristic can also account for good reasoning like the above-mentioned reasoning about crime
(whose analysis has to be spelled out elsewhere).
4 Explaining philosophical intuitions
We will now seek to explain philosophical intuitions of the kind targeted (in Sect. 1)
as resulting from the largely automatic application of the proposed heuristic, and
show that this explanation facilitates the assessment of the intuitions explained. More
specifically, we now seek to develop an explanation that meets also the remaining
requirements from the heuristics and biases programme (Sect. 2): an explanation that
successfully predicts an unexpected fallacy (Sect. 4) and lets us understand why the
mistakes and fallacies made are not properly corrected through effortful conscious
reflection, even where such reflection is not pre-empted by time pressure or competing
tasks (Sect. 5). Proponents of that programme seek to derive predictions of fallacies
directly from the proposed heuristics. Work on the recognition heuristic within the
competing ABC programme (Pohl 2006), however, supports the hypothesis of the
‘intelligence of the unconscious’, according to which a heuristic is employed in automatic cognition only when and where its cue is correlated with the criterion to be
judged, and it is therefore likely to get things right (Gigerenzer 2007, 2008).19 If this
is correct, predictable fallacies will be primarily due not to the spontaneous application of heuristic rules but to non-random ‘interference’ of, or interaction with, other
16 ABC researchers use a heuristic’s simplicity and frugality (i.e. use of little information) to argue that
it can account for performance in real-life situations. Where proposed heuristics—like ours—deploy only
natural processes which are demonstrably executed in real time, this argument can be made more directly
and convincingly, without detour via frugality (which we hence do not discuss here).
17 Gentner et al. (2002) used such a ‘naturalistic’ setting for an experiment revealing metaphor consistency
effects indicative of processes of re-mapping and inference. Participants needed on average 1277 ms longer
to answer a metaphorically phrased question when it employed a different conceptual metaphor than the
previous question (op. cit. 555). This leaves just over one second for fresh mapping and inference.
18 A recent review (Gentner and Forbus 2011) covers 7 computational models of retrieval and 16 models
of mapping. This provides the resources to model the natural processes the proposed heuristic draws on,
rendering it precise enough to be modelled computationally.
19 Application to the representativeness heuristic [see Read and Grushka-Cockayne (2011) for a devel-
opment within the ABC framework] illustrates that correlation of cue (prototypicality of judgment object
for category) and criterion (probability that object belongs to category) need not prevent even systematic
fallacies (like the conjunction fallacy) from arising from the application of the rule itself. Hence stress
‘likely’ and ‘primarily’ in the main text.
123
Synthese
automatic cognitive processes. We therefore now turn to a process that is liable to
interact with spontaneous applications of the metaphor heuristic, in predictable ways.
This takes us to ‘one of the most exciting pursuits in psychological research’ (Kahneman 2011, p. 53): the study of the associative memory links that underpin spontaneous
inferences.
4.1 Memory-based processing
Social psychologists seek to explain a wide variety of spontaneous inferences as
the outcome of simple associative processes which can duplicate achievements of
complex reasoning (Uleman et al. 2008). Cognitive psychologists are reconceptualising automatic applications of familiar judgment heuristics as automatic associative
processes in memory (Kahneman 2011; Morewedge and Kahneman 2010; Sloman
1996). An important theoretical framework suggests that applications of the proposed metaphor heuristic, too, are duplicated by associative memory processes and yield outputs subject to further such processing.
Mainly through reading comprehension experiments, social and cognitive psychologists have shown that several different types of inference are spontaneously
made in language comprehension: predictive inferences about what will happen
next (e.g. Casteel 2007), causal inferences about why certain things happen (e.g.
Hassin et al. 2002), inferences of agents’ goals (e.g. Aarts and Hassin 2005) and
traits (e.g. Uleman et al. 1996), and of properties of the situation they are in (e.g.
Lupfer et al. 1990). According to the influential theoretical framework of memory-based processing (to which all the psychological work referenced in the following paragraphs broadly conforms), all spontaneous inferences subjects make in the absence of specific goals when hearing or reading texts result from automatic association processes in memory (e.g. Gerrig and O’Brien 2005; O’Brien 1995). If this view is correct, analogical inferences such as those studied by the above text-comprehension experiments (Day and Gentner 2007), too, should be duplicated by such
processes.20 Since memory-based processing has been shown to draw not only on
‘contextual knowledge’ about earlier parts of the text, or earlier texts (e.g. Albrecht
and Myers 1998), but also on ‘general world knowledge’ (e.g. Cook and Guéraud
2005), this point should extend from spontaneous analogical inferences that employ
ad hoc models provided by carefully manipulated textual contexts (as in Day and
Gentner 2007, above), to inferences employing familiar models written into language by conceptual metaphors—the kind the metaphor heuristic would have us
make.
The slippery notion of associative processing can be rendered more precise by
reference to semantic or to connectionist networks, which double as information
storage and inference facilitators (Gigerenzer and Regier 1996). The nodes of a
semantic network (Anderson 1983; Anderson et al. 2001, 2004) stand for concepts. Nodes which represent things or events that are spatiotemporally contiguous or share attributes or relations come to be linked, and the network thus comes to link, e.g., potential causes and their typical effects as well as things and their properties, and things and the categories to which they belong. Activation from stimuli to nodes builds and decays rapidly. When a node is activated, the activation spreads with decreasing strength to the several nodes directly or indirectly linked to it. The strength of activation depends upon the familiarity of the activated concept and the strength of the link: The more often the subject is exposed to a concept, the more strongly its node is activated each time, and the more frequently a link is activated, the more activation it gradually comes to pass on, while gradually atrophying upon disuse. The network changes over time, differently in different individuals. An activated concept or proposition becomes conscious if—and only if—the activation is sufficiently strong. Spreading activation can therefore duplicate inferences, by spreading in sufficient strength from connected nodes representing one proposition to nodes that jointly represent another proposition. The spread of activation from one node to associated nodes is a fully automatic process. This conception informs sophisticated computational models (like ACT-R, Anderson 1983; Anderson et al. 2004) capable of addressing many different cognitive tasks.

20 Day and Gentner (2007, exp. 2) rules out simple lexical priming but no other associative processes. While no account of how associative processes could duplicate analogical inferences has yet been spelled out in detail, Leech et al. (2008), for example, provide an associationist model of basic analogical processing.
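To fix ideas, the spreading-activation dynamics just described can be rendered as a toy sketch (the node names, link strengths, attenuation factor, and step count are illustrative assumptions, not parameters of ACT-R or of any cited model):

```python
# Toy spreading-activation sketch: nodes hold activation; at each step,
# activation flows along weighted links with decreasing strength.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.links = defaultdict(dict)  # node -> {neighbour: link strength}

    def link(self, a, b, strength):
        # links are bidirectional; their strengthening with use is not modelled
        self.links[a][b] = strength
        self.links[b][a] = strength

    def spread(self, sources, steps=2, attenuation=0.5):
        """Spread activation from stimulus nodes; strength decreases
        with each further link traversed."""
        activation = defaultdict(float)
        frontier = {node: 1.0 for node in sources}
        for _ in range(steps):
            nxt = defaultdict(float)
            for node, act in frontier.items():
                activation[node] += act
                for neighbour, weight in self.links[node].items():
                    nxt[neighbour] += act * weight * attenuation
            frontier = nxt
        for node, act in frontier.items():
            activation[node] += act
        return dict(activation)

net = SemanticNetwork()
net.link("fire", "smoke", 0.9)   # potential cause and typical effect
net.link("smoke", "grey", 0.4)   # thing and property
act = net.spread(["fire"])
# activating 'fire' spreads activation to 'smoke', and more weakly to 'grey'
assert act["smoke"] > act.get("grey", 0.0)
```

The point of the sketch is only the qualitative pattern the text describes: activation decreases with each further link traversed, so a directly linked node (‘smoke’) ends up more active than an indirectly linked one (‘grey’).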
According to this conception, we spontaneously remember a fact in response to
a question (say, ‘How many animals of each kind did Moses take on the ark?’),
when the question activates most strongly a representation of the pertinent proposition (‘Noah took two animals of each kind on the ark’). Since the same fact
or belief can be relevant as answer to various different questions, and each question can be differently phrased, the belief’s representation has to be activated also
by questions which employ different concepts (Reder and Kusbit 1991, p. 401).
This may be achieved by partial matching (Barton and Sanford 1993; Hannon
and Daneman 2001; Kamas and Reder 1995; Kamas et al. 1996; van Oostendorp
and Mul 1990; Park and Reder 2004): Activation is passed from nodes representing a stimulus concept (say, ‘Moses’) to connected nodes representing attributes
of the concept’s bearer (male, Biblical figure, leader, having a covenant with God,
etc.), which in turn activate nodes representing other bearers of these attributes (like
Noah), who share some but not all of them. When such nodes receive activation
above the relevant threshold, the process generates a partial match for the stimulus concept: It brings to consciousness a concept which stands for something (or
someone) sharing many but not all of the attributes of the stimulus concept’s referent.
Such partial matching explains semantic illusions like the ‘Moses illusion’ (to
which the reader has just been exposed): In a classic experiment (Erickson and Mattson 1981), 81 % of the subjects who, in a subsequent test, demonstrated knowledge that Noah is the protagonist of the ark story, responded ‘two’ to the above
question about Moses, after correctly reading out aloud the question and having
been explicitly instructed to either answer questions or indicate that something is
wrong with them, as appropriate. The stimulus question activates, among others,
the nodes representing ‘ark’, ‘animals’, and ‘Moses’. Since Moses and Noah share
many attributes, the first and the third of these nodes will pass on activation to
the node representing ‘Noah’. They will jointly activate it more strongly than the
name in the question activates the node of ‘Moses’, and the fact about Noah is
spontaneously retrieved as an answer to the question about Moses.21 Accordingly,
subjects spontaneously answer this question but reject the analogous question about
Adam, who—as an attribute-generation task reveals (van Oostendorp and Mul
1990)—is generally believed to share fewer attributes with Noah (also male and
Biblical, but no leader, no covenant, etc.). Hence less activation spreads to representations of beliefs about Noah, their activation remains below the threshold, no
answer is spontaneously retrieved, and subjects notice something is wrong with the
question.
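The asymmetry between the Moses and the Adam version of the question can be sketched along the lines of the partial-matching account (the attribute lists and the retrieval threshold are illustrative assumptions; the models cited use graded activation, not simple set overlap):

```python
# Partial-matching sketch: a stimulus concept activates bearers of shared
# attributes; a stored fact is retrieved only above an activation threshold.
ATTRIBUTES = {
    "Noah":  {"male", "Biblical", "leader", "covenant"},
    "Moses": {"male", "Biblical", "leader", "covenant"},
    "Adam":  {"male", "Biblical"},
}

def activation_of(candidate, stimulus):
    """Activation passed to a candidate concept, proportional to the
    attributes its bearer shares with the stimulus concept's bearer."""
    shared = ATTRIBUTES[candidate] & ATTRIBUTES[stimulus]
    return len(shared) / len(ATTRIBUTES[candidate])

def retrieve_answer(stimulus, knowledge_bearer="Noah", threshold=0.75):
    """Spontaneously retrieve the stored fact about Noah only if the
    stimulus activates its representation strongly enough."""
    if activation_of(knowledge_bearer, stimulus) >= threshold:
        return "two"   # fact about Noah retrieved as answer
    return "something is wrong with the question"

# 'Moses' shares many attributes with Noah: the fact is retrieved
assert retrieve_answer("Moses") == "two"
# 'Adam' shares fewer: activation stays below threshold, no answer retrieved
assert retrieve_answer("Adam") != "two"
```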
Retrieval processes generating partial matches are at work not only when we face
questions but also when we read or hear statements or stories: We then automatically
retrieve knowledge about the context and about the world at large, and interpret what
is said in the light of this knowledge (Albrecht and Myers 1998; Cook and Guéraud
2005). Thus, the semantic illusion persists in an attenuated form when people are
given statements like ‘Moses took two animals of each kind into the ark’ (Erickson
and Mattson 1981, exp. 2): Over 40 % of knowledgeable subjects judge the statement
true, apparently interpreting it spontaneously as a statement about Noah. According
to the above ‘partial matching hypothesis’, people do so because (only) the proposition about Noah is activated strongly enough to become conscious, when reading the
sentence that refers to Moses. In other words: Due to partial matching, we are prone to
automatically interpret statements as expressing propositions we already believe true,
and will spontaneously accept such a prior belief as content of the statement if their
semantic similarity exceeds a certain threshold. In the presently relevant sense,
a concept is semantically similar to another, to the extent to which the things
(individuals, stuffs, properties, etc.) they stand for are believed to share the same
attributes or relations.
The semantic similarity between two propositions then depends upon that of concepts
filling the same thematic roles (agent, patient, verb, place oblique) in the different
propositions, and on the number of different such roles that get filled in either proposition (Budiu and Anderson 2004, p. 38).
According to the theory of interpretation-based processing (Budiu and Anderson 2004, 2008), this interpretation process is incremental and already occurs while we
read or hear the given sentence. As sentences get read, semantically similar propositions are automatically retrieved from memory, and used as candidate interpretations
which specify what the statement asserts: As the subject reads each semantic unit (verb,
adverb, noun phrase), activation spreads to representations of all propositions containing concepts semantically similar to the one represented by that unit. The proposition
represented by the most strongly activated nodes is picked as ‘candidate interpretation’.
These nodes retain their rapidly decaying activation only while the next semantic unit is read, so that momentarily both they and the new stimulus spread activation.
This may activate more strongly the representation of another proposition, which thus replaces the previous candidate interpretation, and so on, until the entire sentence is read. In this way, a semantically similar proposition the subject already believes true may come to be accepted as content of the statement. While still rudimentary (if more subtle than this thumbnail sketch might suggest), this theory has been successfully implemented as a computational model within an influential cognitive architecture (ACT-R, Anderson 1983; Anderson et al. 2004), and this model has proved consistent with experimental findings about a range of phenomena including semantic illusions and order effects in comprehension times for metaphorical expressions (Budiu and Anderson 2004; cp. Kamas and Reder 1995).

21 Careful experimentation excluded alternative explanations, crucially including explanations that invoke Gricean principles of cooperation (Reder and Kusbit 1991; Park and Reder 2004), and supports the partial-match hypothesis, which can also explain a wide range of further phenomena (Kamas and Reder 1995).
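The incremental selection of a candidate interpretation can be sketched as a loop over semantic units (the toy belief store, the similarity values, and the additive scoring are illustrative assumptions, not the ACT-R implementation):

```python
# Sketch of incremental interpretation: as each semantic unit is read,
# believed propositions containing similar concepts in the same position
# gain activation; the strongest one is kept as candidate interpretation.
BELIEFS = [
    ("Noah", "took", "two animals"),
    ("Adam", "named", "the animals"),
]

def unit_match(unit, belief_unit):
    """Crude stand-in for graded concept similarity (values assumed)."""
    if unit == belief_unit:
        return 1.0
    SIMILAR = {("Moses", "Noah"): 0.6, ("Moses", "Adam"): 0.3}
    return SIMILAR.get((unit, belief_unit), 0.0)

def interpret(units):
    """Return the candidate interpretation after reading all units."""
    activation = {belief: 0.0 for belief in BELIEFS}
    candidate = None
    for pos, unit in enumerate(units):
        for belief in BELIEFS:
            if pos < len(belief):
                activation[belief] += unit_match(unit, belief[pos])
        # the candidate may be replaced as further units are read
        candidate = max(activation, key=activation.get)
    return candidate

# Reading 'Moses took two animals' settles on the belief about Noah:
assert interpret(("Moses", "took", "two animals")) == ("Noah", "took", "two animals")
```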
Its authors presented this account as contributing to a theory of sentence comprehension. But they specified a process which serves to seamlessly integrate new information with prior beliefs, and which can respond not only to external stimuli but can equally well integrate self-generated new information, like the conclusions of the subject’s own spontaneous inferences. Processes of
the kind specified—search-and-match processes with partial matching—can explain a
wide range of phenomena, which crucially (for our purposes) include striking failures
to detect contradictions between new information and prior beliefs, in semantic illusions and beyond (Kamas and Reder 1995; Park and Reder 2004). The account may
therefore help us to understand when and why paradoxical intuitions are formed and
accepted as obviously true, without further ado. In particular if the specified process
is, as suggested by its authors—Budiu and Anderson (2004) and Anderson et al.
(2004)—and above (Fn. 13), involved in metaphor understanding, it is reasonable to
hypothesise that it will act on the deliverances of the proposed metaphor heuristic,
and that the interaction of these two processes (the metaphor heuristic’s automatic application and interpretation-based processing) can explain the intuitions targeted (in Sect. 1): paradoxical bedrock intuitions.
4.2 Fallacies and stealthy mistakes
These processes are liable to interact in two ways: Memory-based processing conforming to the theory of interpretation-based processing (henceforward INP) will turn
clearly wrong conclusions of spontaneous analogical (CSG) inferences (conclusions
‘about Moses’, as it were) into apparent truisms (‘about Noah’). And it will help
forge fresh mappings which facilitate novel analogical inferences. This interaction is
best explained by discussing a specific example. Arguably, the philosophically most
important example among potential candidates identified in the extant literature is the
development of the modern (post-Aristotelian) conception of the mind as a realm of
(inner) perception, which various scholars have put down to visual metaphors such
as those we considered above (Rorty 1980; Lakoff and Johnson 1999). By combining the proposed metaphor heuristic with INP, we can obtain a fresh explanation of
paradoxical bedrock intuitions central to that consequential conception of the mind:
intuitions which turn the faculty of reason, intellect, or understanding, into an organ
of sense, peering into a distinct perceptual space—a key component of the early modern transformation of the mind (McDonald 2003). Our fresh explanation will allow
us to advance significantly beyond extant suggestions: to derive surprising predictions, pin down exactly where and why the non-intentional use of the pertinent visual metaphors went wrong, and explain why the resulting mistakes were not corrected upon reflection.

Table 4 CSG inference to C0

     Source-domain premise     Operation                  Target-domain conclusion
1    S looks at X              Substitution               S thinks about X
2    Implies (1; 3–4)          Substitution: identical    Implies (1; 3–4)
3    S uses Y                  Substitution: identical    S uses Y
4    S’s eyes (Y)              Generation                 S’s eyes (Y)
The intuitions to be explained can be generated by several parallel and mutually reinforcing analogical (CSG) inferences which employ distinct but related visual
metaphors and proceed from source-domain premises about different acts and achievements of visual perception. One of the most important of these conceptual metaphors
is the metaphor Thinking-about as looking-at, which motivates metaphorical talk of
‘looking hard at the problem’, ‘looking at the issue from different sides’, etc. A CSG
inference (detailed in Table 4) which employs its basic mapping takes us from the
source-domain premise that when we look at things we use our eyes to the conclusion
that
C0 When we think about something, we use our eyes.
This conclusion is, of course, false: We think about matters invisible (problems, implications, etc.) and can typically engage in reflection about almost anything, with closed
eyes.
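The copy-substitute-generate pattern of Table 4 can be sketched as a simple rewrite of the source-domain premise (the token-level sentence representation and the mapping entries are illustrative assumptions):

```python
# Sketch of a CSG inference: copy the source-domain premise, substitute
# mapped concepts, and carry over ('generate') everything else unchanged.
MAPPING = {
    "looks at": "thinks about",  # the visual metaphor's basic mapping
    "implies": "implies",        # identical substitution
    "uses": "uses",              # identical substitution
}

def csg_infer(premise):
    """Derive a target-domain conclusion from a source-domain premise:
    substitute elements the mapping covers; copy unmapped elements
    ("S's eyes") into the target domain as-is, the risky generation step."""
    return [MAPPING.get(token, token) for token in premise]

premise = ["S", "looks at", "X", "implies", "S", "uses", "S's eyes"]
conclusion = csg_infer(premise)
assert conclusion == ["S", "thinks about", "X", "implies", "S", "uses", "S's eyes"]
```

The risk the text goes on to diagnose lies entirely in the last branch: whatever the mapping does not cover is copied into the target domain unchanged, which is how ‘S’s eyes’ survives into the false conclusion C0.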
When acting on this falsehood, the INP process will transform it into an apparent truism: The process is set to interpret fresh conclusions as expressing a proposition the thinker already believes true, provided some such proposition is, step by step, sufficiently semantically similar to the input. Where new and previously believed propositions fill the same thematic roles, and the same number of them, and employ identical concepts in all but the last, a comparatively low degree of semantic similarity already suffices for the concepts in final position (provided no yet
more similar proposition is believed true). Some common beliefs—which also figure prominently in faculty psychology, on which most seventeenth century philosophers were reared—can be expressed by sentences which employ the same terms
as C0, in all but the final positions: ‘When we think about things we use our…’—
‘wits’, ‘reason’, ‘intellect’, or ‘understanding’. In faculty psychology as in ordinary
language, these terms are used to refer not to any organ of sense but to a faculty,
power, or ability, namely to the ‘faculty of comprehending or reasoning’ or the ‘power
or ability to understand’, as the OED explanation of ‘understanding’ puts it. But
both ‘wits’ and ‘understanding’ can, like eyes, be said to be ‘used’ by subjects who
‘have’ them; like eyes, wits and understanding may be more or less ‘sharp’, etc. Both
concepts enjoy a certain (if comparatively low) degree of semantic similarity with ‘eyes’.
Since most of the use of ‘wits’ is motivated by manipulation-, rather than visual,
metaphors, ‘understanding’ emerges as semantically more similar to ‘eyes’.22 The concepts employed earlier in the resulting statement are identical with those filling the same thematic roles in C0, and both statements fill the same number of roles. Hence

C0^INP ‘When we think about something, we use our understanding’

is semantically highly similar to C0, by the lights of the relevant definition (Sect. 4.1), and INP will activate C0^INP throughout, eventually more strongly than any other proposition, and accept it as interpretation of C0. Thus, I submit, the eyes of Moses are turned into the Noah of understanding.
This is bound to trigger new mappings which, unlike those constitutive of the
conceptual metaphors we considered above, are not warranted by structural analogies
between source and target domains: INP aligns candidate interpretations with the claim
to be interpreted, and compares expressions in matching roles for semantic similarity
(Budiu and Anderson 2004, pp. 9–10). In processing the present CSG conclusion, it
thus aligns:
S – uses – his eyes.
S – uses – his understanding.
This alignment facilitates mapping. In non-intentional analogical reasoning, we
map not merely first-order relations onto other such relations (as the conceptual
metaphor did) but readily map also relata of such relations. In such reasoning, subjects automatically first correlate the source- and target-domain elements to which
the same concepts apply, and subsequently add mappings that correlate the hitherto
unmapped relata of mapped relations (Sect. 3.1). Hence the present alignment will
have them automatically map ‘x uses y’ onto its target-domain homonym, and then
map the ‘rear’ relatum:
New mapping N: eyes → understanding
Such interplay of non-intentional analogical reasoning and INP can also yield
this mapping’s twin. Most ordinary uses of ‘the mind’ are motivated by a conceptual metaphor that builds on the mapping of spatial inclusion onto remembering and
thinking-of: To remember something is to ‘retain’ it in one’s vicinity, to ‘keep’ or
‘have’ it ‘in’ a personal space, ‘the mind’, from which it may ‘slip’, etc.23 The present
processes lead to the integration of this personal space into visual metaphors. CSG
transforms the truism about the visual source domain, ‘When we look at things, they
are in our visual field’, into the (wild) target-domain conclusion ‘When we think about
things, they are in our visual field’. For the reasons explained, INP aligns this with the
semantically similar proposition ‘When we think of things, they are in our mind’, and facilitates the fresh

Mapping M: visual field → mind.

22 Also recall (from Sect. 4.1) that frequent exposure to concepts and their combinations strengthens the nodes representing them and the links connecting these. When a thinker is well-versed in faculty psychology, the nodes representing the technical concepts ‘intellect’ and ‘understanding’ will attract more activation than ‘wits’, both because of their own strength and that of their link to ‘think’ etc.

23 Fischer (2011, pp. 41–45) offers a fuller reconstruction of this complex spatial operation metaphor.
Mappings M and N facilitate a plethora of CSG inferences that jointly transform
‘mind’ and ‘understanding’. These inferences include the following (the non-identical substitutions are ‘understanding’ for ‘eyes’ and ‘mind’ for ‘visual field’):
C1
When we look at things, things are before our eyes.
When we think about things, things are before our understanding.
C2
When we look at things, things are in our visual field.
When we think about things, things are in our mind.
C3
Things before our eyes are in our visual field.
Things before our understanding are in our mind.
Philosophers frequently use the verb ‘to perceive’ as shorthand for ‘to see or hear
or smell or taste or feel’, to cover all five sense-modalities at one go (Fischer 2011, pp.
114–116, 246f). In philosophical discourse, sight typically serves as primary model
of such ‘sense-perception’, and familiar facts about the present source-domain (e.g.:
When we look at things, we see things with our eyes) are often expressed by statements
using ‘perceive’. The latter thus come to serve as premises of CSG inferences with
mappings constitutive of the visual metaphors discussed, along with M and N.24
While subject to contextual specification, in non-philosophical discourse the verb
‘perceive’ ordinarily applies in the same generic sense, ‘to apprehend with the mind
or senses’ (OED), to epistemic achievements brought off by using either one’s wit or
one’s senses, no matter which. (You may, for instance, say, ‘I perceived his dismay’,
without divulging whether you looked into his face or read between a letter’s lines.) ‘S
perceives X’ thus stands for a generic (epistemic) relation that is an element of both the
present source- and target-domains.25 It hence gets mapped onto itself in analogical
reasoning (Sect. 3.1), and is ‘substituted’ by itself in further CSG inferences like:
C4
When we look at things, we perceive things with our eyes, in our visual field.
When we think about things, we perceive things with our understanding,
in our mind.
Together with C1–C3, this transforms the understanding from an intellectual faculty
into an organ of sense employed in thought, and the mind into this organ’s perceptual
field.
24 Such inferences are facilitated by re-representation (Fn. 10).
25 The verb’s use was not metaphorically extended from the visual to the intellectual domain (cp. Sweetser
1990): Deriving from the Latin ‘capere’ (to take, seize; ‘per-’ = thoroughly), it was extended from the source
domain of spatial operation (what you seize is in your surrounding space) to the different target domains of
intellectual achievement (where you ‘grasp’ my point) and sense-perception (where you ‘catch’ a glimpse).
Parallel extension forged the generic concept ‘to apprehend [=seize] with the mind or senses’, i.e., ‘to become aware’ or cognizant of, by thinking or seeing, etc. (OED), which applies in the present source- and target-domains.
Influential early modern philosophers including Locke refer to the things we
supposedly perceive with the understanding in the mind as ‘ideas’26 and maintain
C1–C4 in these terms: Thinking involves perceiving ideas which then are present
in the mind and before the understanding. To illustrate: Consistent with the above
account, Locke—whose Essay Concerning Human Understanding (Locke 1975/1700)
we quote as EHU—explicitly introduces ‘the understanding’ as a faculty (EHU II.vi.2)
but compares it not to the faculty of sight but to its organ of sense: ‘the understanding, like the eye [!] ... makes us see, and perceive all other things’ (EHU I.i.1).
He frequently uses the terms ‘understanding’ and ‘mind’ interchangeably, to stand
for both an organ and a space of perception, and occasionally speaks of the organ
metonymically as the subject of perception, as in defining: ‘Whatsoever the mind
perceives in itself ... that I call idea’ (EHU II.viii.8). These objects of perception
are ‘convey[ed] ... to their audience in the brain, the mind’s presence room (as I
may so call it)’, where they have ‘to bring themselves into view, and be perceived
by the understanding’ (EHU II.iii.1). Locke identifies ‘perception, or thinking’ as
one of two ‘principal actions’ or ‘faculties’ of the mind (EHU II.vi.2) but maintains that perception, including thought, only occurs when an idea is present in the
mind (EHU II.ix.2–4).
C4 and such related judgments would seem to imply that their truth can be easily established by introspection. Accordingly, Locke, for one, thinks they are readily
established by ‘reflecting’ on what we do when we think (EHU II.ix.2), and construes
‘reflection’ as an introspective process (EHU II.i.4). But where obtained by spontaneous inference and found intuitively compelling, conclusions C1–C4 all satisfy our definition of intuitive judgments (Sect. 1): For sure, we do sometimes have a memory- or other mental image when we think, and we then sometimes speak sotto voce. We
can then be said to be ‘aware’ or ‘conscious of’ the memories and thoughts we have.27
But even if we make introspective judgments when reporting that we are conscious of
a memory, were speaking sotto voce, or simply ‘have a thought’, we would go beyond
these reports when leaping to the conclusion that we then see or hear ‘ideas’: things
perceived in the mind, with the mind, i.e. either: in the brain, with an inward-looking
organ of sense (materialist conception), or: in a figurative ‘space’ without extension
(cp. EHU II.ix.10), with a non-bodily organ of sense (immaterialist conception).28 In
simply making those reports, we do not claim to see or otherwise perceive anything in
our brain, and make no claim about which organs (if any) are involved in our having the, say, thought we report. Nor are we then claiming that we see or hear something in a ‘space’ without physical extension or location: In reporting we are having a particular thought, we make a claim with definite meaning, regardless of whether we are able or (like Locke, EHU IV.iii.6) confessedly unable to make sense of the notion of such a non-physical ‘space’. Despite first appearances to the contrary, even C4 thus goes beyond any introspective or perceptual judgment.

26 Note this term’s now defunct visual sense, ‘likeness, image, representation’ (1530s–early eighteenth century, as in ‘ideas in the mirror’, cp. Locke EHU II.i.25), which was extended, first, to memory images (1570s), then to pictures or notions of something formed in the mind, independently of memory (1580s), and finally given the yet more general philosophical use at issue (OED).

27 Quite possibly, competent speakers do not place an introspective interpretation on these phrases when they do not have, or dwell on, the intuitions explained: ‘aware of’ means ‘to know, have cognizance of’, and applies in the same sense regardless of the nature of its objects (OED): investment risks, deadlines, or sensations, own or other, etc. Similarly, the use of ‘conscious of’ in which it only takes ‘one’s sensations, feelings, thoughts, etc.’ as objects is marked as ‘philosophical and psychological’. In its ordinary use, the verb simply means ‘having knowledge or awareness’ and takes facts and information as objects (ibid.).

28 Locke states that, and explains why, he feels torn between these two conceptions, in EHU IV.iii.6.
To bring out the potential philosophical relevance of the interaction-process we
proposed, we needed to show that it can explain influential, and hence familiar, philosophical intuitions, like C1–C4. But to support the hypothesis that it actually explains
these (and other) intuitions, we need to go beyond the necessarily ex-post explanation
of familiar intuitions and predict unexpected intuitions and fallacies (Sect. 2). The
proposed account successfully predicts such a fallacy: It predicts that whoever uses
the notion of ‘ideas’ so succinctly explained by Locke will conflate ideas and the
intentional objects we think of and about. The acute reader will have noticed that,
instead of inferring C1 to C4 , we could have used the very same mappings to make
equally simple analogical inferences from even more straightforward premises: When
we look at things, they, the things we look at, are before our eyes, in our visual field; and
typically (unless it is too dark, etc.), we then perceive them with our eyes, in that field.
CSG inferences with the above mappings take us from these premises to the conclusions that when we think about things, they, the things we think about, are before our
understanding, in our mind, and that typically we then perceive them with our understanding, in the mind. According to these conclusions, the predominantly public and
frequently physical things we think about satisfy the definition of ideas as ‘whatever
is perceived in the mind, with the mind’ or understanding. These intentional objects
of thought will qualify as ideas, and the things our thoughts and ideas are thoughts or
ideas of will be liable to be conflated or confused with the ideas themselves.
This prediction is borne out, e.g., by Locke’s first explanation of his liberal use of
the technical term ‘idea’: ‘I have used it to express whatever is meant by phantasm,
notion, species, or whatever it is, which the mind can be employed about in thinking’
(EHU I.i.8). On the one hand, the technical term is meant to pick out the things
we can employ our mind, understanding, or wits about, i.e., the things we can think
about, the potential intentional objects of our thoughts, which range from problems
to primroses. But, at the same time, the term is to stand for roughly the same thing as
philosophical terms used at the time to talk about things which cannot exist outside
of, or independently from the mind: for mental images which present things as if from
outside the mind (‘phantasms’), and images which are viewed in the mind after coming
there from external objects (‘species’). Accordingly, Locke continues in the very next
sentence: ‘I presume it will be easily granted me that there are such ideas in men’s
minds; everyone is conscious of them in himself, and men’s words and actions will
satisfy him, that they are in others’ (ibid.). Thus the predominantly public objects we
think about are run together with the contents of a private realm of perception, and
are conflated without any hint at an argument, in a passage clearly intended to merely
state the obvious and link a new technical usage to it.
5 Assessing intuitions
The explanation developed now allows us, first, to identify a crucial mistake in non-intentional analogical reasoning which leads to C1–C4 and other influential philosophical intuitions, second, to show that the intuitions due to this mistake are constitutive
of cognitive illusions, and, third, to explain why competent thinkers fall prey to this
illusion. The metaphor heuristic encourages us to make analogical inferences which
are frequently profitable but always risky, as they are not guaranteed to be logically
valid (and typically aren’t valid).29 Like familiar judgment heuristics, it hence has
different ‘ecological validity’ (Pohl 2006; Gigerenzer 2008; Gigerenzer and Sturm
2012) in different contexts, and its use would lead to mistakes in some contexts but is
helpful in many others. The interaction of the heuristic’s automatic application with
interpretation-based processing, by contrast, gives rise to a move bound to be almost
always pernicious: the simultaneous use of mappings which are constitutive of conceptual metaphors and mappings, like M and N, which are not, but are created—only—by
that interaction.
Philosophers and scientists alike make conscious and explicit use of metaphors,
which can be highly productive (Hesse 1966, 2000; Gentner and Grudin 1985). However, even if competent thinkers are typically able to keep track of similarities and
dissimilarities between source and target domains when their reliance on metaphor
is explicit and conscious, this need not be so where metaphors are employed unwittingly. Indeed, the proposed explanation lets us see how the use of metaphors in largely
automatic reasoning can lead us astray, even if the hypothesis of the ‘intelligence of
the unconscious’ (Gigerenzer 2008) is true, and heuristics are, by and large, only
employed in such reasoning when and where they are likely to get things right (top
Sect. 4): The interaction with INP introduces into such reasoning mappings which are
not constitutive of conceptual metaphors and are therefore not recommended by the
metaphor heuristic (cp. Sect. 3.2). Hence even the most competent thinkers, with full
implicit mastery of this heuristic, may unwittingly make analogical inferences that
are otiose: We will now gradually come to see how the use of mappings M and N in
non-intentional analogical reasoning leads competent thinkers to assimilate target- to
source-domains of conceptual metaphors in ways in which they know the two to be
different—and to assume entities they full well know not to exist.
In analogical reasoning, the first mappings to be made connect source- and target-domain elements to which the same terms apply (Sect. 3.1). In the case of first-order
analogies (e.g. between atoms and the solar system), relevant relational terms (‘x orbits
y’, etc.) stand for the same relations in both source and target domain, so that these
relations get mapped onto themselves. By contrast, where linguistically realised conceptual metaphors are built on second-order analogies between concrete source- and
more abstract target-domains, the same terms (e.g. ‘x looks at y’) typically stand for
radically different first-order relations in the two domains (looking at versus thinking
about). The relata that stand in one of these relations (say, when John looks at Joan)
29 An anonymous reviewer helpfully clarified (my italics): ‘Under some circumstances, candidate inferences can be deductively valid, e.g., when the statements in the base are an instantiation of a logically
quantified statement, and the match has no analogy skolems.’
will typically stand in a host of further relations (x stands in front of y, x is taller than
y, etc.) not shared by the relata standing in the other (John may think of absentees)
and, indeed, cannot be shared by many of them (John also thinks of problems, opportunities, and risks without physical location). Hence CSG inferences that involve not
only substitutions licensed by mappings constitutive of a given metaphor, but also
generation, are particularly risky when moving from a concrete to a more abstract
domain.
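The copy-with-substitution step just described can be sketched in a few lines. This is a deliberately naive illustration of term-by-term substitution, not the structure-mapping engine of Falkenhainer et al. (1989); the mapping table and sentences are invented for illustration.

```python
# Naive sketch of CSG ("copy with substitution and generation"): a
# source-domain statement is copied into the target domain, with terms
# substituted according to the mappings constitutive of a conceptual
# metaphor. All mappings and sentences here are invented.

# Toy version of mappings constitutive of Thinking-about as Looking-at:
CONSTITUTIVE = {
    "looks at": "thinks about",
    "viewer": "thinker",
}

def csg(statement, mappings):
    """Copy a statement, substituting every mapped source-domain term.

    Unmapped terms are simply carried over ("generation") -- which is
    exactly where analogical inference becomes risky: the carried-over
    terms may not apply in the more abstract target domain.
    """
    for source_term, target_term in mappings.items():
        statement = statement.replace(source_term, target_term)
    return statement

premise = "the viewer looks at what is before her eyes"
print(csg(premise, CONSTITUTIVE))
# -> "the thinker thinks about what is before her eyes"
# 'before her eyes' has no constitutive mapping and is generated verbatim.
```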
This need not lead to false conclusions: In ordinary discourse, we mitigate the
present risk by placing a metaphorical interpretation on generated terms, wherever
possible, as a default: We interpret them in the light of their metaphorical implications
(Sect. 3.2). For instance, things we look at are wont to be before our eyes, so that
CSG inference without the new mapping N generates ‘before our eyes’ and leads to
the conclusion that things we think about are ‘before our eyes’. Since things before
our eyes are, by and large, easily visible, further simple CSG30 yields ‘If things are
before our eyes, they are easy to get to know’, which motivates metaphorical extension
of ‘x is before your eyes’ to mean that x is easy to get to know for you. Similarly,
when something is ‘in’ or ‘beyond my visual field’, it is, respectively, possible or
impossible for me to see it, and simple CSG concludes, respectively, that it is possible
or impossible for me to know it (and thus motivates saying that things we cannot get
to know or understand are ‘beyond our ken’). When followed by such metaphorical
interpretation, CSG inferences without M or N take us from the premises of C1, C2,
and C4 to the conclusions: ‘When we think about things, we can easily get to know
things’, ‘...it is possible for us to get to know things’, and ‘... we (get to) know things’,
respectively. Instead of C3, we get ‘When it is easy for us to get to know things, it
is possible for us to get to know them.’ We thus obtain claims about the intellectual
target domain which do not even appear to refer to spaces or organs of perception, of
any kind or description.31
Use of INP-generated mappings like M and N, in CSG inference, prevents such
metaphorical interpretation. In ‘…before my eyes’, e.g., N has us replace ‘eyes’ with
‘understanding’. But ‘… before my understanding’ has no implications in the visual
source domain. You recall that a source-domain premise acquires metaphorical implications through CSG inferences from its source-domain implications (Fn. 14). Devoid
of visual implications, ‘...before my understanding’ therefore also lacks metaphorical
implications. In this way, the use of M and N turns terms and statements that possess
metaphorical implications which facilitate their metaphorical interpretation, into less
easily digestible fare without such saving graces.
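The difference the INP-generated mapping N makes can be shown with the same toy substitution scheme (again an invented example, not the author's formal apparatus): once N is in play, the generated phrase loses the source-domain vocabulary that carries its metaphorical implications.

```python
# Toy contrast: CSG substitution with and without the INP-generated
# mapping N ('eyes' -> 'understanding'). Sentences and the crude test
# for metaphorical interpretability are invented for illustration.

CONSTITUTIVE = {"looks at": "thinks about"}   # metaphor-constitutive
N = {"eyes": "understanding"}                 # created only by INP

def substitute(statement, mappings):
    for source_term, target_term in mappings.items():
        statement = statement.replace(source_term, target_term)
    return statement

VISUAL_TERMS = {"eyes", "sees", "visible"}

def metaphorically_interpretable(statement):
    # Crude proxy: a generated phrase can be read metaphorically only if
    # it retains vocabulary with implications in the visual source domain.
    return any(term in statement.split() for term in VISUAL_TERMS)

premise = "what John looks at is before his eyes"

without_n = substitute(premise, CONSTITUTIVE)
# -> "what John thinks about is before his eyes": still interpretable
#    metaphorically (before one's eyes -> easy to get to know).

with_n = substitute(premise, {**CONSTITUTIVE, **N})
# -> "what John thinks about is before his understanding": no visual
#    implications remain, so the spatial relation is taken literally,
#    positing an 'understanding' before which intentional objects stand.

print(metaphorically_interpretable(without_n))  # True
print(metaphorically_interpretable(with_n))     # False
```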
CSG inferences without the new mappings M and N generate or carry over to the
intellectual target-domain predicates like ‘x is before my eyes’ or ‘x is in our visual
field’, which include spatial terms but which, in their entirety, have source-domain
implications that facilitate metaphorical interpretation. By contrast, CSG inferences
30 With mapping 3* (Sect. 3.2) but without N.
31 More generally, this default move frequently restores intelligibility, sometimes truth, and makes it
possible to apply expressions containing spatial terms (‘before’, ‘in’, etc.) to abstract entities, without lapse
into nonsense, even where the relevant metaphor does not unfold from a basic mapping that involves a
spatial relation (as with Thinking-about as Looking-at, in contrast with Thinking-of as Spatial Inclusion).
with M and N replace ‘my eyes’ and ‘our visual field’, respectively, and thus generate
or carry over to the target domain only those spatial relations (x is before y, x is in y,
etc.). They thus lead to conclusions which place elements of the intellectual target domain into spatial relations, and are not amenable to metaphorical interpretation (but
at most to a superficial ersatz treatment, see below).
These conclusions include C1–C4 above. Carrying over those spatial relations, they
state that the things we think about then stand to something called ‘the
understanding’ in the very same spatial relation in which the books we read then stand
to our eyes, and stand to something called ‘the mind’ in the very same spatial relation
in which those books then stand to our visual field. By accepting such conclusions,
thinkers come to posit the existence of entities which stand in the same spatial—and
other—relations to the thinking subjects and the objects they think about as visual
fields and eyes, respectively, stand to visual observers and the things they see: (C2) a
‘mind’ in which those objects are located, (C3) before an ‘understanding’ (C4) with
which they are perceived.
Thinkers who speak of such ‘minds’ simultaneously explain that they are meant ‘to
take up no space, to have no extension’ (Locke, EHU II.ix.10), while introducing ‘the
understanding’ as a faculty, rather than bodily organ (EHU II.vi.2). Such explanations
evidence knowledge that there are no entities which stand in all the italicised relations
when we think. But, even so, the very same thinkers explicitly assume or tacitly
presuppose the existence of such entities, in much of their argument.32 They thus rely
in their argument on existence-assumptions they know to be false.
When they are bedrock, intuitive judgments which are due to this process and
involve this mistake are constitutive—possess all four defining characteristics—of
cognitive illusions (Sect. 1): First, they violate an uncontroversial rule that constrains
what a thinker has warrant to conclude or believe: the no assumed false lemma rule33
that we should not base our judgments, deliberate or intuitive, on information or
assumptions we all along know or believe to be false. Second, the proposed explanation
allows us to predict when thinkers make intuitive judgments (like C1–C4) which
violate this rule in this way, namely involve the mistake identified: Both INP-generated
mappings and the transformation of CSG conclusions through INP can be predicted
with: the proposed metaphor heuristic, information about which conceptual metaphors
are linguistically realised in the thinkers’ language, and information about the thinkers’
prior beliefs and their subjective semantic similarity to conclusions inferable with the
heuristic.34 Third, both interacting processes, namely, relevant applications of the
metaphor heuristic, and INP, are largely automatic in character—even if their joint
outputs may be subject to effortful modification (including the explanations quoted,
which evidence better knowledge). Fourth, even once we have realised that they cannot
be right, we find these intuitions intuitively compelling: Even after explicitly endorsing
explanations inconsistent with them, Locke found judgments C1–C4 so compelling
32 Locke does the former in the passages quoted above and the latter, e.g., in EHU II.iii.1 and II.viii.12.
See Fischer (2011, pp. 103–109, 116–123) on Locke and Berkeley, respectively.
33 Not to be confused with the ‘no false lemma rule’ proposed in response to the Gettier problem.
34 For an experimental paradigm to establish such semantic similarity, see e.g. van Oostendorp and de Mul
(1990).
he could not help stating and presupposing them in further argument (Fn. 32). These
intuitive judgments seem to have been bedrock for him (in the sense explained in Sect.
1): Misconceiving C4 as an introspection-based judgment, he regarded it as in no need
of further argumentative or evidentiary support, and based other claims on it. Where we
thus expose a bedrock intuition as a cognitive illusion, we establish that the thinker at
issue has no warrant for accepting this intuition or maintaining the claims he bases on it.
Thinkers frequently fail to notice when they fall prey to cognitive illusions, and
typically fail to sufficiently correct the underlying mistakes, when and where they
do notice (Kahneman and Frederick 2002, 2005; Pohl 2004). In the case of C1 to C4,
sufficient correction would involve metaphorical interpretation. To make this possible,
one needs to reverse the substitutions with mappings M and N, thus restore CSG conclusions which have metaphorical implications, and interpret the former in the light
of the latter. As we have seen, this leaves no reference to a ‘mind’ or ‘understanding’.
Many early modern thinkers who speak of ‘minds’ try to place a metaphorical
interpretation on such talk: ‘When I speak of objects as existing in the mind ... I
would not be understood in the gross literal sense, as when bodies are said to exist
in a place’ (Berkeley 1996, p. 239). But they continue to accept M and N, to posit
mind-spaces and understanding-organs, and accordingly seek to place a non-literal
interpretation merely on the spatial terms (‘x is in y’ and ‘x is before y’) carried over
by the CSG inferences employing those mappings, rather than on the entire phrases
that are carried over by CSG inferences without M or N (e.g., ‘x is before our eyes’).35
Whereas proper metaphorical interpretation of the conclusions at issue removes all
reference to perceptual spaces and organs of any kind or description, such superficial
metaphorical interpretation merely leaves us stranded with the puzzling notion of
non-physical ‘locations’ and ‘spaces’ without extension.
Leading proponents of the heuristics and biases programme (Kahneman and Frederick 2002, 2005; Morewedge and Kahneman 2010; Kahneman 2011) have tried to
explain failure to notice or sufficiently correct mistaken intuitive judgments, within the
framework of ‘dual process accounts’ of the interaction of automatic and more effortful cognitive processes (Evans 2008, 2010; Evans and Frankish 2009). According to
all major such accounts, conclusions of spontaneous inferences serve as defaults: They
are the first responses available, and thinkers have to, automatically or deliberately,
decide whether to accept them without further ado or engage in further reasoning.
They are accepted without further ado whenever thinkers are under time pressure or
multi-tasking, or lack motivation for ensuring truth or accuracy. Under more propitious
conditions—such as those obtaining in much unhurried philosophical reflection—the
decision is largely determined by two factors (Simmons and Nelson 2006): First, the more
information inconsistent with the conclusion is available, and the more salient it is, the
more likely thinkers are to engage in effortful reasoning which may modify or overturn
it. Second, the more subjective confidence thinkers have in the truth or accuracy of
the conclusion, the more likely they are to accept it without further ado (Thompson
et al. 2011)—even when aware of information at odds with it. This crucial level of
35 Other explanations of meaning are informed by the basic mapping of the conceptual metaphor Remembering as Spatial Inclusion. See Fischer (2011, pp. 56–57).
subjective confidence, in turn, is largely determined by cognitive feelings, in particular
by feelings of cognitive ease known as ‘fluency’:36 The less effort is felt to be made
when making a spontaneous inference, the more confident we are by and large that
its conclusion is true or accurate (Gill et al. 1998; Kelley and Lindsay 1993). If we
experience significant difficulty, by contrast, this experience of ‘disfluency’ serves as
a cue that the task is difficult and our spontaneous response may be unreliable, and
prompts thinkers to engage in more effortful reasoning (Alter et al. 2007; Alter and
Oppenheimer 2009). While the underlying rationale is perfectly reasonable, it leads
us astray wherever fluency is influenced by factors that have nothing to do with the
content of the intuitive judgment at issue. Recent fluency research has brought out just
how varied and strong these factors are (Alter and Oppenheimer 2009): Reliance on
fluency is liable to make us feel confident about many judgments requiring more care—
which may, though need not, receive appropriate correction
(Thompson et al. 2011).
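The default-acceptance dynamics just summarised can be put schematically. The following is my paraphrase of the cited dual-process claims, not a model from the cited papers; the thresholds and the equation of confidence with fluency are simplifying assumptions.

```python
# Schematic sketch of default-interventionist acceptance of an intuitive
# conclusion, as summarised in the text. Thresholds are arbitrary, and
# equating confidence with fluency is a simplification of this sketch.

def accept_without_scrutiny(fluency, salient_conflict, hurried=False):
    """Return True if the default (intuitive) conclusion is accepted
    without further effortful reasoning.

    fluency: felt ease of the spontaneous inference, in [0, 1]
    salient_conflict: whether salient information conflicts with it
    hurried: time pressure, multi-tasking, or low accuracy motivation
    """
    if hurried:
        return True  # defaults are accepted without further ado
    # Subjective confidence largely tracks fluency; high confidence
    # preempts scrutiny even when conflicting information is available.
    confidence = fluency
    if confidence > 0.8:
        return True
    # Otherwise, salient conflicting information triggers effortful
    # reasoning that may modify or overturn the conclusion.
    return not salient_conflict

# A fluent inference is accepted despite salient conflicting information:
print(accept_without_scrutiny(fluency=0.9, salient_conflict=True))   # True
# A disfluent one with salient conflict prompts effortful reasoning:
print(accept_without_scrutiny(fluency=0.3, salient_conflict=True))   # False
```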
Our above account facilitates an explanation along these lines (to be developed
elsewhere): The steps involved in applying the metaphor heuristic as well as successful INP are all facilitated by the activation of nodes in the semantic network of
memory (‘priming’),37 which increases fluency (Alter and Oppenheimer 2009)—for
reasons that have nothing to do with the content of the intuition at issue. At the same
time, successful INP may transform patently paradoxical conclusions and obscure the
conflict with prior beliefs or other available information. Where either or both things
happen, the intuitions generated will be accepted without further ado—even when
problematic. In addition, even where thinkers do not accept intuitions as they stand,
superficial metaphorical interpretation (above) is liable to obscure the need for more
thorough correction.
The proposed explanation of philosophical intuitions identifies a crucial mistake
and lets us understand why highly competent thinkers fall for it. Our account satisfies
all the desiderata the heuristics and biases programme imposes on explanations of
intuitive judgments (Sect. 2): It employs a multi-purpose judgment heuristic (which
simultaneously meets the requirements of the ABC programme) (Sect. 3); the account
successfully predicts unexpected fallacies (Sect. 4); it can identify fundamental mistakes in automatic cognition that generates intuitive judgments and explain why competent thinkers fail to notice, or sufficiently correct, these mistakes (Sect. 5). Such
explanation of philosophical intuitions facilitates their assessment along the lines set
out (in Sect. 1): It exposes as cognitive illusions vastly influential intuitions which
have been bedrock for important philosophers.
Heuristic-based explanations need not, however, debunk the intuitions explained.
Rather, the tools developed within the ABC programme (Gigerenzer 2008; Gigerenzer
and Sturm 2012; Pohl 2006) allow us to empirically assess (i) the ‘ecological validity’
of the underlying heuristic, which may be high or low, for a given thinker and a
given environment or range of application, as well as (ii) how sensitive its automatic
application, by the thinkers studied, is to changes in its validity between contexts.
36 Thompson et al. (2011, exp. 3) found evidence that cue ambiguity is another relevant factor.
37 For analogical priming and relational fluency, respectively, see, e.g., Spellman et al. (2001) and Day
(2007).
Where no other process (like INP) ‘interferes’, the evidentiary value of the intuitions
a thinker has due to automatic application of a heuristic rule is a function of these
two factors. These studies remain to be done. But, in principle, explanations invoking
the proposed heuristic can not only debunk but also validate philosophically relevant
intuitions. Explanations of this kind can help us separate the wheat from the chaff of
intuition.
Acknowledgments For helpful comments on previous drafts and closely related material I am indebted
to John Collins, Hilary Kornblith, Jennifer Nagel, David Papineau, Finn Spicer, two anonymous referees
for this journal, and audiences in Belfast, Bielefeld, Graz, and London.
References
Aarts, H., & Hassin, R. R. (2005). Automatic goal inference and contagion: On pursuing goals one perceives
in other people’s behavior. In S. M. Latham, J. P. Forgas, & K. D. Williams (Eds.), Social motivation:
Conscious and unconscious processes (pp. 153–167). New York: Cambridge University Press.
Albrecht, J. E., & Myers, J. L. (1998). Accessing distant text information during reading. Discourse
Processes, 26, 87–107.
Alexander, J., & Weinberg, J. (2007). Analytic epistemology and experimental philosophy. Philosophy
Compass, 2, 56–80.
Alter, A. L., Oppenheimer, D. M., Epley, N., & Eyre, R. N. (2007). Overcoming intuition: meta-cognitive
difficulty activates analytic reasoning. Journal of Experimental Psychology: General, 136, 569–576.
Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation.
Personality and Social Psychology Review, 13, 219–235.
Anderson, J. R. (1983). The architecture of cognition. Cambridge: Harvard University Press.
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory
of the mind. Psychological Review, 111, 1036–1060.
Anderson, J., Budiu, R., & Reder, L. (2001). A theory of sentence memory as part of a general theory of
memory. Journal of Memory and Language, 45, 337–367.
Bargh, J. A. (1994). The four horsemen of automaticity. In R. Wyer & T. Srull (Eds.), Handbook of social
cognition, vol. 1 (pp. 1–40). Hillsdale: Erlbaum.
Barton, S. B., & Sandford, A. J. (1993). A case study of anomaly detection: Shallow semantic processing
and cohesion establishment. Memory and Cognition, 21, 477–487.
Bealer, G. (1996). On the possibility of philosophical knowledge. In J. E. Tomberlin (Ed.), Philosophical
perspectives 10: Metaphysics (pp. 1–34). Cambridge: Ridgeview Publishing.
Berkeley, G. (1996). Philosophical works. Ed. by M. Ayers. London: Dent.
Blanchette, I., & Dunbar, K. (2002). Representational change and analogy: How analogical inferences alter
representations. Journal of Experimental Psychology: Learning, Memory and Cognition, 28, 672–685.
Boroditsky, L. (2000). Metaphoric structuring: Understanding time through spatial metaphors. Cognition,
75, 1–27.
Boroditsky, L., & Ramscar, M. (2002). The roles of body and mind in abstract thought. Psychological
Science, 13, 185–188.
Bowdle, B., & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193–216.
Budiu, R., & Anderson, J. R. (2004). Interpretation-based processing: A unified theory of semantic sentence
comprehension. Cognitive Science, 28, 1–44.
Budiu, R. & Anderson, J.R. (2008). Integration of background knowledge in language processing: A unified
theory of metaphor understanding, Moses illusions and text memory. CMU Department of Psychology.
http://repository.cmu.edu/psychology/52.
Cappelen, H. (2012). Philosophy without intuitions. Oxford: Oxford University Press.
Casteel, M. A. (2007). Contextual support and predictive inferences: What do readers generate and keep
available for use? Discourse Processes, 44, 51–72.
Clement, C. A., & Gentner, D. (1991). Systematicity as a selection constraint in analogical mapping.
Cognitive Science, 15, 89–132.
Cole Wright, J. (2010). On intuitional stability: The clear, the strong, and the paradigmatic. Cognition, 115,
491–503.
Cook, A. E., & Guéraud, S. (2005). What have we been missing? The role of general world knowledge in
discourse processing. Discourse Processes, 39, 265–278.
Day, S. B. (2007). Processing fluency for relational structure. PhD dissertation, Northwestern University,
UMI No. 3284195. http://search.proquest.com/docview/304816618.
Day, S. B., & Gentner, D. (2007). Non-intentional analogical inference in text-comprehension. Memory
and Cognition, 35, 39–49.
De Neys, W. (2006). Automatic-heuristic and executive-analytic processing during reasoning: Chronometric
and dual-task considerations. The Quarterly Journal of Experimental Psychology, 59, 1070–1100.
De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of thinking. Cognition,
106, 1248–1299.
Deutsch, R., Kordts-Freudinger, R., Gawronski, B., & Strack, F. (2009). Fast and fragile. A new look at the
automaticity of negation processing. Experimental Psychology, 56, 434–446.
Erickson, T., & Mattson, M. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning
and Verbal Behaviour, 20, 540–551.
Evans, J. S. B. T. (2007). On the resolution of conflict in dual-process theories of reasoning. Thinking and
Reasoning, 13, 321–339.
Evans, J. S. B. T. (2008). Dual processing accounts of reasoning, judgment and social cognition. Annual
Review of Psychology, 59, 255–278.
Evans, J. S. B. T. (2010). Intuition and reasoning: A dual-process perspective. Psychological Inquiry, 21,
313–326.
Evans, J. S. B. T., & Frankish, K. (2009). In two minds: Dual processes and beyond. Oxford: Oxford
University Press.
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the
debate. Perspectives on Psychological Science, 8, 223–241.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and
examples. Artificial Intelligence, 41, 1–63.
Fischer, E. (2011). Philosophical delusion and its therapy. New York: Routledge.
Fogelin, R. (2001). Berkeley and the principles of human knowledge. London: Routledge.
Forbus, K. D., Gentner, D., & Law, K. (1995). MAC/FAC: A model of similarity-based retrieval. Cognitive
Science, 19, 141–205.
Gentner, D. (1983). Structure mapping: A theoretical framework for analogy. Cognitive Science, 7, 155–170.
Gentner, D., & Bowdle, B. (2008). Metaphor as structure-mapping. In R. Gibbs (Ed.), The Cambridge
handbook of metaphor and thought (pp. 109–128). New York: Cambridge University Press.
Gentner, D., & Forbus, K. D. (2011). Computational models of analogy. WIREs Cognitive Science, 2, 266–
276.
Gentner, D., & Grudin, J. (1985). The evolution of mental metaphors in psychology. American Psychologist,
40, 181–192.
Gentner, D., & Kurtz, K. (2006). Relations, objects, and the composition of analogies. Cognitive Science,
30, 609–642.
Gentner, D., Imai, M., & Boroditsky, L. (2002). As time goes by: Evidence for two systems in processing
space-time metaphors. Language and Cognitive Processes, 17, 537–565.
Gentner, D., & Markman, A. (2005). Defining structural similarity. Journal of Cognitive Science, 6, 1–20.
Gentner, D., Ratterman, M., & Forbus, K. (1993). The roles of similarity in transfer: Separating retrievability
from inferential soundness. Cognitive Psychology, 25, 527–575.
Gerrig, R. J., & O’Brien, E. J. (2005). The scope of memory-based processing. Discourse Processes, 39,
225–242.
Gigerenzer, G. (2007). Gut feelings. London: Allen Lane.
Gigerenzer, G. (2008). Rationality for mortals. Oxford: Oxford University Press.
Gigerenzer, G. (2009). Surrogates for theories. APS Observer, 22, 21–23.
Gigerenzer, G., & Regier, T. (1996). How do we tell an association from a rule? Comment on Sloman
(1996). Psychological Bulletin, 119, 23–26.
Gigerenzer, G., & Sturm, T. (2012). How (far) can rationality be rationalised? Synthese, 187, 243–268.
Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. Oxford: Oxford University
Press.
Gill, M. J., Swann, W. B., & Silvera, D. H. (1998). On the genesis of confidence. Journal of Personality
and Social Psychology, 75, 1101–1114.
Goldman, A. (2007). Philosophical intuitions: Their target, their scope, and their epistemic status. Grazer
Philosophische Studien, 74, 1–26.
Goldstein, D., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.
Hannon, B., & Daneman, M. (2001). Susceptibility to semantic illusions: An individual-differences perspective. Memory and Cognition, 29, 449–460.
Hassin, R. R., Bargh, J. A., & Uleman, J. S. (2002). Spontaneous causal inferences. Journal of Experimental
Social Psychology, 38, 515–522.
Hawthorne, J. (2002). Deeply contingent a priori knowledge. Philosophy and Phenomenological Research,
65, 247–269.
Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.
Hesse, M. (1966). Models and analogies in science. Notre Dame: University of Notre Dame Press.
Hesse, M. (2000). Models and analogies. In W. H. Newton-Smith (Ed.), A companion to the philosophy of
science (pp. 299–319). Oxford: Blackwell.
Holyoak, K. J. (2005). Analogy. In K. J. Holyoak & R. Morrison (Eds.), The Cambridge handbook of
thinking and reasoning (pp. 117–142). Cambridge: Cambridge University Press.
Holyoak, K. J., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory and
Cognition, 15, 332–340.
Hummel, J. E., & Holyoak, K. J. (2003). A symbolic-connectionist theory of relational inference and
generalization. Psychological Review, 110, 220–263.
Jackson, F. (2011). On gettier holdouts. Mind and Language, 26, 468–481.
Jäkel, O. (1995). The metaphorical concept of mind. In J. R. Taylor & R. E. McLaury (Eds.), Language
and the cognitive construal of the world (pp. 197–229). Berlin: de Gruyter.
Johnson, M. (2008). Philosophy’s debt to metaphor. In R. W. Gibbs (Ed.), The Cambridge handbook of
metaphor and thought (pp. 39–52). Cambridge: Cambridge University Press.
Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive
judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases. The psychology of
intuitive judgment (pp. 49–81). New York: Cambridge University Press.
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. J. Holyoak & R. Morrison
(Eds.), The Cambridge handbook of thinking and reasoning (pp. 267–293). Cambridge: Cambridge
University Press.
Kamas, E. N., & Reder, L. M. (1995). The role of familiarity in cognitive processing. In R. F. Lorch & E.
J. O’Brien (Eds.), Sources of coherence in reading (pp. 177–202). Hillsdale: Erlbaum.
Kamas, E. N., Reder, L. M., & Ayers, M. S. (1996). Partial matching in the Moses illusion: Response bias
not sensitivity. Memory and Cognition, 24, 687–699.
Kelley, C. M., & Lindsay, D. S. (1993). Remembering mistaken for knowing: Ease of retrieval as a basis
for confidence in answers to general knowledge questions. Journal of Memory and Language, 32, 1–24.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-systems
theories. Perspectives on Psychological Science, 4, 533–550.
Knobe, J., & Nichols, S. (2008). An experimental philosophy manifesto. In J. Knobe & S. Nichols (Eds.),
Experimental philosophy (pp. 3–14). Oxford: Oxford University Press.
Kornblith, H. (2007). Naturalism and intuitions. Grazer Philosophische Studien, 74, 27–49.
Kövecses, Z. (2002). Metaphor. Oxford: Oxford University Press.
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh. New York: Basic Books.
Lassaline, M. E. (1996). Structural alignment in induction and similarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 754–770.
Leech, R., Mareschal, D., & Cooper, R. P. (2008). Analogy as relational priming. Behavioural and Brain
Sciences, 31, 357–414.
Locke, J. (1975/1700). An essay concerning human understanding, 4th edn. Ed. by P. Nidditch. Oxford:
Clarendon Press.
Lupfer, M. B., Clark, L. F., & Hutcherson, H. W. (1990). Impact of context on spontaneous trait and
situational attributions. Journal of Personality and Social Psychology, 58, 239–249.
Markman, A. (1997). Constraints on analogical inference. Cognitive Science, 21, 373–418.
McDonald, P. (2003). History of the concept of mind. Aldershot: Ashgate.
Mercier, H., & Sperber, D. (2009). Intuitive and reflective inferences. In J. Evans & K. Frankish (Eds.), In
two minds: Dual processes and beyond (pp. 149–170). Oxford: Oxford University Press.
Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological
Bulletin, 132, 297–326.
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in Cognitive
Science, 14, 435–440.
Nagel, J. (2010). Knowledge ascriptions and the psychological consequences of thinking about error. Philosophical Quarterly, 60, 286–306.
Nagel, J. (2011). The psychological basis of the Harman-Vogel paradox. Philosophers’ Imprint, 11(5),
1–28.
Nagel, J. (2012). Intuitions and experiments. Philosophy and Phenomenological Research, 85, 495–527.
Nahmias, E., Morris, S. G., Nadelhoffer, T., & Turner, J. (2006). Is incompatibilism intuitive? Philosophy
and Phenomenological Research, 73, 28–53.
O’Brien, E. J. (1995). Automatic components in discourse comprehension. In R. F. Lorch & E. J. O’Brien
(Eds.), Sources of coherence in reading. Hillsdale: Erlbaum.
OED: The Oxford English dictionary, 2nd edn., online edition March 2012. Oxford: Oxford University
Press.
van Oostendorp, H., & de Mul, S. (1990). Moses beats Adam: A semantic relatedness effect on a semantic
illusion. Acta Psychologica, 74, 35–46.
Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin and Review,
11, 988–1010.
Pachur, T., & Hertwig, R. (2006). On the psychology of the recognition heuristic: Retrieval primacy as a
key determinant of its use. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32,
983–1002.
Park, H., & Reder, L. M. (2004). Moses illusion. In R. Pohl (Ed.), Cognitive illusions (pp. 275–291). New
York: Psychology Press.
Perrott, D. A., Gentner, D., & Bodenhausen, G. V. (2005). Resistance is futile: The unwitting insertion of
analogical inferences in memory. Psychonomic Bulletin and Review, 12, 696–702.
Pinillos, N. A., Smith, N., Nair, G. S., Marchetto, P., & Mun, C. (2011). Philosophy’s new challenge:
Experiments and intentional action. Mind and Language, 26, 115–139.
Pohl, R. F. (Ed.). (2004). Cognitive illusions. New York: Psychology Press.
Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making,
19, 251–271.
Pust, J. (2000). Intuitions as evidence. London: Garland Publishing.
Read, D., & Grushka-Cockayne, Y. (2011). The similarity heuristic. Journal of Behavioral Decision Making,
24, 23–46.
Reder, L. M., & Kusbit, G. W. (1991). Locus of the Moses illusion: Imperfect encoding, retrieval, or match?
Journal of Memory and Language, 30, 385–406.
Reverberi, C., Pischedda, D., Burigo, M., & Cherubini, P. (2012). Deduction without awareness. Acta
Psychologica, 139, 244–253.
Rorty, R. (1980). Philosophy and the mirror of nature. Oxford: Blackwell.
Ross, B. (1989). Distinguishing types of superficial similarity: Different effects on the access and use of
earlier problems. Journal of Experimental Psychology: Learning, Memory and Cognition, 15, 456–468.
Royzman, E. B., Cassidy, K. W., & Baron, J. (2003). I know, you know. Review of General Psychology, 7,
407–435.
Shieber, J. (2010). On the nature of thought experiments and a core motivation of experimental philosophy.
Philosophical Psychology, 23, 547–564.
Simmons, J. P., & Nelson, L. D. (2006). Intuitive confidence: Choosing between intuitive and non-intuitive
alternatives. Journal of Experimental Psychology: General, 135, 409–428.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3–22.
Smith, A. D. (1985). Berkeley’s central argument against material substance. In J. Foster & H. Robinson
(Eds.), Essays on Berkeley: A tercentennial celebration. Oxford: Clarendon.
Sosa, E. (2007a). Experimental philosophy and philosophical intuition. Philosophical Studies, 132, 99–107.
Sosa, E. (2007b). Intuitions: Their nature and epistemic efficacy. Grazer Philosophische Studien, 74, 51–67.
Spellman, B. A., & Holyoak, K. J. (1996). Pragmatics in analogical mapping. Cognitive Psychology, 31,
307–346.
Spellman, B. A., Holyoak, K. J., & Morrison, R. G. (2001). Analogical priming via semantic relations.
Memory and Cognition, 29, 383–393.
Spicer, F. (2007). Knowledge and the heuristics of folk psychology. In V. Hendricks & D. Pritchard (Eds.),
New waves in epistemology. London: Palgrave Macmillan.
Stanovich, K. E. (2011). Rationality and the reflective mind. New York: Oxford University Press.
Stanovich, K., & West, R. (2008). On the relative independence of thinking biases and cognitive ability.
Journal of Personality and Social Psychology, 94, 672–695.
Stich, S. (2012). Do different groups have different epistemic intuitions? A reply to Jennifer Nagel. Philosophy and Phenomenological Research. doi:10.1111/j.1933-1592.2012.00590.x.
Sullivan, K. (2007). Metaphoric extension and invited inferencing in semantic change. Culture, Language
and Representation, 5, 257–274.
Swain, S., Alexander, J., & Weinberg, J. (2008). The instability of philosophical intuitions: Running hot
and cold on Truetemp. Philosophy and Phenomenological Research, 76, 138–155.
Sweetser, E. (1990). From etymology to pragmatics: Metaphorical and cultural aspects of semantic structure. Cambridge: Cambridge University Press.
Thibodeau, P. H., & Boroditsky, L. (2011). Metaphors we think with: The role of metaphor in reasoning.
PLoS ONE, 6, e16782. doi:10.1371/journal.pone.0016782.
Thompson, V. A., & Prowse Turner, J. A. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63, 107–140.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability.
Cognitive Psychology, 5, 207–232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185,
1124–1131.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in
probability judgment. Psychological Review, 90, 293–315.
Uleman, J. S., Newman, L. S., & Moskowitz, G. B. (1996). People as flexible interpreters: Evidence and
issues from spontaneous trait inferences. Advances in Experimental Social Psychology, 28, 211–279.
Uleman, J. S., Saribay, S. A., & Gonzalez, C. M. (2008). Spontaneous inferences, implicit impressions, and
implicit theories. Annual Review of Psychology, 59, 329–360.
Vogel, J. (1990). Are there counterexamples to the closure principle? In M. Ross & G. Ross (Eds.), Doubting:
Contemporary perspectives on scepticism (pp. 13–27). Dordrecht: Kluwer.
Volz, K. G., Schubotz, R. I., Raab, M., Schooler, L. J., Gigerenzer, G., & von Cramon, D. Y. (2006). Why you
think Milan is larger than Modena. Journal of Cognitive Neuroscience, 18, 1924–1936.
Wharton, C. M., Holyoak, K. J., Lange, T. E., Wickens, T. D., & Melz, E. R. (1994). Below the surface:
Analogical similarity and retrieval competition in reminding. Cognitive Psychology, 26, 64–101.
Wharton, C. M., Holyoak, K. J., & Lange, T. E. (1996). Remote analogical reminding. Memory and Cognition, 24, 629–643.
Williamson, T. (2005). Contextualism, subject-sensitive invariantism and knowledge of knowledge. Philosophical Quarterly, 55, 213–235.
Williamson, T. (2007). The philosophy of philosophy. Oxford: Blackwell.
Williamson, T. (2011). Philosophical expertise and the burden of proof. Metaphilosophy, 42, 215–229.
Wittgenstein, L. (2005). The big typescript: TS213. Oxford: Blackwell.