Sherrilyn Roush defends a new theory of knowledge and evidence, based on the idea of "tracking" the truth, as the best approach to a wide range of questions about knowledge-related phenomena. The theory explains, for example, why scepticism is frustrating, why knowledge is power, and why better evidence makes you more likely to have knowledge. Tracking Truth provides a unification of the concepts of knowledge and evidence, and argues against traditional epistemological realist and anti-realist positions about scientific theories and for a piecemeal approach based on a criterion of evidence, a position Roush calls "real anti-realism." Epistemologists and philosophers of science will recognize this as a significant original contribution.
How confident does the history of science allow us to be about our current well-tested scientific theories, and why? The scientific realist thinks we are well within our rights to believe our best-tested theories, or some aspects of them, are approximately true. Ambitious arguments have been made to this effect, such as that over historical time our scientific theories are converging to the truth, that the retention of concepts and claims is evidence for this, and that there can be no other serious explanation of the success of science than that its theories are approximately true. There is appeal in each of these ideas, but making such strong claims has tended to be hazardous, leaving us open to charges that many typical episodes in the history of science just do not fit the model. (See, e.g., Laudan 1981.) Arguing for a realist attitude via general claims – properties ascribed to sets of theories, trends we see in progressions of theories, and claimed links between general properties like success and truth that apply or fail to apply to any theory regardless of its content – is like arguing for or via a theory of science, which brings with it the obligation to defend that theory. I think a realist attitude toward particular scientific theories for which we have evidence can be maintained rationally without such a theory, even in the face of the pessimistic induction over the history of science. The starting point at which questions arise as to what we have a right to believe about our theories is one where we have theories and evidence for them, and we are involved in the activity of apportioning our belief in each particular theory or hypothesis in accord with the strength of the particular evidence. The devil's advocate sees our innocence and tries his best to sow seeds of doubt. If our starting point is as I say, though, the innocent believer in particular theories does not have to play offense and propose sweeping views about science in general, but only to respond to the skeptic's challenges; the burden of initial argument is on the skeptic.
This paper defends the naïve thesis that the method of experiment has per se an epistemic superiority over the method of computer simulation, a view that has been rejected by some philosophers writing about simulation, and whose grounds have been hard to pin down by its defenders. I further argue that this superiority does not come from the experiment's object being materially similar to the target in the world that the investigator is trying to learn about, as both sides of the dispute over the epistemic superiority thesis have assumed. The superiority depends on features of the question and on a property of natural kinds that has been mistaken for material similarity. Seeing this requires holding other things equal in the comparison of the two methods, thereby exposing that, under the conditions that will be specified, the simulation is necessarily epistemically one step behind the corresponding experiment. Practical constraints like feasibility and morality mean that scientists do not often face an other-things-equal comparison when they choose between experiment and simulation. Nevertheless, I argue, awareness of this superiority and of the general distinction between experiment and simulation is important for maintaining motivation to seek answers to new questions.
I develop a general framework with a rationality constraint that shows how coherently to represent and deal with second-order information about one's own judgmental reliability. It is a rejection of and generalization away from the typical Bayesian requirements of unconditional judgmental self-respect and perfect knowledge of one's own beliefs, and is defended by appeal to the Principal Principle. This yields consequences about maintaining unity of the self, about symmetries and asymmetries between the first- and third-person, and a principled way of knowing when to stop second-guessing oneself. Peer disagreement is treated as a special case where one doubts oneself because of news that an intellectual equal disagrees. This framework, and variants of it, imply that the typically stated belief that an equally reliable peer disagrees is incoherent, and thus that pure rationality constraints without further substantive information cannot give an answer as to what to do. The framework also shows that treating both ourselves and others as thermometers in the disagreement situation does not imply the Equal Weight view.
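For readers who want the shape of the constraints at issue, here is a minimal sketch; the notation is illustrative, not Roush's own. Unconditional judgmental self-respect is modeled on the Principal Principle for objective chance:

\[ P(A \mid ch(A) = x) = x \qquad \text{(Principal Principle)} \]
\[ P(q \mid P(q) = x) = x \qquad \text{(unconditional self-respect)} \]

The generalization conditions instead on one's judged reliability, roughly:

\[ P(q \mid P(q) = x \;\wedge\; \text{my reliability on such judgments is } y) = y, \]

so that evidence of one's own unreliability, or news that a peer disagrees, can rationally pull one's credence away from one's first-order judgment.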
It is received wisdom that the skeptic has a devastating line of argument in the following. You probably think, he says, that you know that you have hands. But if you knew that you had hands, then you would also know that you were not a brain in a vat, a brain suspended in fluid with electrodes feeding you perfectly coordinated impressions that are generated by a supercomputer, of a world that looks and moves just like this one. You would know you weren't in this state if you knew you had hands, since having hands implies you are no brain in a vat. You obviously don't know you're not a brain in a vat, though—you have no evidence that would distinguish that state from the normal one you think you're in. Therefore, by modus tollens, you don't know you have hands. At least, the skeptic has a devastating argument, it is thought, if we grant him closure of knowledge under known implication, which many of us are inclined to do: roughly, if you know p, and you know that p implies q, then you know q.
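Laid out in standard epistemic-logic notation, with K for "the subject knows that," h for "I have hands," and v for "I am a brain in a vat," closure and the skeptic's argument run:

\[ (Kp \wedge K(p \rightarrow q)) \rightarrow Kq \qquad \text{(closure under known implication)} \]

1. \( Kh \rightarrow K\neg v \) (by closure, since \( h \rightarrow \neg v \) is known)
2. \( \neg K\neg v \) (no evidence distinguishes vathood from normalcy)
3. \( \neg Kh \) (modus tollens, 1, 2)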
When we get evidence that tells us our belief-forming mechanisms may not be reliable, this presents a thorny set of questions about whether and how to revise our original belief. This article analyzes aspects of the problem and a variety of approaches to its solution.
There is a widespread view that in order to be rational we must mostly know what we believe. In the probabilistic tradition this is defended by arguments that a person who failed to have this knowledge would be vulnerable to sure loss, or probabilistically incoherent. I argue that even gross failure to know one's own beliefs need not expose one to sure loss, and does not if we follow a generalization of the standard bridge principle between first-order and second-order beliefs. This makes it possible for a subject to use probabilistic decision theory to manage in a rational way cases of potential failure of this self-knowledge, as we find in implicit bias. Through such cases I argue that it is possible for uncertainty about what our beliefs are to be not only rationally permissible but advantageous.
Knowledge requires more than mere true belief, and we also tend to think it is more valuable. I explain the added value that knowledge contributes if its extra ingredient beyond true belief is tracking. I show that the tracking conditions are the unique conditions on knowledge that achieve for those who fulfill them a strict Nash Equilibrium and an Evolutionarily Stable Strategy in what I call the True Belief Game. The added value of these properties, intuitively, includes preparedness and an expectation of survival advantage. On this view knowledge is valuable not because knowledge persists but because it makes the bearer more likely to maintain an appropriate belief state—possibly nonbelief—through time and changing circumstances. When Socrates concluded that knowledge of the road to Larissa was no more valuable than true belief for the purpose of getting to Larissa, he did not take into account that one might want to be prepared for a possible meeting with a misleading sophist along the way, or for the possibility of road work.
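For reference, the two game-theoretic properties invoked have standard definitions (the payoffs of the True Belief Game itself are specified in the paper and not reproduced here). Writing \( E(T, S) \) for the expected payoff of playing strategy T against S:

\[ S \text{ is a strict Nash equilibrium iff } E(S, S) > E(T, S) \text{ for every } T \neq S. \]
\[ S \text{ is an ESS iff for every } T \neq S:\; E(S,S) > E(T,S), \text{ or } E(S,S) = E(T,S) \text{ and } E(S,T) > E(T,T). \]

A strict Nash equilibrium is automatically an ESS, which is why the two results travel together: a population playing S cannot be invaded by a rare mutant strategy.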
This paper argues that if knowledge is defined in terms of probabilistic tracking then the benefits of epistemic closure follow without the addition of a closure clause. (This updates my definition of knowledge in Tracking Truth (2005).) An important condition on this result is found in "Closure Failure and Scientific Inquiry" (2017).
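In rough probabilistic form, following the formulation defended in Tracking Truth with details and thresholds suppressed, S's belief tracks the truth of p when both conditional probabilities are high:

\[ P(\neg b(p) \mid \neg p) > s \qquad \text{(variation: were p false, S would likely not believe it)} \]
\[ P(b(p) \mid p) > s \qquad \text{(adherence: were p true, S would likely believe it)} \]

where \( b(p) \) abbreviates "S believes p" and s is a high threshold. The paper's claim is that, so defined, tracking delivers the benefits of closure without a separate clause requiring that knowledge be closed under known implication.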
Relatively few philosophers of science today could do all of what Nicholas Maxwell does without hesitating: treat theoretical physics as indicative of all science because it is deemed fundamental, observe and expect science to exhibit a cumulative history of more and more unified knowledge, claim to solve the problem of induction, and insist that philosophy, particularly metaphysics, is crucially relevant to ongoing progress in science. Maxwell is distinctly out of fashion—this is no dappled world—but in hewing to a line he has been developing for decades he presents several intriguing and intertwined proposals for understanding and continuing the progress of physics over the last three and a half centuries toward a single, true theory of everything.
In the last three decades several cosmological principles and styles of reasoning termed 'anthropic' have been introduced into physics research and popular accounts of the universe and human beings' place in it. I discuss the circumstances of 'fine tuning' that have motivated this development, and what is common among the principles. I examine the two primary principles, and find a sharp difference between these 'Weak' and 'Strong' varieties: contrary to the view of the progenitors that all anthropic principles represent a departure from Copernicanism in cosmology, the Weak Anthropic Principle is an instance of Copernicanism. It has close affinities with the step of Copernicus that Immanuel Kant took himself to be imitating in the 'critical' turn that gave rise to the Critique of Pure Reason. I conclude that the fact that a way of going about natural science mentions human beings is not sufficient reason to think that it is a subjective approach; in fact, it may need to mention human beings in order to be objective.
The transferability problem—whether the results of an experiment will transfer to a treatment population—affects not only Randomized Controlled Trials but any type of study. The problem for any given type of study can also, potentially, be addressed to some degree through many different types of study. The transferability problem for a given RCT can be investigated further through another RCT, but the variables to use in the further experiment must be discovered. This suggests we could do better on the epistemological problem of transferability by promoting, in the repeated process of formulating public health guidelines, feedback loops of information from the implementation setting back to researchers who are defining new studies.
This paper addresses two examples due to Peter Achinstein purporting to show that the positive relevance view of evidence is too strong, that is, that evidence need not raise the probability of what it is evidence for. The first example can work only if it makes a false assumption. The second example fails because what Achinstein claims is evidence is redundant with information we already have. Without these examples Achinstein is left without motivation for his account of evidence, which uses the concept of explanation in addition to that of probability.
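The target view is the standard probabilistic one:

\[ e \text{ is evidence for } h \;\text{ iff }\; P(h \mid e) > P(h), \]

so a successful counterexample must exhibit an e that intuitively counts as evidence for h while failing to raise h's probability. The paper's diagnosis is that Achinstein's two candidate e's cannot do this job: one only under a false assumption, the other because its content is already contained in the background information, so that conditioning on it cannot change \( P(h) \).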
Less discussed than Hume's skepticism about what grounds there could be for projecting empirical hypotheses is his concern with a skeptical regress that he thought threatened to extinguish any belief when we reflect that our reasoning is not perfect. The root of the problem is the fact that a reflection about our reasoning is itself a piece of reasoning. If each reflection is negative and undermining, does that not give us a diminution of our original belief to nothing? It requires much attention to detail, we argue, to determine whether or not there is a skeptical problem in this neighborhood. Consider that if we subsequently doubt a doubt we had about our reasoning, that would suggest a restoration of some of the force of our original belief. We would then have instead of runaway skepticism an alternating sequence of pieces of skeptical reasoning that cancel each other's effects on our justification in the original proposition, at least to some degree. We will argue that the outcome of the sequence of reflections Hume is imagining depends on information about a given case that is not known a priori. We conclude this from the fact that under three precise, explanatory, and viable contemporary reconstructions of what this kind of reasoning about reasoning could be like and how it has the potential to affect our original beliefs, a belief-extinguishing regress is not automatic or necessary.
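One toy reconstruction of the threatened regress, offered here as an illustration only and simpler than any of the three reconstructions the paper examines: suppose each negative reflection about one's reasoning discounts one's justification by a factor \( d_i < 1 \), so that after n reflections confidence in the original proposition is

\[ c_n = c_0 \prod_{i=1}^{n} d_i . \]

If every \( d_i \le d < 1 \), then \( c_n \to 0 \): the runaway skepticism Hume feared. But if a doubt about a doubt at a later stage contributes a restoring factor greater than 1, the product need not go to zero, which is one way of seeing why the outcome turns on facts about the particular sequence rather than on anything knowable a priori.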
In Tracking Truth I undertook a broader project than is typical today toward questions about knowledge, evidence, and scientific realism. The range of knowledge phenomena is much wider than the kind of homely examples—such as "She has a bee in her bonnet"—that are often the fare in discussions of knowledge. Scientists have knowledge gained in sophisticated and deliberate ways, and non-human animals have reflexive and rudimentary epistemic achievements that we can easily slip into calling "knowledge." What is it about knowledge that makes it natural for us to use the same word in cases that are so vastly different? How is it possible for knowledge to have evolved? What is it about knowledge that it should enhance our power over nature, as Francis Bacon observed? What is it about evidence and knowledge that makes you more likely to have the latter when you have the former? Specialization is necessary to progress, but the division of labor it requires has allowed such questions to fall through the gaps between discussions. These gaps are opportunities. Sometimes newly discovered problems can bring new and better answers even to old questions. The questions I have asked above are "Why?" questions expressed as (apparently) Socratic "What is?" questions, and that is the approach taken in the first five chapters of this book, to offer explanations of familiar phenomena on the basis of rigorous definitions of knowledge and evidence. One might object that this is an old, not a new, style of answer, and one that I ought to be educated enough to reject. Many have thought the project of giving necessary and sufficient conditions for knowledge was in its death rattle long ago. The most common argument for this conclusion is an empirical one, that no such attempt has ever been successful in giving the right answer for all examples. And when one asks, as one must, what the "right" answer would be answering to anyway, the project can look even more depressing. But even if there is a clear...
On analogy with testimony, I define a notion of a scientific theory's lacking or having candor, in a testing situation, according to whether the theory under test is probabilistically relevant to the processes in the test procedures, and thereby to the reliability of test outcomes. I argue that this property identifies what is distinctive about those theories that Karl Popper denounced as exhibiting "reinforced dogmatism" through their self-protective behavior (e.g., psychoanalysis, Hegelianism, Marxism). I explore whether lack of candor interferes with the testing of theories, and conclude that (1) our default attitude toward theories that lack candor in a given test should be suspicion, but (2) the circumstance that a theory lacks candor in a testing situation does not preclude obtaining independent evidence for the auxiliary assumptions to which the theory is probabilistically relevant, and thereby eliminating the problem that lack of candor creates. Thus, Popper was right to think that lack of candor is a bad thing, but wrong to conclude that candor is a criterion of the scientificity of a theory. Seeing this requires recognition of some differences between intuitive relevance and probabilistic relevance, and proper appreciation of the notion of screening off and of the fact that probabilistic relevance is not transitive.
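The probabilistic notions doing the work in that conclusion are standard. A is probabilistically relevant to B iff \( P(B \mid A) \neq P(B) \); C screens off A from B iff \( P(B \mid A \wedge C) = P(B \mid C) \). And relevance is not transitive: \( P(B \mid A) \neq P(B) \) and \( P(C \mid B) \neq P(C) \) do not entail \( P(C \mid A) \neq P(C) \). For instance (a textbook case, not the paper's): toss two fair coins, and let A = "coin 1 lands heads," C = "coin 2 lands heads," B = "at least one head." Then A is relevant to B, since \( P(B \mid A) = 1 \neq 3/4 \), and B is relevant to C, since \( P(C \mid B) = 2/3 \neq 1/2 \), yet A and C are independent.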
Self-knowledge has always played a role in healthcare since a person needs to be able to accurately assess her body or behaviour in order to determine whether to seek medical help. But more recently it has come to play a larger role, as healthcare has moved from a more paternalistic model to one where patients are expected to take charge of their health; as we realise that early detection, and hence self-examination, can play a crucial role in outcomes; as medical science improves and makes more terminal illnesses into chronic conditions requiring self-management; as genetic testing makes it possible to have more information about our futures; and with the advent of personal electronic devices that make it easy for a person to gather accurate real-time information about her body. It can be hard to get good information about oneself, and even harder to know what to do with it. Sometimes self-knowledge is needed for a good outcome, but sometimes it is useless, or worse. For instance, breast self-examination can lead to over-treatment, learning that one has a predisposing gene can create a detrimental illusion of knowing more about the future than one does, and data about one's vital signs can be meaningless if taken out of a context of interpretation. This collection explores how these and other issues play out in a variety of medical specialities.
This is a very short textbook on probabilistic reasoning, expected utility decision-making, cognitive biases, and self-correction, especially in application to medical examples. It also includes a chapter on concepts of health.
This develops a framework for second-order conditionalization on statements about one's own epistemic reliability. It is the generalization of the framework of "Second-Guessing" (2009) to the case where the subject is uncertain about her reliability. See also "Epistemic Self-Doubt" (2017).
One of the most common criticisms one hears of the idea of granting a legitimate role for social values in theory choice in science is that it just doesn't make sense to regard social preferences as relevant to the truth or to the way things are. "What is at issue," wrote Susan Haack, is "whether it is possible to derive an 'is' from an 'ought.' " One can see that this is not possible, she concludes, "as soon as one expresses it plainly: that propositions about what states of affairs are desirable or deplorable could be evidence that things are, or are not, so" (Haack 1993a, 35, emphasis in original). The purpose of this chapter is not to determine whether this widespread view is correct, but rather to show that even if we grant it (which I do), we may still consistently believe that social values have a legitimate role in theory choice in science. I will defend this conclusion by outlining a view about social values and theory choice that is available to a Constructive Empiricist anti-realist, but not to a realist.
Why should we make our beliefs consistent or, more generally, probabilistically coherent? That it will prevent sure losses in betting and that it will maximize one's chances of having accurate beliefs are popular answers. However, these justifications are self-centered, focused on the consequences of our coherence for ourselves. I argue that incoherence has consequences for others because it is liable to mislead them into false beliefs about one's beliefs and false expectations about one's behavior. I argue that the moral obligation of truthfulness thus constrains us either to conform to the logic our audience assumes we use, to educate them in a new logic, or to give notice that we will do neither. This does not show that probabilistic coherence is uniquely suited to making truthful communication possible, but I argue that classical probabilistic coherence is superior to other logics for maximizing efficiency in communication.
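The sure-loss justification being deflated here is the standard Dutch book argument; a worked textbook instance, not the paper's own example: suppose your credences violate the requirement \( P(p) + P(\neg p) = 1 \), say \( P(p) = 0.6 \) and \( P(\neg p) = 0.6 \). A bookie who takes those credences as your fair betting rates sells you a ticket paying 1 if p for 0.6, and a ticket paying 1 if \( \neg p \) for 0.6. You pay 1.2, and exactly one ticket pays off whichever way p turns out, so you collect 1: a guaranteed loss of 0.2.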
Epistemic injustice is injustice to a person qua knower. In one form of this phenomenon a speaker's testimony is denied credence in a way that wrongs them. I argue that the received definition of this testimonial injustice relies too heavily on epistemic criteria that cannot explain why the moral concept of injustice should be invoked. I give an account of the nature of the wrong of epistemic injustice that has it depend not on the accuracy of judgments that are used or made in the process of deciding whether to listen to or trust a speaker, but on whether the basis of the decision about a speaker is their reliability or their identity, and the account explains why the latter is a moral wrong. A key difference between the two accounts is how they classify the use of true statistical generalizations connecting identity and reliability. The received view implies that this cannot be an injustice, while the view proposed here implies that it can. As such the new view appears to imply a conflict between moral and epistemic obligations: it is morally wrong to use true statistical generalizations in certain contexts, yet they are part of our evidence, and we are epistemically obligated to take all of our evidence into account. I reconcile these two thoughts without adopting the currently popular view that a belief's being morally wrong makes it epistemically unjustified, and I argue that following the principle of total evidence encourages epistemic justice rather than thwarting it.
Over the centuries since the modern scientific revolution that started with Copernicus, Galileo, Kepler, and Newton, two things have changed that have required reorientation of our assumptions and re-education of our reflexes. First, we have learned that even the very best science is fallible; eminently successful theories investigated and supported through the best methods, and by the best evidence available, might be not just incomplete but wrong. That is, it is possible to have a justified belief that is false. Second, we have learned that it is impossible, even for scientists, to maintain the Enlightenment ideal of "thinking for oneself" on every matter about which we want to have, and do think we have, knowledge; the volume of information involved makes us all epistemically dependent on others. Scientists in practice have adjusted to these developments much more easily than have lay people. It is also easier to adjust in scientific practice than it is to explain these matters explicitly and accurately to others. To do so it is helpful to consider our epistemological situation precisely, and to understand the broader cultural ideas and historical forces at work in modern science and its public reception.
Health anxiety is, among other things, a response to a universal epistemological problem about whether changes in one's body indicate serious illness, a problem that grows more challenging to the individual with age and with every advance in medical science, detection, and treatment. There is growing evidence that dysfunctional metacognitive beliefs – beliefs about thinking – are the driving factor, with dysfunctional substantive beliefs about the probability of illness a side-effect, and that Metacognitive Therapy (MCT) is more effective than Cognitive Behavioral Therapy (CBT). However, hypochondria is distinct from other forms of anxiety, I argue, in ways that make some reality-checking techniques of CBT and MCT of limited usefulness. I propose a Re-Calibration Technique (RCT) that complements these therapies by focusing on a metacognitive belief that has not been studied: the patient's presumption of his own personal reliability in judging symptoms, an assumption exposed every time he disagrees with a doctor. I propose a technique whereby a patient keeps a long-term register of every episode of alarm about symptoms and its resolution, possibly years later. When healthcare-seeking impulses arise the patient then uses his own track record to re-calibrate his confidence that medical attention is needed. The new technique allows one to improve self-judgment about whether one has an illness or not by improving self-knowledge of one's own reliability.
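A minimal sketch of the arithmetic such a register supports, offered as an illustration; the technique as proposed involves more than this. If the register records n episodes of symptom alarm over the years, of which k were eventually resolved as genuinely requiring medical attention, then the track-record estimate of the patient's reliability as a judge of his own symptoms is

\[ \hat{r} = k / n . \]

A patient whose felt certainty in a new alarm is, say, 0.9, but whose register yields \( \hat{r} = 0.1 \), has concrete grounds for re-calibrating downward before seeking care, the kind of reality-check that disagreement with a doctor otherwise provides only episodically.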
It is widely accepted that in fallible reasoning potential error necessarily increases with every additional step, whether inferences or premises, because it grows in the same way that the probability of a lengthening conjunction shrinks. As it stands, this is disappointing but, I will argue, not out of keeping with our experience. However, consulting an expert, proof-checking, constructing gap-free proofs, and gathering more evidence for a given conclusion also add more steps, and we think these actions have the potential to improve our reliability or justifiedness. Thus, the received wisdom about the growth of error implies a skepticism about the possibility of improving our reliability and level of justification through effort. Paradoxically, and even more implausibly, taking steps to decrease your potential error necessarily increases it. I will argue that the self-help steps listed here are of a distinctive type, involving composition rather than conjunction. Error grows differently over composition than over conjunction, I argue, and this dissolves the apparent paradox.
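The conjunction rule that generates the received wisdom is uncontroversial: for independent steps,

\[ P(p_1 \wedge \dots \wedge p_n) = \prod_{i=1}^{n} P(p_i), \]

so twenty steps that are each 99% reliable leave at most \( 0.99^{20} \approx 0.82 \) confidence in the conclusion. By contrast, and as an illustration of the composition point rather than the paper's formal treatment: composing a fallible process with a fallible but independent check multiplies error probabilities rather than reliabilities. If each of two independent checks misses a given error with probability 0.1, the probability both miss it is 0.01, so added steps of this kind shrink potential error rather than grow it.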
Many who think that naked statistical evidence alone is inadequate for a trial verdict think that use of probability is the problem, and something other than probability – knowledge, full belief, causal relations – is the solution. I argue that the issue of whether naked statistical evidence is weak can be formulated within the probabilistic idiom, as the question whether likelihoods or only posterior probabilities should be taken into account in our judgment of a case. This question also identifies a major difference between the Process Reliabilist and Probabilistic Tracking views of knowledge and other concepts. Though both are externalist, and probabilistic, epistemic theories, Tracking does and Process Reliabilism does not put conditions on likelihoods. So Tracking implies that a naked statistic is not adequate evidence about an individual, and does not yield knowledge, whereas the Reliabilist thinks it gives justified belief and knowledge. Not only does the Tracking view imply that naked statistical evidence is insufficient for a verdict, but it gives us resources to explain why, in terms of risk and the special conditions of a trial.
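The contrast can be put in standard Bayesian terms with a gatecrasher-style illustration (the numbers here are mine, not the paper's): 99 of 100 attendees crashed the gate, and the naked statistical evidence E is just that the defendant attended. The posterior is high, \( P(\text{guilt} \mid E) = 0.99 \), but E holds whether or not the defendant is guilty, so the likelihoods do not discriminate:

\[ P(E \mid \text{guilt}) = P(E \mid \neg\text{guilt}) . \]

Conditions on likelihoods, like Tracking's, therefore fail here: were the defendant innocent, one would believe him guilty on the very same evidence. A view with no conditions on likelihoods registers no defect.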
Replies. Sherrilyn Roush - 2009 - Philosophy and Phenomenological Research 79(1): 240-247.
Reply to Goldman: I would like to thank Alvin for a spirited, and gentlemanly, debate we've had on these issues, which is extended further here. Alvin is exactly right that if we make his assumption about maximum specificity and deducibility (which I have doubts about), then on my view of knowledge Sphere Guy doesn't know there's a sphere in front of him. This may sound silly when we focus on his tactile access to the sphere in the actual world, but if we take a broader view we see that there is more at stake than this. Contrary to Alvin's impression, methods are not at all excised from my view of knowledge. My theory of how to judge whether someone knows requires us to consider everything (probable) that is and would be responsible for the fact that the person believes or not, whether these occur in his head or in the world, which the formulation in terms of probability helps to make very clear. (See Chapter 3.) Ironically, my refusal to relativize to method has us taking into consideration more facts about the subject's method than Alvin's criteria do, for my view takes into account, as appropriate, what process the person would have used and has a tendency to use, and not just the properties of the one he happened in fact to use. When the fact that a method was used by a subject in coming to believe p is independent of the truth of p, which is actually most of the time in our lives, the conditions of application of the variation condition ensure that we evaluate the subject by considering only what he would do and how he would fare in his beliefs were he to use that method he actually used. So, under that condition, my view agrees with Alvin, and Nozick also. But when whether a subject used that method is not independent of the truth value of p, then the variation condition in my view says we must consider in addition the subject's resulting beliefs in all probable scenarios where he is such that he might well...
There is much disagreement about how extensive a role theoretical mind-reading, behavior-reading, and simulation each have and need to have in our knowing and understanding other minds, and how each method is implemented in the brain, but less discussion of the epistemological question what it is about the products of these methods that makes them count as knowledge or understanding. This question has become especially salient recently as some have the intuition that mirror neurons can bring understanding of another's action despite involving no higher-order processing, whereas most epistemologists writing about understanding think that it requires reflective access to one's grounds, which is closer to the intuitions of other commenters on mirror neurons. I offer a definition of what it is that makes something understanding that is compelling independently of the context of cognition of other minds, and use it to show two things: 1) that theoretical mind-reading and simulation bring understanding in virtue of the same epistemic feature, and 2) why the kind of motor representation without propositional attitudes that is done by mirror neurons is sufficient for action understanding. I further suggest that more attention should be paid to the potential disadvantages of a simulative method of knowing. Though it can be more efficient in some cases, it can also bring vulnerability, wear and tear on one's personal equipment, and unintended mimicry.
In the aftermath of Gettier's examples, knowledge came to be thought of as what you would have if in addition to a true belief and your favorite epistemic goody, such as justifiedness, you also were ungettiered, and the theory of knowledge was frequently equated, especially by its detractors, with the project of pinning down that extra bit. It would follow that knowledge contributes something distinctive that makes it indispensable in our pantheon of epistemic concepts only if avoiding gettierization has a value that can be explained without presupposing the value of knowledge. Tracking-type knowledge has a value that no other logically possible conditions on true belief do. As an Evolutionarily Stable Strategy it preserves appropriate belief states through time and changing circumstances. If we characterize gettierization through the concept of relevance matching, then we see that avoiding gettierization has a value independent of that of knowledge, namely, understanding, and that it is unnecessary to add a clause to the tracking conditions to make them suppress gettierization directly, though fallibly. The bright line of value is between gettierization avoidance and understanding on the one hand and knowledge on the other, and so should be the bright line defining concepts. The concept of relevance matching is key to a definition of what it is to understand why p is true, as opposed merely to knowing that p is true. Perfect tracking implies perfect relevance matching, so knowledge and understanding are intimately connected, but understanding also requires that one own states that accomplish the relevance matching rather than achieving it vicariously. The theory of understanding based on relevance matching implies that understanding requires appreciation of not only p but its connections to other matters, and explains how it is possible to know that p is true without understanding why. The view implies that understanding is literally simulation, and is suggestive about understanding other minds.