What is real? Less than you might think. We advocate austere metaphysical realism—a form of metaphysical realism claiming that a correct ontological theory will repudiate numerous putative entities and properties posited in everyday thought and discourse, and will even repudiate numerous putative objects and properties posited by well-confirmed scientific theories. We have lately defended a specific version of austere metaphysical realism which asserts that there is really only one concrete particular, viz., the entire cosmos (see Horgan and Potrč (2000, 2002), Potrč (2003)). But there are various potential versions of the generic position we are here calling austere metaphysical realism; and it is the generic view that constitutes the ontological part of the overall approach to realism and truth that we will describe here. What is true? More than you might think, given our austere metaphysical realism. We maintain that truth is semantically correct affirmability, under contextually operative semantic standards. We also maintain that most of the time, the contextually operative semantic standards work in such a way that semantic correctness (i.e., truth) is a matter of indirect correspondence rather than direct correspondence between thought or language on the one hand, and the world on the other. When correspondence is indirect rather than direct, a given statement (or thought) can be true even if the correct ontology does not include items answering to all the referential commitments (as we will here call them) of the statement. This means that even if a putative object is repudiated by a correct ontological theory, ordinary statements that are putatively about that object may still be true. For instance, the statement “The University of St. Andrews is in Scotland” can be semantically correct (i.e., true) even if the right ontology does not include any entity answering to the referring term ‘The University of St. Andrews’, or any entity…
For the last 20 years or so, philosophers of mind have been using the term ‘qualia’, which is frequently glossed as standing for the “what-it-is-like” of experience. The examples of what-it-is-like that are typically given are feelings of pain or itches, and color and sound sensations. This suggests an identification of the experiential what-it-is-like with such states. More recently, philosophers have begun speaking of the “phenomenology” of experience, which they have also glossed as “what-it-is-like”. Many say, for example, that any acceptable materialism—or any acceptable account of the relation of mind and body—must “respect the phenomenology.” Typically, no examples beyond those mentioned in the first paragraph are offered. This suggests that the picture of the phenomenology that “must be respected” is the what-it-is-like of bodily sensations, of sensations that occur in perception, and perhaps of certain analogous nonperceptual states, such as imaginings and image-like rememberings. According to the suggested picture, all there is to phenomenology is such states; intentional mental states—as such—have no phenomenology; there is nothing that it is like to undergo them. Although beliefs and desires are intentionally directed—i.e., they have aboutness—these mental states allegedly are not inherently phenomenal. On this view, there is nothing that it is like to be…
You are given a choice between two envelopes. You are told, reliably, that each envelope has some money in it—some whole number of dollars, say—and that one envelope contains twice as much money as the other. You don’t know which has the higher amount and which has the lower. You choose one, but are given the opportunity to switch to the other. Here is an argument that it is rationally preferable to switch: Let x be the quantity of money in your chosen envelope. Then the quantity in the other is either 1/2x or 2x, and these possibilities are equally likely. So the expected utility of switching is 1/2(1/2x) + 1/2(2x) = 1.25x, whereas that for sticking is only x. So it is rationally preferable to switch. There is clearly something wrong with this argument. For one thing, it is obvious that neither choice is rationally preferable to the other: it’s a tossup. For another, if you switched on the basis of this reasoning, then the same argument could immediately be given for switching back; and so on, indefinitely. For another, there is a parallel argument for the rational preferability of sticking, in terms of the quantity y in the other envelope. But the problem is to provide an adequate account of how the argument goes wrong. This is the two-envelope paradox. In an earlier paper (Horgan 2000) I offered a diagnosis of the paradox. I argued that the flaw in the argument is considerably more subtle and interesting than is usually believed, and that an adequate diagnosis reveals important morals about both probability and the foundations of decision theory. One moral is that there is a kind of expected utility, not previously noticed as far as I know, that I call nonstandard expected utility. I proposed a general normative principle governing the proper application of nonstandard expected utility in rational decision-making. But this principle is inadequate in several respects, some of which I acknowledged in a note added in press and some of which I have meanwhile discovered…
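The expected-utility arithmetic in the switching argument can be checked against a simple simulation (our illustrative sketch, not code from the paper; the distribution over envelope amounts is an arbitrary assumption made only for the simulation):

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the two-envelope setup: one envelope holds a dollars,
    the other 2a; the agent picks one at random. Compare the average
    payoff of always sticking with that of always switching."""
    rng = random.Random(seed)
    stick_total = switch_total = 0
    for _ in range(trials):
        a = rng.randint(1, 100)       # illustrative distribution over amounts
        envelopes = (a, 2 * a)
        chosen = rng.randrange(2)     # pick an envelope at random
        stick_total += envelopes[chosen]
        switch_total += envelopes[1 - chosen]
    return stick_total / trials, switch_total / trials

stick, switch = simulate()
# Both averages converge to 1.5 times the mean of a; switching
# yields no 1.25x advantage.
print(stick, switch)
```

The simulation makes vivid what the paradoxical argument obscures: treated unconditionally, sticking and switching have exactly the same expected payoff, so the "1.25x" calculation cannot be a correct application of standard expected utility.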
We present a new argument for the claim that in the Sleeping Beauty problem, the probability that the coin comes up heads is 1/3. Our argument depends on a principle for the updating of probabilities that we call ‘generalized conditionalization’, and on a species of generalized conditionalization we call ‘synchronic conditionalization on old information’. We set forth a rationale for the legitimacy of generalized conditionalization, and we explain why our new argument for thirdism is immune to two attacks that Pust (Synthese 160:97–101, 2008) has leveled at other arguments for thirdism.
The semantic blindness objection to contextualism challenges the view that there is no incompatibility between (i) denials of external-world knowledge in contexts where radical-deception scenarios are salient, and (ii) affirmations of external-world knowledge in contexts where such scenarios are not salient. Contextualism allegedly attributes a gross and implausible form of semantic incompetence in the use of the concept of knowledge to people who are otherwise quite competent in its use; this blindness supposedly consists in wrongly judging that there is genuine conflict between claims of type (i) and type (ii). We distinguish two broad versions of contextualism: relativistic-content contextualism and categorical-content contextualism. We argue that although the semantic blindness objection evidently is applicable to the former, it does not apply to the latter. We describe a subtle form of conflict between claims of types (i) and (ii), which we call différance-based affirmatory conflict. We argue that people confronted with radical-deception scenarios are prone to experience a form of semantic myopia (as we call it): a failure to distinguish between différance-based affirmatory conflict and outright inconsistency. Attributing such semantic myopia to people who are otherwise competent with the concept of knowledge explains the bafflement about knowledge-claims that so often arises when radical-deception scenarios are made salient. Such myopia is not some crude form of semantic blindness at all; rather, it is an understandable mistake grounded in semantic competence itself: what we call a competence-based performance error.
Phenomenal intentionality and the evidential role of perceptual experience: comments on Jack Lyons, Perception and Basic Beliefs. Terry Horgan, University of Arizona, Tucson, AZ, USA. Philosophical Studies, DOI 10.1007/s11098-010-9604-2 (Online ISSN 1573-0883; Print ISSN 0031-8116).
In the formation of epistemically justified beliefs, what is the role of attention, and what is the role (if any) of non-attentional aspects of cognition? We will here argue that there is an essential role for certain non-attentional aspects. These involve epistemically relevant background information that is implicit in the standing structure of an epistemic agent’s cognitive architecture and that does not get explicitly represented during belief-forming cognitive processing. Since such “morphological content” (as we call it) does not become explicit during belief formation, it cannot be information that is within the scope of attention. Nevertheless, it does exert a subtle influence on the character of conscious experience, rather than operating in a purely unconscious way.
The philosophical account of vagueness I call "transvaluationism" makes three fundamental claims. First, vagueness is logically incoherent in a certain way: it essentially involves mutually unsatisfiable requirements that govern vague language, vague thought-content, and putative vague objects and properties. Second, vagueness in language and thought (i.e., semantic vagueness) is a genuine phenomenon despite possessing this form of incoherence—and is viable, legitimate, and indeed indispensable. Third, vagueness as a feature of objects, properties, or relations (i.e., ontological vagueness) is impossible, because of the mutually unsatisfiable conditions that such putative items would have to meet. In this paper I set forth the core claims of transvaluationism in a way that acknowledges and explicitly addresses a challenging critique by Timothy Williamson of my prior attempts to articulate and defend this approach to vagueness. I sketch my favored approach to truth and ontological commitment, and I explain how it accommodates the impossibility of ontological vagueness. I argue that any approach to the logic and semantics of vagueness that both (i) eschews epistemicism and (ii) thoroughly avoids positing any arbitrary sharp boundaries (either first-order or higher-order) will have to be not an alternative to transvaluationism but an implementation of it. I sketch my reasons for repudiating epistemicism. I briefly describe my current thinking about how to accommodate intentional mental properties with vague content within an ontology that eschews ontological vagueness. And I revisit the idea, which played a key role in my earlier articulations of transvaluationism, that moral conflicts provide an illuminating model for understanding vagueness.
Morphological content is information that is implicitly embodied in the standing structure of a cognitive system and is automatically accommodated during cognitive processing without first becoming explicit in consciousness. We maintain that much belief-formation in human cognition is essentially morphological: i.e., it draws heavily on large amounts of morphological content, and must do so in order to tractably accommodate the holistic evidential relevance of background information possessed by the cognitive agent. We also advocate a form of experiential evidentialism concerning epistemic justification—roughly, the view that the justification-status of an agent’s beliefs is fully determined by the character of the agent’s conscious experience. We have previously defended both the thesis that much belief-formation is essentially morphological, and also a version of evidentialism. Here we explain how experiential evidentialism can be smoothly and plausibly combined with the thesis that much of the cognitive processing that generates justified beliefs is essentially morphological. The leading idea is this: even though epistemically relevant morphological content does not become explicit in consciousness during the process of belief-generation, nevertheless such content does affect the overall character of conscious experience in an epistemically significant way: it is implicit in conscious experience, and is implicitly appreciated by the experiencing agent.
In his 1958 seminal paper “Saints and Heroes”, J. O. Urmson argued that the then dominant tripartite deontic scheme of classifying actions as being exclusively either obligatory, or optional in the sense of being morally indifferent, or wrong, ought to be expanded to include the category of the supererogatory. Colloquially, this category includes actions that are “beyond the call of duty” (beyond what is obligatory) and hence actions that one has no duty or obligation to perform. But it is a controversial category. Some have argued that the concept of supererogation is paradoxical because on one hand, supererogatory actions are (by definition) supposed to be morally good, indeed morally best, actions. But then if they are morally best, why aren't they morally required, contrary to the assumption that they are morally optional? In short: how can an action that is morally best to perform fail to be what one is morally required to do? The source of this alleged paradox has been dubbed the ‘good-ought tie-up’. In our article, we address this alleged paradox by first making a phenomenological case for the reality of instances of genuine supererogatory actions, and then, by reflecting on the relevant phenomenology, explaining why there is no genuine paradox. Our explanation appeals to the idea that moral reasons can play what we call a merit conferring role. The basic idea is that moral reasons that favor supererogatory actions function to confer merit on the actions they favor—they play a merit conferring role—and can do so without also requiring the actions in question. Hence, supererogatory actions can be both good and morally meritorious to perform yet still be morally optional. Recognition of a merit conferring role unties the good-ought tie-up, and (as we further argue) there are good reasons, independent of helping to resolve the alleged paradox, for recognizing this sort of role that moral reasons may play.
In Chapters 4 and 5 of his 1998 book From Metaphysics to Ethics: A Defence of Conceptual Analysis, Frank Jackson propounds and defends a form of moral realism that he calls both ‘moral functionalism’ and ‘analytical descriptivism’. Here we argue that this metaethical position, which we will henceforth call ‘analytical moral functionalism’, is untenable. We do so by applying a generic thought-experimental deconstructive recipe that we have used before against other views that posit moral properties and identify them with certain natural properties, a recipe that we believe is applicable to virtually any metaphysically naturalist version of moral realism. The recipe deploys a scenario we call Moral Twin Earth.
Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing, persistently recalcitrant, problem often called the frame problem; it is sometimes called the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity.
I maintain, in defending “thirdism,” that Sleeping Beauty should do Bayesian updating after assigning the “preliminary probability” 1/4 to the statement S: “Today is Tuesday and the coin flip is heads.” (This preliminary probability obtains relative to a specific proper subset I of her available information.) Pust objects that her preliminary probability for S is really zero, because she could not be in an epistemic situation in which S is true. I reply that the impossibility of being in such an epistemic situation is irrelevant, because relative to I, statement S nonetheless has degree of evidential support 1/4.
Moral phenomenology is (roughly) the study of those features of occurrent mental states with moral significance which are accessible through direct introspection, whether or not such states possess phenomenal character – a what-it-is-likeness. In this paper, as the title indicates, we introduce and make prefatory remarks about moral phenomenology and its significance for ethics. After providing a brief taxonomy of types of moral experience, we proceed to consider questions about the commonality within and distinctiveness of such experiences, with an eye on some of the main philosophical issues in ethics and how moral phenomenology might be brought to bear on them. In discussing such matters, we consider some of the doubts about moral phenomenology and its value to ethics that are brought up by Walter Sinnott-Armstrong and Michael Gill in their contributions to this issue.
Moral phenomenology is concerned with the elements of one's moral experiences that are generally available to introspection. Some philosophers argue that one's moral experiences, such as experiencing oneself as being morally obligated to perform some action on some occasion, contain elements that (1) are available to introspection and (2) carry ontological objectivist purport (the argument from phenomenological introspection). Against this stands the neutrality thesis: the phenomenological data regarding one's moral experiences that are available to introspection are neutral with respect to the issue of whether such experiences carry ontological objectivist purport.
Bayesians take “definite” or “single-case” probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with “indefinite” or “general” probabilities (sometimes called “statistical probabilities”). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case “prob” and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such “finite frequency theories” are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances…
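The fair-coin objection to finite frequency theories can be illustrated with a short simulation (our illustration, not the authors'): a coin whose indefinite probability of heads is 0.5 can, if flipped only once, exhibit a relative frequency of heads that is forced to be 0 or 1.

```python
import random

def relative_frequency(p_heads, flips, seed):
    """Relative frequency of heads among the actual flips of a coin
    whose single-flip (indefinite) probability of heads is p_heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(flips))
    return heads / flips

# A fair coin flipped once and then destroyed: the relative frequency
# must be 0.0 or 1.0, never the indefinite probability 0.5.
one_flip = relative_frequency(0.5, flips=1, seed=1)
assert one_flip in (0.0, 1.0)

# Only across many (possible) flips does the relative frequency
# approach the indefinite probability 0.5.
many_flips = relative_frequency(0.5, flips=100_000, seed=1)
print(one_flip, many_flips)
```

This is the gap the "possible instances" proposal is meant to close: the indefinite probability behaves like a frequency over possible flips, not over the (possibly tiny) set of actual ones.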
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call “transglobal” reliabilism, we harness some of Henderson and Horgan’s conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
The hypothesis of the mental state-causation of behavior (the MSC hypothesis) asserts that the behaviors we classify as actions are caused by certain mental states. A principal reason often given for trying to secure the truth of the MSC hypothesis is that doing so is allegedly required to vindicate our belief in our own agency. I argue that the project of vindicating agency needs to be seriously reconceived, as does the relation between this project and the MSC hypothesis. Vindication requires addressing what I call the agent-exclusion problem: the prima facie incompatibility between the intentional content of agentive experience and certain metaphysical hypotheses often espoused in philosophy.
It has often been thought that our knowledge of ourselves is _different_ from, perhaps in some sense _better_ than, our knowledge of things other than ourselves. Indeed, there is a thriving research area in epistemology dedicated to seeking an account of self-knowledge that would articulate and explain its difference from, and superiority over, other knowledge. Such an account would thus illuminate the descriptive and normative difference between self-knowledge and other knowledge. At the same time, self-knowledge has also encountered its share of skeptics – philosophers who refuse to accord it any descriptive, let alone normative, distinction. In this paper, we argue that there is at least one _species_ of self-knowledge that is different from, and better than, other knowledge. It is a specific kind of knowledge of one’s concurrent phenomenal experiences. Call knowledge of one’s own phenomenal experiences _phenomenal knowledge_. Our claim is that some (though not all) phenomenal knowledge is different from, and better than, non-phenomenal knowledge. In other…
According to rationalism regarding the psychology of moral judgment, people’s moral judgments are generally the result of a process of reasoning that relies on moral principles or rules. By contrast, intuitionist models of moral judgment hold that people generally come to have moral judgments about particular cases on the basis of gut-level, emotion-driven intuition, and do so without reliance on reasoning and hence without reliance on moral principles. In recent years the intuitionist model has been forcefully defended by Jonathan Haidt. One important implication of Haidt’s model is that in giving reasons for their moral judgments people tend to confabulate – the reasons they give in attempting to explain their moral judgments are not really operative in producing those judgments. Moral reason-giving on Haidt’s view is generally a matter of post hoc confabulation. Against Haidt, we argue for a version of rationalism that we call ‘morphological rationalism.’ We label our version ‘morphological’ because according to it, the information contained in moral principles is embodied in the standing structure of a typical individual’s cognitive system, and this morphologically embodied information plays a causal role in the generation of particular moral judgments. The manner in which the principles play this role is via ‘proceduralization’ – such principles operate automatically. In contrast to Haidt’s intuitionism, then, our view does not imply that people’s moral reason-giving practices are matters of confabulation. In defense of our view, we appeal to what we call the ‘nonjarring’ character of the phenomenology of making moral judgments and of giving reasons for those judgments.
We here propose an account of what it is for an agent to be objectively justified in holding some belief. We present in outline this approach, which we call transglobal reliabilism, and we discuss how it is motivated by various thought experiments. While transglobal reliabilism is an externalist epistemology, we think that it accommodates traditional internalist concerns and objections in a uniquely natural and respectful way.
How should the metaphysical hypothesis of materialism be formulated? What strategies look promising for defending this hypothesis? How good are the prospects for its successful defense, especially in light of the infamous "hard problem" of phenomenal consciousness? I will say something about each of these questions.
We sketch the view we call contextual semantics. It asserts that truth is semantically correct affirmability under contextually variable semantic standards, that truth is frequently an indirect form of correspondence between thought/language and the world, and that many Quinean commitments are not genuine ontological commitments. We argue that contextual semantics fits very naturally with the view that the pertinent semantic standards are particularist rather than being systematizable as exceptionless general principles.
Metaethics, understood as a distinct branch of ethics, is often traced to G. E. Moore's 1903 classic, Principia Ethica. Whereas normative ethics is concerned to answer first-order moral questions about what is good and bad, right and wrong, metaethics is concerned to answer second-order non-moral questions about the semantics, metaphysics, and epistemology of moral thought and discourse. Moore has continued to exert a powerful influence, and the sixteen essays here (most of them specially written for the volume) represent the most up-to-date work in metaethics after, and in some cases directly inspired by, the work of Moore. Contributors include Robert Audi, Stephen Barker, Paul Bloomfield, Panayot Butchvarov, Jonathan Dancy, Stephen Darwall, Jamie Dreier, Allan Gibbard, Brad Hooker, Terry Horgan, Connie Rosati, Russ Shafer-Landau, Walter Sinnott-Armstrong, Michael Smith, Philip Stratton-Lake, Sigrun Svavarsdottir, Mark Timmons, and Judith Jarvis Thomson.
Eliminative materialism, as William Lycan (this volume) tells us, is materialism plus the claim that no creature has ever had a belief, desire, intention, hope, wish, or other “folk-psychological” state. Some contemporary philosophers claim that eliminative materialism is very likely true. They sketch certain potential scenarios, for the way theory might develop in cognitive science and neuroscience, that they claim are fairly likely; and they maintain that if such…
1. The story of Sleeping Beauty is set forth as follows by Dorr (2002): Sleeping Beauty is a paradigm of rationality. On Sunday she learns for certain that she is to be the subject of an experiment. The experimenters will wake her up on Monday morning, and tell her some time later that it is Monday. When she goes back to sleep, they will toss a fair coin. If the outcome of the toss is Heads, they will do nothing. If the outcome is Tails, they will administer a drug whose effect is to destroy all memories from the previous day, so that when she wakes up on Tuesday, she will be unable to tell that it is not Monday. (2002: 292) Let HEADS be the hypothesis that the coin lands heads, and let TAILS be the hypothesis that it lands tails. The Sleeping Beauty Problem is this. When Sleeping Beauty finds herself awakened by the experimenters, with no memory of a prior awakening and with no ability to tell whether or not it is Monday, what probabilities should she assign to HEADS and TAILS respectively? Elga (2000) maintains that when she is awakened, P(HEADS) = 1/3 and P(TAILS) = 2/3. He offers the following intuitively plausible argument (2000: 143–4). If the experiment were performed many times, then over the long run about 1/3 of the awakenings would happen on trials in which the coin lands heads, and about 2/3 on trials in which it lands tails. So in the present circumstance in which the experiment is performed just once, P(HEADS) = 1/3 and…
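Elga's long-run frequency reasoning can be sketched in a few lines of simulation (a sketch of the frequency argument only, not code from any of the papers cited):

```python
import random

def awakening_frequencies(trials=100_000, seed=0):
    """Run the Sleeping Beauty experiment repeatedly:
    Heads -> one awakening (Monday); Tails -> two (Monday and Tuesday).
    Return the fraction of all awakenings occurring on heads trials."""
    rng = random.Random(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5    # fair coin toss
        n = 1 if heads else 2         # number of awakenings this trial
        total_awakenings += n
        heads_awakenings += n if heads else 0
    return heads_awakenings / total_awakenings

frac = awakening_frequencies()
print(frac)   # approaches 1/3 over the long run
```

Since tails trials contribute two awakenings for every one contributed by heads trials, roughly a third of all awakenings fall on heads trials, which is the frequency fact the thirder argument extrapolates to the single-run case.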
Jaegwon Kim argues that one should distinguish naturalism from materialism, and that both should be construed as ontological rather than epistemological. I agree, on both counts. Although I have sometimes tended to slur together materialism and naturalism in some of my writings (as is done in much recent philosophy), I do think that it is important to distinguish them. It is a serious philosophical task to get clearer about how each position is best articulated, and about ways that one could embrace naturalism without embracing materialism. British emergentism, for example, seems reasonably classified as a position that is naturalist but not materialist (and evidently the British emergentists themselves construed their view this way). Here are two key tenets of British emergentism, both of which seem to disqualify the view from being a form of materialism without thereby disqualifying it as a form of naturalism: (E.1) There are emergent properties in nature, in the following sense: although (i) these properties are supervenient on certain other properties, (ii) the relevant supervenience facts are ontologically sui generis (and hence are unexplainable). (E.2) Emergent properties are fundamental force-generating properties, in this sense: they produce additional fundamental forces that affect the distribution of matter, above and beyond the fundamental forces posited in physics. A position worthy of the label “materialism,” it seems to me, should preclude both of these emergentist theses. My notion of superdupervenience is intended as a condition that any version of materialism should satisfy, and is supposed to be incompatible with theses (E.1) and (E.2). Although sometimes, as in Horgan and Timmons (1992) and Horgan (1994), the condition is articulated in terms of the need for supervenience to be explainable “in a naturalistically acceptable way” (thereby slurring the naturalism/materialism distinction), what I had in mind was that supervenience relations must be explainable in a materialistically acceptable way.
Is conceptual relativity a genuine phenomenon? If so, how is it properly understood? And if it does occur, does it undermine metaphysical realism? These are the questions we propose to address. We will argue that conceptual relativity is indeed a genuine phenomenon, albeit an extremely puzzling one. We will offer an account of it. And we will argue that it is entirely compatible with metaphysical realism. Metaphysical realism is the view that there is a world of objects and properties that is independent of our thought and discourse (including our schemes of concepts) about such a world. Hilary Putnam, a former proponent of metaphysical realism, later gave it up largely because of the alleged phenomenon that he himself has given the label ‘conceptual relativity’. One of the key ideas of conceptual relativity is that certain concepts—including such fundamental concepts as object, entity, and existence—have a multiplicity of different and incompatible uses (Putnam 1987, p. 19; 1988, pp. 110–14). According to Putnam, once we recognize the phenomenon of conceptual relativity we must reject metaphysical realism: “The suggestion . . . is that what is (by commonsense standards) the same situation can be described in many different ways, depending on how we use the words. The situation does not itself legislate how words like “object,” “entity,” and “exist” must be used. What is wrong with the notion of objects existing “independently” of conceptual schemes is that there are no standards for the use of even the logical notions apart from conceptual choices.” (Putnam 1988, p. 114) Putnam’s intriguing reasoning in this passage is difficult to evaluate directly, because conceptual relativity is philosophically perplexing and in general is not well understood. In this paper we propose a construal of conceptual relativity that clarifies it considerably and explains how it is possible despite its initial air of paradox. We then draw upon this construal to explain why, contrary to Putnam and others, conceptual relativity does not conflict with metaphysical realism, but in fact comports well with it.
Alvin Goldman’s contributions to contemporary epistemology are impressive—few epistemologists have provided others with so many occasions for reflecting on the fundamental character of their discipline and its concepts. His work has informed the way epistemological questions have changed (and remained consistent) over the last two decades. We (the authors of this paper) can perhaps best suggest our indebtedness by noting that there is probably no paper on epistemology that either of us individually or jointly have produced that does not in its notes and references bear clear testimony to the influence of Professor Goldman’s arguments. The present paper is no exception (and this would be a particularly inapt place to break with our tradition of indebtedness). Professor Goldman has produced a series of discussions that we find particularly important for coming to terms with the venerable idea that there may be truths that can be known a priori (Goldman 1992a, 1992b, 1999). We do not altogether follow his lead: while he draws on the idea that a priori justification has something to do with innateness or psychological processes, we prefer to accentuate the idea that a priori justification turns on conceptually grounded truths and access via acquired conceptual competence (at least in many significant philosophical cases). Still, in developing our understanding we have been aided by much that Professor Goldman says regarding concepts, conceptual competence, and related psychological processes. The influences should become progressively clear, particularly in the later sections of this paper. What would it take for there to be a priori knowledge or justification? We can begin by reflecting on a widely agreed-upon answer to this question—one that purports to identify something that would at least be adequate for a priori justification. The answer will then serve as one anchor for the present investigation, a bit of shared ground on which empiricists and rationalists can, and typically do, agree.