Josep Corbi raises several worries about the metaethical position that Mark Timmons and I have articulated and defended, which we call “nondescriptivist cognitivism.”… His remarks prompt some points of clarification…. Timmons and I characterize descriptive content as “way-the-world-might-be” content. We maintain that “base case” beliefs—roughly, those non-evaluative and evaluative beliefs whose contents have the simplest kinds of logical form—are of two types: a non-evaluative belief is an is-commitment with respect to a core descriptive content, and an evaluative belief is an ought-commitment with respect to a core descriptive content. Core descriptive contents are those descriptive contents expressible by (nonevaluative) atomic sentences. Concerning the notion of a core descriptive content, Corbi says…
What is real? Less than you might think. We advocate austere metaphysical realism—a form of metaphysical realism claiming that a correct ontological theory will repudiate numerous putative entities and properties that are posited in everyday thought and discourse, and will even repudiate numerous putative objects and properties that are posited by well-confirmed scientific theories. We have lately defended a specific version of austere metaphysical realism which asserts that there is really only one concrete particular, viz., the entire cosmos (see Horgan and Potrč (2000, 2002), Potrč (2003)). But there are various potential versions of the generic position we are here calling austere metaphysical realism; and it is the generic view that constitutes the ontological part of the overall approach to realism and truth that we will describe here. What is true? More than you might think, given our austere metaphysical realism. We maintain that truth is semantically correct affirmability, under contextually operative semantic standards. We also maintain that most of the time, the contextually operative semantic standards work in such a way that semantic correctness (i.e., truth) is a matter of indirect correspondence rather than direct correspondence between thought or language on the one hand, and the world on the other. When correspondence is indirect rather than direct, a given statement (or thought) can be true even if the correct ontology does not include items answering to all the referential commitments (as we will here call them) of the statement. This means that even if a putative object is repudiated by a correct ontological theory, ordinary statements that are putatively about that object may still be true. For instance, the statement “The University of St. Andrews is in Scotland” can be semantically correct (i.e., true) even if the right ontology does not include any entity answering to the referring term ‘The University of St. Andrews’, or any entity…
For the last 20 years or so, philosophers of mind have been using the term ‘qualia’, which is frequently glossed as standing for the “what-it-is-like” of experience. The examples of what-it-is-like that are typically given are feelings of pain or itches, and color and sound sensations. This suggests an identification of the experiential what-it-is-like with such states. More recently, philosophers have begun speaking of the “phenomenology” of experience, which they have also glossed as “what-it-is-like”. Many say, for example, that any acceptable materialism—or any acceptable account of the relation of mind and body—must “respect the phenomenology.” Typically, no examples beyond those mentioned in the first paragraph are offered. This suggests that the picture of the phenomenology that “must be respected” is the what-it-is-like of bodily sensations, of sensations that occur in perception, and perhaps of certain analogous nonperceptual states, such as imaginings and image-like rememberings. According to the suggested picture, all there is to phenomenology is such states; intentional mental states—as such—have no phenomenology; there is nothing that it is like to undergo them. Although beliefs and desires are intentionally directed—i.e., they have aboutness—these mental states allegedly are not inherently phenomenal. On this view, there is nothing that it is like to be…
You are given a choice between two envelopes. You are told, reliably, that each envelope has some money in it—some whole number of dollars, say—and that one envelope contains twice as much money as the other. You don’t know which has the higher amount and which has the lower. You choose one, but are given the opportunity to switch to the other. Here is an argument that it is rationally preferable to switch: Let x be the quantity of money in your chosen envelope. Then the quantity in the other is either 1/2x or 2x, and these possibilities are equally likely. So the expected utility of switching is 1/2(1/2x) + 1/2(2x) = 1.25x, whereas that for sticking is only x. So it is rationally preferable to switch. There is clearly something wrong with this argument. For one thing, it is obvious that neither choice is rationally preferable to the other: it’s a tossup. For another, if you switched on the basis of this reasoning, then the same argument could immediately be given for switching back; and so on, indefinitely. For another, there is a parallel argument for the rational preferability of sticking, in terms of the quantity y in the other envelope. But the problem is to provide an adequate account of how the argument goes wrong. This is the two-envelope paradox. In an earlier paper (Horgan 2000) I offered a diagnosis of the paradox. I argued that the flaw in the argument is considerably more subtle and interesting than is usually believed, and that an adequate diagnosis reveals important morals about both probability and the foundations of decision theory. One moral is that there is a kind of expected utility, not previously noticed as far as I know, that I call nonstandard expected utility. I proposed a general normative principle governing the proper application of nonstandard expected utility in rational decision-making. But this principle is inadequate in several respects, some of which I acknowledged in a note added in press and some of which I have since discovered…
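One quick way to see that the 1.25x reasoning cannot be trusted is to simulate the choice directly. The sketch below is only an illustration, not part of the paper's diagnosis, and it assumes something the paradox itself leaves open: that the smaller amount is drawn uniformly from a finite range. Under that assumption, sticking and switching come out equal on average rather than in the ratio 1 : 1.25.

```python
import random

def simulate(trials=100_000, amounts=range(1, 101)):
    """Average payoff of sticking vs. switching in the two-envelope setup.

    Assumption (illustrative only, not from the paper): the smaller amount
    is drawn uniformly from a finite range; the other envelope holds twice it.
    """
    stick_total = 0
    switch_total = 0
    for _ in range(trials):
        low = random.choice(amounts)
        envelopes = [low, 2 * low]
        random.shuffle(envelopes)
        chosen, other = envelopes
        stick_total += chosen    # payoff if you keep your envelope
        switch_total += other    # payoff if you switch
    return stick_total / trials, switch_total / trials

if __name__ == "__main__":
    stick, switch = simulate()
    print(f"average if you stick:  {stick:.2f}")
    print(f"average if you switch: {switch:.2f}")  # roughly equal, not 1.25x
```

The simulation does not, of course, say where the expected-utility argument goes wrong; that diagnostic work is what the paper undertakes.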
We respond to the central concerns raised by our commentators on our book, The Epistemological Spectrum. Casullo believes that our account of what we term “low-grade a priori” justification provides important clarification of a kind of philosophical reflection. However, he objects to calling such reflection a priori. We explain what we think is at stake. Along the way, we comment on his idea that there may be an epistemic payoff to making a distinction between assumptions and presumptions. In the book, we argued that an epistemically important form of nonaccidental reliability can be understood as a matter of processes being “transglobally reliable under modulational control.” Graham recommends another form of nonaccidental reliability, one rooted in evolutionary etiology. We explain why we think that the reliability of perceptual processes is best understood as turning on the kinds of modulational control that we highlight. We clarify how this approach represents a kind of reasonable epistemic patience—modulational control takes time, as it must turn on agents generating information about their own capacities and foibles. Lyons raises interesting questions regarding how (what we term) morphological content possessed by the agent can do the work that we set for it. We argue that it is necessary in order for agents to accommodate the background information that is relevant to many central problems of belief formation. We clarify how it can be expected to work.
We present a new argument for the claim that in the Sleeping Beauty problem, the probability that the coin comes up heads is 1/3. Our argument depends on a principle for the updating of probabilities that we call ‘generalized conditionalization’, and on a species of generalized conditionalization we call ‘synchronic conditionalization on old information’. We set forth a rationale for the legitimacy of generalized conditionalization, and we explain why our new argument for thirdism is immune to two attacks that Pust (Synthese 160:97–101, 2008) has leveled at other arguments for thirdism.
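As a rough illustration of the thirder answer (not a substitute for the generalized-conditionalization argument itself), one can count awakenings across many simulated runs of the standard protocol, which the abstract presupposes but does not restate: heads yields one awakening (Monday), tails yields two (Monday and Tuesday).

```python
import random

def heads_fraction_of_awakenings(trials=100_000):
    """Fraction of awakening episodes that occur in heads-runs, under the
    standard Sleeping Beauty protocol (assumed here purely for illustration):
    heads -> one awakening; tails -> two awakenings."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        n = 1 if heads else 2
        total_awakenings += n
        if heads:
            heads_awakenings += n
    return heads_awakenings / total_awakenings

if __name__ == "__main__":
    print(f"{heads_fraction_of_awakenings():.3f}")  # approximately 1/3
```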
The semantic blindness objection to contextualism challenges the view that there is no incompatibility between (i) denials of external-world knowledge in contexts where radical-deception scenarios are salient, and (ii) affirmations of external-world knowledge in contexts where such scenarios are not salient. Contextualism allegedly attributes a gross and implausible form of semantic incompetence in the use of the concept of knowledge to people who are otherwise quite competent in its use; this blindness supposedly consists in wrongly judging that there is genuine conflict between claims of type (i) and type (ii). We distinguish two broad versions of contextualism: relativistic-content contextualism and categorical-content contextualism. We argue that although the semantic blindness objection evidently is applicable to the former, it does not apply to the latter. We describe a subtle form of conflict between claims of types (i) and (ii), which we call différance-based affirmatory conflict. We argue that people confronted with radical-deception scenarios are prone to experience a form of semantic myopia (as we call it): a failure to distinguish between différance-based affirmatory conflict and outright inconsistency. Attributing such semantic myopia to people who are otherwise competent with the concept of knowledge explains the bafflement about knowledge-claims that so often arises when radical-deception scenarios are made salient. Such myopia is not some crude form of semantic blindness at all; rather, it is an understandable mistake grounded in semantic competence itself: what we call a competence-based performance error.
David Henderson and Terence Horgan set out a broad new approach to epistemology, which they see as a mixed discipline, having both a priori and empirical elements. They defend the roles of a priori reflection and conceptual analysis in philosophy, but their revisionary account of these philosophical methods allows them a subtle but essential empirical dimension. They espouse a dual-perspective position which they call iceberg epistemology, respecting the important differences between epistemic processes that are consciously accessible and those that are not. Reflecting on epistemic justification, they introduce the notion of transglobal reliability as the mark of the cognitive processes that are suitable for humans. Which cognitive processes these are depends on contingent facts about human cognitive capacities, and these cannot be known a priori.
Phenomenal intentionality and the evidential role of perceptual experience: comments on Jack Lyons, Perception and Basic Beliefs. Terry Horgan, University of Arizona. Philosophical Studies, DOI 10.1007/s11098-010-9604-2.
In the formation of epistemically justified beliefs, what is the role of attention, and what is the role (if any) of non-attentional aspects of cognition? We will here argue that there is an essential role for certain non-attentional aspects. These involve epistemically relevant background information that is implicit in the standing structure of an epistemic agent’s cognitive architecture and that does not get explicitly represented during belief-forming cognitive processing. Since such “morphological content” (as we call it) does not become explicit during belief formation, it cannot be information that is within the scope of attention. Nevertheless, it does exert a subtle influence on the character of conscious experience, rather than operating in a purely unconscious way.
The philosophical account of vagueness I call "transvaluationism" makes three fundamental claims. First, vagueness is logically incoherent in a certain way: it essentially involves mutually unsatisfiable requirements that govern vague language, vague thought-content, and putative vague objects and properties. Second, vagueness in language and thought (i.e., semantic vagueness) is a genuine phenomenon despite possessing this form of incoherence—and is viable, legitimate, and indeed indispensable. Third, vagueness as a feature of objects, properties, or relations (i.e., ontological vagueness) is impossible, because of the mutually unsatisfiable conditions that such putative items would have to meet. In this paper I set forth the core claims of transvaluationism in a way that acknowledges and explicitly addresses a challenging critique by Timothy Williamson of my prior attempts to articulate and defend this approach to vagueness. I sketch my favored approach to truth and ontological commitment, and I explain how it accommodates the impossibility of ontological vagueness. I argue that any approach to the logic and semantics of vagueness that both (i) eschews epistemicism and (ii) thoroughly avoids positing any arbitrary sharp boundaries (either first-order or higher-order) will have to be not an alternative to transvaluationism but an implementation of it. I sketch my reasons for repudiating epistemicism. I briefly describe my current thinking about how to accommodate intentional mental properties with vague content within an ontology that eschews ontological vagueness. And I revisit the idea, which played a key role in my earlier articulations of transvaluationism, that moral conflicts provide an illuminating model for understanding vagueness.
Morphological content is information that is implicitly embodied in the standing structure of a cognitive system and is automatically accommodated during cognitive processing without first becoming explicit in consciousness. We maintain that much belief-formation in human cognition is essentially morphological: i.e., it draws heavily on large amounts of morphological content, and must do so in order to tractably accommodate the holistic evidential relevance of background information possessed by the cognitive agent. We also advocate a form of experiential evidentialism concerning epistemic justification—roughly, the view that the justification-status of an agent’s beliefs is fully determined by the character of the agent’s conscious experience. We have previously defended both the thesis that much belief-formation is essentially morphological, and also a version of evidentialism. Here we explain how experiential evidentialism can be smoothly and plausibly combined with the thesis that much of the cognitive processing that generates justified beliefs is essentially morphological. The leading idea is this: even though epistemically relevant morphological content does not become explicit in consciousness during the process of belief-generation, nevertheless such content does affect the overall character of conscious experience in an epistemically significant way: it is implicit in conscious experience, and is implicitly appreciated by the experiencing agent.
In his seminal 1958 paper “Saints and Heroes”, J. O. Urmson argued that the then-dominant tripartite deontic scheme of classifying actions as being exclusively either obligatory, or optional in the sense of being morally indifferent, or wrong, ought to be expanded to include the category of the supererogatory. Colloquially, this category includes actions that are “beyond the call of duty” (beyond what is obligatory) and hence actions that one has no duty or obligation to perform. But it is a controversial category. Some have argued that the concept of supererogation is paradoxical because, on one hand, supererogatory actions are (by definition) supposed to be morally good, indeed morally best, actions. But then if they are morally best, why aren't they morally required, contrary to the assumption that they are morally optional? In short: how can an action that is morally best to perform fail to be what one is morally required to do? The source of this alleged paradox has been dubbed the ‘good-ought tie-up’. In our article, we address this alleged paradox by first making a phenomenological case for the reality of instances of genuine supererogatory actions, and then, by reflecting on the relevant phenomenology, explaining why there is no genuine paradox. Our explanation appeals to the idea that moral reasons can play what we call a merit-conferring role. The basic idea is that moral reasons that favor supererogatory actions function to confer merit on the actions they favor—they play a merit-conferring role—and can do so without also requiring the actions in question. Hence, supererogatory actions can be both good and morally meritorious to perform yet still be morally optional. Recognition of a merit-conferring role unties the good-ought tie-up, and (as we further argue) there are good reasons, independent of helping to resolve the alleged paradox, for recognizing this sort of role that moral reasons may play.
In Chapters 4 and 5 of his 1998 book From Metaphysics to Ethics: A Defence of Conceptual Analysis, Frank Jackson propounds and defends a form of moral realism that he calls both ‘moral functionalism’ and ‘analytical descriptivism’. Here we argue that this metaethical position, which we will henceforth call ‘analytical moral functionalism’, is untenable. We do so by applying a generic thought-experimental deconstructive recipe that we have used before against other views that posit moral properties and identify them with certain natural properties, a recipe that we believe is applicable to virtually any metaphysically naturalist version of moral realism. The recipe deploys a scenario we call Moral Twin Earth.
Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing, persistently recalcitrant problem often called the frame problem; it is sometimes called the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity.
We argue that the letter of the Extended Mind hypothesis can be accommodated by a strongly internalist, broadly Cartesian conception of mind. The argument turns centrally on an unusual but (we argue) highly plausible view on the mark of the mental.
I maintain, in defending “thirdism,” that Sleeping Beauty should do Bayesian updating after assigning the “preliminary probability” 1/4 to the statement S: “Today is Tuesday and the coin flip is heads.” (This preliminary probability obtains relative to a specific proper subset I of her available information.) Pust objects that her preliminary probability for S is really zero, because she could not be in an epistemic situation in which S is true. I reply that the impossibility of being in such an epistemic situation is irrelevant, because relative to I, statement S nonetheless has degree of evidential support 1/4.
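To make the numbers concrete, here is a minimal rendering of the update just described, assuming the standard protocol (heads: Monday awakening only; tails: Monday and Tuesday awakenings), which the reply presupposes. The preliminary probabilities relative to the information-subset I are taken to be uniform over the four day/coin combinations.

```python
# Preliminary probabilities relative to the information-subset I:
# each day/coin combination gets 1/4, including the statement S
# ("Today is Tuesday and the coin flip is heads").
prelim = {
    ("Mon", "H"): 0.25,
    ("Tue", "H"): 0.25,  # the statement S
    ("Mon", "T"): 0.25,
    ("Tue", "T"): 0.25,
}

# Bayesian updating on the rest of Beauty's information: the present episode
# is an awakening, which on the standard protocol rules out Tuesday-and-heads.
consistent = {k: p for k, p in prelim.items() if k != ("Tue", "H")}
total = sum(consistent.values())
posterior = {k: p / total for k, p in consistent.items()}

p_heads = sum(p for (day, coin), p in posterior.items() if coin == "H")
print(p_heads)  # 1/3, the thirder answer
```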
Moral phenomenology is (roughly) the study of those features of occurrent mental states with moral significance which are accessible through direct introspection, whether or not such states possess phenomenal character – a what-it-is-likeness. In this paper, as the title indicates, we introduce and make prefatory remarks about moral phenomenology and its significance for ethics. After providing a brief taxonomy of types of moral experience, we proceed to consider questions about the commonality within and distinctiveness of such experiences, with an eye on some of the main philosophical issues in ethics and how moral phenomenology might be brought to bear on them. In discussing such matters, we consider some of the doubts about moral phenomenology and its value to ethics that are brought up by Walter Sinnott-Armstrong and Michael Gill in their contributions to this issue.
Moral phenomenology is concerned with the elements of one's moral experiences that are generally available to introspection. Some philosophers argue that one's moral experiences, such as experiencing oneself as being morally obligated to perform some action on some occasion, contain elements that (1) are available to introspection and (2) carry ontological objectivist purport (the argument from phenomenological introspection). According to the neutrality thesis, by contrast, the phenomenological data regarding one's moral experiences that are available to introspection are neutral with respect to the issue of whether such experiences carry ontological objectivist purport.
Bayesians take “definite” or “single-case” probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with “indefinite” or “general” probabilities (sometimes called “statistical probabilities”). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case “prob” and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such “finite frequency theories” are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances…
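The coin example can be made vivid with a few lines of code; the sketch below merely illustrates the finite-frequency worry described in the abstract, not any particular positive theory. A fair coin flipped exactly once has a relative frequency of heads of either 1 or 0, even though the indefinite probability of heads is 0.5.

```python
import random

def one_flip_relative_frequency(p_heads=0.5):
    """Relative frequency of heads for a single flip of a fair coin:
    always 1.0 or 0.0, never the indefinite probability 0.5."""
    return 1.0 if random.random() < p_heads else 0.0

if __name__ == "__main__":
    single_coins = [one_flip_relative_frequency() for _ in range(10)]
    print(single_coins)                               # each entry is 1.0 or 0.0
    print(sum(single_coins) / len(single_coins))      # nears 0.5 only across many coins
```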
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call “transglobal” reliabilism, we harness some of Henderson and Horgan’s conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
The hypothesis of the mental state-causation of behavior (the MSC hypothesis) asserts that the behaviors we classify as actions are caused by certain mental states. A principal reason often given for trying to secure the truth of the MSC hypothesis is that doing so is allegedly required to vindicate our belief in our own agency. I argue that the project of vindicating agency needs to be seriously reconceived, as does the relation between this project and the MSC hypothesis. Vindication requires addressing what I call the agent-exclusion problem: the prima facie incompatibility between the intentional content of agentive experience and certain metaphysical hypotheses often espoused in philosophy.
It has often been thought that our knowledge of ourselves is _different_ from, perhaps in some sense _better_ than, our knowledge of things other than ourselves. Indeed, there is a thriving research area in epistemology dedicated to seeking an account of self-knowledge that would articulate and explain its difference from, and superiority over, other knowledge. Such an account would thus illuminate the descriptive and normative difference between self-knowledge and other knowledge. At the same time, self-knowledge has also encountered its share of skeptics – philosophers who refuse to accord it any descriptive, let alone normative, distinction. In this paper, we argue that there is at least one _species_ of self-knowledge that is different from, and better than, other knowledge. It is a specific kind of knowledge of one’s concurrent phenomenal experiences. Call knowledge of one’s own phenomenal experiences _phenomenal knowledge_. Our claim is that some (though not all) phenomenal knowledge is different from, and better than, non-phenomenal knowledge. In other…
According to rationalism regarding the psychology of moral judgment, people’s moral judgments are generally the result of a process of reasoning that relies on moral principles or rules. By contrast, intuitionist models of moral judgment hold that people generally come to have moral judgments about particular cases on the basis of gut-level, emotion-driven intuition, and do so without reliance on reasoning and hence without reliance on moral principles. In recent years the intuitionist model has been forcefully defended by Jonathan Haidt. One important implication of Haidt’s model is that in giving reasons for their moral judgments people tend to confabulate – the reasons they give in attempting to explain their moral judgments are not really operative in producing those judgments. Moral reason-giving, on Haidt’s view, is generally a matter of post hoc confabulation. Against Haidt, we argue for a version of rationalism that we call ‘morphological rationalism.’ We label our version ‘morphological’ because, according to it, the information contained in moral principles is embodied in the standing structure of a typical individual’s cognitive system, and this morphologically embodied information plays a causal role in the generation of particular moral judgments. The manner in which the principles play this role is via ‘proceduralization’ – such principles operate automatically. In contrast to Haidt’s intuitionism, then, our view does not imply that people’s moral reason-giving practices are matters of confabulation. In defense of our view, we appeal to what we call the ‘nonjarring’ character of the phenomenology of making moral judgments and of giving reasons for those judgments.
We here propose an account of what it is for an agent to be objectively justified in holding some belief. We present in outline this approach, which we call transglobal reliabilism, and we discuss how it is motivated by various thought experiments. While transglobal reliabilism is an externalist epistemology, we think that it accommodates traditional internalist concerns and objections in a uniquely natural and respectful way.
How should the metaphysical hypothesis of materialism be formulated? What strategies look promising for defending this hypothesis? How good are the prospects for its successful defense, especially in light of the infamous “hard problem” of phenomenal consciousness? I will say something about each of these questions.
Human cognition is rich, varied, and complex. In this chapter we argue that because of the richness of human cognition (and human mental life generally), there must be a syntax of cognitive states, but that because of this very richness, cognitive processes cannot be describable by exceptionless rules. The argument for syntax, in Section 1, has to do with being able to get around in any number of possible environments in a complex world. Since nature did not know where in the world humans would find themselves—nor, within pretty broad limits, what the world would be like—nature had to provide them with a means of “representing” a great deal of information about any of indefinitely many locations. We see no way that this could be done except by way of syntax—that is, by a systematic way of producing new, appropriate representations as needed. We discuss what being systematic must amount to, and what, in consequence, syntax should mean. We hold that syntax does not require a part/whole relationship. The argument for the claim that human cognitive processes cannot be described by exceptionless rules, in Section 2, appeals to the fact that there is no limit to the factors one might…