"Dogmatism" is a term renovated by James Pryor to stand for a certain kind of neo-Moorean response to Scepticism and an associated conception of the architecture of basic perceptual warrant. Pryor runs the response only for (some kinds of) perceptual knowledge, but here I will be concerned with its general structure and potential as a possible global anti-sceptical strategy. Something like it is arguably also present in recent writings of Burge and Peacocke. If the global strategy could succeed, it would pre-empt any role in the diagnosis and treatment of sceptical paradoxes for the kind of notion of entitlement (rational, non-evidential warrant) I have proposed elsewhere [Wright 2004]. But my overarching contention will be that Dogmatism is, generally and locally, too problematic a stance to be helpful in that project.
The pain of rejection, the sweetness of revenge. Crispin Wright, Department of Philosophy, New York University. Philosophical Studies, DOI 10.1007/s11098-011-9794-2.
Comment on Paul Boghossian, "The nature of inference". Crispin Wright, New York University. Philosophical Studies, DOI 10.1007/s11098-012-9892-9.
[Crispin Wright] Two kinds of epistemological sceptical paradox are reviewed and a shared assumption, that warrant to accept a proposition has to be the same thing as having evidence for its truth, is noted. 'Entitlement', as used here, denotes a kind of rational warrant that counter-exemplifies that identification. The paper pursues the thought that there are various kinds of entitlement and explores the possibility that the sceptical paradoxes might receive a uniform solution if entitlement can be made to reach sufficiently far. Three kinds of entitlement are characterised and given prima facie support, and a fourth is canvassed. Certain foreseeable limitations of the suggested anti-sceptical strategy are noted. The discussion is grounded, overall, in a conception of the sceptical paradoxes not as directly challenging our having any warrant for large classes of our beliefs but as crises of intellectual conscience for one who wants to claim that we do. /// [Martin Davies] Wright's account of sceptical arguments and his use of the idea of epistemic entitlement are reviewed. His notion of non-transmission of epistemic warrant is explained and a concern about his notion of entitlement is developed. An epistemological framework different from Wright's is described and several notions of entitlement are introduced. One of these, negative entitlement, is selected for more detailed comparison with Wright's notion. Thereafter, the paper shows how the two notions of entitlement have contrasting consequences for non-transmission of warrant and how they go naturally with two conceptions of the presuppositions of epistemic projects. Problems for negative entitlement are explained and solutions are proposed.
Two and a half thousand years ago Greek philosophers "looked up at the sky and formed a theory of everything." Though their solutions are little credited today, the questions remain fresh. Early Greek thinkers struggled to come to terms with and explain the totality of their surroundings, to identify an original substance from which the universe was compounded, and to reconcile the presence of balance and proportion with the apparent disorder of the cosmos. M. R. Wright examines the cosmological theories of the "natural philosophers" from Thales, Anaximander and Anaximenes to Plato, the Stoics and the Neoplatonists. The importance of Babylonian and Egyptian forerunners is also emphasized. Cosmology in Antiquity is a comprehensive introduction to cosmological thought in ancient times.
Abstract: Can the wise person be fooled? The Stoics take a very strong view on this question, holding that the wise person (or sage) is never deceived and never believes anything that is false. This seems to be an implausibly strong claim, but it follows directly from some basic tenets of the Stoic cognitive and psychological world-view. In developing an account of what wisdom really requires, I will explore the tenets of the Stoic view that lead to this infallibilism about wisdom, and show that many of the elements of the Stoic picture can be preserved in a more plausible fallibilist approach. Specifically, I propose to develop a Stoic fallibilist virtue epistemology that is based on the Stoic model of the moral virtues. This model of the intellectual virtues will show that (in keeping with a folk distinction) the wise person is never befooled, though that person might be fooled. Sarah Wright, Department of Philosophy, University of Georgia. Acta Analytica, DOI 10.1007/s12136-012-0158-0.
This is a 5 page summary, with three diagrams, of the main objectives and some work in progress at the University of Birmingham Cognition and Affect project, involving Professor Glyn Humphreys (School of Psychology) and Luc Beaudoin, Chris Paterson, Tim Read, Edmund Shing, Ian Wright, Ahmed El-Shafei, and (from October 1994) Chris Complin (research students). The project is concerned with "global" design requirements for coping simultaneously with coexisting but possibly unrelated goals, desires, preferences, intentions, and other kinds of motivators, all at different stages of processing. Our work builds on and extends seminal ideas of H. A. Simon (1967). We are exploring "broad and shallow" architectures combining varied capabilities, most of which are not implemented in great depth. The poster summarises some ideas about management and meta-management processes, attention filtering, and the relevance to emotional states involving "perturbances", where there is partial loss of control of attention.
The setting of relativistic ideas about truth in the general style of semantic-theoretic apparatus pioneered by Lewis, Kaplan and others has persuaded many that they should at least be taken seriously as competition in the space of explanatory linguistic theory: a type of view which, properly formulated, may offer an at least coherent — and indeed, in the view of some, a superior — account of certain salient linguistic data manifest in, for example, discourse about epistemic modals, about knowledge and about matters of taste and value, and may also offer the prospect of a coherent regimentation of the Aristotelian "Open Future" (along with, perhaps, the Dummettian 'anti-real' past). My main purpose here is to enter a reminder of certain underlying philosophical issues about relativism — about its metaphysical coherence, its metasemantic obligations, and the apparent limitations of the kind of local linguistic evidence which contemporary proponents have adduced in its favour — of which there is a risk that its apparent rehabilitation in rigorous semantic dress may encourage neglect.
This is a short, and therefore necessarily very incomplete, discussion of one of the great questions of modern philosophy. I return to a station at which an interpretative train of thought of mine came to a halt in a paper written almost 20 years ago, about Wittgenstein and Chomsky, hoping to advance a little bit further down the track. The rule-following passages in the Investigations and Remarks on the Foundations of Mathematics in fact raise a number of distinct (though connected) issues about rules, meaning, objectivity, and reasons, whose conflation is encouraged by the standard caption, "the Rule-following Considerations". So, let me begin by explaining my focus here.
Due to the wide array of phenomena that are of interest to them, psychologists offer highly diverse and heterogeneous types of explanations. Initially, this suggests that the question "What is psychological explanation?" has no single answer. To provide an appreciation of this diversity, we begin by noting some of the more common types of explanations that psychologists provide, with particular focus on classical examples of explanations advanced in three different areas of psychology: psychophysics, physiological psychology, and information-processing psychology. To analyze what is involved in these types of explanations, we consider the ways in which law-like representations of regularities and representations of mechanisms factor in psychological explanations. This consideration directs us to certain fundamental questions, e.g., "To what extent are laws necessary for psychological explanations?" and "What do psychologists have in mind when they appeal to mechanisms in explanation?" In answering such questions, it appears that laws do play important roles in psychological explanations, although most explanations in psychology appeal to accounts of mechanisms. Consequently, we provide a unifying account of what psychological explanation is.
As much as assumptions about mechanisms and mechanistic explanation have deeply affected psychology, they have received disproportionately little analysis in philosophy. After a historical survey of the influences of mechanistic approaches to explanation of psychological phenomena, we specify the nature of mechanisms and mechanistic explanation. Contrary to some treatments of mechanistic explanation, we maintain that explanation is an epistemic activity that involves representing and reasoning about mechanisms. We discuss the manner in which mechanistic approaches serve to bridge levels rather than reduce them, as well as the different ways in which mechanisms are discovered. Finally, we offer a more detailed example of an important psychological phenomenon for which mechanistic explanation has provided the main source of scientific understanding.
§1 To many in or on the edges of the Academy, "Relativism" is a word with overtones of sinister iconoclasm, representing a kind of intellectual and ethical free-for-all in which the traditional investigative virtues of clarity, rigour, objectivity, consistency and the unbiased pursuit of truth are dismissed as illusory and the great scientific constructions of the last two hundred years, together with our deepest moral convictions, rated merely as 'our way of seeing' the world, more elaborate and organised but otherwise on all fours with the cosmology and customs of primitive tribes. In his short book, Paul Boghossian aims to address, and to expose as bankrupt, the idea that there is even a coherent, let alone defensible, philosophical stance about truth and knowledge that can underwrite these 'pluralist', or 'postmodernist', tendencies. But since it is crucial to his project that his style of discussion make it available to non-specialists, much of the recent more technical, less emotional debate within analytical philosophy about relativism's renaissance as a particular form of semantic theory is passed over unmentioned. Part of my aim in what follows is to illustrate how Boghossian's discussion connects quite straightforwardly with relativism in its contemporary analytical philosophical livery — what I have elsewhere called New Age relativism — and how some of his critical arguments may be presented in that setting. My main contentions will be, first, that when relativism about epistemic justification, and about morals, are couched in the currently canonical sort of form, they still remain in range of artillery that Boghossian positions in chapter 6 of his book; second, that there is evasive action that they can take.
What I am calling New Age Relativism is usually proposed as a thesis about the truth-conditions of utterances, where an utterance is an actual historic voicing or inscription of a sentence of a certain type. Roughly, it is the view that, for certain discourses, whether an utterance is true depends not just on the context of its making — when, where, to whom, by whom, in what language, and so on — and the "circumstances of evaluation" — the state of the world in relevant respects — but also on an additional parameter: a context of assessment. Vary the latter and the truth-value of the utterance can vary, even though the context of its making and the associated state of the world remain fixed.
When talking about truth, we ordinarily take ourselves to be talking about one-and-the-same thing. Alethic monists suggest that theorizing about truth ought to begin with this default or pre-reflective stance, and, subsequently, parlay it into a set of theoretical principles that are aptly summarized by the thesis that truth is one. Foremost among them is the invariance principle.
It is common among philosophers who take an interest in the phenomenon of vagueness in natural language not merely to acknowledge higher-order vagueness but to take its existence as a basic datum — so that views that lack the resources to account for it, or that put obstacles in the way, are regarded as deficient just on that score. My main purpose in what follows is to loosen the hold of this deeply misconceived idea. Higher-order vagueness is no basic datum but an illusion, fostered by misunderstandings of the nature of (ordinary, if you will 'first-order') vagueness itself. To see through the illusion is to take a step that is prerequisite for a correct understanding of vagueness, and for any satisfying dissolution of its attendant paradoxes.
This paper argues that the form of explanation at issue in the hard problem of consciousness is scientifically irrelevant, despite appearances to the contrary. In particular, it is argued that the 'sense of understanding' that plays a critical role in the form of explanation implicated in the hard problem provides neither a necessary nor a sufficient condition on satisfactory scientific explanation. Considerations of the actual tools and methods available to scientists are used to make the case against its being a necessary condition, and work by J. D. Trout that exploits psychological research on the hindsight and overconfidence biases is used to show that it is not a sufficient condition. It is argued, however, that certain intellectual and moral concerns give us good reason to still try to meet the hard problem's explanatory challenge, despite its extrascientific nature.
In the Begriffsschrift Frege drew no distinction — or anyway signalled no importance to the distinction — between quantifying into positions occupied by what he called Eigennamen — singular terms — in a sentence and quantification into predicate position or, more generally, quantification into open sentences — into what remains of a sentence when one or more occurrences of singular terms are removed. He seems to have conceived of both alike as perfectly legitimate forms of generalisation, each properly belonging to logic. More accurately: he seems to have conceived of quantification as such as an operation of pure logic, and in effect to have drawn no distinction between first-order, second-order and higher-order quantification in general.
It has been claimed that the attempt to analyze know-how in terms of propositional knowledge over-intellectualizes the mind. Exploiting the methods of so-called "experimental philosophy", we show that the charge of over-intellectualization is baseless. Contra neo-Ryleans, who analyze know-how in terms of ability, the concrete-case judgments of ordinary folk are most consistent with the view that there exists a set of correct necessary and sufficient conditions for know-how that does not invoke ability, but rather a certain sort of propositional knowledge. To the extent that one's considered judgments agree with those of the folk (or to the extent that one is unwilling to contravene widespread judgments), this constitutes a strong prima facie case against neo-Ryleanism.
Embodied Cognition is the kind of view that is all trees, no forest. Mounting experimental evidence gives it momentum in fleshing out the theoretical problems inherent in Cognitivists' separation of mind and body. But the more its proponents compile such evidence, the more the fundamental concepts of Embodied Cognition remain in the dark. This conundrum is nicely exemplified by Pecher and Zwaan's (2005) book, Grounding Cognition, which is a programmatic attempt to rally together an array of empirical results and linguistic data, and its successes in this endeavor nicely epitomize current directions among the various research provinces of Embodied Cognition. The untoward drawback, however, is that such successes are symptomatic of the growing imbalance between experimental progress and theoretical interrogation. In particular, one of the theoretical cornerstones of Embodied Cognition — namely, the very concept of grounding under investigation here — continues to go unilluminated. Hence, the advent of this volume indicates that — now more than ever — the concept of grounding is in dire need of some plain old-fashioned conceptual analysis. In that sense, Embodied Cognition is grounded until further notice.
Psychoneural reductionists sometimes claim that sufficient amounts of lower-level explanatory achievement preclude further contributions from higher-level psychological research. Ostensibly, with nothing left to do, the effect of such preclusion on psychological explanation is extinction. Reductionist arguments for preclusion have recently involved a reorientation within the philosophical foundations of neuroscience — namely, away from the philosophical foundations and toward the neuroscience. In this chapter, I review a successful reductive explanation of an aspect of reward function in terms of dopaminergic operations of the mesocorticolimbic system in order to demonstrate why preclusion/extinction claims are dubious.
What is it to trust someone? What is it for someone to be trustworthy? These are the two main questions that this paper addresses. There are various situations that can be described as ones of trust, but this paper considers the issue of trust between individuals. In it, I suggest that trust is distinct from reliance, or cases where someone asks for something on the expectation that it will be done, due to the different attitude taken by the trustor. I argue that the trustor takes Holton's "participant stance" and this distinguishes trust from reliance. I argue that trustworthiness is different from reliability and that an account of trustworthiness cannot be successful whilst ignoring the point that aligning trustworthiness with reliability removes the virtue from being trustworthy. On the question of what it is that distinguishes trustworthiness from reliability, I argue that the distinction lies in the opportunity for the trustee to act against the wishes of the trustor and the trustee's consideration of the value of the trust that has been placed in them by the trustor.
In the paper we argue that truth-relativism is potentially hostage to a problem of exhibiting witnesses of its own truth. The problem for the relativist stems from acceptance of a trumping principle according to which there is a dependency between ascriptions of truth of an utterance and ascriptions of truth to other ascriptions of truth of that utterance. We argue that such a dependency indeed holds in the case of future contingents and the case of epistemic modals and that, consequently, the relativist about these domains cannot exhibit witnesses to his relativism. In the appendix we provide some results on the relation between trumping and multi-order relativism.
Attention to the conversational role of alethic terms seems to dominate, and even sometimes exhaust, many contemporary analyses of the nature of truth. Yet, because truth plays a role in judgment and assertion regardless of whether alethic terms are expressly used, such analyses cannot be comprehensive or fully adequate. A more general analysis of the nature of truth is therefore required — one which continues to explain the significance of truth independently of the role alethic terms play in discourse. We undertake such an analysis in this paper; in particular, we start with certain elements from Kant and Frege, and develop a construct of truth as a normative modality of cognitive acts (e.g., thought, judgment, assertion). Using the various biconditional T-schemas to sanction the general passage from assertions to (equivalent) assertions of truth, we then suggest that an illocutionary analysis of truth can contribute to its locutionary analysis as well, including the analysis of diverse constructions involving alethic terms that have been largely overlooked in the philosophical literature. Finally, we briefly indicate the importance of distinguishing between alethic and epistemic modalities.
Abstract: Recent experimental research on the 'Knobe effect' suggests, somewhat surprisingly, that there is a bi-directional relation between attributions of intentional action and evaluative considerations. We defend a novel account of this phenomenon that exploits two factors: (i) an intuitive asymmetry in judgments of responsibility (e.g. praise/blame) and (ii) the fact that intentionality commonly connects the evaluative status of actions to the responsibility of actors. We present the results of several new studies that provide empirical evidence in support of this account while disconfirming various currently prominent alternative accounts. We end by discussing some implications of this account for folk psychology.
This book is an important collection of new essays on various topics relating to realism and its rivals in metaphysics, logic, metaethics, and epistemology. The contributors include some of the leading authors in these fields and in several cases their essays constitute definitive statements of their views. In some cases authors write in response to the essays of other contributors; in other cases they proceed independently. Although not primarily historical, this collection includes discussions of philosophers from the middle ages to the present day, from Aquinas to Wittgenstein. No one seriously interested in questions about realism, whether as a general philosophical outlook or as a particular position within specific debates, can afford to be without this collection.
In "What is wrong with abstraction?", Michael Potter and Peter Sullivan explain a further objection to the abstractionist programme in the foundations of mathematics, which they first presented in their "Hale on Caesar" and which they believe our discussion in The Reason's Proper Study misunderstood. The aims of the present note are: to get the character of this objection into sharper focus; to explore further certain of the assumptions — primarily, about reference-fixing in mathematics, about certain putative limitations of abstractionist set theory, and about the effects of impredicativity in abstraction principles — which underlie it; and to advance the debate of the issues thereby raised. Thanks for helpful comments to Roy Cook and to an anonymous referee.
In view of the excellent arguments that have been put forth recently in favour of qualia, internal sensory presentations, it would strike an impartial observer (one could imagine a future historian of philosophy) as extremely odd that so many philosophers who are opposed to qualia, that is, sensory experiences internal to the brain, have largely ignored those arguments in their own work. There has been a fashionable assumption that any theory of perception which espouses qualia has long since been overcome by a number of 'formidable' objections, in particular, the Homunculus/Infinite Regress Objection, the Solipsism Objection, Austin's Illusion/Delusion Objection, the Ludicrousness-of-Colours-in-the-Brain Objection, the Indirect-Realist-has-to-assume-Direct-Realism Objection, the Impossibility-of-Comparing-Internal-with-External Objection, the Impossibility of Intrinsic Experience, and several more minor varieties of these. It is uncanny how they continue to be repeated, indeed, with a kind of automatism, evidenced by the fact that none of those who repeat them appear to have taken note of the answers to the objections. Indeed, they only appear to refer to those philosophers with whom they agree: it has long been insisted upon in the study of rhetoric that one of the weakest things to do in an argument is to ignore the main points made by one's opponent:
[it is] the wisest plan _to state Objections in their full force_; at least, wherever there does exist a satisfactory answer to them; otherwise, those who hear them stated more strongly than by the uncandid advocate who had undertaken to repel them, will naturally enough conclude that they are unanswerable. It is but a momentary and ineffective triumph that can be obtained by man.
It is a great pleasure to have the opportunity to contribute to this volume dedicated to the critical celebration of Stephen Schiffer's very considerable philosophical achievements. My focus will be on his recent work on vagueness. The broad direction of Schiffer's researches in this area has been to give priority to what we may call the characterisation problem: the problem of saying what the vagueness of expressions of natural language consists in or, more specifically — since Schiffer takes it as a given that the vagueness he is targeting consists in a propensity of vague expressions to give rise to borderline cases — the problem of saying what being a borderline case of the concept expressed by a vague expression consists in. This has not been a main preoccupation of most of the work in the field since the vagueness "boom" started in the mid 1970s. There has been a tendency to jump straight into devising semantic theories for vague languages, usually aimed at the twin desiderata of saving classical logic and dissolving the various paradoxes of vagueness, with a principal focus on the standard sorites, and occasional glances at the Forced March, and others. Of course, such work has inevitably implicated commitment to broad conceptions of vagueness, and of borderline cases, of various kinds. The classical epistemicist approach, for example, conceives of borderline cases as instances whose correct classification in terms of the relevant concept is, for reasons it attempts to explain, unknowable. Semantic indeterminist approaches, by contrast, tend (often implicitly) to conceive of borderline cases as items to which the concept in question neither applies nor fails to apply and as coming about because our practice with the concept leaves it, in effect, merely partially defined and so 'gappy'.
A variation on this, still semantic indeterminist, regards vagueness as consisting in a phenomenon akin to divided reference, whereby a predicate, for example, may be associated with a range of extensionally distinct best candidates to be the property it refers to; borderline cases are then items which exemplify some but not all of these properties.
This paper is a reply to George Boolos's three papers (Boolos 1987a, 1987b, 1990a) concerned with the status of Hume's Principle. Five independent worries of Boolos concerning the status of Hume's Principle as an analytic truth are identified and discussed. Firstly, the ontological concern about the commitments of Hume's Principle. Secondly, whether Hume's Principle is in fact consistent and whether the commitment to the universal number by adopting Hume's Principle might be problematic. Also the so-called 'surplus content' worry is discussed, which points out that the conceptual resources needed to grasp Hume's Principle vastly outstrip the conceptual resources employed in arithmetical reasoning. And lastly, whether Hume's Principle is in bad company with other unsuccessful implicit definitions. In the last section, an account of our entitlement to Hume's Principle is sketched.
In giving an account of the content of perceptual experience, several authors, including Fred Dretske, Gareth Evans, Christopher Peacocke, and Michael Tye, have employed the notion of nonconceptual representational content.
Functionalists about truth employ Ramsification to produce an implicit definition of the theoretical term _true_, but doing so requires determining that the theory introducing that term is itself true. A variety of putative dissolutions to this problem of epistemic circularity are shown to be unsatisfactory. One solution is offered on functionalists' behalf, though it has the upshot that they must tread on their anti-pluralist commitments.
"New wave" reductionism aims at advancing a kind of reduction that is stronger than unilateral dependency of the mental on the physical. It revolves around the idea that reduction between theoretical levels is a matter of degree, and can be laid out on a continuum between a "smooth" pole (theoretical identity) and a "bumpy" pole (extremely revisionary). It also entails that both higher and lower levels of the reductive relationship sustain some degree of explanatory autonomy. The new wave predicts that reductions of folk psychology to neuroscience will be located in the middle of this continuum; as neuroscientific evidence about mental states checks in, theoretical folk psychology will therefore be moderately revised. However, the model has conceptual problems which preclude its success in reviving reductionism, and its commitment to a syntactic approach wrecks its attempt to rescue folk psychology. Moreover, the architecture of the continuum operates on a category mistake that sneaks in an eliminativist conclusion. I argue that new wave reductionism therefore tends to be eliminativism in disguise.
This paper proposes a subjectivist approach to color within the framework of an externalist form of representationalism about phenomenal consciousness. Motivations are presented for accepting both representationalism and color subjectivism, and an argument is offered against the case made by Michael Tye on behalf of the claim that colors are objective, physical properties of objects. In the face of the considerable difficulties associated with finding a workable realist theory of color, the alternative account of color experience set out, projectivist representationalism, claims that the color properties we encounter in experience exist only in the representational contents of our experiences. Color experiences are caused by the physical structure of objects, but objects are never actually colored and color experiences systematically misrepresent objects as colored. However, despite being an error theory of color, projectivist representationalism does not do violence to our everyday use and understanding of color concepts and terms, nor does it undermine the role of color experience in aiding the perceiving subject in navigating through the world.
There is now a widespread accord among philosophers that the vagueness of natural language gives rise to some particularly deep and perplexing problems and paradoxes. It was not always so. For most of the first century of analytical philosophy, vagueness was generally regarded as a marginal, slightly irritating phenomenon — receiving some attention, to be sure, in parts of the Philosophical Investigations and in the amateur linguistics enjoyed by philosophers in Oxford in the 1950s, but best idealised away in any serious theoretical treatment of meaning, understanding and valid inference. Frege, as is well known, had come to be thoroughly mistrustful of vagueness, supposing that a language fit for the purpose of articulating scientific and mathematical knowledge would have to be purified of it. Later trends in philosophical logic and semantics followed his lead, not indeed in setting about the (futile) task of expurgating vagueness from natural language but by largely restricting theoretical attention to artificial languages in whose workings vagueness was assigned no role.
In “Double Vision: Two Questions about the Neo-Fregean Programme”, John MacFarlane raises two main questions: (1) Why is it so important to neo-Fregeans to treat expressions of the form ‘the number of Fs’ as a species of singular term? What would be lost, if anything, if they were analysed instead as a type of quantifier-phrase, as on Russell’s Theory of Definite Descriptions? and (2) Granting—at least for the sake of argument—that Hume’s Principle may be used as a means of implicitly defining the number operator, what advantage, if any, does adopting this course possess over a direct stipulation of the Dedekind-Peano axioms? This paper attempts to answer them. In response to the first, we spell out the links between the recognition of numerical terms as vehicles of singular reference and the conception of numbers as possible objects of singular, or object-directed, thought, and the role of the acknowledgement of numbers as objects in the neo-Fregean attempt to justify the basic laws of arithmetic. In response to the second, we argue that the crucial issue concerns the capacity of either stipulation—of Hume’s Principle, or of the Dedekind-Peano axioms—to found knowledge of the principles involved, and that in this regard there are crucial differences which explain why the former stipulation can, but the latter cannot, play the required foundational role.
Abstract: Contextualism in epistemology has been proposed both as a way to avoid skepticism and as an explanation for the variability found in our use of "knows." When we turn to contextualism to perform these two functions, we should ensure that the version we endorse is well suited for these tasks. I compare two versions of epistemic contextualism: attributor contextualism (from Keith DeRose) and methodological contextualism (from Michael Williams). I argue that methodological contextualism is superior both in its response to skepticism and in its mechanism for changing contexts. However, methodological contextualism still faces two challenges: explaining why we are solidly committed to some contexts, and explaining why knowledge within a context is valuable. I propose virtue contextualism as a useful extension of methodological contextualism, focusing on the way that our virtues depend on our social roles. My proposed virtue contextualism retains the benefits of methodological contextualism while explaining both our commitment to particular contexts and the value of knowledge held within those contexts.
In the face of pluralism, moral constructivists attempt to salvage cognitivism by separating moral and ethical issues. Divergence over ethical issues, which concern the good life, would not threaten moral cognitivism, which is based on identifying generalizable interests as worthy of defending, using reason. Yet this approach falters given the inability of the constructivist to provide us a sure path by which to discern generalizable interests in difficult cases. Still, even if this approach to constructivism fails, cognitivist aspirations may not be defeated if we can continue discursively in a project of identifying and appreciating the interests of others. Grasping the interests of others may require a transformation of moral sensibility such that agents recognize values they have not acknowledged before. This view calls for external moral discourse—that is, moral discourse that makes no appeal to an agent's present interests or desires but rather engages in description of the moral situation in hopes of bringing about a change in moral sensibility.
Traditional inflationary approaches that specify the nature of truth are attractive in certain ways; yet, while many of these theories successfully explain why propositions in certain domains of discourse are true, they fail to adequately specify the nature of truth because they run up against counterexamples when attempting to generalize across all domains. One popular consequence is skepticism about the efficaciousness of inflationary approaches altogether. Yet, by recognizing that the failure to explain the truth of disparate propositions often stems from inflationary approaches' allegiance to alethic monism, pluralist approaches are able to avoid this explanatory inadequacy and the resulting skepticism, though at the cost of inviting other conceptual difficulties. A novel approach, alethic functionalism, attempts to circumvent the problems faced by pluralist approaches while preserving their main insights. Unfortunately, it too generates additional problems---namely, with its suspect appropriation of the multiple realizability paradigm and its platitude-based strategy---that need to be dissolved before it can constitute an adequate inflationary approach to the nature of truth.
A sensory receptor, in any organism anywhere, is sensitive through time to some distribution - energy, motion, molecular shape - indeed, anything that can produce an effect. The sensitivity is rarely direct: for example, it may track changes in relative variation rather than the absolute change of state (as when the skin responds to colder and hotter instead of to cold and hot as such); it may track differing variations under different conditions (the eyes' dark-adaptation; adaptation to sound frequencies can lower the difference threshold; the kinesthetic sense will shut down if a limb is held in a stationary position too long - the limb 'going to sleep'); it may be subject to distortion of the input from overloading (dazzle producing strong after-images); it may not be confined to one channel of sensitivity (the retina is sensitive to pressure; the hands can feel some strong sound-vibrations; the tympanum of the ear records touch). Strictly speaking there is no limit as to what intensities and what ranges receptors could be sensitive to. Sharks are sensitive to electrostatic fields, homing pigeons to magnetic fields, snakes to infra-red rays, bacteria to acid concentrations; perhaps there has even been a mutant organism sensitive to the passage of cosmic rays, even though that would hardly have bestowed any conceivable survival value. What is irrefutable is that individual receptors differ markedly from organism to organism, between different members of the same species (one dog being better at tracing smells than another; one person being able to sense light-waves of 375 nanometres, another not; children able to hear 20,000 Hz, older persons not), and between receptors of the same kind within one organism (one eye being sensitive to 765 nm and the other not; one ear deaf to 15,000 Hz and over, the other not). There are also just-noticeable-differences (JNDs), in that one person can see two shades of a colour where another sees only one; similarly with sounds.
It is the purpose of this article to explicate the logical implications of a television analogy for perception, first suggested by John R. Smythies (1956). It aims to show not only that one cannot escape the postulation of qualia that have an evolutionary purpose not accounted for within a strong functionalist theory, but also that the analogy undermines other anti-representationalist arguments as well as some representationalist ones.
This paper develops and draws the consequences of an etiological analysis of goal-directedness modeled on one that functions centrally in Charles Taylor's work on action. The author first presents, criticizes, and modifies Taylor's formulation, and then shows that his modified formulation accounts easily for much of the fine structure of teleological concepts and conceptualizations. Throughout, the author is at pains to show that teleological explanations are orthodox from an empiricist's point of view: they require nothing novel methodologically.
What role can intellectual virtues play in an account of knowledge when we interpret those virtues internalistically, i.e., as depending only on internal states of the cognizer? Though it has been argued that internalist virtues are ill suited to play any role in an account of knowledge, I will show that, on the contrary, internalist virtues can play an important role in recent accounts of knowledge developed to utilize externalist virtues. The virtue account of knowledge developed by Linda Zagzebski is intended to be supplemented by her version of the intellectual virtues which require an external success component. John Greco and Wayne Riggs both develop credit accounts of knowledge on which the abilities we use when we get credit for a true belief must be reliable. I examine the similarities between these three accounts of knowledge and demonstrate that internalist virtues fit into these accounts just as well as externalist virtues. Thus, although internalist virtues do not require a reliable connection to truth, they can still play an important role in defining the truth-requiring concept of knowledge.
This paper addresses three problems: the problem of formulating a coherent relativism, the Sorites paradox and a seldom noticed difficulty in the best intuitionistic case for the revision of classical logic. A response to the latter is proposed which, generalised, contributes towards the solution of the other two. The key to this response is a generalised conception of indeterminacy as a specific kind of intellectual bafflement - Quandary. Intuitionistic revisions of classical logic are merited wherever a subject matter is conceived both as liable to generate Quandary and as subject to a broad form of evidential constraint. So motivated, the distinctions enshrined in intuitionistic logic provide both for a satisfying resolution of the Sorites paradox and a coherent outlet for relativistic views about, e.g., matters of taste and morals. An important corollary of the discussion is that an epistemic conception of vagueness can be prised apart from the strong metaphysical realism with which its principal supporters have associated it, and acknowledged to harbour an independent insight.
This paper proposes a framework for an ethical impact assessment which can be performed in regard to any policy, service, project or programme involving information technology. The framework is structured on the four principles posited by Beauchamp and Childress together with a separate section on privacy and data protection. The framework identifies key social values and ethical issues, provides some brief explanatory contextual information which is then followed by a set of questions aimed at the technology developer or policy-maker to facilitate consideration of ethical issues, in consultation with stakeholders, which may arise in their undertaking. In addition, the framework includes a set of ethical tools and procedural practices which can be employed as part of the ethical impact assessment. Although the framework has been developed within a European context, it could be applied equally well beyond European borders.
This is a multi-disciplinary exploration of the history of understanding of the human mind or soul and its relationship to the body, through the course of more than two thousand years. Thirteen specially commissioned chapters, each written by a recognized expert, discuss such figures as the doctors Hippocrates and Galen, the theologians St Paul, Augustine, and Aquinas, and philosophers from Plato to Leibniz.
Sunstein's characterization of moral blunders jointly indicts an intuitive process and the structure of heuristics. But intuitions need not lead to error, and the problems with moral heuristics apply also to moral principles. Accordingly, moral development may well involve more, rather than less, intuitive responsiveness. This suggests a novel trajectory for future research into the development of appropriate moral judgments.
Knowledge of one's own sensations, desires, intentions, thoughts, beliefs, and other attitudes is characteristically different from other kinds of knowledge: it has greater immediacy, authority, and salience. This volume offers a powerful and comprehensive look at current work on this topic, featuring closely interlinked essays by leading figures in the field that examine philosophical questions raised by the distinctive character of self-knowledge, relating it to knowledge of other minds, to rationality and agency, externalist theories of psychological content, and knowledge of language.
Many theorists claim that justice is a question-begging concept that has no inherent substantive content. They point to disagreements among justice theorists themselves about basic aspects of the justice theory, such as the nature of corrective justice and the distinction between it and distributive justice, as even further reason to dismiss the concept of justice or to fill it with their preferred theoretical content. Yet most persons perceive that the concept of justice is not an empty shell. Since ancient times it has been thought to encompass not merely a formal equality (treating like cases alike), but also a substantive equality grounded in the equal dignity of each human being which requires giving each person his or her "due" - what is his or hers as a matter of right - a requirement that is usually understood to be in direct conflict with the basic principles of aggregate social welfare theories such as utilitarianism or its modern variant, economic efficiency. The elaboration of this substantive equality and its implications for morality, justice, and law form the core of the natural-law or natural-right theories of law. In this article I build upon a summary and critique of John Finnis's natural law theory (1) to delineate the basic assumptions and principles of the natural law theory regarding the foundations of and relationships among morality, justice, and law; (2) to demonstrate the agreement of major natural law theorists, from Aristotle through Aquinas and Kant to Finnis, on these basic assumptions and principles; (3) to distinguish these basic assumptions and principles from those of the competing theories of utilitarianism and economic efficiency; and (4) to clarify the nature of and distinctions between the two basic divisions of substantive justice: distributive justice and interactive justice.
I use the term "interactive justice" instead of the usual term, "corrective justice," since the former term is much more informative and precise in conveying the distinct nature and domain of this type of justice, whereas the latter term almost always misleads people into one or both of two related misconceptions: (1) that "corrective" justice is concerned solely with the correction of wrongful injuries and has nothing to say about the nature of the underlying wrongs or the prevention of their occurrence, and (2) that it is merely a remedial corollary of distributive justice which corrects deviations from the distributively just distribution. Distributive justice and interactive justice separately address the two fundamental problems of human existence, and they employ quite different criteria of equality to resolve those problems. Together, they seek to assure the attainment of the common good (the full realization, to the extent practicable, of each person's humanity) by providing each person with her fair share of the social stock of instrumental goods (positive freedom via distributive justice) and by securing her person and her existing stock of instrumental goods from interactions with others that are inconsistent with her status as a rational being with equal, absolute moral worth (negative freedom via interactive justice).
If a sensory field exists as a pure natural sign open to all kinds of interpretation as evidence (see 'Sensing as non-epistemic'), what is it that does the interpreting? Borrowing from the old Gestalt psychologists, I have proposed a gestalt module that picks out wholes from the turmoil, it being the process of noticing or attending to them; the important difference from Koffka and Köhler (Koffka, 1935; Köhler, 1940), the originators of the term 'gestalt' in the psychology of perception, is that the emphasis is upon the gestalt projection as motivated. Gestalt-attention of this kind is usually enforced in the first instance by pain or pleasure, and the resulting projections are placed in memory tabbed with fear or desire, such that if such a pattern recurs in the sensory field, fear or desire is triggered. In advanced animals the ability to play with the gestalt module has evolved, because experimenting in curiosity has proved adaptive, as the exploratory behaviour of the rat, the raven, the apes and Homo sapiens bears out.
Thomas Nagel has held that transcendence requires attaining a point of view stripped of features unique to our perspective. The aim of transcendence on this view is to get at reality as it is, independent of our contributions to it. I show this notion of transcendence to be incoherent, yet defend a contrasting notion of transcendence. As conceived here, transcendence does not require striving for an external, objective viewpoint on nature or looking at matters from someone else's or an impartial point of view. On my view, which builds on the work of Iris Murdoch, transcendence consists of a refinement of our concepts and sensibility to make them more adequate to the individuals we encounter. (Published Online October 13 2005).
Lucas and Penrose have contended that, by displaying how any characterisation of arithmetical proof programmable into a machine allows of diagonalisation, generating a humanly recognisable proof which eludes that characterisation, Gödel's incompleteness theorem rules out any purely mechanical model of the human intellect. The main criticisms of this argument have been that the proof generated by diagonalisation (i) will not be humanly recognisable unless humans can grasp the specification of the object-system (Benacerraf); and (ii) counts as a proof only on the (unproven) hypothesis that the object system is consistent (Putnam). The present paper argues that criticism (ii) may be met head-on by an intuitionistic proponent of the anti-mechanist argument, and that criticism (i) is simply mistaken. However, the paper concludes by questioning the sufficiency of the situation for an interesting anti-mechanist conclusion.