Traditional inflationary approaches that specify the nature of truth are attractive in certain ways; yet, while many of these theories successfully explain why propositions in certain domains of discourse are true, they fail to adequately specify the nature of truth because they run up against counterexamples when attempting to generalize across all domains. One popular consequence is skepticism about the efficaciousness of inflationary approaches altogether. Yet, by recognizing that the failure to explain the truth of disparate propositions often stems from inflationary approaches' allegiance to alethic monism, pluralist approaches are able to avoid this explanatory inadequacy and the resulting skepticism, though at the cost of inviting other conceptual difficulties. A novel approach, alethic functionalism, attempts to circumvent the problems faced by pluralist approaches while preserving their main insights. Unfortunately, it too generates additional problems---namely, with its suspect appropriation of the multiple realizability paradigm and its platitude-based strategy---that need to be dissolved before it can constitute an adequate inflationary approach to the nature of truth.
This is a concise and readable study of five intertwined themes at the heart of Wittgenstein's thought, written by one of his most eminent interpreters. David Pears offers penetrating investigations and lucid explications of some of the most influential and yet puzzling writings of twentieth-century philosophy. He focuses on the idea of language as a picture of the world; the phenomenon of linguistic regularity; the famous "private language argument"; logical necessity; and ego and the self.
This article investigates a theme that has not previously been studied in its own right. This is curious, since the idea of philosophy is the most important one in Ortega y Gasset’s system. Here it is considered in its specificity: the principles of pantonomy and autonomy, its platitudinous nature, and its integrating and critical functions, which are Philosophy’s traditional characteristics. Ortega sets them out in a masterly way, reaching his highest literary brilliance when he praises Philosophy.
One increasingly popular technique in philosophy might be called the "platitudes analysis": a set of widely accepted claims about a given subject matter is collected, adjustments are made to the body of claims, and this is taken to specify a “role” for the phenomenon in question. (Perhaps the best-known example is analytic functionalism about mental states, where platitudes about belief, desire, intention etc. are together taken to give us a "role" for states to fill if they are to count as mental states.) We then look to our best theory of the world to see where this role is satisfied, if at all. Unfortunately, the platitudes analysis, so characterised, does not seem to help when we are doing fundamental metaphysics—when we want to know what, at base, our world is like (and not merely where things like e.g. the mental would be found in an already-specified ontology). Nevertheless, I will argue that the platitudes analysis, properly understood, does have the materials to help us answer questions in fundamental metaphysics as well. I will explore three different ways it can do so.
We present a strategy to dissolve semantic paradoxes which proceeds from an explanation of why paradoxical sentences or their definitions are semantically defective. This explanation is compatible with the acceptability of impredicative definitions, self-referential sentences and semantically closed languages and leaves the status of the so-called truth-teller sentence unaffected. It is based on platitudes which encode innocuous constraints on successful definition and successful expression of propositional content. We show that the construction of liar paradoxes and of certain versions of Curry’s paradox rests on presuppositions that violate these innocuous constraints. Other versions of Curry’s paradox are shown not to be paradoxical at all once their presuppositions are made explicit. Part of what we say rehearses a proposal originally made by Laurence Goldstein in 1985. Like Goldstein we dispose of certain paradoxes by rejecting some of the premises from which they must be taken to proceed. However, we disagree with his more recent view that the premises to be rejected are neither true nor false.
The article considers some of the methodological commitments - specifically, what Brandom calls Gadamerian platitudes - defended in Tales of the Mighty Dead. I argue that, given his commitment to Gadamer's model of dialogue and Vorgriff der Vollkommenheit (anticipation of completeness), Brandom should also accept Habermas's position on the ineliminability of the second-person or performative perspective concerning our interpretive claims. Key Words: first person; Hans-Georg Gadamer; Jürgen Habermas; hermeneutics; inferential semantics; performative pragmatics; second person; third person.
It is a platitude among decision theorists that agents should choose their actions so as to maximize expected value. But exactly how to define expected value is contentious. Evidential decision theory (henceforth EDT), causal decision theory (henceforth CDT), and a theory proposed by Ralph Wedgwood that this essay will call benchmark theory (BT) all advise agents to maximize different types of expected value. Consequently, their verdicts sometimes conflict. In certain famous cases of conflict—medical Newcomb problems—CDT and BT seem to get things right, while EDT seems to get things wrong. In other cases of conflict, including some recent examples suggested by Andy Egan, EDT and BT seem to get things right, while CDT seems to get things wrong. In still other cases, EDT and CDT seem to get things right, while BT gets things wrong. It's no accident, this essay claims, that all three decision theories are subject to counterexamples. Decision rules can be reinterpreted as voting rules, where the voters are the agent's possible future selves. The problematic examples have the structure of voting paradoxes. Just as voting paradoxes show that no voting rule can do everything we want, decision-theoretic paradoxes show that no decision rule can do everything we want. Luckily, the so-called “tickle defense” establishes that EDT, CDT, and BT will do everything we want in a wide range of situations. Most decision situations, this essay argues, are analogues of voting situations in which the voters unanimously adopt the same set of preferences. In such situations, all plausible voting rules and all plausible decision rules agree.
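The voting-paradox structure the essay invokes can be illustrated with a minimal sketch (my own illustration, with hypothetical preferences, not an example from the essay): three "voters", standing in for an agent's possible future selves, hold cyclically opposed rankings of acts A, B, and C, so pairwise-majority voting crowns no winner.

```python
# Illustrative Condorcet cycle (hypothetical preferences, not from the essay).
# Three "voters" -- the agent's possible future selves -- rank acts A, B, C.
voters = [
    ["A", "B", "C"],  # self 1: A > B > C
    ["B", "C", "A"],  # self 2: B > C > A
    ["C", "A", "B"],  # self 3: C > A > B
]

def majority_prefers(x, y):
    """True iff a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A.
cycle = [majority_prefers("A", "B"),
         majority_prefers("B", "C"),
         majority_prefers("C", "A")]

# Hence no act beats every rival by majority: there is no Condorcet winner.
condorcet_winner = [x for x in "ABC"
                    if all(majority_prefers(x, y) for y in "ABC" if y != x)]
print(cycle, condorcet_winner)  # [True, True, True] []
```

When the selves instead share a single preference ordering, as in the unanimous situations the essay says are most common, pairwise majority simply selects the commonly top-ranked act, which mirrors the claim that all plausible decision rules agree there.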
We have looked at worries about expressivism and other forms of noncognitivism. The externalist solution may also seem to be a solution of last resort, because it may seem to deny the platitude that moral judgments are motivationally efficacious. For this reason, we might look seriously at rationalist theories of moral motivation, because they promise to represent moral judgments as intrinsically motivational without giving up cognitivism.
It is now a platitude that sexual objectification is wrong. As is often pointed out, however, some objectification seems morally permissible and even quite appealing—as when lovers are so inflamed by passion that they temporarily fail to attend to the complexity and humanity of their partners. Some, such as Nussbaum, have argued that what renders objectification benign is the right sort of relationship between the participants; symmetry, mutuality, and intimacy render objectification less troubling. On this line of thought, pornography, prostitution, and some kinds of casual sex are inherently morally suspect. I argue against this view: what matters is simply respect for autonomy, and whether the objectification is consensual. Intimacy, I explain, can make objectification more morally worrisome rather than less, and symmetry and mutuality are not relevant. The proper political and social context, however, is crucial, since only in its presence can consent be genuine. I defend the consent account against the objection that there is something paradoxical in consenting to objectification, and I conclude that given the right background conditions, there is nothing wrong with anonymous, one-sided, or just-for-pleasure kinds of sexual objectification.
One thing nearly all epistemologists agree upon is that Gettier cases are decisive counterexamples to the tripartite analysis of knowledge; whatever else is true of knowledge, it is not merely belief that is both justified and true. They now agree that knowledge is not justified true belief because this is consistent with there being too much luck present in the cases, and that knowledge excludes such luck. This is to endorse what has become known as the 'anti-luck platitude'.

But what if generations of philosophers have been mistaken about this, blinded at least partially by a deeply entrenched professional bias? There has been another, albeit minority, response to Gettier: to deny that the cases are counterexamples at all.

Stephen Hetherington, a principal and vocal proponent of this view, advances what he calls the 'Knowing Luckily Proposal'. If Hetherington is correct, this would call for a major re-evaluation and re-orientation of post-Gettier analytic epistemology, since much of it assumes the anti-luck platitude both in elucidating the concept of knowledge, and in the application of such accounts to central philosophical problems. It is therefore imperative that the Knowing Luckily Proposal be considered and evaluated in detail.

In this paper I critically assess the Knowing Luckily Proposal. I argue that while it draws our attention to certain important features of knowledge, ultimately it fails, and the anti-luck platitude emerges unscathed. Whatever else is true of knowledge, therefore, it is non-lucky true belief. For a proposition to count as knowledge, we cannot arrive at its truth accidentally or for the wrong reason.
Perceptual experiences justify beliefs—that much seems obvious. As Brewer puts it, “sense experiential states provide reasons for empirical beliefs” (this volume, xx). In Mind and World McDowell argues that we can get from this apparent platitude to the controversial claim that perceptual experiences have conceptual content: [W]e can coherently credit experiences with rational relations to judgement and belief, but only if we take it that spontaneity is already implicated in receptivity; that is, only if we take it that experiences have conceptual content. (1994, 162) Brewer agrees. Their view is sometimes called conceptualism; nonconceptualism is the rival position, that experiences have nonconceptual content. One initial obstacle is understanding what the issue is. What is conceptual content, and how is it different from nonconceptual content? Section 1 of this paper explains two versions of each of the rival positions: state (non)conceptualism and content (non)conceptualism; the latter pair is the locus of the relevant dispute. Two prominent arguments for content nonconceptualism—the richness argument and the continuity argument—both fail (section 2). McDowell’s and Brewer’s epistemological defenses of content conceptualism are also faulty (section 3). Section 4 gives a more simple-minded case for conceptualism; finally, some reasons are given for rejecting the claim—on one natural interpretation—that experiences justify beliefs.
I sketch a new constraint on chance, which connects chance ascriptions closely with ascriptions of ability, and more specifically with 'CAN'-claims. This connection between chance and ability has some claim to be a platitude; moreover, it exposes the debate over deterministic chance to the extensive literature on (in)compatibilism about free will. The upshot is that a prima facie case for the tenability of deterministic chance can be made. But the main thrust of the paper is to draw attention to the connection between the truth conditions of sentences involving 'CAN' and 'CHANCE', and argue for the context sensitivity of each term. Awareness of this context sensitivity has consequences for the evaluation of particular philosophical arguments for (in)compatibilism when they are presented in particular contexts.
In reworking a variety of biological concepts, Developmental Systems Theory (DST) has made frequent use of parity of reasoning. We have done this to show, for instance, that factors that have similar sorts of impact on a developing organism tend nevertheless to be invested with quite different causal importance. We have made similar arguments about evolutionary processes. Together, these analyses have allowed DST not only to cut through some age-old muddles about the nature of development, but also to effect a long-delayed reintegration of development into evolutionary theory. Our penchant for causal symmetry, however (or 'causal democracy', as it has recently been termed), has sometimes been misunderstood. This paper shows that causal symmetry is neither a platitude about multiple influences nor a denial of useful distinctions, but a powerful way of exposing hidden assumptions and opening up traditional formulations to fruitful change.
A platitude questioned by many Buddhist thinkers in India and Tibet is the existence of the world. We might be tempted to insert some modifier here, such as “substantial,” “self-existent,” or “intrinsically existent,” for, one might argue, these thinkers did not want to question the existence of the world tout court but only that of a substantial, self-existent, or otherwise suitably qualified world. But perhaps these modifiers are not as important as is generally thought, for the understanding of the world questioned is very much the understanding of the world everybody has. It is the understanding that there is a world out there — independent of our minds — and that when we speak and think about this world we mostly get it right. But the Madhyamaka thinkers under discussion here deny that there is a world out there and claim that our opinions about it are to the greatest part fundamentally and dangerously wrong.
It is a platitude that whereas language is mediated by convention, depiction is mediated by resemblance. But this platitude may be attacked on the grounds that resemblance is either insufficient for or incidental to depictive representation. I defend common sense from this attack by using Grice's analysis of meaning to specify the non-incidental role of resemblance in depictive representation.
That truth provides the standard for believing appears to be a platitude, one which dovetails with the idea that in some sense belief aims only at the truth. In recent years, however, an increasing number of prominent philosophers have suggested that knowledge provides the standard for believing, and so that belief aims only at knowledge. In this paper, I examine the considerations which have been put forward in support of this suggestion, considerations relating to lottery beliefs, Moorean beliefs, the criticism and defence of belief, and the value of knowledge. I argue that those considerations do not give us reason to give up the truth view in favour of the knowledge view and, moreover, that reflection on those considerations gives us some reason to reject the knowledge view. Thus, I conclude, we can continue to take the apparent platitude at face value.
According to Normativism, linguistic meaning is intrinsically normative (I shall explore what this amounts to below). One, though not the only, reason for Normativism’s importance is that it bears on the prospects of providing an account of meaning in the terms available to the natural sciences. In turn, since linguistic behaviour is inextricably bound up with both non-linguistic behaviour and the psychological attitudes informing it, Normativism might (if true) pose a serious challenge to the project of accommodating creatures such as ourselves within the worldview the natural sciences afford. In this paper, I shall not focus on such heady themes but rather on the prior issue of whether or not one should accept Normativism. Though certainly in circulation beforehand, it is fair to say that Saul Kripke’s (1982) was largely responsible for bringing the thesis to the philosophical forefront.1 In the years following its publication, Normativism looked close to achieving the status of orthodoxy. At one stage, Crispin Wright felt able to remark assuredly that the view ‘strike[s] most people now as a harmless platitude’ (1993: 247).2 In recent years...
In a 2002 paper for this journal, Richard Joyce presents three new arguments against the Divine Command Theory. In this comment, I attempt to show that each of these arguments is either unpersuasive or uninteresting. Two of Joyce’s arguments are unpersuasive because they rely on an implausible principle or an implausible claim about what counts as a platitude governing use of the term “wrong.” Joyce’s other argument is uninteresting because it is persuasive only if Joyce’s formulation of the Euthyphro Problem is persuasive. However, Joyce argues that the Euthyphro Problem is not persuasive. Therefore, if Joyce is correct about this, then his own objection to the Divine Command Theory is not persuasive either.
All sorts of things are context-dependent in one way or another. What it is appropriate to wear, to give, or to reveal depends on the context. Whether or not it is all right to lie, harm, or even kill depends on the context. If you google the phrase ‘depends on the context’, you’ll get several hundred million results. This chapter aims to narrow that down. In this context the topic is context dependence in language and its use. It is commonly observed that the same sentence can be used to convey different things in different contexts. That is why people complain when something they say is ‘taken out of context’ and insist that it be ‘put into context’, because ‘context makes it clear’ what they meant. Indeed, it is practically a platitude that what a speaker means in uttering a certain sentence, as well as how her audience understands her, ‘depends on the context’. But just what does that amount to, and to what extent is it true?
It is a platitude in epistemology to say that knowledge excludes luck. Indeed, if one can show that an epistemological theory allows ‘lucky’ knowledge, then that usually suffices to warrant one in straightforwardly rejecting the view. Even despite the prevalence of this intuition, however, very few commentators have explored what it means to say that knowledge is incompatible with luck. In particular, no commentator, so far as I am aware, has offered an account of what luck is and on this basis identified what it means for a true belief to be non-lucky. It is just such a view that I propose, however, and I hope to give a flavour of what this strategy involves here. In particular, I have two goals in this paper. The first is to outline the general contours of the position and show how such a view can account for the attraction of adducing a safety condition on knowledge, with all the epistemic benefits that this principle holds. Relatedly, I will also explain how an anti-luck epistemology can assist us in determining the best formulation of this principle. The second goal of the paper is to show anti-luck epistemology in action by highlighting how such a view can deal with the various problems posed by lottery-style examples.
The possibilities of depicting non-existents, depicting non-particulars and depictive misrepresentation are frequently cited as grounds for denying the platitude that depiction is mediated by resemblance. I first argue that these problems are really a manifestation of the more general problem of intentionality. I then show how there is a plausible solution to the general problem of intentionality which is consonant with the platitude.
This paper argues for a possible worlds theory of the content of pictures, with three complications: depictive content is centred, two-dimensional and structured. The paper argues that this theory supports a strong analogy between depictive and other kinds of representation and the platitude that depiction is mediated by resemblance.
By defining both depictive and linguistic representation as kinds of symbol system, Nelson Goodman attempts to undermine the platitude that, whereas linguistic representation is mediated by convention, depiction is mediated by resemblance. I argue that Goodman is right to draw a strong analogy between the two kinds of representation, but wrong to draw the counterintuitive conclusion that depiction is not mediated by resemblance.
There is a long tradition of thinking of language as conventional in its nature, dating back at least to Aristotle (De Interpretatione). By appealing to the role of conventions, it is thought, we can distinguish linguistic signs, the meaningful use of words, from mere natural ‘signs’. During the last century the thesis that language is essentially conventional has played a central role within philosophy of language, and has even been called a platitude (Lewis 1969). More recently, the focus has been less on the conventional nature of language than on the claim that meaning is essentially normative in a wider sense, leaving it open whether the normativity in question should be understood in terms of conventions or not (Kripke 1982).
Once upon a time it was assumed that speaking literally and directly is the norm and that speaking nonliterally or indirectly is the exception. The assumption was that normally what a speaker means can be read off of the meaning of the sentence he utters, and that departures from this, if not uncommon, are at least easily distinguished from normal utterances and explainable along Gricean lines. The departures were thought to be limited to obvious cases like figurative speech and conversational implicature. However, people have come to appreciate that the meaning of a typical sentence, at least one we are at all likely to use, is impoverished, at least relative to what we are likely to mean in uttering it. In other words, what a speaker normally means in uttering a sentence, even without speaking figuratively or obliquely, is an enriched version of what could be predicted from the meaning of the sentence alone. This can be because the sentence expresses a “minimal” (or “skeletal”) proposition or even because it fails to express a complete proposition at all.1 Indeed, it is now a platitude that linguistic meaning generally underdetermines speaker meaning. That is, generally what a speaker means in uttering a sentence, even if the sentence is devoid of ambiguity, vagueness, or indexicality, goes beyond what the sentence means. The question is what to make of this Contextualist Platitude, as I’ll call it. It may be a truism, but does it require a radical revision of the older conception of the relation between what sentences mean and what speakers mean in uttering them? Does it lead to a major modification, or perhaps even outright rejection, of the semantic-pragmatic distinction? I think...
Here is the liar paradox. We have a sentence, (L), which somehow says of itself that it is false. Suppose (L) is true. Then things are as (L) says they are. (For it would appear to be a mere platitude that if a sentence is true, then things are as the sentence says they are.) (L) says that (L) is false. So, (L) is false. Since the supposition that (L) is true leads to contradiction, we can assert that (L) is false. But since this is just what (L) says, (L) is then true. (For it would appear to be a mere platitude that if things are as a given sentence says they are, the sentence is true.) So (L) is true. So (L) is both true and false. Contradiction.
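The impossibility the reasoning above trades on can be checked by brute force. The following toy rendering is my own illustration, assuming classical bivalence: neither truth value can be consistently assigned to (L).

```python
# Toy check of the liar (my illustration, assuming classical bivalence).
# (L) says that (L) is false; by the two platitudes above, what (L) says
# should match (L)'s actual truth value -- but no assignment achieves this.
def what_L_says_is_the_case(value_of_L):
    # (L) says: "(L) is false"
    return not value_of_L

consistent_values = [v for v in (True, False)
                     if what_L_says_is_the_case(v) == v]
print(consistent_values)  # [] -- no consistent classical assignment exists
```

The empty result is just the contradiction restated: assigning True makes (L) false, assigning False makes it true.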
Anti-luck epistemology is an approach to analyzing knowledge that takes as a starting point the widely-held assumption that knowledge must exclude luck. Call this the anti-luck platitude. As Duncan Pritchard (2005) has suggested, there are three stages constituent of anti-luck epistemology, each of which specifies a different philosophical requirement: these stages call for us to first give an account of luck; second, specify the sense in which knowledge is incompatible with luck; and finally, show what conditions must be satisfied in order to block the kind of luck with which knowledge was argued to be incompatible. What I’ll show here is that the modal account of luck offers a plausible story at the first stage and leads naturally to equally plausible lines to take at the second and third stages, at which a safety condition on knowledge is squarely motivated. There are, however, recent challenges—advanced by Jonathan Kvanvig (Philosophy and Phenomenological Research 77: 272–281, 2008); Kelly Becker (2007); and Jennifer Lackey (Australasian Journal of Philosophy 86(2): 255–267, 2008), among others—to the plausibility of the safety-based anti-luck project I’ve sketched here at each of its three stages of development. Once I’ve made precise the challenges, I’ll show why none implies that we abandon the commitments of the safety-based anti-luck project at any of its stages. What we should conclude, then, is that a safety-condition on knowledge is motivated by independently defensible accounts of (1) what luck is; and (2) just how knowledge should be thought incompatible with it.
Robustness is a common platitude: hypotheses are better supported with evidence generated by multiple techniques that rely on different background assumptions. Robustness has been put to numerous epistemic tasks, including the demarcation of artifacts from real entities, countering the “experimenter’s regress,” and resolving evidential discordance. Despite the frequency of appeals to robustness, the notion itself has received scant critique. Arguments based on robustness can give incorrect conclusions. More worrying is that although robustness may be valuable in ideal evidential circumstances (i.e., when evidence is concordant), often when a variety of evidence is available from multiple techniques, the evidence is discordant. †To contact the author, please write to: Jacob Stegenga, Department of Philosophy, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093; e-mail: firstname.lastname@example.org.
Informational semantics was first developed as an interpretation of the model-theory of substructural (and especially relevant) logics. In this paper we argue that such a semantics is of independent value and that it should be considered as a genuine alternative explication of the notion of logical consequence alongside the traditional model-theoretical and the proof-theoretical accounts. Our starting point is the content-nonexpansion platitude which stipulates that an argument is valid iff the content of the conclusion does not exceed the combined content of the premises. We show that this basic platitude can be used to characterise the extension of classical as well as non-classical consequence relations. The distinctive trait of an informational semantics is that truth-conditions are replaced by information-conditions. The latter leads to an inversion of the usual order of explanation: considerations about logical discrimination (how finely propositions are individuated) are conceptually prior to considerations about deductive strength. Because this allows us to bypass considerations about truth, an informational semantics provides an attractive and metaphysically unencumbered account of logical consequence, non-classical logics, logical rivalry and pluralism about logical consequence.
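The content-nonexpansion platitude admits a schematic rendering (my gloss under an assumed set-based model, not the paper's own formalism): if the content of a sentence is modelled as a set of information states it rules out, and the combined content of the premises as the union of their contents, validity becomes containment of the conclusion's content in that union.

```python
# Schematic content-nonexpansion check (an assumed set-based model of
# "content", not the paper's own semantics): an argument counts as valid
# iff the conclusion's content does not exceed the combined premise content.
def valid(premise_contents, conclusion_content):
    combined = set().union(*premise_contents)  # combined content of premises
    return conclusion_content <= combined      # no expansion of content

p = {"s1", "s2"}  # states ruled out by premise P
q = {"s3"}        # states ruled out by premise Q

print(valid([p, q], {"s1", "s3"}))  # True: conclusion contained in premises
print(valid([p, q], {"s4"}))        # False: conclusion expands content
```

On such a picture, how finely the state space individuates propositions fixes which arguments come out valid, echoing the paper's point that logical discrimination is conceptually prior to deductive strength.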
At the start of Convention (1969) Lewis says that it is "a platitude that language is ruled by convention" and that he proposes to give us "an analysis of convention in its full generality, including tacit convention not created by agreement." Almost no clause, however, of Lewis's analysis has withstood the barrage of counterexamples over the years,1 and a glance at the big dictionary suggests why, for there are a dozen different senses listed there. Left unfettered, convention wanders freely from conventional wisdom through conventional medicine, conventions of art and "conventions of morality" to conventions of bidding in bridge.2 Surely it is unwise to try to fell these all with a single stone. Lewis's original goal, however, pursued further in (Lewis 1975), was to describe the conventionality of language, and this may be a more reasonable target.
It is a platitude that morality is normative, but a substantive and interesting question whether morality is normative in a robust and important way; and although it is often assumed that morality is indeed robustly normative, that view is by no means uncontroversial, and a compelling argument for it is conspicuously lacking. In this paper, I provide such an argument. I argue, based on plausible claims about the relationship between moral wrongs and moral criticizability, and the relationship between criticizability and normative reasons, that moral facts necessarily confer normative reasons upon moral agents.
It is taken to be a platitude that I can be responsible only for my own actions. Many have taken this to entail the slogan that responsibility presupposes personal identity. In this paper, I show that even if we grant the platitude, the slogan is not entailed and is at any rate false. I then suggest what the relevant non-identity relation grounding the ownership of actions consists in instead.
It seems to be a platitude of common sense that distinct ordinary objects cannot coincide, that they cannot fit into the same place or be composed of the same parts at the same time. The paradoxes of coincidence are instances of a breakdown of this platitude in light of counterexamples that are licensed by innocuous assumptions about particular kinds of ordinary object. Since both the anticoincidence principle and the assumptions driving the counterexamples flow from the folk conception of ordinary objects, the paradoxes threaten this conception with inconsistency. Typical approaches to the paradoxes reject the anticoincidence principle or some portion of the assumptions driving the counterexamples, thereby partially revising our common conception of the world around us. This essay offers a compatibilist solution to the paradoxes that sustains the folk conception of ordinary objects in its entirety. According to this solution, the various cases of distinct coincidents do not clash with the anticoincidence principle since the cases and the principle manifest different yet compatible perspectives on the world.
The senses, or sensory modalities, constitute the different ways we have of perceiving the world, such as seeing, hearing, touching, tasting and smelling. But how many senses are there? How many could there be? What makes the senses different? What interaction takes place between the senses? This book is a guide to thinking about these questions. Together with an extensive introduction to the topic, the book contains the key classic papers on this subject together with nine newly commissioned essays.

One reason that these questions are important is that we are receiving a huge influx of new information from the sciences that challenges some traditional philosophical views about the senses. This information needs to be incorporated into our view of the senses and perception. Can we do this whilst retaining our pre-existing concepts of the senses and of perception or do we need to revise our concepts? If they need to be revised, then in what way should that be done? Research in diverse areas, such as the nature of human perception, varieties of non-human animal perception, the interaction between different sensory modalities, perceptual disorders, and possible treatments for them, calls into question the platitude that there are five senses, as well as the pre-supposition that we know what we are counting when we count them as five (or more).

This book will serve as an inspiring introduction to the topic and as a basis from which further new research will grow. This volume is the first on the philosophy of the non-visual senses. It combines older, hard-to-find essays on the non-visual senses with contemporary essays by well-known philosophers in the field. Macpherson's introduction to the volume traces the philosophy of the senses throughout history and points towards new directions in its future.

Readership: The market will be scholars in philosophy of mind, as well as students particularly in graduate seminars.
In matters of distributive justice, we assume that it is important how benefits and burdens are distributed among different people. But what, precisely, is important about this? In particular, what, from the point of view of justice, is ultimately at stake in what distributions come about? T. M. Scanlon has been coy about what his contractualist moral theory might imply for justice.[ii] Yet his conception of morality bears directly on this question of stakes. The significance of distribution then depends on independently valuable relations of recognition. Distribution has no fundamental importance per se. This in turn has significant implications for how philosophical reasoning about justice in distribution must proceed. In recent years, many egalitarians (e.g. many luck egalitarians) have proceeded as though a distribution (of goods, resources, opportunities, capabilities, or welfare) can be just (or fair) by its very nature, in and of itself. The basic aim of the theory of distributive justice is to say what this intrinsically just distribution is (equality? priority for the worse off? everyone having enough? something else?).[iii] What is ultimately at stake in matters of distributive justice, it is suggested, is whether or not a certain intrinsically valuable distributional pattern comes about. Scanlon’s theory implies that this cannot be right: a distribution, taken as such, cannot be owed, and so cannot be justice. Or at least this follows given the platitude about justice, due to Aristotle, that justice is, by nature, giving each his or her due.[iv] The platitude tells us that to distribute justly is simply to give to each individual what he or she is due or owed, as determined by an independent conception of what this is. According to Scanlon’s independent conception of “what we owe to each other,” no individual can be owed a distribution across persons, as such.
We are at most each owed our respective shares—only what we can reasonably ask for on our own behalf.
In reply to Geach's objection against expressivism, some have claimed that there is a plurality of truth predicates. I raise a difficulty for this claim: valid inferences can involve sentences assessable by any truth predicate, corresponding to 'lightweight' truth as well as to 'heavyweight' truth. To account for this, some unique truth predicate must apply to all sentences that can appear in inferences. Mixed inferences remind us of a central platitude about truth: truth is what is preserved in valid inferences. The question is why we should postulate truth predicates that do not satisfy this platitude.
It’s a platitude that a picture is realistic to the degree to which it resembles what it represents (in relevant respects). But if properties are abundant and degrees of resemblance are proportions of properties in common, then the degree of resemblance between different particulars is constant (or undefined), which is inconsonant with the platitude. This paper argues that this problem should be resolved by revising the analysis of degrees of resemblance in terms of proportion of properties in common, and not by accepting a sparse theory of properties or by denying that degree of realism is degree of resemblance (in relevant respects).
Traditional theories of truth – such as the correspondence theory – are monist in character. All propositions, regardless of subject-matter, are true in the same way (if true). Recently, this view has been called into question by alethic pluralists (most notably Crispin Wright and Michael Lynch). According to the pluralist, the nature of truth varies across domains. Pluralists try to motivate their position by appealing to the following principle: for any domains D1 and D2, if the metaphysical constitutions of D1 and D2, respectively, differ, then D1-propositions and D2-propositions are true in different ways (if true). The aim of the paper is to present a monist challenge to this principle. The gist of the challenge is this: even if metaphysical pluralism (i.e. the antecedent) is granted, truth can be given a uniform account within a correspondence framework. The basic argument is this: every domain that the alethic pluralist is interested in is truth-apt. But any domain that deals in truth-apt propositions likewise deals in facts – after all, it seems a mere platitude that facts are what makes propositions true (if true). Once facts are admitted, the monist can argue as follows: a proposition is true if, and only if, it corresponds to reality. Now, a proposition corresponds to reality if, and only if, it represents reality correctly – and a proposition represents reality correctly if, and only if, what it says is a fact, or is the case. However, since the notion of fact is available for any truth-apt domain, a proposition – whatever its (truth-apt) domain – can be taken to be true if, and only if, it corresponds to reality. Thus, metaphysical pluralism does not imply alethic pluralism.
Despite the platitude that analytic philosophy is deeply concerned with language, philosophers of science have paid little attention to methodological issues that arise within historical linguistics. I broach this topic by arguing that many inferences in historical linguistics conform to Reichenbach's common cause principle (CCP). Although the scope of CCP is narrower than many have thought, inferences about the genealogies of languages are particularly apt for reconstruction using CCP. Quantitative approaches to language comparison are readily understood as methods for detecting the correlations that serve as premises for common cause inferences, and potential sources of error in historical linguistics correspond to well-known limitations of CCP.
The claim that conceptual systems change is a platitude. That our conceptual systems are theory-laden is no less platitudinous. Given evolutionary theory, biologists are led to divide up the living world into genes, organisms, species, etc. in a particular way. No theory-neutral individuation of individuals or partitioning of these individuals into natural kinds is possible. Parallel observations should hold for philosophical theories about scientific theories. In this paper I summarize a theory of scientific change which I set out in considerable detail in a book that I shall publish in the near future. Just as few scientists were willing to entertain the view that species evolve in the absence of a mechanism capable of explaining this change, so philosophers should be just as reticent about accepting a parallel view of conceptual systems in science evolving in the absence of a mechanism to explain this evolution. In this paper I set out such a mechanism. One reason that this task has seemed so formidable in the past is that we have all construed conceptual systems inappropriately. If we are to understand the evolution of conceptual systems in science, we must interpret them as forming lineages related by descent. In my theory, the notion of a family resemblance is taken literally, not metaphorically. In my book, I set out data to show that the mechanism which I propose is actually operative. In this paper, such data is assumed.
Three proponents of the Canberra Plan, namely Jackson, Pettit, and Smith, have developed a collective functionalist program—Canberra Functionalism—spanning from philosophical psychology to ethics. They argue that conceptual analysis is an indispensable tool for research on cognitive processes since it reveals that there are some folk concepts, like belief and desire, whose functional roles must be preserved rather than eliminated by future scientific explanations. Some naturalists have recently challenged this indispensability argument, though the point of that challenge has been blunted by a mutual conflation of metaphysical and methodological strands of naturalism. I argue that the naturalist’s challenge to the indispensability argument, like naturalism itself, ought to be reformulated as a strictly methodological thesis. So understood, the challenge succeeds by showing (1) that we cannot know a priori on the basis of conceptual analysis of folk platitudes that something must occupy the functional roles specified for beliefs and desires, and (2) that proponents of Canberra Functionalism sometimes tacitly concede this point by treating substantive psychological theories as the deliverances of a priori analysis of platitudes.
The idea that logic and reasoning are somehow related goes back to antiquity. It clearly underlies much of the work in logic, as witnessed by the development of computability, and formal and mechanical deductive systems, for example. On the other hand, a platitude is that logic is the study of correct reasoning; and reasoning is cognitive if anything is. Thus, the relationship between logic, computation, and correct reasoning makes an interesting and historically central case study for mechanism. The purpose of this article is to begin the articulation of this relationship, pointing out its sources and its limitations.
Moral particularism, on some interpretations, is committed to a shapeless thesis: the moral is shapeless with respect to the natural. (Call this version of moral particularism ‘shapeless moral particularism’). In more detail, the shapeless thesis is that the actions a moral concept or predicate can be correctly applied to have no natural commonality (or shape) amongst them. Jackson, Smith and Pettit (2000) argue, however, that the shapeless thesis violates the platitude ‘predication supervenes on nature’ (predicates or concepts apply because of how things are) and therefore ought to be rejected. I defend shapeless moral particularism by arguing that Jackson et al.’s contention is less compelling than it first appears. My defense is limited in the sense that it does not prove shapeless moral particularism to be right, and it leaves open the possibility that shapeless moral particularism might attract criticisms different from the ones advanced by Jackson et al. But at the very least, I hope to say enough to undermine Jackson et al.’s powerful attack against it. The plan of this paper is as follows. Section 1 glosses the view of moral particularism and explains why it is taken to be essentially committed to the shapeless thesis. Section 2 examines a Wittgensteinian argument for the shapeless thesis. I shall argue that the Canberrans’ counter-arguments against it on grounds of disjunctive commonality and conceptual competence do not succeed. Section 3 explicates the Canberrans’ predication supervenience argument against the shapeless thesis. Section 4 offers my criticisms of the Canberrans’ predication supervenience argument. In view of the above discussions, in Section 5, I conclude that there is no compelling argument (from the Canberrans) to believe that the shapeless thesis fails (as I have argued in Section 4). In fact, there is some good reason for us to believe it (as I have argued in Section 2).
If so, I contend that moral particularism, when construed as essentially committed to the shapeless thesis, still remains a live option.