It is plausible that there are epistemic reasons bearing on a distinctively epistemic standard of correctness for belief. It is also plausible that there are a range of practical reasons bearing on what to believe. These theses are often thought to be in tension with each other. Most significantly for our purposes, it is obscure how epistemic reasons and practical reasons might interact in the explanation of what one ought to believe. We draw an analogy with a similar distinction between types of reasons for actions in the context of activities. The analogy motivates a two-level account of the structure of normativity that explains the interaction of correctness-based and other reasons. This account relies upon a distinction between normative reasons and authoritatively normative reasons. Only the latter play the reasons role in explaining what state one ought to be in. All and only practical reasons are authoritative reasons. Hence, in one important sense, all reasons for belief are practical reasons. But this account also preserves the autonomy and importance of epistemic reasons. Given the importance of having true beliefs about the world, our epistemic standard typically plays a key role in explaining what we ought to believe. In addition to reconciling (versions of) evidentialism and pragmatism, this two-level account has implications for a range of important debates in normative theory, including the interaction of right and wrong reasons for actions and other attitudes, the significance of reasons in understanding normativity and authoritative normativity, the distinction between ‘formal’ and ‘substantive’ normativity, and whether there is a unified source of authoritative normativity.
A natural suggestion and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
Etiquette and other merely formal normative standards like legality, honor, and rules of games are taken less seriously than they should be. While these standards are not intrinsically reason-providing in the way morality is often taken to be, they also play an important role in our practical lives: we collectively treat them as important for assessing the behavior of ourselves and others and as licensing particular forms of sanction for violations. This chapter develops a novel account of the normativity of formal standards on which the role they play in our practical lives explains a distinctive kind of reason to obey them. We have this kind of reason to be polite because etiquette is important to us. We also have this kind of reason to be moral because morality is important to us. This parallel suggests that the importance we assign to morality is insufficient to justify its being substantive.
Mark Schroeder has argued that all reasonable forms of inconsistency of attitude consist of having the same attitude type towards a pair of inconsistent contents (A-type inconsistency). We suggest that he is mistaken in this, offering a number of intuitive examples of pairs of distinct attitude types with consistent contents which are intuitively inconsistent (B-type inconsistency). We further argue that, despite the virtues of Schroeder's elegant A-type expressivist semantics, B-type inconsistency is in many ways the more natural choice in developing an expressivist account of moral discourse. We close by showing how to adapt ordinary formality-based accounts of logicality to define a B-type account of logical inconsistency and distinguish it from both semantic and pragmatic inconsistency. In sum, we provide a roadmap of how to develop a successful B-type expressivism.
I distinguish two ways of developing anti-exceptionalist approaches to logical revision. The first emphasizes comparing the theoretical virtuousness of developed bodies of logical theories, such as classical and intuitionistic logic. I'll call this whole theory comparison. The second attempts local repairs to problematic bits of our logical theories, such as dropping excluded middle to deal with intuitions about vagueness. I'll call this the piecemeal approach. I then briefly discuss a problem I've developed elsewhere for comparisons of logical theories. Essentially, the problem is that each of a pair of logics may evaluate the other as superior to itself, resulting in oscillation between logical options. The piecemeal approach offers a way out of this problem and might thereby seem preferable to whole theory comparisons. I go on to show that reflective equilibrium, the best known piecemeal method, has deep problems of its own when applied to logic.
Sometimes a fact can play a role in a grounding explanation, but the particular content of that fact makes no difference to the explanation—any fact would do in its place. I call these facts vacuous grounds. I show that applying the distinction between vacuous and non-vacuous grounds allows us to give a principled solution to Kit Fine and Stephan Krämer’s paradox of ground. This paradox shows that on minimal assumptions about grounding and minimal assumptions about logic, we can show that grounding is reflexive, contra the intuitive character of grounds. I argue that we should never have accepted that grounding is irreflexive in the first place; the intuitions that support irreflexivity plausibly only require that grounding be non-vacuously irreflexive. Fine and Krämer’s paradox relies, essentially, on a case of vacuous grounding and is thus no problem for this account.
I argue that certain species of belief, such as mathematical, logical, and normative beliefs, are insulated from a form of Harman-style debunking argument, whereas moral beliefs, the primary target of such arguments, are not. Harman-style arguments have been misunderstood as attempts to directly undermine our moral beliefs. They are best understood, rather, as burden-shifting arguments, concluding that we need additional reasons to maintain our moral beliefs. If we understand them this way, then we can see why moral beliefs are vulnerable to such arguments while mathematical, logical, and normative beliefs are not—the very construction of Harman-style skeptical arguments requires the truth of significant fragments of our mathematical, logical, and normative beliefs, but requires no such thing of our moral beliefs. Given this property, Harman-style skeptical arguments against logical, mathematical, and normative beliefs are self-effacing; doubting these beliefs on the basis of such arguments results in the loss of our reasons for doubt. But we can cleanly doubt the truth of morality.
Expressivists explain the expression relation which obtains between sincere moral assertion and the conative or affective attitude thereby expressed by appeal to the relation which obtains between sincere assertion and belief. In fact, they often explicitly take the relation between moral assertion and their favored conative or affective attitude to be exactly the same as the relation between assertion and the belief thereby expressed. If this is correct, then we can use the identity of the expression relation in the two cases to test the expressivist account as a descriptive or hermeneutic account of moral discourse. I formulate one such test, drawing on a standard explanation of Moore's paradox. I show that if expressivism is correct as a descriptive account of moral discourse, then we should expect versions of Moore's paradox where we explicitly deny that we possess certain affective or conative attitudes (for example, 'Lying is wrong, but I don't disapprove of lying'). I then argue that the constructions that mirror Moore's paradox are not incoherent. It follows that expressivism is either incorrect as a hermeneutic account of moral discourse or that the expression relation which holds between sincere moral assertion and affective or conative attitudes is not identical to the relation which holds between sincere non-moral assertion and belief. A number of objections are canvassed and rejected.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. It covers higher-order attitude approaches, tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show, by developing an account which is plausibly extensionally adequate and more philosophically motivated, that this recent criticism of syntactic accounts as extensionally inadequate is unwarranted. This is important for recent anti-exceptionalist treatments of logic, since syntactic accounts require less theoretical baggage than semantic accounts.
Why do promises give rise to reasons? I consider a quadruple of possibilities which I think will not work, then sketch the explanation of the normativity of promising I find more plausible—that it is constitutive of the practice of promising that promise-breaking implies liability for blame and that we take liability for blame to be a bad thing. This effects a reduction of the normativity of promising to conventionalism about liability together with instrumental normativity and desire-based reasons. This is important for a number of reasons, but the most important is that this style of account can be extended to account for nearly all normativity—one notable exception being instrumental normativity itself. Success in the case of promises suggests a general reduction of normativity to conventions and instrumental normativity. But success in the case of promises is already quite interesting and does not depend essentially on the general claim about normativity.
Logical Indefinites. Jack Woods - 2014 - Logique et Analyse (Special Issue edited by Julien Murzi and Massimiliano Carrara) 227: 277-307.
I argue that we can and should extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
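For orientation, here are standard textbook formulations of two of the expressions at issue (an illustrative sketch, not the paper's own notation): Hilbert's ε operator is governed by its critical axiom, and 'the number of' operator # is standardly introduced by the abstraction principle known as Hume's Principle.

```latex
% Hilbert's epsilon calculus: the critical axiom. If anything satisfies phi,
% then the epsilon term (an indefinitely chosen phi-witness) satisfies phi.
\exists x\,\varphi(x) \rightarrow \varphi\bigl(\varepsilon x\,\varphi(x)\bigr)

% Hume's Principle: an abstraction principle introducing '#' ('the number of').
% Here F \approx G abbreviates the second-order definable claim that
% the Fs and the Gs are equinumerous (one-to-one correlated).
\#F = \#G \leftrightarrow F \approx G
```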
Neofregeanism and structuralism are among the most promising recent approaches to the philosophy of mathematics. Yet both have serious costs. We develop a view, structuralist neologicism, which retains the central advantages of each while avoiding their more serious costs. The key to our approach is using arbitrary reference to explicate how mathematical terms, introduced by abstraction principles, refer. Focusing on numerical terms, this allows us to treat abstraction principles as implicit definitions determining all properties of the numbers, achieving a key neofregean advantage, while preserving the key structuralist advantage: that it does not matter which objects play the number role.
I defend normative subjectivism against the charge that believing in it undermines the functional role of normative judgment. In particular, I defend it against the claim that believing that our reasons change from context to context is problematic for our use of normative judgments. To do so, I distinguish two senses of normative universality and normative reasons: evaluative universality and reasons, and ontic universality and reasons. The former captures how even subjectivists can evaluate the actions of those subscribing to other conventions; the latter explicates how their reasons differ from ours. I then show that four aspects of the functional role of normativity—evaluation of our own and others' actions and reasons, normative communication, hypothetical planning, and evaluating counternormative conditionals—at most require that our normative systems be evaluatively universal. Yet reasonable subjectivist positions need not deny evaluative universality.
It is regrettably common for theorists to attempt to characterize the Humean dictum that one can’t get an ‘ought’ from an ‘is’ just in broadly logical terms. We here address an important new class of such approaches which appeal to model-theoretic machinery. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, developed in detail for Daniel Singer’s and Gillian Russell and Greg Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding the thing. In the particular case of Hume’s dictum, given reasonable conceptual or metaphysical claims about the ethical, Singer’s and Russell and Restall’s accounts treat obviously ethical claims as descriptive and vice versa. Consequently, their model-theoretic characterizations of Hume’s dictum are not metaethically neutral. This encourages skepticism about whether model-theoretic machinery suffices to provide an illuminating distinction between the ethical and the descriptive.
I argue that in order to apply the most common type of criteria for logicality, invariance criteria, to natural language, we need to consider both invariance of content—modeled by functions from contexts into extensions—and invariance of character—modeled, à la Kaplan, by functions from contexts of use into contents. Logical expressions should be invariant in both senses. If we do not require this, then old objections due to Timothy McCarthy and William Hanson, suitably modified, demonstrate that content-invariant expressions can display intuitive marks of non-logicality. If we do require this, we neatly avoid these objections while also managing to demonstrate desirable connections of logicality to necessity. The resulting view is more adequate as a demarcation of the logical expressions of natural language.
Philosophical arguments usually are and nearly always should be abductive. Across many areas, philosophers are starting to recognize that often the best we can do in theorizing about some phenomenon is to put forward our best overall account of it, warts and all. This is especially true in esoteric areas like logic, aesthetics, mathematics, and morality, where the data to be explained are often based in our stubborn intuitions.

While this methodological shift is welcome, it's not without problems. Abductive arguments involve significant theoretical resources which can themselves be part of what's being disputed. This means that we will sometimes find otherwise good arguments which suggest their own grounds are problematic. In particular, sometimes revising our beliefs on the basis of such an argument can undermine the very justification we used in that argument.

This feature, which I'll call self-effacingness, occurs most dramatically in arguments against our standing views on the esoteric subject matters mentioned above: logic, mathematics, aesthetics, and morality. This is because these subject matters all play a role in how we reason abductively. This isn't an idle fact; we can resist some challenges to our standing beliefs about these subject matters exactly because the challenges are self-effacing. The self-effacing character of certain arguments is thus both a benefit and a limitation of the abductive turn and deserves serious attention. I aim to give it the attention it deserves.
I show that the model-theoretic meaning that can be read off the natural deduction rules for disjunction fails to have certain desirable properties. I use this result to argue against a modest form of inferentialism which uses natural deduction rules to fix model-theoretic truth-conditions for logical connectives.
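For reference, the natural deduction rules in question are the standard introduction and elimination rules for disjunction (a textbook formulation, not the paper's own presentation); the issue is which model-theoretic truth-conditions for ∨ these rules can be taken to fix.

```latex
% Standard natural deduction rules for disjunction:
% two introduction rules and one elimination rule (proof by cases).
\frac{A}{A \lor B}\,(\lor\mathrm{I}_1)
\qquad
\frac{B}{A \lor B}\,(\lor\mathrm{I}_2)
\qquad
\frac{A \lor B \qquad
      \begin{array}{c}[A]\\ \vdots\\ C\end{array} \qquad
      \begin{array}{c}[B]\\ \vdots\\ C\end{array}}
     {C}\,(\lor\mathrm{E})
```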
I respond to an interesting objection to my 2014 argument against hermeneutic expressivism. I argue that even though Toppinen has identified an intriguing route for the expressivist to tread, the plausible developments of it would not fall to my argument anyway, as they do not make direct use of the parity thesis, which claims that expression works the same way in the case of conative and cognitive attitudes. I close by sketching a few other problems plaguing such views.
Moore’s paradox—the infamous felt bizarreness of sincerely uttering something of the form “I believe grass is green, but it ain’t”—has attracted a lot of attention since its original discovery (Moore 1942). It is often taken to be a paradox of belief, in the sense that the locus of the inconsistency is the beliefs of someone who so sincerely utters. This claim has been labeled the priority thesis: if you have an explanation of why a putative content could not be coherently believed, you thereby have an explanation of why it cannot be coherently asserted (Shoemaker 1995). The priority thesis, however, is insufficient to give a general explanation of Moore-paradoxical phenomena and, moreover, it is false. I demonstrate this, then show how to give a commitment-theoretic account of Moore-paradoxicality, drawing on work by Bach and Harnish. The resulting account has the virtue of explaining not only cases of pragmatic incoherence involving assertions, but also cases of cognate incoherence arising for other speech acts, such as promising, guaranteeing, ordering, and the like.
I discuss Greg Restall’s attempt to generate an account of logical consequence from the incoherence of certain packages of assertions and denials. I take up his justification of the cut rule and argue that, in order to avoid counterexamples to cut, he needs, at least, to introduce a notion of logical form. I then suggest a few problems that will arise for his account if a notion of logical form is assumed. I close by sketching what I take to be the most natural minimal way of distinguishing content and form and suggest further problems arising for this route.
This collection of new essays presents cutting-edge research on the semantic conception of logic, the invariance criteria of logicality, grammaticality, and logical truth. Contributors explore the history of the semantic tradition, starting with Tarski, and its historical applications, while central criticisms of the tradition, and especially the use of invariance criteria to explain logicality, are revisited by the original participants in that debate. Other essays discuss more recent criticism of the approach, and researchers from mathematics and linguistics weigh in on the role of the semantic tradition in their disciplines. This book will be invaluable to philosophers and logicians alike.