A participatory rural appraisal in three West African countries examined the possibility of replacing the chemical pesticides used to control locusts and grasshoppers with a biological control method based on an indigenous fungal pathogen. The fungus is currently being tested at different sites in the Sahel and in the humid tropics of West Africa. Structured group interviews, individual discussions, and field visits were used to obtain farmers' perceptions of locusts and grasshoppers as crop pests, their quantitative estimation of crop losses, and their willingness to pay for locust control. Farmers as well as plant protection officers generally perceived locusts and grasshoppers as important pests that cause significant damage. Farmers were aware of some of the risks of the use of chemical pesticides, but not of the potential alternatives. The use of the fungus in an oil formulation applied with standard Ultra Low Volume (ULV) equipment was demonstrated, and the results discussed with farmers. Their impressions of biological control were favorable, and they expressed an interest in using the technology. Farmers' expressed willingness to pay for locust control is small, but not negligible. Locusts and grasshoppers are very visible pests and thus amenable to pressure from farmers on local administrators, as well as from farmers' relatives in the city on the national government. Therefore, political pressure for locust control is strong, although national governments spend little on it, depending mostly on foreign donors. Donors are increasingly worried about the environmental effects of the large amounts of chemical pesticides used in locust control, and are pushing for more benign alternatives. The results of the present survey indicate that there may be a potential market for a biopesticide against grasshoppers and locusts on cash crops in the humid areas. The potential market in the Sahel depends on a reduction of costs or a subsidy of its price. This subsidy could be justified by the expected reduction in environmental and health costs when replacing chemical pesticides. Since donors are the current purchasers of chemical pesticides for the Sahel, they would also be expected to be involved in the purchase of the biological product.
Open-minded people should endorse dogmatism because of its explanatory power. Dogmatism holds that, in the absence of defeaters, a seeming that P necessarily provides non-inferential justification for P. I show that dogmatism provides an intuitive explanation of four issues concerning non-inferential justification. It is particularly impressive that dogmatism can explain these issues because prominent epistemologists have argued that it can’t address at least two of them. Prominent epistemologists also object that dogmatism is absurdly permissive because it allows a seeming to provide justification even if the seeming was caused in some apparently inappropriate way. I conclude by disarming this objection.
That laws of nature play a vital role in explanation, prediction, and inductive inference is far clearer than the nature of the laws themselves. My hope here is to shed some light on the nature of natural laws by developing and defending the view that they involve genuine relations between properties. Such a position is suggested by Plato, and more recent versions have been sketched by several writers. But I am not happy with any of these accounts, not so much because they lack detail or engender minor difficulties, though they do, but because they share a quite fundamental defect. My goal here is to make this defect clear and, more importantly, to present a rather different version of this general conception of laws that avoids it. I begin by considering several features of natural laws and argue that these are best explained by the view that laws involve properties, that this involvement takes the form of a genuine relation between properties, and, finally, that the relation is a metaphysically necessary one. In the second section I start at the other end, and by reflecting on the nature of properties arrive at a similar account of natural laws. In the final section I develop this account in more detail, with emphasis on the nature of the relation between properties it invokes. Along the way several natural objections to the account are answered.
The primary aim of this book is to understand how seemings relate to justification and whether some version of dogmatism or phenomenal conservatism can be sustained. It also addresses a number of other issues, including the nature of seemings, cognitive penetration, Bayesianism, and the epistemology of morality and disagreement.
It is natural to think that many of our beliefs are rational because they are based on seemings, or on the way things seem. This is especially clear in the case of perception. Many of our mathematical, moral, and memory beliefs also appear to be based on seemings. In each of these cases, it is natural to think that our beliefs are not only based on a seeming, but also that they are rationally based on these seemings—at least assuming there is no relevant counterevidence. This piece is an introduction to a volume dedicated to the question of what the connection is between seemings and justified belief: under what conditions, if any, can a seeming justify its content?
The Neo-Moorean Deduction (I have a hand, so I am not a brain-in-a-vat) and the Zebra Deduction (the creature is a zebra, so it isn’t a cleverly disguised mule) are notorious. Crispin Wright, Martin Davies, Fred Dretske, and Brian McLaughlin, among others, argue that these deductions are instances of transmission failure. That is, they argue that these deductions cannot transmit justification to their conclusions. I contend, however, that the notoriety of these deductions is undeserved. My strategy is to clarify, attack, defend, and apply. I clarify what transmission and transmission failure really are, thereby exposing two questionable but quotidian assumptions. I attack existing views of transmission failure, especially those of Crispin Wright. I defend a permissive view of transmission failure, one which holds that deductions of a certain kind fail to transmit only because of premise circularity. Finally, I apply this account to the Neo-Moorean and Zebra Deductions and show that, given my permissive view, these deductions transmit in an intuitively acceptable way—at least if either a certain type of circularity is benign or a certain view of perceptual justification is false.
This book takes concepts developed by researchers in theoretical computer science and adapts and applies them to the study of natural language meaning. Summarizing over a decade of research, Chris Barker and Chung-chieh Shan put forward the Continuation Hypothesis: that the meaning of a natural language expression can depend on its own continuation.
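To give a flavor of the borrowed idea, here is a minimal sketch of continuation-passing style in Haskell; the toy lexicon and the `everyone` example are illustrative assumptions of mine, not code from the book:

```haskell
-- A continuized value of type a takes its own continuation
-- (the "rest of the sentence") and returns a final result r.
type Cont r a = (a -> r) -> r

-- Lift an ordinary value into a continuized one.
lift :: a -> Cont r a
lift x = \k -> k x

-- A quantifier like "everyone" is naturally a continuized individual:
-- it consumes the rest of the sentence and quantifies over it.
everyone :: Cont Bool String
everyone = \k -> all k ["Alice", "Bob", "Carol"]

-- A toy lexical meaning for the predicate "left".
left :: String -> Bool
left x = x /= "Bob"

-- "Everyone left": the predicate is fed in as the continuation.
example :: Bool
example = everyone left  -- False, since Bob didn't leave

main :: IO ()
main = print example
```

The design point this illustrates is the one the hypothesis trades on: because `everyone` takes the rest of the sentence as an argument, a quantifier can take scope over the material surrounding it, which is the sense in which a meaning "depends on its own continuation."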
It is argued that a number of important, and seemingly disparate, types of representation are species of a single relation, here called structural representation, that can be described in detail and studied in a way that is of considerable philosophical interest. A structural representation depends on the existence of a common structure between a representation and that which it represents, and it is important because it allows us to reason directly about the representation in order to draw conclusions about the phenomenon that it depicts. The present goal is to give a general and precise account of structural representation, then to use that account to illuminate several problems of current philosophical interest — including some that do not initially seem to involve representation at all. In particular, it is argued that ontological reductions (like that of the natural numbers to sets), compositional accounts of semantics, several important sorts of mental representation, and (perhaps) possible worlds semantics for intensional logics are all species of structural representation and are fruitfully studied in the framework developed here.
Perceptual dogmatism holds that if it perceptually seems to S that P, then S thereby has prima facie perceptual justification for P. But suppose Wishful Willy's desire for gold cognitively penetrates his perceptual experience and makes it seem to him that the yellow object is a gold nugget. Intuitively, his desire-penetrated seeming can't provide him with prima facie justification for thinking that the object is gold. If this intuitive response is correct, dogmatists have a problem. But if dogmatists have a problem, you do too (well, most of you anyway). Reliabilists have denounced dogmatism's cognitive penetration problems, but they have problems with cognitive penetration that are even worse.
Phenomenal conservatism holds, roughly, that if it seems to S that P, then S has evidence for P. I argue for two main conclusions. The first is that phenomenal conservatism is better suited than is proper functionalism to explain how a particular type of religious belief formation can lead to non-inferentially justified religious beliefs. The second is that phenomenal conservatism makes evidence so easy to obtain that the truth of evidentialism would not be a significant obstacle to justified religious belief. A natural objection to phenomenal conservatism is that it makes evidence too easy to obtain, but I argue this objection is mistaken.
It is a common idea that morality, or moral truths, if there are any, must have some sort of source, or grounding. It has also been claimed that constructivist theories in metaethics have an advantage over realist theories in that the former but not the latter can provide such a grounding. This paper has two goals. First, it attempts to show that constructivism does not in fact provide a complete grounding for morality, and so is on a par with realism in this respect. Second, it explains why it seems that morality in fact couldn't have a source.
In replying to certain objections to the existence of God, Robert Adams, Bruce Langtry, and Peter van Inwagen assume that God can appropriately choose a suboptimal world, a world less good than some other world God could have chosen. A number of philosophers, such as Michael Slote and Klaas Kraay, claim that these theistic replies are therefore committed to the claim that satisficing can be appropriate. Kraay argues that this commitment is a significant liability. I argue, however, that the relevant defenses of theism are committed to the appropriateness of, not satisficing, but motivated submaximization. When one submaximizes with motivation, one aims at the optimum but accepts the good enough because of a countervailing consideration. When one satisfices, one aims at the good enough and chooses the good enough because it realizes one's aim at the good enough. While commitment to the appropriateness of satisficing may be a significant liability, commitment to the appropriateness of motivated submaximization is not.
If dangerous climate change is to be avoided, it is vital that carbon sinks such as tropical rainforests are protected. But protecting them has costs. These include opportunity costs: the potential economic benefits which those who currently control rainforests have to give up when they are protected. But who should bear those costs? Should countries which happen to have rainforests within their territories sacrifice their own economic development, because of our broader global interests in protecting key carbon sinks? This essay develops an argument from the “principle of fairness,” which seeks to establish that outsiders should pay states with rainforests so as to share the costs of protection. If they do not, they can be condemned for free-riding on forest states. The argument is, I suggest, compelling and also capable of enjoying support from adherents of a wide variety of positions on global justice.
Classical acquaintance theory is any version of classical foundationalism that appeals to acquaintance in order to account for non-inferential justification. Such theories are well suited to account for a kind of infallible non-inferential justification. Why am I justified in believing that I’m in pain? An initially attractive (partial) answer is that I’m acquainted with my pain. But since I can’t be acquainted with what isn’t there, acquaintance with my pain guarantees that I’m in pain. What’s less clear is whether, given classical acquaintance theory, it’s possible to have non-inferential justification to believe something false. Classical acquaintance theorists try to make room for such a possibility, but I argue that the attempts of Richard Fumerton, Ali Hasan, and Evan Fales are inadequate. I’ll focus on introspective justification, but similar issues arise for a priori justification as well.
Does inferential justification require the subject to be aware that her premises support her conclusion? Externalists tend to answer “no” and internalists tend to answer “yes”. In fact, internalists often hold the strong higher-level requirement that an argument justifies its conclusion only if the subject justifiably believes that her premises support her conclusion. I argue for a middle ground. Against most externalists, I argue that inferential justification requires that one be aware that her premises support her conclusion. Against many internalists, I argue that this higher-level awareness needn’t be doxastic or justified. I also argue that the required higher-level awareness needn’t be caused in some appropriate way, e.g. by a reliable or properly functioning faculty. I suspect that this weaker higher-level requirement is overlooked because, at first glance, it seems absurd to allow nondoxastic, unjustified, and unreliably-caused higher-level awareness to contribute to inferential justification. One of the central goals of this paper is to explain how such weak awareness can make an essential contribution to inferential justification.
It is plausible that the universe exists: a thing such that absolutely everything is a part of it. It is also plausible that singular, structured propositions exist: propositions that literally have individuals as parts. Furthermore, it is plausible that for each thing, there is a singular, structured proposition that has it as a part. Finally, it is plausible that parthood is a partial ordering: reflexive, transitive, and anti-symmetric. These plausible claims cannot all be correct. We canvass some costs of denying each claim and conclude that parthood is not a partial ordering. Provided that the relevant entities exist, parthood is not anti-symmetric and proper parthood is neither asymmetric nor transitive.
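To make the conflict explicit, here is a minimal formalization of the claims (my reconstruction, not the authors' own notation), writing $x \sqsubseteq y$ for "x is a part of y", $u$ for the universe, and $s(x)$ for a singular, structured proposition having $x$ as a part:

```latex
\begin{align*}
&\text{(1) Universality:} && \forall x\,(x \sqsubseteq u)\\
&\text{(2) Singular propositions:} && \forall x\,(x \sqsubseteq s(x))\\
&\text{(3) Anti-symmetry:} && \forall x\,\forall y\,(x \sqsubseteq y \land y \sqsubseteq x \rightarrow x = y)\\
&\text{From (1): } s(u) \sqsubseteq u; \quad \text{from (2): } u \sqsubseteq s(u); \quad \text{so by (3): } u = s(u).
\end{align*}
```

On the further assumption (added here for illustration) that the universe is not identical to any proposition about it, (1)–(3) cannot all hold; the abstract's conclusion is that anti-symmetry is the claim to reject.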
Non-presentist A-theories of time (such as the growing block theory and the moving spotlight theory) seem unacceptable because they invite skepticism about whether one exists in the present. To avoid this absurd implication, Peter Forrest appeals to the "Past is Dead hypothesis," according to which only beings in the objective present are conscious. We know we're present because we know we're conscious, and only present beings can be conscious. I argue that the Past is Dead hypothesis undercuts the main reason for preferring non-presentist A-theories to their presentist rivals, rivals which straightforwardly avoid skepticism about the present.
Just as a theory of representation is deficient if it can’t explain how misrepresentation is possible, a theory of computation is deficient if it can’t explain how miscomputation is possible. Nonetheless, philosophers have generally ignored miscomputation. My primary goal in this paper is to clarify both what miscomputation is and how to adequately explain it. Miscomputation is a special kind of malfunction: a system miscomputes when it computes in a way that it shouldn’t. To explain miscomputation, you must provide accounts of computational behavior, computational norms, and how computational behavior can deviate from computational norms. A secondary goal of this paper is to defend a (quasi-)individualist, mechanistic theory of miscomputation. Computational behavior is narrowly individuated. Computational norms are widely individuated. A system miscomputes when its behavior manifests a narrow computational structure that the widely individuated norms say that it should not have.
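As a toy illustration of the narrow/wide contrast (my example, not the paper's), consider a component whose widely individuated norm is integer addition but whose narrow behavior manifests a different structure:

```haskell
-- Wide norm: what the component is supposed to compute (addition).
normAdd :: Int -> Int -> Int
normAdd = (+)

-- Narrow behavior: what a hypothetical buggy mechanism actually
-- manifests -- it comes up one short whenever both inputs are odd.
buggyAdd :: Int -> Int -> Int
buggyAdd x y
  | odd x && odd y = x + y - 1  -- deviates from the norm here
  | otherwise      = x + y

-- The system miscomputes exactly on the inputs where its narrow
-- structure deviates from the wide norm.
miscomputes :: Int -> Int -> Bool
miscomputes x y = buggyAdd x y /= normAdd x y

main :: IO ()
main = print [miscomputes 1 1, miscomputes 2 3]  -- [True,False]
```

The sketch separates the two accounts the paper says an explanation needs: `buggyAdd` fixes the narrowly individuated behavior, `normAdd` stands in for the widely individuated norm, and miscomputation is the mismatch between them.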
Experimental philosophers have disagreed about whether "the folk" are intuitively incompatibilists or compatibilists, and they have disagreed about the role of abstraction in generating such intuitions. New experimental evidence using Construal Level Theory is presented. The experiments support the views that the folk are intuitively both incompatibilists and compatibilists, and that abstract mental representations do shift intuitions, but not in a univocal way.
Kant’s deontological ethics, along with Aristotle’s virtue ethics and Mill’s utilitarian ethics, is often identified as one of the three primary moral options between which individuals can choose. Given the importance of Kant’s moral philosophy, it is surprising and disappointing how little has been written on his important contributions to moral education. Kant argues for a catechistic approach to moral education. By memorizing a series of moral questions and answers, an individual learns the basic principles of morality in the same way that Martin Luther believed an individual should learn the tenets of Christianity. The difficulty, however, is that this approach appears to violate a central tenet of Kantian morality: virtuous acts must be performed out of respect for the moral law itself, not due to habituation. This paper demonstrates Kant’s significant contribution to moral education by showing how a catechistic moral education establishes the foundation necessary for autonomous action.
"Feminism, Theory and the Politics of Difference" looks at the question of difference across the full spectrum of feminist theory from liberal, radical, lesbian ...
Philosophers who theorize about whether free will is compatible with causal determinism often rely on ordinary intuitions to bolster their theory. A revisionist theory of free will takes a different approach, saying that the best philosophical theory of what we ought to think about free will conflicts with what we ordinarily do think about free will. I contend that revisionism has not been taken as seriously as it should be because philosophers have not realized the extent to which ordinary intuitions are inconsistent. I present an experiment that gives empirical evidence for revisionism. The experiment shows that, in spite of the fact that the "is compatible with" relation is symmetric, folk intuitions change as a function of whether we ask "Is determinism compatible with free will?" versus "Is free will compatible with determinism?" The paper explores possible explanations for why folk intuitions do not mirror the symmetry of the "is compatible with" relation, but regardless of which of these explanations is correct, I argue that we must be revisionists in at least this sense: what we ought to believe about free will cannot include everything we do believe about free will.
An agent submaximizes with motivation when she aims at the best but chooses a less good option because of a countervailing consideration. An agent satisfices when she rejects the better for the good enough, and does so because the mere good enough gets her what she really wants. Motivated submaximization and satisficing, so construed, are different ways of choosing a suboptimal option, but this difference is easily missed. Putative proponents of satisficing tend to argue only that motivated submaximization can be appropriate while critics of satisficing tend to criticize satisficing, as I construe it. The existing literature, then, leaves satisficing in a very bad state: there are no good arguments for it and there are three unanswered objections to it. This paper clarifies the distinction between motivated submaximization and satisficing and refutes the three most prominent objections to the claim that satisficing can be appropriate.
The moral enhancement of humans by biological or genetic means has recently been urged as a response to the pressing concerns facing human civilization. In this paper, I argue that proponents of biological moral enhancement have misrepresented the facts of human moral psychology. As a result, the likely effectiveness of traditional methods of moral enhancement has been underestimated, relative to biological or genetic means. I review arguments in favor of biological moral enhancement and argue that the complexity of moral psychology raises serious problems for such interventions. I offer a programmatic sketch of the ways in which our improved understanding of moral psychology can help facilitate more traditional methods of moral enhancement. I conclude that the best response to the dangers faced by human civilization is the continued use of traditional methods of moral enhancement and the use of our improved understanding of moral psychology to further refine and develop these methods.
This splendid book is a collection of twenty-three of John Cooper’s papers on Greek ethical philosophy: seven are on Socrates and Plato, twelve are on Aristotle, and four are on the Hellenistics; nineteen have appeared elsewhere, two are newly written essays incorporating previously published material, and two are new essays written for this volume. Many of these papers are justly regarded as classics of contemporary scholarship and some of them are located in out-of-the-way journals or volumes: we thus have the usual reasons for being grateful that they are at last collected together in one place. But more important, bringing these papers together has synergistic effects: we see Cooper returning to related issues in different contexts and elaborating the scope and depth of his analyses.
Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically or technologically delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and where they overlap, conflicts in scope and arbitrary decisions on wording and sub-structuring inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address these issues, we present MIBBI (Minimum Information for Biological and Biomedical Investigations), a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.