The cis-regulatory hypothesis is one of the most important claims of evolutionary developmental biology. In this paper I examine the theoretical argument for cis-regulatory evolution and its role within evolutionary theorizing. I show that, although the argument has some weaknesses, it serves as a useful illustration of the importance of current scientific debates for science education.
Introduction -- The pre-text : the dialectical origins of Anselm's argument -- The text -- Proslogion -- Pro insipiente -- Responsio -- Commentary on the Proslogion -- Anselm's defence and the Unum argumentum -- The medieval reception -- The modern reception -- Anselm's argument today -- Conclusion: The significance of Anselm's argument.
Suppose various observers are divided randomly into two groups, a large and a small. Not knowing into which group anyone has been sent, each can have strong grounds for believing in being in the large group, although recognizing that every observer in the other group has equally powerful reasons for thinking of this other group as the large one. Justified belief can therefore be observer-relative in a rather paradoxical way. Appreciating this allows one to reject an intriguing new objection against Brandon Carter's 'doomsday argument'. Carter encourages us to doubt that we are among only the first hundredth, say, or first millionth, of all humans who will ever have existed. He thereby reinforces whatever reasons we may have for suspecting that, unless we take great care, the human race will not survive long. Admittedly his argument is weakened if our world is indeterministic, so that there is no suitably guaranteed 'fact of the matter' of how many humans will ever have existed. But even then, it can caution us against believing that a lengthy future for humankind 'is as good as determined'. Of all the objections the argument has yet faced, the new one is the most interesting.
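The two-group setup described in this abstract can be sketched numerically. The group sizes (90 and 10) and function names below are my own illustrative assumptions, not the paper's; the sketch only shows that a credence of 0.9 in "I am in the large group" is calibrated in the long run, even though every observer in the small group holds the same false belief.

```python
import random

random.seed(0)  # seeded for reproducibility


def simulate(trials=100_000, large=90, small=10):
    """Randomly place an observer among large + small slots and record
    how often the belief 'I am in the large group' turns out true."""
    hits = 0
    for _ in range(trials):
        slot = random.randrange(large + small)
        if slot < large:  # slots 0 .. large-1 belong to the large group
            hits += 1
    return hits / trials


# Each observer's justified credence of being in the large group:
credence = 90 / (90 + 10)  # 0.9
freq = simulate()
# The long-run frequency matches the credence, yet the 10 observers
# in the small group are all wrong, each with the same good grounds.
assert abs(freq - credence) < 0.01
```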
I call attention to Berkeley’s treatment of a Newtonian indispensability argument against his own main position. I argue that the presence of this argument marks a significant moment in the history of philosophy and science: Newton’s achievements could serve as a separate and authoritative source of justification within philosophy. This marks the presence of a new kind of naturalism. Along the way, I argue against the claim that there is no explicit opposition or distinction between “philosophy” and “science” until the nineteenth century. Finally, I argue for the conceptual unity between Berkeley’s immaterialism and instrumentalism. I argue that Berkeley’s commitment to immaterialism requires his reinterpretation of science and, thus, the adoption of instrumentalism.
The purely retributive moral justification of punishment has a gap at its centre. It fails to explain why the offender should not be protected from punishment by the intuitively powerful moral idea that afflicting another person (other than to avoid a greater harm) is always wrong. Attempts to close the gap have taken several different forms, and only one is discussed in this paper. This is the attempt to push aside the ‘protecting’ intuition, using some more powerful intuition specially invoked by the situations to which criminal justice is addressed. In one aspect of his complex defence of pure retributivism, Michael S. Moore attempts to show that the emotions of well-adjusted persons provide evidence of moral facts which justify the affliction of culpable wrongdoers in retribution for their wrongdoing. In particular, he appeals to the evidential significance of emotions aroused by especially heinous crimes, including the punishment-seeking guilt of the offender who truly confronts the reality of his immoral act. The paper argues that Moore fails to vindicate this appeal to moral realism, and thus to show that intrinsic personal moral desert (as distinct from ‘desert’ in a more restricted sense, relative to morally justified institutions) is a necessary and sufficient basis for punishment. Other theories of the role of emotions in morality are as defensible as Moore’s, while the compelling emotions to which he appeals to clinch his argument can be convincingly situated within a non-retributivist framework, especially when the distinction between the intuitions of the lawless world, and those of the world of law, is recognised.
Moderate relativism -- The framework -- The distribution of content -- Radical vs. moderate relativism -- Two levels of content -- Branch points for moderate relativism -- The debate over temporalism (1) : do we need temporal propositions? -- Modal vs. extensional treatments of tense -- What is at stake? -- Modal and temporal innocence -- Temporal operators and temporal propositions in an extensional framework -- The debate over temporalism (2) : can we believe temporal propositions? -- An epistemic argument against temporalism -- Rebutting Richard's argument -- Relativistic disagreement -- Relativization and indexicality -- Index, context, and content -- The two-stage picture : Lewis vs. Kaplan and Stalnaker -- Rescuing the two-stage picture -- Content, character, and cognitive significance -- Experience and subjectivity -- Content and mode -- Duality and the fallacy of misplaced information -- The content of perceptual judgements -- Episodic memory -- Immunity to error through misidentification -- Implicit self-reference -- Weak and strong immunity -- Quasi-perception and quasi-memory -- Reflexive states -- Relativization and reflexivity -- The (alleged) reflexivity of de se thoughts -- Reflexivity : internal or external? -- What is wrong with reflexivism -- The first person point of view -- De se thoughts and subjectivity -- Memory and the imagination -- Imagination and the self -- Imagination, empathy, and the quasi-de se -- Egocentricity and beyond -- Unarticulated constituents in the lekton? -- The context-dependence of the lekton : how far can we go?
-- Unarticulatedness and the 'concerning' relation -- Three (alleged) arguments for the externality principle -- Invariance -- Self-relative thoughts -- The problem of the essential indexical -- Perry against relativized propositions -- Context-relativity -- Implicit and explicit de se thoughts -- Shiftability -- The generalized reflexive constraint -- Parametric invariance and m-shiftability -- Free shiftability -- The anaphoric mode : a Bühlerian perspective.
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
In the past five years, there has been a series of papers in the journal Evolution debating the relative significance of two theories of evolution, a neo-Fisherian and a neo-Wrightian theory, where the neo-Fisherians make explicit appeal to parsimony. My aim in this paper is to determine how we can make sense of such an appeal. One interpretation of parsimony takes it that a theory that contains fewer entities or processes (however we demarcate these) is more parsimonious. On the account that I defend here, parsimony is a ‘local’ virtue. Scientists’ appeals to parsimony are not necessarily an appeal to a theory’s simplicity in the sense of its positing fewer mechanisms. Rather, parsimony may be a proxy for greater probability or likelihood. I argue that the neo-Fisherians’ appeal is best understood on this interpretation. And indeed, if we interpret parsimony as either prior probability or likelihood, then we can make better sense of Coyne et al.’s argument that Wright’s three-phase process operates relatively infrequently.
One of Kripke's fundamental objections to descriptivism was that the theory misclassifies certain _a posteriori_ propositions expressed by sentences involving names as _a priori_. Though nowadays very few philosophers would endorse a descriptivism of the sort that Kripke criticized, many find two-dimensional semantics attractive as a kind of successor theory. Because two-dimensionalism needn't be a form of descriptivism, it is not open to the epistemic argument as formulated by Kripke; but the most promising versions of two-dimensionalism are open to a close relative of that argument.
Among recent objections to Pascal’s Wager, two are especially compelling. The first is that decision theory, and specifically the requirement of maximizing expected utility, is incompatible with infinite utility values. The second is that even if infinite utility values are admitted, the argument of the Wager is invalid provided that we allow mixed strategies. Furthermore, Hájek (Philosophical Review 112, 2003) has shown that reformulations of Pascal’s Wager that address these criticisms inevitably lead to arguments that are philosophically unsatisfying and historically unfaithful. Both the objections and Hájek’s philosophical worries disappear, however, if we represent our preferences using relative utilities (generalized utility ratios) rather than a one-place utility function. Relative utilities provide a conservative way to make sense of infinite value that preserves the familiar equation of rationality with the maximization of expected utility. They also provide a means of investigating a broader class of problems related to the Wager.
The consequence argument for the incompatibility of free action and determinism has long been under attack, but two important objections have only recently emerged: Warfield’s modal fallacy objection and Campbell’s no past objection. In this paper, I explain the significance of these objections and defend the consequence argument against them. First, I present a novel formulation of the argument that withstands their force. Next, I argue for the one controversial claim on which this formulation relies: the trans-temporality thesis. This thesis implies that an agent acts freely only if there is one time at which she is able to perform an action and a distinct time at which she actually performs it. I then point out that determinism, too, is a thesis about trans-temporal relations. I conclude that it is precisely because my formulation of the consequence argument emphasizes trans-temporality that it prevails against the modal fallacy and no past objections.
According to the incentives argument, inequalities in material goods are justifiable if they are to the benefit of the worst off members of society. In this paper, I point out what is easily overlooked, namely that inequalities are justifiable only if they are to the overall benefit of the worst off, that is, in terms of both material and social goods. I then address the question of how gains in material goods can be weighed against probable losses in social goods. The ultimate criterion, I propose, is how these gains and losses affect a person’s ability to reach her goals in life. Based on the idea that goals in life cannot be taken as given, I conclude that the absolute material gains are negligible compared to the losses of social goods and the disadvantage in relative position caused by material inequalities.
A theory of understanding -- Truth's role in understanding -- Critique of justificationist and evidential accounts -- Do pragmatist views avoid this critique? -- A realistic account -- How evidence and truth are related -- Three grades of involvement of truth in theories of understanding -- Anchoring -- Next steps -- Reference and reasons -- The main thesis and its location -- Exposition and four argument-types -- Significance and consequences of the main thesis -- The first person as a case study -- Fully self-conscious thought -- Immunity to error through misidentification relative to the first person -- Can a use of the first-person concept fail to refer? -- Some conceptual roles are distinctive but not fundamental -- Implicit conceptions -- Implicit conceptions : motivation and examples -- Deflationary readings rejected -- The phenomenon of new principles -- Explanation by implicit conceptions -- Rationalist aspects -- Consequences : rationality, justification, understanding -- Transitional -- Applications to mental concepts -- Conceiving of conscious states -- Understanding and identity in other cases -- Constraints on legitimate explanations in terms of identity -- Why is the subjective case different? -- Attractions of the interlocking account -- Tacit knowledge, and externalism about the internal -- Is this the myth of the given? -- Knowledge of others' conscious states -- Communicability : between Frege and Wittgenstein -- Conclusions and significance -- 'Another I' : representing perception and action -- The core rule -- Modal status and its significance -- Comparisons -- The possession-condition and some empirical phenomena -- The model generalized -- Wider issues -- Mental action -- The distinctive features of action-awareness -- The nature and range of mental actions -- The principal hypothesis and its grounds -- The principal hypothesis : distinctions and consequences -- How do we know about our own mental actions?
-- Concepts of mental actions and their epistemological significance -- Is this account open to the same objections as perceptual models of introspection? -- Characterizing and unifying schizophrenic experience -- The first person in the self-ascription of action -- Rational agency and action-awareness -- Representing thoughts -- The puzzle -- A proposal -- How the solution treats the constraints that generate the puzzle -- Relation to single-level treatments -- An application : reconciling externalism with distinctive self-knowledge.
This paper addresses the political constraints on science through a pragmatist critique of Philip Kitcher’s account of “well-ordered science.” A central part of Kitcher’s account is his analysis of the significance of items of scientific research: contextual and purpose-relative scientific significance replaces mere truth as the aim of inquiry. I raise problems for Kitcher’s account and argue for an alternative, drawing on Peirce’s and Dewey’s theories of problem-solving inquiry. I conclude by suggesting some consequences for understanding the proper conduct of science in a democracy.
Examples of historical writing are analysed in detail, and it is demonstrated that, with respect to the statements which appear in historical accounts, their truth and value-freedom are neither necessary nor sufficient for the relative acceptability of historical accounts. What is both necessary and sufficient is the acceptability of the selection of statements involved, and it is shown that history can be objective only if the acceptability of selection can be made on the basis of a rational criterion of relevance. 'Relevance' and 'significance' are distinguished. The conditions of rationality of a criterion of acceptability are examined with special reference to Popper's criterion of 'falsifiability', which is shown to fail to apply to historical writing. General conclusions are drawn about the implications of the argument for the possibility of the 'unity of science', and about the conditions which need to be met if history is to be objective.
Two types of logical consequence are compared: one, with respect to matrix and designated elements and the other with respect to ordering in a suitable algebraic structure. Particular emphasis is laid on algebraic structures in which there is no top-element relative to the ordering. The significance of this special condition is discussed. Sequent calculi for a number of such structures are developed. As a consequence it is re-established that the notion of truth as such, not to speak of tautologies, is inessential in order to define validity of an argument.
We demonstrate that Statistical significance (Chow 1996) includes straw man arguments against (1) effect size, (2) meta-analysis, and (3) Bayesianism. We agree with the author that in experimental designs, H0 “is the effect of chance influences on the data-collection procedure . . . it says nothing about the substantive hypothesis or its logical complement” (Chow 1996, p. 41).
In this paper, I discuss the current thesis on the modern origin of the ad hominem argument, by analysing the Aristotelian conception of it. In view of the recent accounts which consider it a relative argument, i.e., acceptable only by the particular respondent, I maintain that there are two Aristotelian versions of the ad hominem, that have identifiable characteristics, and both correspond to the standard variants distinguished in the contemporary treatments of the famous informal fallacy: the abusive and the circumstantial or tu quoque types. I propose to reconstruct the two Aristotelian versions (see sections 1 and 2), which were recognized again in the nineteenth century (sec. 3). Finally, I examine whether or not it was considered as a fallacious dialogue device by Aristotle and by A. Schopenhauer (sec. 4).
Relativism offers a nifty way of accommodating most of our intuitions about epistemic modals, predicates of personal taste, color expressions, future contingents, and conditionals. But in spite of its manifest merits relativism is squarely at odds with epistemic value monism: the view that truth is the highest epistemic goal. I will call the argument from relativism to epistemic value pluralism the trivial argument for epistemic value pluralism. After formulating the argument, I will look at three possible ways to refute it. I will then argue that two of these are unsuccessful, and defend the third, which involves denying that there are any genuinely relative truths.
A frequent objection to the fine-tuning argument has been that although certain necessary conditions for life were admittedly exceedingly improbable, still, the many possible alternative sets of conditions were all equally improbable, so that no special significance is to be attached to the realization of the conditions of life. Some authors, however, have rejected this objection as fallacious. The object of this paper is to state the objection to the fine-tuning argument in a more telling form than has been done hitherto, and to meet the charge of fallacy.
Kant famously argued that, from experience, we can only learn how something actually is, but not that it must be so. In this paper, I defend an improved version of Kant's argument for the existence of a priori knowledge, the Modal Argument, against recent objections by Casullo and Kitcher. For the sake of the argument, I concede Casullo's claim that we may know certain counterfactuals in an empirical way and thereby gain epistemic access to some nearby, nomologically possible worlds. But I maintain that our beliefs about metaphysical necessities still cannot be justified empirically. Furthermore, I reject Casullo's deflationary thesis about the significance of such justification. Kitcher's most troublesome objection is that we can gain any modal justification whatsoever through testimony, i.e. in an experiential way. This can be countered by distinguishing between productive sources of justification, like perception, and merely reproductive sources, like testimony. Thus, some productive a priori source will always be needed somewhere.
In this paper, I take issue with an idea that has emerged from recent relativist proposals, and, in particular, from Lasersohn (Linguistics and Philosophy 28: 643–686, 2005), according to which the correct semantics for taste predicates must use contents that are functions of a judge parameter (in addition to a possible world parameter) rather than implicit arguments lexically associated with such predicates. I argue that the relativist account and the contextualist implicit argument-account are, from the viewpoint of semantics, not much more than notational variants of one another. In other words, given any sentence containing a taste predicate, and given any assignment of values to the relevant parameters, the two accounts predict the same truth value and are, in that sense, equivalent. I also look at possible reasons for preferring one account over the other. The phenomenon of “faultless disagreement” (cf. Kölbel, Truth without objectivity, 2002) is often believed to be one such reason. I argue, against Kölbel and Lasersohn, that disagreement is never faultless: either the two parties genuinely disagree, hence if the one is right then the other is wrong, or the two parties are both right, but their apparent disagreement boils down to a misunderstanding. What is more, even if there were faultless disagreement, I argue that relativism would fail to account for it. The upshot of my paper, then, is to show that there is not much disagreement between a contextualist account that models the judge parameter as an implicit argument to the taste predicate, and a relativist account that models it as a parameter of the circumstances of evaluation. The choice between the two accounts, at least when talking about taste, is thus, to a large extent, a matter of taste.
We investigate the philosophical significance of the existence of different semantic systems with respect to which a given deductive system is sound and complete. Our case study will be Corcoran’s deductive system D for Aristotelian syllogistic and some of the different semantic systems for syllogistic that have been proposed in the literature. We shall prove that they are not equivalent, in spite of D being sound and complete with respect to each of them. Beyond the specific case of syllogistic, the goal is to offer a general discussion of the relations between informal notions—in this case, an informal notion of deductive validity—and logical apparatuses such as deductive systems and (model-theoretic or other) semantic systems that aim at offering technical, formal accounts of informal notions. Specifically, we will be interested in Kreisel’s famous ‘squeezing argument’; we shall ask ourselves what a plurality of semantic systems (understood as classes of mathematical structures) may entail for the cogency of specific applications of the squeezing argument. More generally, the analysis brings to the fore the need for criteria of adequacy for semantic systems based on mathematical structures. Without such criteria, the idea that the gap between informal and technical accounts of validity can be bridged is put under pressure.
In chapter 7 of The Varieties of Reference, Gareth Evans claimed to have an argument that would present "an antidote" to the Cartesian conception of the self as a purely mental entity. On the basis of considerations drawn from philosophy of language and thought, Evans claimed to be able to show that bodily awareness is a form of self-awareness. The apparent basis for this claim is the datum that sometimes judgements about one’s position based on body sense are immune to errors of misidentification relative to the first-person pronoun 'I'. However, Evans’s argument suffers from a crucial ambiguity. 'I' sometimes refers to the subject's mind, sometimes to the person, and sometimes to the subject's body. Once disambiguated, it turns out that Evans’s argument either begs the question against the Cartesian or fails to be plausible at all. Nonetheless, the argument is important for drawing our attention to the idea that bodily modes of awareness should be taken seriously as possible forms of self-awareness.
A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed (relative to what else might have been observed). It is shown that the relation between this measure of surprise (the s-value) and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a logit or log-odds statistic. The s-value is always larger than the corresponding p-value, and is not uniformly distributed. Difficulties with the whole approach are discussed.
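The p-value-to-log-odds relationship this abstract compares the s-value against can be illustrated with a minimal sketch. The binomial example and all names below (`p_value`, `logit`) are my own illustrative assumptions; this is a standard pure significance test of a fair-coin model, not the paper's s-value construction.

```python
import math


def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)


def p_value(k, n, p=0.5):
    """One-sided p-value: probability of k or more successes in n trials,
    computed under the null model alone (no alternative is specified,
    as in a pure significance test)."""
    return sum(binom_pmf(j, n, p) for j in range(k, n + 1))


def logit(p):
    """Log-odds transform of a probability, log(p / (1 - p))."""
    return math.log(p / (1 - p))


pv = p_value(60, 100)  # e.g. 60 heads in 100 tosses of a putatively fair coin
lo = logit(pv)         # the corresponding log-odds statistic
```

A small tail probability maps to a large negative log-odds value, which is the kind of monotone correspondence between a p-value and a logit statistic that the abstract invokes.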
Central to the debate between Humean and anti-Humean metaphysics is the question of whether dispositions can exist in the absence of categorical properties that ground them (that is, where the causal burden is shifted on to categorical properties on which the dispositions would therefore supervene). Dispositional essentialists claim that they can; categoricalists reject the possibility of such "baseless" dispositions, requiring that all dispositions must ultimately have categorical bases. One popular argument, recently dubbed the "Argument from Science", has appeared in one or another form over much of the last century and purports to win the day for the dispositional essentialist. Taking its cue from physical theory, the Argument from Science treats the exclusively dispositional characterizations of the fundamental particles one finds in physical theory as providing a key premise in what has been called a "decisive" argument for baseless dispositions. Despite sharing the intuition that dispositions can be baseless, I argue that the force and significance of the Argument from Science have been greatly overestimated: no version of the argument is close to decisive, and only one version succeeds in scoring points against the categoricalist. Not only is physical theory more ontologically innocent than defenders of baseless dispositions seem to appreciate, most versions of the Argument from Science neglect important ways that dispositions could be grounded by categorical properties.
The anti-realist argument from underconsideration focuses on the fact that, when scientists evaluate theories, they only ever consider a subset of the theories that can account for the available data. As a result, when scientists judge one theory to be superior to competitor theories, they are not warranted in drawing the conclusion that the superior theory is likely true with respect to what it says about unobservable entities and processes. I defend the argument from underconsideration from the objections of Peter Lipton. I argue that the inconsistency that Lipton claims to find in the argument vanishes once we understand what the anti-realist means when she claims that scientists are reliable. I also argue that collapsing the distinction between relative and absolute evaluations, as Lipton recommends, has its costs. Finally, I briefly examine Richard Boyd's influential defence of realism.
In this paper, I defend a representationalist account of the phenomenal character of color experiences. Representationalism, the thesis that phenomenal character supervenes on a certain kind of representational content, so-called phenomenal content, has been developed primarily in two different ways, as Russellian and Fregean representationalism. While the proponents of Russellian and Fregean representationalism differ with respect to what they take the contents of color experiences to be, they typically agree that colors are exhaustively characterized by the three dimensions of the color solid: hue, saturation, and lightness. I argue that a viable version of representationalism needs to renounce this restriction to three dimensions and consider illumination to be a genuine phenomenal dimension of color. My argument for this thesis falls into two parts. I first consider the phenomenon of color constancy in order to show that neither Russellian nor Fregean representationalism can do justice to the phenomenal significance of local illumination. I subsequently formulate a version of representationalism that accounts for illumination by taking it as a phenomenal dimension of color.
In The Sources of Normativity, Christine Korsgaard presents and defends a neo-Kantian theory of normativity. Her initial account of reasons seems to make them dependent upon the practical identity of the agent, and upon the value the agent must place on her own humanity. This seems to make all reasons agent-relative. But Korsgaard claims that arguments similar to Wittgenstein's private-language argument can show that reasons are in fact essentially agent-neutral. This paper explains both of Korsgaard's Wittgensteinian arguments, and shows why neither of them works. The paper also provides a brief sketch of a different Wittgensteinian account of reasons that distinguishes the normative role of justification from that of requirement. On this account, the real agent-neutrality of reasons applies to their justificatory role, but not to their requiring role.
Jeff McMahan appeals to what he calls the “Time-relative Interest Account of the Wrongness of Killing” to explain the wrongness of killing individuals who are conscious but not autonomous. On this account, the wrongness of such killing depends on the victim’s interest in his or her future, and this interest, in turn, depends on two things: the goods that would have accrued to the victim in the future; and the strength of the prudential relations obtaining between the victim at the time of the killing and at the times these goods would have accrued to him or her. More precisely, when assessing this interest, future goods should be discounted to reflect reductions in the strength of such relations. Against McMahan’s account I argue that it relies on an implausible “actualist” view of the moral importance of interests according to which satisfactions of future interests only have moral significance if they are satisfactions of actual interests (interests that will in fact exist). More precisely, I aim to show that the Time-relative Interest Account (1) does not have the implications for the morality of killing that McMahan takes it to have, and (2) implies, implausibly, that certain interest satisfactions which seem to be morally significant are morally insignificant because they are not satisfactions of actual interests.
We present a formal analysis of the Cosmological Argument in its two main forms: that due to Aquinas, and the revised version of the Kalam Cosmological Argument more recently advocated by William Lane Craig. We formulate these two arguments in such a way that each conclusion follows in first-order logic from the corresponding assumptions. Our analysis shows that the conclusion which follows for Aquinas is considerably weaker than what his aims demand. With formalizations that are logically valid in hand, we reinterpret the natural language versions of the premises and conclusions in terms of concepts of causality consistent with (and used in) recent work in cosmology done by physicists. In brief: the Kalam argument commits the fallacy of equivocation in a way that seems beyond repair; two of the premises adopted by Aquinas seem dubious when the terms ‘cause’ and ‘causality’ are interpreted in the context of contemporary empirical science. Thus, while there are no problems with whether the conclusions follow logically from their assumptions, the Kalam argument is not viable, and the Aquinas argument does not imply a caused origination of the universe. The assumptions of the latter are at best less than obvious relative to recent work in the sciences. We conclude with mention of a new argument that makes some positive modifications to an alternative variation on Aquinas by Le Poidevin, which nonetheless seems rather weak.
Why do agent-relative reasons have authority over us as reflective creatures? Reductive accounts base the normativity of agent-relative reasons on agent-neutral considerations, such as that having parents care especially for their own children best serves the interests of all children. Such accounts, however, beg the question about the source of normativity of agent-relative ways of reason-giving. In this paper, I argue for a non-reductive account of the reflective necessity of agent-relative concerns. Such an account will reveal an important structural complexity of practical reasoning in general. Christine Korsgaard relates the rational binding force of practical reasons to the various identities or self-conceptions under which we value ourselves. The problem is that it is not clear why such self-conceptions would necessitate us rationally, given the fact that most of our identities are simply given. Perhaps Harry Frankfurt is right in arguing that we are necessitated not only by reason but also, and predominantly, by what we love. I argue, however, that the necessities of love (in Frankfurt’s phrase) are not to be separated from, but should be seen as belonging to, the necessities of reason. Our loves, concerns and related identities provide a specific and important structure to practical reflection. They function in the background of reasoning, having a specific default role: they would lose their character as concerns if there were a need for them to be cited in the foreground of deliberation or a need to justify them. This does not mean that our deep concerns cannot be scrutinised. They can be scrutinised only in an indirect way, however, which explains their role in grounding the normativity of agent-relative reasons. It appears that this account can provide a viable interpretation of Korsgaard’s argument about the foundational role of practical identities.
In this paper, I argue that commentators have missed a significant clue given by Descartes in coming to understand his 'ontological' proof for the existence of God. In both the analytic and synthetic presentations of the proof throughout his writings, Descartes notes that the proof works 'in the same way' as a particular geometrical proof. I explore the significance of such a parallel, and conclude that Descartes could not have intended readers to think that the argument consists of some kind of intuition. I argue that for Descartes the attribute of existence is a 'second-order' attribute that is demonstrated to belong to the idea of God on the basis of 'first-order' attributes. The proof, properly understood, is in fact a demonstration. Having brought to light the geometrical parallels between the ontological and geometrical proofs, we have new evidence to resolve the 'intuition versus demonstration' controversy that has characterized much of the discussion of Descartes's ontological argument.
The judgment that a given event is epistemically improbable is necessary but insufficient for us to conclude that the event is surprising. Paul Horwich has argued that surprising events are, in addition, more probable given alternative background assumptions that are not themselves extremely improbable. I argue that Horwich’s definition fails to capture important features of surprises and offer an alternative definition that accords better with intuition. An important application of Horwich’s analysis has arisen in discussions of fine-tuning arguments. In the second part of the paper I consider the implications for this argument of employing my definition of surprise. I argue that advocates of fine-tuning arguments are not justified in attaching significance to the fact that we are surprised by examples of fine-tuning.
The sheer multitude of criteria of empirical significance has been taken as evidence that the pre-analytic notion being explicated is too vague to be useful. I show instead that a significant number of these criteria—by Ayer, Popper, Przełęcki, Suppes, and David Lewis, among others—not only form a coherent whole, but also connect directly to the theory of definition, the notion of empirical content as explicated by Ramsey sentences, and the theory of measurement; two criteria by Carnap and Sober are trivial, but can be saved and connected to the other criteria by slight modifications. A corollary is that the ordinary language defense of Lewis, the conceptual arguments by Ayer and Popper, the theoretical considerations by Przełęcki, and the practical considerations by Suppes all apply to the same criterion or closely related criteria. Furthermore, the equivalence of some criteria allows for their individual justifications to be taken cumulatively and, together with the entailment relations between nonequivalent criteria, suggests criteria for general auxiliary assumptions, comparative criteria, and more liberal conceptions of observation.
Most contemporary political philosophers deny that justice requires giving people what they deserve. According to a familiar anti-desert argument, the influence of genes and environment on people's actions and traits undermines all desert-claims. According to a less familiar – but more plausible – argument, the influence of genes and environment on people's actions and traits undermines some desert-claims (or all desert-claims to an extent). But, it says, we do not know which ones (or to what extent). This article examines this ‘epistemological’ argument against desert. It gives reason to believe that the argument fails, emphasizing the importance of justice relative to efficiency and attempting to construct a practical way of measuring desert.
The questions considered are whether colours are relative to systems of colour concepts, to the conditions in which they are observed, or to observers or communities of observers; and whether the relativity of colours, such as it is, implies that they are less real than shapes or intervals in time. The argument is based on the thought that Special Relativity provides the best available intellectual framework for thinking about the supposed relativity of qualities of physical things.
John Leslie presents a thought experiment to show that chances are sometimes observer-relative in a paradoxical way. The pivotal assumption in his argument – a version of the weak anthropic principle – is the same as the one used to get the disturbing Doomsday argument off the ground. I show that Leslie's thought experiment trades on the sense/reference ambiguity and is fallacious. I then describe a related case where chances are observer-relative in an interesting way, but not in a paradoxical way. The result can be generalized: at least for a very wide range of cases, the weak anthropic principle does not give rise to paradoxical observer-relative chances. This finding could be taken to give new indirect support to the doomsday argument.
I characterize the main approaches to the moral consideration of children developed in the light of the argument from 'marginal' cases, and develop a more adequate strategy that provides guidance about the moral responsibilities adults have towards children. The first approach discounts the significance of children's potential and makes obligations to all children indirect, dependent upon interests others may have in children being treated well. The next approaches agree that the potential of children is morally considerable, but disagree as to whether and why children with intellectual disabilities are morally considerable. These approaches explore the moral significance of intellectual capacities, species membership, the capacity for welfare, and the interests of others. I argue that relationships characterized by reciprocity of care are morally valuable, that both the potential to be in such relationships and the actuality of being in them are morally valuable, and that many children with significant intellectual disabilities have this potential.
Davies argues that the ontology of artworks as performances offers a principled way of explaining the work-relativity of modality. Object-oriented contextualist ontologies of art (Levinson) cannot adequately address the problem of the work-relativity of modal properties because they understand looseness in what counts as the same context as a view that slight differences in the work-constitutive features of provenance are work-relative. I argue that it is more in the spirit of contextualism to understand looseness as context-dependent. This points to the general problem—the context of appreciation is not robust enough to ground modal intuitions about objective entities. In general, when epistemology dictates ontology there is always a threat of anti-realism, scepticism and relativism. Davies also appeals to the modality principle—an entity’s essential properties are all and only its constitutive properties. Davies understands essentiality in a traditional way: a property P is an essential property of an object o iff o could not exist and lack P. Kit Fine has recently made a convincing case for the view that the notion of essence is not to be understood in modal terms. I explore some of the implications of this view for Davies’ modal argument for the performance theory.
Bohmian mechanics faces an underdetermination problem: when it comes to solving the measurement problem, alternatives to the Bohmian guidance equation work just as well as the official guidance equation. One way to argue that the guidance equation is superior to its rivals is to use a symmetry argument: of the candidate guidance equations, the official guidance equation is the simplest Galilean-invariant candidate. This symmetry argument---if it worked---would solve the underdetermination problem. But the argument does not work. It fails because it rests on assumptions about how Galilean transformations (especially boosts) act on the wavefunction that are (in this context) unwarranted. My discussion has larger morals about the physical significance of certain mathematical results (like, for example, Wigner's theorem) in non-orthodox interpretations of quantum mechanics.
In the May 15, 1935 issue of Physical Review, Albert Einstein co-authored a paper with his two postdoctoral research associates at the Institute for Advanced Study, Boris Podolsky and Nathan Rosen. The article was entitled “Can Quantum Mechanical Description of Physical Reality Be Considered Complete?” (Einstein et al. 1935). Generally referred to as “EPR”, this paper quickly became a centerpiece in the debate over the interpretation of the quantum theory, a debate that continues today. The paper features a striking case where two quantum systems interact in such a way as to link both their spatial coordinates in a certain direction and also their linear momenta (in the same direction). As a result of this “entanglement”, determining either position or momentum for one system would fix (respectively) the position or the momentum of the other. EPR use this case to argue that one cannot maintain both an intuitive condition of local action and the completeness of the quantum description by means of the wave function. This entry describes the argument of that 1935 paper, considers several different versions and reactions, and explores the ongoing significance of the issues they raise.
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
The research described here explores the idea of using Supreme Court oral arguments as pedagogical examples in first year classes to help students learn the role of hypothetical reasoning in law. The article presents examples of patterns of reasoning with hypotheticals in appellate legal argument and in the legal classroom and a process model of hypothetical reasoning that relates them to work in cognitive science and Artificial Intelligence. The process model describes the relationships between an advocate’s proposed test for deciding a case or issue, the facts of the hypothetical and of the case to be decided, and the often conflicting legal principles and policies underlying the issue. The process model of hypothetical reasoning has been partially implemented in a computerized teaching environment, LARGO (“Legal ARgument Graph Observer”) that helps students identify, analyze, and reflect on episodes of hypothetical reasoning in oral argument transcripts. Using LARGO, students reconstruct examples of hypothetical reasoning in the oral arguments by representing them in simple diagrams that focus students on the proposed test, the hypothetical challenge to the test, and the responses to the challenge. The program analyzes the diagrams and provides feedback to help students complete the diagrams and reflect on the significance of the hypothetical reasoning in the argument. The article reports the results of experiments evaluating instruction of first year law students at the University of Pittsburgh using the LARGO program as applied to Supreme Court personal jurisdiction cases. The learning results so far have been mixed. Instruction with LARGO has been shown to help law student volunteers with lower LSAT scores learn skills and knowledge regarding hypothetical reasoning better than a text-based approach, but not when the students were required to participate.
On the other hand, the diagrams students produce with LARGO have been shown to have some diagnostic value, distinguishing among law students on the basis of LSAT scores, posttest performance, and years in law school. This lends support to the underlying model of hypothetical argument and suggests using LARGO as a pedagogically diagnostic tool.
A central issue confronting both philosophers and practitioners in formulating an analysis of causation is the question of what constitutes evidence for a causal association. From the 1950s onward, the biostatistician Jerome Cornfield put himself at the center of a controversial debate over whether cigarette smoking was a causative factor in the incidence of lung cancer. Despite criticisms from distinguished statisticians such as Fisher, Berkson and Neyman, Cornfield argued that a review of the scientific evidence supported the conclusion of a causal association. Cornfield's odds ratio in case‐control studies — as a good estimate of relative risk — together with his argument of ''explanatory common cause'' became important tools to use in confronting the skeptics. In this paper, I revisit this important historical episode as recorded in the Journal of the National Cancer Institute and the Journal of the American Statistical Association. More specifically, I examine Cornfield's necessary condition on the minimum magnitudes of relative risk in light of confounders. This episode yields important insight into the nature of causal inference by showing the sorts of evidence appealed to by practitioners in supporting claims of causal association. I discuss this event in light of the manipulationist account of causation.
This article is devoted to the question: does the Duhemian argument support the position taken by those contemporary philosophers who--like W. V. O. Quine and M. White--reject the distinction between analytic and synthetic statements? The term "Duhemian argument" is used to refer to the following statement: it is impossible to put to the test one isolated empirical statement; testing empirical statements involves testing a whole group of hypotheses. An analysis of the logical structure of reductive reasoning leads to the conclusion that the Duhemian argument is valid and that it entails the following statements: (1)--experience alone cannot compel us absolutely to the acceptance of any isolated empirical statement whatsoever, independently of our acceptance or rejection of some other statements, and (2)--no isolated empirical statement can be conclusively falsified by experience, independently of our acceptance or rejection of some other statements. The Duhemian argument seems then to establish conclusively the cogency of the claim that, in principle, it is possible to reject or to maintain any particular empirical statement, provided we make appropriate changes in the system of hypotheses which is put to test. The philosophers who reject the distinction between analytic and synthetic statements--in particular Quine--claim that the same line of reasoning supports their contention. It is alleged that: (1)--the Duhemian argument makes impossible a definition of statement synonymy and, consequently, a definition of analyticity in terms of synonymy, and (2)--that the unit of empirical significance is the whole of science or the total science, and (3)--that it is a folly to seek a boundary between synthetic and analytic statements, because all our statements are equally open to revision. The article tries to show that these conclusions do not follow from the Duhemian argument.
In particular it is shown: (1)--that the Duhemian argument does not exclude the definition of statement synonymy, (2)--that this argument does not support the contention that the enigmatic entity called "the whole of science" or the "total science" is involved in each and every testing procedure, (3)--that the principle of fundamental revisability of every statement does not change the fact that in scientific practice the situation is never so hopeless as the Duhemian argument seems to imply, because even inconclusive arguments may differ in their adequacy, and (4)--that the term "revision" is ambiguous and only this ambiguity lends an air of plausibility to Quine's formulations. The conclusion is that the Duhemian line of reasoning does not support the contention of philosophers who reject the distinction between analytic and synthetic statements.
What is the appropriate division of power between public officials and private individuals? The straightforward answer to this question, it seems, is that an official should have a power if she employs it (morally) better than a private individual would. However, Alon Harel argues that this answer is, at least partially, misguided, since there are some decisions—mainly concerning the employment of violence—that should be made and implemented only by public officials regardless of the (relative) moral quality of the decision or action. In this comment I consider and criticize this argument.
The expression ‘indispensability argument’ denotes a family of arguments for mathematical realism supported among others by Quine and Putnam. More and more often, Gottlob Frege is credited with being the first to state this argument in section 91 of the Grundgesetze der Arithmetik. Frege’s alleged indispensability argument is the subject of this essay. On the basis of three significant differences between Mark Colyvan’s indispensability arguments and Frege’s applicability argument, I deny that Frege presents an indispensability argument in that very often quoted section of the Grundgesetze.
David Furley's work on the cosmologies of classical antiquity is structured around what he calls "two pictures of the world." The first picture, defended by both Plato and Aristotle, portrays the universe, or all that there is (to pan), as identical with our particular ordered world-system. Thus, the adherents of this view claim that the universe is finite and unique. The second system, defended by Leucippus and Democritus, portrays an infinite universe within which our particular kosmos is only one of countless kosmoi. Aristotle's argument in De caelo I.9 that the world is necessarily unique is an important contribution to this debate. This argument holds interest because it shows Aristotle wrestling with an apparent inconsistency in his own philosophy, as deeply-held convictions within his cosmology collide with an equally deeply-held conviction within his metaphysics. The following three principles, each of which Aristotle appears committed to, are inconsistent: (1) the cosmic uniqueness principle: the world is necessarily unique; (2) the cosmic form principle: the world is an ordered, structured unity, and as such, the world has a form; (3) the possibility of multiple instantiation principle: for all F, if F is a form, it is possible that there exist multiple Fs. In De caelo I.9, Aristotle argues that we can establish the uniqueness of the universe, reject the multiple instantiation principle, yet still retain the distinction between 'this world' and 'world in general,' if the following is true (as it is): the world takes up all the matter that exists. Aristotle illustrates this argument with one of the stranger analogies in his corpus: imagine an aquiline nose that takes up all the flesh in the universe. If this were so, then there could not exist any other aquiline objects whatsoever.
(For this reason, we dub the De caelo I.9 argument the 'Cosmic Nose argument.') This paper is an interpretation of how this argument is supposed to proceed and an assessment of its success. The first section states the problem Aristotle is confronted with, sorts through Aristotle's various statements of the Cosmic Nose argument, which exhibit some sloppiness, and charitably reconstructs a single argument. We also spend some time examining the significance of Aristotle's example of a gigantic aquiline nose. We argue that, even charitably reconstructed, the argument appears to commit a serious modal fallacy. The remainder of the paper explores whether this modal fallacy can be overcome. We conclude that, although not a cogent argument for the uniqueness of the world (as this would require a significant revision of our current astronomy), the Cosmic Nose argument does succeed on its own terms. However, it should not be regarded as a free-standing argument for the uniqueness of the world. Instead, it depends crucially on the earlier argument in De caelo I.8 for the universe's uniqueness; De caelo I.9 should be viewed as an attempt to extend the conclusion of De caelo I.8 and to show how this conclusion can be made consistent with Aristotle's metaphysical principles about the nature of form.
Some believe that evidence for the big bang is evidence for the existence of god. Who else, they ask, could have caused such a thing? In this paper, I evaluate the big bang argument, compare it with the traditional first-cause argument, and consider the relative plausibility of various natural explanations of the big bang.
Statistical significance is almost universally equated with the attribution to some population of nonchance influences as the source of structure in the data. But statistical significance can be divorced from both parameter estimation and probability as, instead, a statement about the atypicality or lack of exchangeability over some distinction of the data relative to some set. From this perspective, the criticisms of significance tests evaporate.
Lawrence Kohlberg's Just Community program of moral education has conceptual significance to his theoretical work in the field of moral development. This argument contends that a perspective recognizing the Just Community as conceptually significant provides a more comprehensive picture of Kohlberg's work than do critical perspectives that limit their scope to his Structural Stage Model of moral development. Apprehending the Just Community's conceptual significance provides the opportunity to respond to critics, like Carol Gilligan and Helen Haste, who have suggested that Kohlberg's work is inattentive to notions of attachment in morality, but who either neglect or dismiss consideration of the Just Community in making these conclusions. The argument concludes by stating that a more philosophically comprehensive and mature understanding of morality was developing in Kohlberg's Just Community, a project undertaken well in advance of these major criticisms.
Apparently alone among medieval Christians, Eriugena argues that all life is immortal. He relies on Plato’s Timaeus as his primary source for this claim, but he modifies the argument of the Timaeus considerably. He turns Plato’s cosmic soul into the genus of life, thereby taking a treatise that originally dealt with cosmology and using it to explore the ontological significance of definition. All species that fall under the genus of life must be immortal, because a mortal species would contradict the genus. No later medieval author would take up Eriugena’s arguments explicitly, although Aquinas comes close. The two thirteenth-century thinkers to address universal immortality seriously—Aquinas and Bonaventure—argue against it, but they are more faithful than Eriugena himself to a literal reading of the Timaeus.
In his paper, Why the Successful Assassin Is More Wicked than the Unsuccessful One, Leo Katz "pick[s] up the gauntlet [Sandy] Kadish throws down" to offer a nonconsequentialist justification for giving significance to resulting harm and, in particular, to justify the common practice of punishing attempts less than the completed offense. In one sense, I may not be the ideal person to serve as critic. I am not one of those who, like Kadish and others, do not believe in the significance of resulting harm in assessing blameworthiness (people whom Katz calls the "luck-skeptics" but to whom I will refer as the "nonbelievers" in the significance of resulting harm). I will try to perform the mental gymnastics of pretending to be a nonbeliever as I evaluate Professor Katz's arguments. As Part I explains, I fear the nonbeliever will be unpersuaded. Whatever the outcome of the debate as Professor Katz presents it, the method of his argument raises issues that I think are just as interesting as its outcome. My social science work, as limited as it is, gives me pause when assessing the argument-by-hypothetical method that Professor Katz uses so ingeniously here (and elsewhere). Relatedly, I have some doubts about using our intuitions in the way Professor Katz would have us use them here (and elsewhere), or at least doubts about whether we can draw from them the kind of conclusions about moral desert that Professor Katz would have us draw. Available for download at http://ssrn.com/abstract=662061.
The Coase theorem is argued to be incompatible with bargaining set stability due to a tension between the grand coalition and sub-coalitions. We provide a counter-intuitive argument to demonstrate that the Coase theorem may be in complete consonance with bargaining set stability. We establish that an uncertainty concerning the formation of sub-coalitions will explain such compatibility: each agent fears that others may `gang up' against him and this fear forces the agents to negotiate. The grand coalition emerges from the negotiations if each agent uses the principle of equal relative sacrifice to determine the actual allocation. We also establish the rational basis for the choice of the principle of equal relative concession by the negotiating agents. Hence we argue that the Coase theorem will be valid even if there are stable sub-coalitions.
The soundness of Chow's (1996; 1998a) argument depends on the soundness of his assertion that statistical significance may be understood to signify that chance may be excluded as the reason for results. The examples and arguments provided here show that statistical significance signifies no such thing.
1. This article selects three famous sayings to discuss the dialectical thought in Laozi's philosophy and its modern significance, and suggests that philosophy should contribute to world peace. 2. The atomic bomb and violence, which threaten human life, are characteristic of this century. Science has developed, yet humanity has not attained perfect happiness; on the contrary, the world is threatened with destruction. This fact illustrates the dialectical thought in Laozi's saying "为学日益,为道日损" ("In learning, something is gained daily; in pursuing the Dao, something is lost daily"). 3. Laozi's saying "民不畏死,奈何以死惧之" ("The people do not fear death; why threaten them with death?") means that, with regard to death, fearing and not fearing are relative and can transform into each other. Contradictions cannot be resolved by military force: threatening others with death is useless if they do not fear death. This shows that a person's will cannot be conquered by military force, and that disputes among nations cannot be settled by it either. 4. Laozi held that the weakest thing in the world is water, yet it can exert the tremendous force of a flood because it has one unchanging intention: it flows ceaselessly, stopping only when it reaches the level or its condition becomes tranquil. Humanity likewise has one forever unchanging intention: the demand for equality and peace. The people's will is therefore an invincible force. This is a truth.
‘Is being one only one? – The Argument for the Uniqueness of Platonic Forms’ Abstract: Each Form is unique in number; no two numerically distinct Forms can share the same nature. Plato argues for this claim in Republic X. I identify the metaphysical principles Plato presupposes in the premises of the argument, by examining the reasoning behind them, and offer a reconstruction of the argument showing the principles in use. I argue that the metaphysical significance of the argument’s conclusion is to establish that if a Form F were not unique, if there were many Forms F, their nature would alter along with their number: a Form cannot recur without change in its constitution. This is why there can be only one Form for each character in the world.
Parfit’s Branch Line argument is intended to show that the relation of survival is possibly a one-many relation and thus different from numerical identity. I offer a detailed reconstruction of Parfit’s notions of survival and personal identity, and show the argument cannot be coherently formulated within Parfit’s own setting. More specifically, I argue that Parfit’s own specifications imply that the “R-relation”, i.e., the relation claimed to capture “what matters in survival,” turns out to hold not only along but also across the branches representing the development of a reduplicated person. This curious fact of ‘interbranch survival,’ as I call it, has gone unnoticed so far. The fact that the R-relation also holds across branches creates a trilemma for Parfit’s approach. Either the envisaged notion of personal identity is circular, or the R-relation fails as a reconstruction of the common sense notion of survival, or talk about persons ‘branching’ (being reduplicated etc.) remains semantically empty. In the paper’s last section I suggest that my criticism does not detract from the larger systematic significance of Parfit’s argument. The argument is simply terminologically miscalibrated. Even though Parfit’s branch line argument fails to establish the conceptual separability of survival and identity, it can be used to show the separability of sameness and numerical identity, which should have similar implications for meta-ethics as the original argument.
Relativity Theory by Albert Einstein has so far been little considered by cognitive scientists, notwithstanding its undisputed scientific and philosophical moment. Unfortunately, we do not have a diary or notebook as cognitively useful as Faraday's. But historians and philosophers of physics have done a great deal of work that is relevant both for the study of the scientist's reasoning and the philosophy of science. I will try here to highlight the fertility of a 'triangulation' using cognitive psychology, history of science and philosophy of science in starting to answer a clearly very complex question: why did Einstein discover Relativity Theory? Here we are not much concerned with the unending question of precisely what Einstein discovered, which still remains unanswered, for we have no consensus over the exact nature of the theory's foundations (Norton 1993). We are mainly interested in starting to answer the 'how question', and especially the following sub-question: what (presumably) were his goals and strategies in his search? I will base my argument on fundamental publications of Einstein, aiming at pointing out a theory-specific heuristic, setting both a goal and a strategy: covariance/invariance. The result has significance for theory formation in science, especially in concept and model building. It also raises other questions that go beyond the aim of this paper: why was he so confident in such a heuristic? Why didn't many other scientists use it? Where did he keep such a heuristic? Do we have any other examples of similar heuristic search in other scientific problem solving?
Studies in astrophysical cosmology have served to reveal the incomprehensible fine-tuning of the fundamental constants and cosmological quantities which must obtain if a universe like ours is to be life-permitting. Traditionally, such fine-tuning of the universe for life would have been taken as evidence of divine design. William Dembski’s ’generic chance elimination argument’ provides a framework for evaluating the hypothesis of design with respect to the fine-tuning of the universe. On Dembski’s model the key to a design inference is the elimination of the competing alternatives of physical necessity and chance. In debates over fine-tuning, the former is represented by a ’theory of everything’, which would eliminate or significantly reduce the improbabilities of fine-tuning. The latter takes the shape of the ’many worlds hypothesis’, according to which a ’world ensemble’ of universes exist, thus providing purchase for the ’anthropic principle’. This paper assesses the relative merits of these hypotheses.
Although considerations based on contemporary space-time theories, such as special and general relativity, seem highly relevant to the debate about persistence, their significance has not been duly appreciated. My goal in this paper is twofold: (1) to reformulate the rival positions in the debate (i.e., endurantism [three-dimensionalism] and perdurantism [four-dimensionalism, the doctrine of temporal parts]) in the framework of special relativistic space-time; and (2) to argue that, when so reformulated, perdurantism exhibits explanatory advantages over endurantism. The argument builds on the fact that four-dimensional entities extended in space as well as time are relativistically invariant in a way three-dimensional entities are not.
If an argument can be reconstructed in at least two different ways, then which reconstruction is to be preferred? In this paper I address this problem of argument reconstruction in terms of Ryle’s infinite regress argument against the view that knowledge-how requires knowledge-that. First, I demonstrate that Ryle’s initial statement of the argument does not fix its reconstruction as it admits two, structurally different reconstructions. On the basis of this case and infinite regress arguments generally, I defend a revisionary take on argument reconstruction: argument reconstruction is mainly to be ruled by charity (viz. by general criteria which arguments have to fulfil in order to be good arguments) rather than interpretation.
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.
Where does the necessity that seems to accompany causal inferences come from? “Why [do] we conclude that […] particular causes must necessarily have such particular effects?” (Hume 2002). In 1.3.6 of the Treatise, Hume entertains the possibility that this necessity is a function of reason. However, he eventually dismisses this possibility, where this dismissal consists of Hume’s “negative” argument concerning induction. This argument has received, and continues to receive, a tremendous amount of attention. How could causal inferences be justified if they are not justified by reason? If we believe that p causes q, isn’t it reason that allows us to conclude q when we see p with some assurance, i.e., with some necessity?
A generally ignored feature of Aristotle’s famous function argument is its reliance on the claim that practitioners of the crafts (technai) have functions: but this claim does important work. Aristotle is pointing to the fact that we judge everyday rational agency and agents by norms which are independent of their contingent desires: a good doctor is not just one who happens to achieve his personal goals through his work. But, Aristotle argues, such norms can only be binding on individuals if human rational agency as such is governed by objective teleological norms.
J. Howard Sobel devotes seventy pages of his wide-ranging analysis of theistic arguments to a critique of the cosmological argument. Although the focus of that critique falls on the Leibnizian argument, he also offers in passing some criticisms of the kalam cosmological argument. Sobel does not challenge the causal premiss insofar as "begins to exist" means "has a first time of its existence." Rather he disputes the arguments and evidence for the fact of the universe's beginning. I show that Sobel's rebuttals of the philosophical arguments against the infinitude of the past are in various ways misconceived or fallacious and that his response to the empirical evidence for the beginning of the universe involves a gratuitous and radical revision of contemporary astrophysical cosmogony.
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper (claims that computers can think or do think) Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding (understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper) the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church ("someday my prince of an AI program will come") believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.
In this paper I present a new argument against internalist theories of practical reason. My argument is inspired by Frank Jackson's celebrated Knowledge Argument. I ask what will happen when an agent experiences pain for the first time. Such an agent, I argue, will gain new normative knowledge that internalism cannot explain. This argument presents a similar difficulty for other subjectivist and constructivist theories of practical reason and value. I end by suggesting that some debates in meta-ethics and in the philosophy of mind might be more closely intertwined than philosophers in either area would like to believe.
In this paper, I argue that Kant's famous critique of the Ontological Argument largely begs the question against that argument, and is no better when supplemented by the modern quantificational analysis of "exists." In particular, I argue that the claim, common to Hume and Kant, that conceptual truths can never entail substantive existential claims is false, and thus no ground for rejecting the Ontological Argument.
The Chomskian holds that the grammars that linguists produce are about human psycholinguistic structures, i.e. our mastery of a grammar, our linguistic competence. But if we encountered Martians whose psycholinguistic processes differed from ours, but who nevertheless produced sentences that are extensionally equivalent to the set of sentences in our English and shared our judgements on the grammaticality of various English sentences, then we would count them as being competent in English. A grammar of English is about what the Martians and we share. In this note, I argue that a recent attack on the Martian Argument by Laurence fails to mitigate its force.
Marginal humans are not rational yet we still think they are morally considerable. This is inconsistent with denying animals moral status on the basis of their irrationality. Therefore, either marginal humans and animals are both morally considerable or neither are. In this paper I consider a major objection to this argument: that species is a relevant difference between humans and animals.
In recent work on context-dependency, it has been argued that certain types of sentences give rise to a notion of relative truth. In particular, sentences containing predicates of personal taste and moral or aesthetic evaluation as well as epistemic modals are held to express a proposition (relative to a context of use) which is true or false not only relative to a world of evaluation, but to other parameters as well, such as standards of taste or knowledge or an agent. Thus, a sentence like ‘chocolate tastes good’ would express a proposition p that is true or false not only at a world of evaluation, but relative to additional parameters as well, such as a parameter of taste or an agent. I will argue that the sentences that apparently give rise to relative truth should be understood by relating them in a certain way to the first person. More precisely, such sentences express what I will call first-person-based genericity, a form of generalization that is based on or directed toward an essential first-person application of the predicate. The account differs from standard relative truth accounts in crucial respects: it is not the truth of the proposition expressed that is relative to the first person; the proposition expressed by a sentence with a predicate of taste rather has absolute truth conditions. Instead it is the propositional content itself that requires a first-personal cognitive access whenever it is entertained. This account, I will argue, avoids a range of problems that standard relative truth theories of the sentences in question face and explains a number of further peculiarities that such sentences display.
Derk Pereboom's Four-Case Argument is among the most famous and resilient manipulation arguments against compatibilism. I contend that its resilience is not a function of the argument's soundness but, rather, the ill-gotten gain from an ambiguity in the description of the causal relations found in the argument's foundational case. I expose this crucial ambiguity and suggest that a dilemma faces anyone hoping to resolve it. After a thorough search for an interpretation which avoids both horns of this dilemma, I conclude that none is available. Rather, every metaphysically coherent interpretation invites either a hard- or soft-line reply to Pereboom's argument. I then consider a recharacterization of the dilemma which seems to clear the way for the defence of a revised Four-Case Argument. I address this rejoinder by identifying a still more fundamental problem shared by all viable interpretations of the manipulation cases, showing that each involves a type of manipulation which undermines the victim's agency. Because this diagnosis supports a soft-line reply to every viable interpretation of the argument and can be endorsed by any compatibilist, I consider it the final piece of the Soft-line Solution to the Four-Case Argument. Finally, I suggest a new taxonomy of manipulation arguments, arguing that none that employs the suppressive variety of manipulation found in Pereboom's argument offers a threat to compatibilism.
One argument for reductive physicalism, the explanatory argument, rests on its ability to explain the vast and growing body of acknowledged psychophysical correlations. Jaegwon Kim has recently levelled four objections against the explanatory argument. I assess all of Kim's objections, showing that none is successful. The result is a defence of the explanatory argument for physicalism.
According to Interest-Relative Invariantism, whether an agent knows that p, or possesses other sorts of epistemic properties or relations, is in part determined by the practical costs of being wrong about p. Recent studies in experimental philosophy have tested the claims of IRI. After critically discussing prior studies, we present the results of our own experiments that provide strong support for IRI. We discuss our results in light of complementary findings by other theorists, and address the challenge posed by a leading intellectualist alternative to our view.
The recent debate surrounding scientific realism has largely focused on the “no miracles” argument (NMA). Indeed, it seems that most contemporary realists and anti-realists have tied the case for realism to the adequacy of this argument. I argue that it is a mistake for realists to let the debate be framed in this way. Realists would be well advised to abandon the NMA altogether and pursue an alternative strategy, which I call the “local strategy”.
The luck argument raises a serious challenge for libertarianism about free will. In broad outline, if an action is undetermined, then it appears to be a matter of luck whether or not one performs it. And if it is a matter of luck whether or not one performs an action, then it seems that the action is not performed with free will. This argument is most effective against event-causal accounts of libertarianism. Recently, Christopher Franklin (2011) has defended event-causal libertarianism against four formulations of the luck argument. I will argue that three of Franklin’s responses are unsuccessful and that there are important versions of the luck challenge that his defense has left unaddressed.
Rationality (or something similar) is usually given as the relevant difference between all humans and animals; the reason humans do but animals do not deserve moral consideration. But according to the Argument from Marginal Cases not all humans are rational, yet if such (marginal) humans are morally considerable despite lacking rationality it would be arbitrary to deny animals with similar capacities a similar level of moral consideration. The slippery slope objection has it that although marginal humans are not strictly speaking morally considerable, we should give them moral consideration because if we do not we will slide down a slippery slope where we end up not giving normal humans due consideration. I argue that this objection fails to show that marginal humans have the kind of direct moral status proponents of the slippery slope argument have in mind.
It is argued that the explanatory gap argument, according to which it is fundamentally impossible to explain qualitative mental states in a physicalist theory of mind, is unsound. The main argument in favour of the explanatory gap is presented, which argues that an identity statement of mind and brain has no explanatory force, in contrast to "normal" scientific identity statements. Then it is shown that "normal" scientific identity statements also do not conform to the demands set by the proponent of the explanatory gap. Rather than accept all such gaps, it is argued that we should deny the explanatory gap in a physicalist theory of mind.
In this paper, I examine Kant’s famous objection to the ontological argument: existence is not a determination. Previous commentators have not adequately explained what this claim means, how it undermines the ontological argument, or how Kant argues for it. I argue that the claim that existence is not a determination means that it is not possible for there to be non-existent objects; necessarily, there are only existent objects. I argue further that Kant’s primary target is not ontological arguments as such but the metaphysical view they presuppose: that God necessarily exists in virtue of his existence being contained in, or logically entailed by, his essence. I show that this view of divine necessity requires the assumption that existence is a determination, and I show that Descartes and Leibniz are implicitly committed to this in their published versions of the ontological argument. I consider the philosophical motivations for the claim that existence is a determination and then I argue that Kant’s argument in the Critique of Pure Reason only undermines some of them.
Proponents of the argument from regress maintain that the existence of Instrumental Value is sufficient to establish the existence of Intrinsic Value. It is argued that the chain of instrumentally valuable things has to end somewhere, namely with intrinsic value. In this paper, I shall argue something a little more modest than this. I do not want to argue that the regress argument proves that there is intrinsic value but rather that it proves that the idea of intrinsic value is a necessary part of our thinking about moral value.
Think of the last thing someone did to you to seriously harm or offend you. And now imagine, so far as you can, becoming fully aware of the fact that his or her action was the causally inevitable result of a plan set into motion before he or she was ever even born, a plan that had no chance of failing. Should you continue to regard him or her as being morally responsible—blameworthy, in this case—for what he or she did? Many have thought that, intuitively, you should not. Recently, Alfred Mele has employed this line of thought to mount what many have taken to be a powerful argument for incompatibilism: the “Zygote Argument”. However, in interesting new papers, John Martin Fischer and Stephen Kearns have each independently argued that the Zygote Argument fails. As I see it, the criticisms of Fischer and Kearns reveal some important questions about how the argument is meant to be—or how it would best be—understood. Once we make a slight (but important) modification to the argument, however, I think we will be able to see that the criticisms of Fischer and Kearns do not detract from its substantial force.
David Chalmers' dancing qualia argument is intended to show that phenomenal experiences, or qualia, are organizational invariants. The dancing qualia argument is a reductio ad absurdum, attempting to demonstrate that holding an alternative position, such as the famous inverted spectrum argument, leads one to an implausible position about the relation between consciousness and cognition. In this paper, we argue that Chalmers' dancing qualia argument fails to establish the plausibility of qualia being organizational invariants. Even stronger, we will argue that the gap in the argument cannot be closed.
The cosmological argument for God’s existence has a long history, but perhaps the most influential version of it has been the argument from contingency. This is the version that Frederick Copleston pressed upon Bertrand Russell in their famous debate about God’s existence in 1948 (printed in Russell’s 1957 Why I am not a Christian). Russell lodges three objections to the Thomistic argument.
In this paper, I argue that the ultimate argument for Scientific Realism, also known as the No-Miracles Argument (NMA), ultimately fails as an abductive defence of Epistemic Scientific Realism (ESR), where (ESR) is the thesis that successful theories of mature sciences are approximately true. The NMA is supposed to be an Inference to the Best Explanation (IBE) that purports to explain the success of science. However, the explanation offered as the best explanation for success, namely (ESR), fails to yield independently testable predictions that alternative explanations for success do not yield. If this is correct, then there seems to be no good reason to prefer (ESR) over alternative explanations for success.
The idea that formal geometry derives from intuitive notions of space has appeared in many guises, most notably in Kant’s argument from geometry. Kant claimed that an a priori knowledge of spatial relationships both allows and constrains formal geometry: it serves as the actual source of our cognition of principles of geometry and as a basis for its further cultural development. The development of non-Euclidean geometries, however, seemed to definitely undermine the idea that there is some privileged relationship between our spatial intuitions and mathematical theory. This paper’s aim is to look at this longstanding philosophical issue through the lens of cognitive science. Drawing on recent evidence from cognitive ethology, developmental psychology, neuroscience and anthropology, I argue for an enhanced, more informed version of the argument from geometry: humans share with other species evolved, innate intuitions of space which serve as a vital precondition for geometry as a formal science.
In this paper, I criticize David McNaughton and Piers Rawling's formalization of the agent-relative/agent-neutral distinction. I argue that their formalization is unable to accommodate an important ethical distinction between two types of conditional obligations. I then suggest a way of revising their formalization so as to fix the problem.
According to the Argument from Disagreement (AD) widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October, 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take seriously the AD. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming and Olsson, Episteme 8(2):127–143, 2011) considerably less support for the AD is forthcoming.
It is argued that, contrary to prevailing opinion, Bas van Fraassen nowhere uses the argument from underdetermination in his argument for constructive empiricism. It is explained that van Fraassen’s use of the notion of empirical equivalence in The Scientific Image has been widely misunderstood. A reconstruction of the main arguments for constructive empiricism is offered, showing how the passages that have been taken to be part of an appeal to the argument from underdetermination should actually be interpreted.
I defend a hard-line reply to Derk Pereboom’s four-case manipulation argument. Pereboom accuses compatibilists who take a hard-line reply to his manipulation argument of adopting inappropriate initial attitudes towards the cases central to his argument. If Pereboom is correct, he has shown that a hard-line response is inadequate. Fortunately for the compatibilist, Pereboom’s list of appropriate initial attitudes is incomplete and at least one of the initial attitudes he leaves out provides room for a revised hard-line reply to be successfully mounted against the multiple-case argument.
Here I propose a coherent way of preserving the identity of material objects with the matter that constitutes them. The presentation is formal, and intended for RSL. An informal presentation is in preliminary draft. Relative-sameness relations—such as being the same person as—are like David Lewis's "counterpart" relations in the following respects: (i) they may hold between objects that aren't identical (I propose), and (ii) there is a multiplicity of them, different ones of which may be variously invoked in different contexts. They differ from counterpart relations, however, in that they are weak equivalence relations (transitive, symmetric and weakly reflexive). The likenesses to counterpart relations make them suitable for an analysis of de re temporal and modal predications. The difference renders the resulting counterpart theory immune to standard criticisms of Lewis's Counterpart Theory (e.g., in Hazen 1979, and Fara and Williamson 2005).