In many cases, the “therapeutic misconception” may be an unavoidable part of the imperfect process of recruitment and consent in medical research. Paul Appelbaum, Loren Roth, and Charles Lidz coined the term “therapeutic misconception” in 1982.[1] They described it as the misconception that participating in research is the same as receiving individualised treatment from a physician. It referred to the research subject’s failure to appreciate that the aim of research is to obtain scientific knowledge, and that any benefit to the subject is a by-product of that knowledge. More recent studies by Appelbaum and Lidz have shown that this phenomenon is just as pervasive now as it was twenty-four years ago.[2] The problem pertains not to any duty of care for researchers but to participants’ unfounded belief in the therapeutic potential of research.[3] It is especially acute in phase I oncology trials, which aim to test the toxicity and highest tolerable dose of anticancer drugs. To remedy this situation, many have argued that both clinicians and researchers need to do more in explaining to subjects the differences between experimental research and standard care. Clinicians and researchers recruiting potential subjects for research must present information about the expected risks and benefits of participation in research in a more realistic and straightforward way.[4] In one recent examination of consent forms for phase I oncology trials, Sam Horng et al found that, in the section on “benefit”, only one of 272 forms stated that the subjects were expected to benefit. They also found that 11 consent forms stated clearly that subjects would not benefit, 25 forms communicated uncertainty about benefit, and 5 forms said nothing about the chance of benefit. Interestingly, 139 forms alluded to the …
A BELIEF IN FREE WILL touches nearly everything that human beings value. It is difficult to think about law, politics, religion, public policy, intimate relationships, morality—as well as feelings of remorse or personal achievement—without first imagining that every person is the true source of his or her thoughts and actions. And yet the facts tell us that free will is an illusion. In this enlightening book, Sam Harris argues that this truth about the human mind does not undermine morality or diminish the importance of social and political freedom, but it can and should change the way we think about some of the most important questions in life.
According to the modal view, essence admits of reductive analysis in exclusively modal terms. Fine (1994) argues that the modal view delivers an inadequate analysis of essence. This paper defends the modal view from Fine's challenge. This defense proceeds by examining the disagreement between Finean primitivists and Quinean eliminativists about essence. In order to model this disagreement, a distinction between essence and a separable concept, nature, is required. This distinction is then used to show that Fine's challenge is misdirected and therefore unsuccessful.
This essay attempts to clarify the distinction between property and sovereignty, and to bring out the importance of that distinction to a liberal nationalism. Beginning with common intuitions about what distinguishes our rights to our possessions from the state's rightful governance over us, it proceeds to explore some historical sources of these intuitions, and the importance of a sharp distinction between ownership and governance to the rise of liberalism. From here, the essay moves into an exploration of group ownership, and the ways in which group ownership can in practice turn into an illiberal kind of sovereignty. The point is to shed new light on problems that nationalist states characteristically face. Examples of these problems, from the Israel/Palestine conflict, are put forth in the conclusion.
This paper refines a controversial proposal: that core systems belong to a perceptual kind, marked out by the format of its representational outputs. Following Susan Carey, this proposal has been understood in terms of core representations having an iconic format, like certain paradigmatically perceptual outputs. I argue that they don’t, but suggest that the proposal may be better formulated in terms of a broader analogue format type. Formulated in this way, the proposal accommodates the existence of genuine icons in perception, and avoids otherwise troubling objections.
Jerry Fodor deemed informational encapsulation ‘the essence’ of a system’s modularity and argued that human perceptual processing comprises modular systems, thus construed. Nowadays, his conclusion is widely challenged. Often, this is because experimental work is seen to somehow demonstrate the cognitive penetrability of perceptual processing, where this is assumed to conflict with the informational encapsulation of perceptual systems. Here, I deny the conflict, proposing that cognitive penetration need not have any straightforward bearing on the conjecture that perceptual processing is composed of nothing but informationally encapsulated modules, the conjecture that each and every perceptual computation is performed by an informationally encapsulated module, and the consequences perceptual encapsulation was traditionally expected to have for a perception-cognition border, the epistemology of perception and cognitive science. With these points in view, I propose that particularly plausible cases of cognitive penetration would actually seem to evince the encapsulation of perceptual systems rather than refute/problematize this conjecture.
Introduction: time, film, and the ethical vision of Emmanuel Levinas. American transcendence: Levinas and a short history of an American idea in film -- Frank Capra and James Stewart: time, transcendence, and the other -- The changing face of American redemption: Henry Fonda, Marilyn Monroe, Paul Newman, and Denzel Washington -- Sex, art, and Oedipus: The unbearable lightness of being -- Fellini and La dolce vita: documentary, decadence, and desire -- Antonioni and L'avventura: transcendence, the body, and the feminine.
Formal models of appearance and reality have proved fruitful for investigating structural properties of perceptual knowledge. This paper applies the same approach to epistemic justification. Our central goal is to give a simple account of The Preface, in which justified belief fails to agglomerate. Following recent work by a number of authors, we understand knowledge in terms of normality. An agent knows p iff p is true throughout all relevant normal worlds. To model The Preface, we appeal to the normality of error. Sometimes, it is more normal for reality and appearance to diverge than to match. We show that this simple idea has dramatic consequences for the theory of knowledge and justification. Among other things, we argue that a proper treatment of The Preface requires a departure from the internalist idea that epistemic justification supervenes on the appearances and the widespread idea that one knows most when free from error.
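The normality-based clause the abstract uses — an agent knows p iff p is true throughout all relevant normal worlds — can be made concrete with a toy model. This is a hedged sketch, not the authors' formalism: the world names, normality ranks, and propositions below are invented for illustration.

```python
# Toy model: knowledge as truth throughout all relevant normal worlds.
# Worlds carry a normality rank (lower = more normal); a world is
# "relevant" when its rank falls within a given threshold.

worlds = {"w1": 0, "w2": 0, "w3": 1, "w4": 2}

# Valuation: the set of worlds at which each proposition is true.
valuation = {
    "p": {"w1", "w2", "w3"},
    "q": {"w1", "w3", "w4"},
}

def knows(prop, max_rank=0):
    """The agent knows prop iff prop holds at every world whose
    normality rank is at most max_rank."""
    relevant = {w for w, rank in worlds.items() if rank <= max_rank}
    return relevant <= valuation[prop]  # subset test: prop true at all relevant worlds

print(knows("p"))              # True: p holds at both maximally normal worlds
print(knows("q"))              # False: q fails at the normal world w2
print(knows("p", max_rank=2))  # False: widening relevance to w4 destroys knowledge
```

Raising `max_rank` models the abstract's point that what counts as known shifts with which abnormal worlds are treated as relevant.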
Joint actions often require agents to track others’ actions while planning and executing physically incongruent actions of their own. Previous research has indicated that this can lead to visuomotor interference effects when it occurs outside of joint action. How is this avoided or overcome in joint actions? We hypothesized that when joint action partners represent their actions as interrelated components of a plan to bring about a joint action goal, each partner’s movements need not be represented in relation to distinct, incongruent proximal goals. Instead they can be represented in relation to a single proximal goal – especially if the movements are, or appear to be, mechanically linked to a more distal joint action goal. To test this, we implemented a paradigm in which participants produced finger movements that were either congruent or incongruent with those of a virtual partner, and either with or without a joint action goal (the joint flipping of a switch, which turned on two light bulbs). Our findings provide partial support for the hypothesis that visuomotor interference effects can be reduced when two physically incongruent actions are represented as mechanically interdependent contributions to a joint action goal.
Offers a conciliatory solution to one of the central contemporary debates in the theory of rationality, the debate about the proper formulation of rational requirements. Introduces a novel conception of the “symmetry problem” for wide scope rational requirements, and sketches a theory of rational commitment as a response.
According to the KK-principle, knowledge iterates freely. It has been argued, notably in Greco, that accounts of knowledge which involve essential appeal to normality are particularly conducive to defence of the KK-principle. The present article evaluates the prospects for employing normality in this role. First, it is argued that the defence of the KK-principle depends upon an implausible assumption about the logical principles governing iterated normality claims. Once this assumption is dropped, counter-instances to the principle can be expected to arise. Second, it is argued that even if the assumption is maintained, there are other logical properties of normality which can be expected to lead to failures of KK. Such failures are noteworthy, since they do not depend on either a margins-for-error principle or a safety condition of the kinds Williamson appeals to in motivating rejection of KK. “Introduction: KK and Being in a Position to Know” Section formulates two versions of the KK-Principle; “Inexact Knowledge and Margins for Error” Section presents a version of Williamson’s margins-for-error argument against it; “Knowledge and Normality” and “Iterated Normality” Sections discuss the defence of the KK-Principle due to Greco and show that it is dependent upon the implausible assumption that the logic of normality ascriptions is at least as strong as K4; finally, “Knowledge in Abnormal Conditions” and “Higher-Order Ignorance Inside the Margins” Sections argue that a weakened version of Greco’s constraint on knowledge is plausible and demonstrate that this weakened constraint will, given uncontentious assumptions, systematically generate counter-instances to the KK-principle of a novel kind.
In an interesting experimental study, Bonini et al. (1999) present partial support for truth-gap theories of vagueness. We say this despite their claim to find theoretical and empirical reasons to dismiss gap theories and despite the fact that they favor an alternative, epistemic account, which they call ‘vagueness as ignorance’. We present yet more experimental evidence that supports gap theories, and argue for a semantic/pragmatic alternative that unifies the gappy supervaluationary approach together with its glutty relative, the subvaluationary approach.
The theoretical virtue of parsimony values the minimizing of theoretical commitments, but theoretical commitments come in two kinds: ontological and ideological. While the ontological commitments of a theory are the entities it posits, a theory’s ideological commitments are the primitive concepts it employs. Here, I show how we can extend the distinction between quantitative and qualitative parsimony, commonly drawn regarding ontological commitments, to the domain of ideological commitments. I then argue that qualitative ideological parsimony is a theoretical virtue. My defense proceeds by demonstrating the merits of qualitative ideological parsimony and by showing how the qualitative conception of ideological parsimony undermines two notable arguments from ideological parsimony: David Lewis’ defense of modal realism and Ted Sider’s defense of mereological nihilism.
Standard approaches to counterfactuals in the philosophy of explanation are geared toward causal explanation. We show how to extend the counterfactual theory of explanation to non-causal cases, involving extra-mathematical explanation: the explanation of physical facts by mathematical facts. Using a structural equation framework, we model impossible perturbations to mathematics and the resulting differences made to physical explananda in two important cases of extra-mathematical explanation. We address some objections to our approach.
Taking their motivation from the perceived failure of the reductive physicalist project concerning consciousness, panpsychists ascribe subjectivity to fundamental material entities in order to account for macro-consciousness. But there exists an unresolved tension within the mainstream panpsychist position, the seriousness of which has yet to be appreciated. I capture this tension as a dilemma, and offer advice to panpsychists on how to resolve it. The dilemma is as follows: Panpsychists take the micro-material realm to feature phenomenal properties, plus micro-subjects to whom these properties belong. However, it is impossible to explain the generation of a macro-subject (like one of us) in terms of the assembly of micro-subjects, for, as I show, subjects cannot combine. Therefore the panpsychist explanatory project is derailed by the insistence that the world’s ultimate material constituents are subjects of experience. The panpsychist faces a choice of giving up her explanatory ambitions, or of giving up the claim that the ultimates are subjects. I argue that the latter option is preferable, leading to neutral monism, on which phenomenal qualities are irreducible but subjects are reducible. So panpsychists should be neutral monists.
Call an explanation in which a non-mathematical fact is explained—in part or in whole—by mathematical facts: an extra-mathematical explanation. Such explanations have attracted a great deal of interest recently in arguments over mathematical realism. In this article, a theory of extra-mathematical explanation is developed. The theory is modelled on a deductive-nomological theory of scientific explanation. A basic DN account of extra-mathematical explanation is proposed and then redeveloped in the light of two difficulties that the basic theory faces. The final view appeals to relevance logic and uses resources in information theory to understand the explanatory relationship between mathematical and physical facts. 1 Introduction 2 Anchoring 3 The Basic Deductive-Mathematical Account 4 The Genuineness Problem 5 Irrelevance 6 Relevance and Information 7 Objections and Replies 7.1 Against relevance logic 7.2 Too epistemic 7.3 Informational containment 8 Conclusion.
Across the world people in different societies structure their family relationships in many different ways. These relationships become encoded in their languages as kinship terminology, a word set that maps variably onto a vast genealogical grid of kinship categories, each of which could in principle vary independently. But the observed diversity of kinship terminology is considerably smaller than the enormous theoretical design space. For the past century anthropologists have captured this variation in typological schemes with only a small number of model system types. Whether those types exhibit the internal co-selection of parts implicit in their use is an outstanding question, as is the sufficiency of typologies in capturing variation as a whole. We interrogate the coherence of classic kinship typologies using modern statistical approaches and systematic data from a new database, Kinbank. We first survey the canonical types and their assumed patterns of internal and external co-selection, then present two data-driven approaches to assess internal coherence. Our first analysis reveals that across parents’ and ego’s generation, typology has limited predictive value: knowing the system in one generation does not reliably predict the other. Though we detect limited co-selection between generations, “disharmonic” systems are equally common. Second, we represent structural diversity with a novel multidimensional approach we term kinship space. This approach reveals, for ego’s generation, some broad patterning consistent with the canonical typology, but diversity is considerably higher than classical typologies suggest. Our results strongly challenge the descriptive adequacy of the set of canonical kinship types.
A national sample of 362 respondents assessed the ethical predisposition of the American marketplace by calculating a consumer ethics index. The results indicate that the population is quite intolerant of perceived ethical abuses. The situations where consumers are ambivalent tend to be those where the seller suffers little or no economic harm from the consumer's action. Younger, more educated, and higher income consumers appear more accepting of these transgressions. The results provided the basis for developing a four-group taxonomy of consumers which retailers should find insightful in assessing potential consumer actions in a variety of situations.
Work in quantum gravity suggests that spacetime is not fundamental. Rather, spacetime emerges from an underlying, non-spatiotemporal reality. After clarifying the type of emergence at issue, I argue that standard conceptions of emergence available in metaphysics won’t work for the emergence of spacetime. I go on to consider spacetime functionalism as a way to make sense of spacetime emergence. I argue that a functionalist approach to spacetime modelled on mental state functionalism is not a viable alternative to the standard conception of emergence in metaphysics. I go on to consider an alternative: ‘partial’ functionalism, whereby certain aspects of spacetime are functionalised, rather than spacetime as a whole.
Panpsychism, an increasingly popular competitor to physicalism as a theory of mind, faces a famous difficulty, the ‘combination problem’. This is the difficulty of understanding the composition of a conscious mind by parts which are themselves taken to be phenomenally qualitied. I examine the combination problem, and I attempt to solve it. There are a few distinct difficulties under the banner of ‘the combination problem’, and not all of them need worry panpsychists. After homing in on the genuine worries, I identify some disputable assumptions that underlie them. Doing away with these assumptions allows us to make a start on a working conception of phenomenal combination.
Naive speakers find some logical contradictions acceptable, specifically borderline contradictions involving vague predicates such as ‘Joe is and isn’t tall’. In a recent paper, Cobreros et al. (J Philos Logic, 2012) suggest a pragmatic account of the acceptability of borderline contradictions. We show, however, that the pragmatic account predicts the wrong truth conditions for some examples with disjunction. As a remedy, we propose a semantic analysis instead. The analysis is close to a variant of fuzzy logic, but conjunction and disjunction are interpreted as intensional operators.
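To see why a degree-theoretic semantics can make room for acceptable borderline contradictions, consider a minimal fuzzy-logic sketch. This uses the standard min/max connectives, not the intensional operators the paper proposes, and the degree assignments are illustrative assumptions.

```python
# Standard fuzzy connectives: truth values are degrees in [0, 1].
def NOT(v):
    return 1 - v

def AND(a, b):
    return min(a, b)

def OR(a, b):
    return max(a, b)

# Joe is a borderline case of 'tall': degree 0.5 (an assumed value).
tall = 0.5
print(AND(tall, NOT(tall)))  # 0.5 — "is and isn't tall" comes out half-true

# For a clear case the same contradiction is nearly false.
clearly_tall = 0.95
print(AND(clearly_tall, NOT(clearly_tall)))  # ≈ 0.05
```

The borderline contradiction receives a middling value rather than outright falsity, matching speakers' tolerance; the paper's departure from plain fuzzy logic lies in how conjunction and disjunction are then reinterpreted intensionally.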
I give a new argument for the moral difference between lying and misleading. First, following David Lewis, I hold that conventions of truthfulness and trust fix the meanings of our language. These conventions generate fair play obligations. Thus, to fail to conform to the conventions of truthfulness and trust is unfair. Second, I argue that the liar, but not the misleader, fails to conform to truthfulness. So the liar, but not the misleader, does something unfair. This account entails that bald-faced lies are wrong, that we can lie nonlinguistically, and that linguistic innovation is morally significant.
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
Koonin argues that CRISPR-Cas systems present the best-known case in point for Lamarckian evolution because they satisfy his proposed criteria for the specific inheritance of acquired adaptive characteristics. We see two interrelated issues with Koonin’s characterization of CRISPR-Cas systems as Lamarckian. First, at times he appears to confuse an account of the CRISPR-Cas system with an account of the mechanism it employs. We argue there is no evidence for the CRISPR-Cas system being “Lamarckian” in any sense. Second, it is unclear whether the mechanism is more “Lamarckian” than many other forms of genetic change already well-characterized in Darwinian terms. We present three conceptually distinct senses in which the mechanism of inheritance of acquired characteristics (IAC) may be considered Lamarckian and argue that only the strongest sense of goal-directed IAC would be difficult to accommodate in a Darwinian account. As the CRISPR-Cas mechanism does not qualify as “Lamarckian” in this strong sense, we argue there is no conceptual value in calling it “Lamarckian”. Finally, we suggest that CRISPR-Cas systems do hold the potential for genuinely non-Darwinian, directed evolution in a way that Koonin did not discuss, involving their potential use as a human gene-editing tool.
This article explores the interplay between the globalization process and the nation/nation-state by examining the case of contemporary Taiwan. Globalization is analyzed along four dimensions: flows of people, flows of culture, economic globalization and international/transnational institutions. Along each dimension, it is found that globalization has had a profound impact upon how cultural and political elites imagine their nation, leading to rising aspirations for nationhood and nation-stateness. Meanwhile, nation-building efforts have deepened Taiwan's embeddedness in globalization, where globalization itself is being employed, both by the state and non-state elites, as a strategy to construct the nation. Three implications suggest that the relationship between ‘the global’ and ‘the national’ be reconceptualized. First, nations and nationalism can be better comprehended against a global/international backdrop, as national identity to a large extent depends upon the imagined or real approval of other nations. Second, there emerges a new strategic alliance between the global and the national, in the sense that globalization gives new ground upon which the nation can be formulated. And finally, by reinforcing certain institutional prerogatives of nations and nation-states, globalization may also lead to an increased desire for nationhood and nation-stateness in cases where the latter two have not been fully realized.
The popularity of ‘food sovereignty’ to cover a range of positions, interventions, and struggles within the food system is testament, above all, to the term’s adaptability. Food sovereignty is centrally, though not exclusively, about groups of people making their own decisions about the food system—it is a way of talking about a theoretically-informed food systems practice. Since people are different, we should expect decisions about food sovereignty to be different in different contexts, albeit consonant with a core set of principles. In this paper we look at the analytical points of friction in applying ideas of food sovereignty within the context of Indigenous struggles in North America. This, we argue, helps to clarify one of the central themes in food sovereignty: that it is a continuation of anti-colonial struggles, even in post-colonial contexts. Such an examination has dividends both for scholars of food sovereignty and for those of Indigenous politics: by helping to problematize notions of food sovereignty and postcoloniality, but also by posing pointed questions around gender for Indigenous struggles.
Identifying patterns in the world requires noticing not only unusual occurrences, but also unusual absences. We examined how people learn from absences, manipulating the extent to which an absence is expected. People can make two types of inferences from the absence of an event: either the event is possible but has not yet occurred, or the event never occurs. A rational analysis using Bayesian inference predicts that inferences from absent data should depend on how much the absence is expected to occur, with less probable absences being more salient. We tested this prediction in two experiments in which we elicited people's judgments about patterns in the data as a function of absence salience. We found that people were able to decide that absences either were mere coincidences or were indicative of a significant pattern in the data in a manner that was consistent with predictions of a simple Bayesian model.
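The two inferences the abstract distinguishes — "the event is possible but has not yet occurred" versus "the event never occurs" — can be compared with a small Bayesian calculation. This is a hedged sketch of the general idea, not the authors' model: the per-trial event rate and the prior are assumed values for illustration.

```python
# Bayesian comparison of two hypotheses given n consecutive absences:
#   H_never:    the event never occurs (absence has likelihood 1 per trial)
#   H_possible: the event occurs with probability p_event on each trial
def posterior_never(n_absences, p_event=0.3, prior_never=0.5):
    """Posterior probability of H_never after observing n trials
    with no occurrence of the event (Bayes' rule over two hypotheses)."""
    like_never = 1.0                              # absence certain under H_never
    like_possible = (1 - p_event) ** n_absences   # run of absences under H_possible
    num = like_never * prior_never
    return num / (num + like_possible * (1 - prior_never))

for n in (0, 5, 20):
    print(n, round(posterior_never(n), 3))
# 0 absences leave the prior untouched (0.5); as absences accumulate,
# belief that the event never occurs climbs toward 1.
```

The higher the assumed per-trial rate `p_event`, the more surprising each absence is under H_possible, which captures the abstract's claim that less probable absences are more salient evidence of a real pattern.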
Think of a number, any number, or properties like fragility and humanity. These and other abstract entities are radically different from concrete entities like electrons and elbows. While concrete entities are located in space and time, have causes and effects, and are known through empirical means, abstract entities like meanings and possibilities are remarkably different. They seem to be immutable and imperceptible and to exist "outside" of space and time. This book provides a comprehensive critical assessment of the problems raised by abstract entities and the debates about existence, truth, and knowledge that surround them. It sets out the key issues that inform the metaphysical disagreement between platonists who accept abstract entities and nominalists who deny abstract entities exist. Beginning with the essentials of the platonist–nominalist debate, it explores the key arguments and issues informing the contemporary debate over abstract reality: arguments for platonism and their connections to semantics, science, and metaphysical explanation; the abstract–concrete distinction and views about the nature of abstract reality; epistemological puzzles surrounding our knowledge of mathematical entities and other abstract entities; arguments for nominalism premised upon concerns about paradox, parsimony, infinite regresses, underdetermination, and causal isolation; and nominalist options that seek to dispense with abstract entities. Including chapter summaries, annotated further reading, and a glossary, _Entities_ is essential reading for anyone seeking a clear and authoritative introduction to the problems raised by abstract entities.
Continuous sedation until death (CSD), the act of reducing or removing the consciousness of an incurably ill patient until death, often provokes medical-ethical discussions in the opinion sections of medical and nursing journals. A content analysis of opinion pieces in medical and nursing literature was conducted to examine how clinicians define and describe CSD, and how they justify this practice morally. Most publications were written by physicians and published in palliative or general medicine journals. Terminal Sedation and Palliative Sedation are the most frequently used terms to describe CSD. Seventeen definitions with varying content were identified. CSD was found to be morally justified in 73% of the publications using justifications such as Last Resort, Doctrine of Double Effect, Sanctity of Life, Autonomy, and Proportionality. The debate over CSD in the opinion sections of medical and nursing journals lacks uniform terms and definitions, and is profoundly marked by ‘charged language’, aiming at realizing agreement in attitude towards CSD. Not all of the moral justifications found are equally straightforward. To enable a more effective debate, the terms, definitions and justifications for CSD need to be further clarified.
Kant proclaimed that all theodicies must fail in ‘On the Miscarriage of All Philosophical Trials in Theodicy’, but it is mysterious why he did so since he had developed a theodicy of his own during the critical period. In this paper, I offer an explanation of why Kant thought theodicies necessarily fail. In his theodicy, as well as in some of his works in ethics, Kant explained moral evil as resulting from unavoidable limitations in human beings. God could not create finite beings without such limitations and so could not have created humans that were not prone to committing immoral acts. However, the work of Carl Christian Eberhard Schmid showed Kant that given his own beliefs about freedom and the nature of responsibility one could not account for moral evil in this way without tacitly denying that human beings were responsible for their actions. This result is significant not only because it explains an otherwise puzzling shift in Kant's philosophy of religion, but also because it shows that the theodicy essay provides powerful evidence that Kant's thinking about moral evil and freedom underwent fundamental shifts between early works such as the Groundwork and later works like the Religion within the Limits of Mere Reason.
Ockham's razor asks that we not multiply entities beyond necessity. The razor is a powerful methodological tool, enabling us to articulate reasons for preferring one theory to another. There are those, however, who would modify the razor. Schaffer, for one, tells us that, ‘I think the proper rendering of Ockham's razor should be “Do not multiply fundamental entities without necessity”’. Our aim, here, is to challenge such re-workings of Ockham's razor.
Mathematics appears to play a genuine explanatory role in science. But how do mathematical explanations work? Recently, a counterfactual approach to mathematical explanation has been suggested. I argue that such a view fails to differentiate the explanatory uses of mathematics within science from the non-explanatory uses. I go on to offer a solution to this problem by combining elements of the counterfactual theory of explanation with elements of a unification theory of explanation. The result is a theory according to which a counterfactual is explanatory when it is an instance of a generalized counterfactual scheme.
The distinction between qualitative properties like mass and shape and non-qualitative properties like being Napoleon and being next to Obama is important, but remains largely unexamined. After discussing its theoretical significance and cataloguing various kinds of non-qualitative properties, I survey several views about the nature of this distinction and argue that all proposed reductive analyses of this distinction are unsatisfactory. I then defend primitivism, according to which the distinction resists reductive analysis.
A recent empirical study claims to show that the answer to Molyneux’s question is negative, but, as John Schwenkler points out, its findings are inconclusive: Subjects tested in this study probably lacked the visual acuity required for a fair assessment of the question. Schwenkler is undeterred. He argues that the study could be improved by lowering the visual demands placed on subjects, a suggestion later endorsed and developed by Kevin Connolly. I suggest that Connolly and Schwenkler both underestimate the difficulties involved in rectifying the study they seek to fix. The problem is that the experimental paradigm under consideration fails to account for the role that rational inference plays in newly sighted subjects’ ability or inability to recognize spatial properties across modalities. Since answering Molyneux’s question requires establishing whether spatial properties can be recognized, across modalities, by newly sighted subjects without recourse to rational inference, this is a problem. Indeed, it is a problem that may be worsened by Schwenkler and Connolly’s suggestions regarding the lowering of visual demands on subjects in cross-modal matching tasks.
This paper motivates and develops a new theory of time: priority presentism. Priority presentism is the view according to which (i) only present entities exist fundamentally and (ii) past and future entities exist, but they are grounded in the present. The articulation of priority presentism is an exercise in applied grounding: it draws on concepts from the recent literature on ontological dependence and applies those concepts in a new way, to the philosophy of time. The result, as I will argue, is an attractive position that can do much of the same work in satisfying our intuitions about time as presentism, but without the ontological cost.
The present paper advances an analogy between cases of extra-mathematical explanation and cases of what might be termed ‘extra-logical explanation’: the explanation of a physical fact by a logical fact. A particular case of extra-logical explanation is identified that arises in the philosophical literature on time travel. This instance of extra-logical explanation is subsequently shown to be of a piece with cases of extra-mathematical explanation. Using this analogy, we argue extra-mathematical explanation is part of a broader class of non-causal explanation. This has important implications for extra-mathematical explanation, for time travel and for theories of explanation more generally.
The indispensability argument seeks to establish the existence of mathematical objects. The success of the indispensability argument turns on finding cases of genuine extra-mathematical explanation. In this paper, I identify a new case of extra-mathematical explanation, involving the search patterns of fully aquatic marine predators. I go on to use this case to predict the prevalence of extra-mathematical explanation in science.