The increasing application of network models to interpret biological systems raises a number of important methodological and epistemological questions. What novel insights can network analysis provide in biology? Are network approaches an extension of or in conflict with mechanistic research strategies? When and how can network and mechanistic approaches interact in productive ways? In this paper we address these questions by focusing on how biological networks are represented and analyzed in a diverse class of case studies. Our examples span from the investigation of organizational properties of biological networks using tools from graph theory to the application of dynamical systems theory to understand the behavior of complex biological systems. We show how network approaches support and extend traditional mechanistic strategies but also offer novel strategies for dealing with biological complexity.
While mechanistic explanation and, to a lesser extent, nomological explanation are well-explored topics in the philosophy of biology, topological explanation is not. Nor is the role of diagrams in topological explanations. These explanations do not appeal to the operation of mechanisms or laws, and extant accounts of the role of diagrams in biological science explain neither why scientists might prefer diagrammatic representations of topological information to sentential equivalents nor how such representations might facilitate important processes of explanatory reasoning unavailable to scientists who restrict themselves to sentential representations. Accordingly, relying upon a case study about immune system vulnerability to attacks on CD4+ T-cells, I argue that diagrams group together information in a way that avoids repetition in representing topological structure, facilitate identification of specific topological properties of those structures, and make available to controlled processing explanatorily salient counterfactual information about topological structures, all in ways that sentential counterparts of diagrams do not.
Using as case studies two early diagrams that represent mechanisms of the cell division cycle, we aim to extend prior philosophical analyses of the roles of diagrams in scientific reasoning, and specifically their role in biological reasoning. The diagrams we discuss are, in practice, integral and indispensable elements of reasoning from experimental data about the cell division cycle to mathematical models of the cycle’s molecular mechanisms. In accordance with prior analyses, the diagrams provide functional explanations of the cell cycle and facilitate the construction of mathematical models of the cell cycle. But, extending beyond those analyses, we show how diagrams facilitate the construction of mathematical models, and we argue that the diagrams permit nomological explanations of the cell cycle. We further argue that what makes diagrams integral and indispensable for explanation and model construction is their nature as locality aids: they group together information that is to be used together in a way that sentential representations do not.
There is a new argument form within theoretical biology. This form takes as input competing explanatory models; it yields as output the conclusion that one of these models is more plausible than the others. The driving force for this argument form is an analysis showing that one model exhibits more parametric robustness than its competitors. This article examines these inferences to the more robust explanation, analysing them as variants of inference to the best explanation. The article defines parametric robustness and distinguishes it from more familiar kinds of robustness. The article also argues that parametric robustness is an explanatory virtue not subsumed by more familiar explanatory virtues, and that the plausibility verdicts in the conclusions of inferences to the more robust explanations are best interpreted as guidance for research activity, rather than claims about likely truth. 1 Introducing Inference to the More Robust Explanation 2 Inference to the More Robust Explanation in the Study of Apoptosis 2.1 Regulating apoptosis 2.2 Competing models and evidential indecision 2.3 Measuring robustness 2.4 Robustness as a guide to plausibility 2.5 Varieties of robustness 3 Inference to the More Robust Explanation as Inference to the Best Explanation 3.1 The structure of inference to the best explanation 3.2 Parametric robustness as an explanatory virtue rather than an explanandum 3.3 Relation of parametric robustness to other explanatory virtues 4 Epistemological Significance of Inference to the More Robust Explanation 4.1 Plausibility in practice 4.2 Plausibility in principle 5 Conclusion.
Life scientists increasingly rely upon abstraction-based modeling and reasoning strategies for understanding biological phenomena. We introduce the notion of constraint-based reasoning as a fruitful tool for conceptualizing some of these developments. One important role of mathematical abstractions is to impose formal constraints on a search space for possible hypotheses and thereby guide the search for plausible causal models. Formal constraints are, however, not only tools for biological explanations but can be explanatory by virtue of clarifying general dependency-relations and patterning between functions and structures. We describe such situations as constraint-based explanations and argue that these differ from mechanistic strategies in important respects. While mechanistic explanations emphasize change-relating causal features, constraint-based explanations emphasize formal dependencies and generic organizational features that are relatively independent of lower-level changes in causal details. Our distinction between mechanistic and constraint-based explanations is pragmatically motivated by the wish to understand scientific practice. We contend that delineating the affordances and assumptions of different explanatory questions and strategies helps to clarify tensions between diverging scientific practices and the innovative potentials in their combination. Moreover, we show how constraint-based explanation integrates several features shared by otherwise different philosophical accounts of abstract explanatory strategies in biology.
Names name, but there are no individuals who are named by names. This is the key to an elegant and ideologically parsimonious strategy for analyzing the Buddhist catuṣkoṭi. The strategy is ideologically parsimonious, because it appeals to no analytic resources beyond those of standard predicate logic. The strategy is elegant, because it is, in effect, an application of Bertrand Russell's theory of definite descriptions to Buddhist contexts. The strategy imposes some minor adjustments upon Russell's theory. Attention to familiar catuṣkoṭi from Vacchagotta and Nagarjuna as well as more obscure catuṣkoṭi from Khema, Zhi Yi, and Fa Zang motivates the adjustments. The result is a principled structural distinction between affirmative and negative catuṣkoṭi, as well as analyses for each that compare favorably to more recent efforts from Tillemans, Westerhoff, and Priest.
This paper explicates the counting ten coins metaphor as it appears in Fazang’s Treatise on the Five Teachings of Huayan. The goal is to transform Fazang’s inexact and obscure mentions of the metaphor into something that is clearer and more precise. The method for achieving this goal is threefold: first, presenting Fazang’s version of the metaphor as improving upon prior efforts by Zhiyan and Ŭisang to interpret a brief stanza in the Avataṁsaka sutra; second, providing textual evidence to support this interpretation; third, contrasting this interpretation with alternatives from Francis Cook as well as Yasuo Deguchi and Katsuhiko Sano.
According to certain dispositional accounts of meaning, an agent's meaning is determined by the dispositions that an idealized version of this agent has in optimal conditions. We argue that such attempts cannot properly fix meaning. For even if there is a way to determine which features of an agent should be idealized without appealing to what the agent means, there is no non-circular way to determine how those features should be idealized. We sketch an alternative dispositional account that avoids this problem, according to which an agent's meaning is determined by the dispositions that an abstract version of this agent has in optimal conditions.
Idealizing conditions are scapegoats for scientific hypotheses, too often blamed for falsehood better attributed to less obvious sources. But while the tendency to blame idealizations is common among both philosophers of science and scientists themselves, the blame is misplaced. Attention to the nature of idealizing conditions, the content of idealized hypotheses, and scientists’ attitudes toward those hypotheses shows that idealizing conditions are blameless when hypotheses misrepresent. These conditions help to determine the content of idealized hypotheses, and they do so in a way that prevents those hypotheses from being false by virtue of their constituent idealizations.
Theology involves inquiry into God's nature, God's purposes, and whether certain experiences or pronouncements come from God. These inquiries are metaphysical, part of theology's concern with the veridicality of signs and realities that are independent from humans. Several research programs concerned with the relation between theology and science aim to secure theology's intellectual standing as a metaphysical discipline by showing that it satisfies criteria that make modern science reputable, on the grounds that modern science embodies contemporary canons of respectability for metaphysical disciplines. But, no matter the ways in which theology qua metaphysics is shown to resemble modern science, these research programs seem destined for failure. For, given the currently dominant approaches to understanding modern scientific epistemology, theological reasoning is crucially dissimilar to modern scientific reasoning in that it treats the existence of God as a certainty immune to refutation. Barring the development of an epistemology of modern science that is amenable to theology, theology as metaphysics is intellectually disreputable.
The theistic argument from beauty has what we call an 'evil twin', the argument from ugliness. The argument yields either what we call 'atheist win', or, when faced with aesthetic theodicies, 'agnostic tie' with the argument from beauty.
Mengzi 孟子 6A2 contains the famous water analogy for the innate goodness of human nature. Some evaluate Mengzi’s reasoning as strong and sophisticated; others, as weak or sophistical. I urge more nuance in our evaluation. Mengzi’s reasoning fares poorly when judged by contemporary standards of analogical strength. However, if we evaluate the analogy as an instance of correlative thinking within a yin-yang 陰陽 cosmology, his reasoning fares well. That cosmology provides good reason to assert that water tends to flow downward, not because of available empirical evidence, but because water correlates to yin and yin correlates to naturally downward motion. Substantiating these contentions also gives occasion to better understand the nature of correlative reasoning in classical Chinese philosophy.
I consider three explanatory strategies from recent systems biology that are driven by mathematics as much as mechanistic detail. Analysis of differential equations drives the first strategy; topological analysis of network motifs drives the second; mathematical theorems from control engineering drive the third. I also distinguish three abstraction types: aggregations, which simplify by condensing information; generalizations, which simplify by generalizing information; and structurations, which simplify by contextualizing information. Using a common explanandum as reference point—namely, the robust perfect adaptation of chemotaxis in Escherichia coli—I argue that each strategy invokes a different combination of abstraction types and that each targets its abstractions to different mechanistic details.
This is an attempt to explain, in a way familiar to contemporary ways of thinking about mereology, why someone might accept some prima facie puzzling remarks by Fazang, such as his claims that the eye of a lion is its ear and that a rafter of a building is identical to the building itself. These claims are corollaries of the Huayan Buddhist thesis that everything is part of everything else, and it is intended here to show that there is a rational basis for this thesis that involves a nonstandard notion of parthood and, importantly, that does not violate the principle of noncontradiction.
Comparing Buddhist and contemporary analytic views about mereological composition reveals significant dissimilarities about the purposes that constrain successful answers to mereological questions, the kinds of considerations taken to be probative in justifying those answers, and the value of mereological inquiry. I develop these dissimilarities by examining three questions relevant to those who deny the existence of composite wholes. The first is a question of justification: What justifies denying the existence of composite wholes as more reasonable than affirming their existence? The second is a question of ontology: Under what conditions are many partless individuals arranged composite-wise? The third is a question of reasonableness: Why, if there are no composites available to experience, do “the folk” find it reasonable to believe there are? I motivate each question, sketch some analytic answers for each, develop in more detail answers from the Theravādin Buddhist scholar Buddhaghosa, and extract comparative lessons.
This paper examines the metaphysics of interdependence in the work of the Chinese Buddhist Fazang. The dominant approach of this metaphysics interprets it as a species of metaphysical coherentism wherein everything depends upon everything else, no individual is more fundamental than any other, and so reality itself is non-well-founded in the sense that chains of dependence never terminate. I argue, to the contrary, that Fazang's metaphysics is better interpreted as a novel variety of foundationalism. I argue, as well, using set- and graph-theoretic techniques, that there is a consistent way to model this alternative interpretation, and that this model differs in significant ways from a coherentist model.
One recent priority of the U.S. government is developing autonomous robotic systems. The U.S. Army has funded research to design a metric of evil to support military commanders with ethical decision-making and, in the future, allow robotic military systems to make autonomous ethical judgments. We use this particular project as a case study for efforts that seek to frame morality in quantitative terms. We report preliminary results from this research, describing the assumptions and limitations of a program that assesses the relative evil of two courses of action. We compare this program to other attempts to simulate ethical decision-making, assess possibilities for overcoming the trade-off between input simplification and output reliability, and discuss the responsibilities of users and designers in implementing such programs. We conclude by discussing the implications that this project highlights for the successes and challenges of developing automated mechanisms for ethical decision making.
This chapter briefly reviews the role of race (as a concept) in the history of theorizing the posthuman, engages with existing discussions of race as technology, and explores the significance of understanding race as technology for the field of posthumanism. Our aim is to engage existing literature that posits racialized individuals as posthumans and to consider how studying race might inform theories of the posthuman.
In his _Treatise on the Golden Lion_, Fazang says that wholes are _in_ each of their parts and that each part of a whole _is_ every other part of the whole. In this paper, I offer an interpretation of these remarks according to which they are not obviously false, and I use this interpretation in order to rigorously reconstruct Fazang's arguments for his claims. On the interpretation I favor, Fazang means that the presence of a whole's part suffices for the presence of the whole and that the presence of any such part is both necessary and sufficient for the presence of any other part. I also argue that this interpretation is more plausible than its extant competitors.
I propose an account of generous action in the Pāli Buddhist tradition, whereby generous actions are instances of giving in which the donor has esteem for the recipient of their giving. The account differs from recent Anglophone accounts of generous action. These tend to construe generous actions as instances of a donor freely offering a gift to the recipient for the sake of benefiting the recipient. Unlike the Buddhist account I propose, these accounts do not require donors to esteem their recipient. Accordingly, I also offer a partial account of esteem, whereby one esteems another only if they refrain from noticing the other’s faults and they encounter the other as someone who is superior in virtue and goodness. Taken together, the Buddhist accounts of generous action and esteem offer insight into certain ways in which different philosophical traditions tend to characterize generous action.
This paper elaborates upon various responses to the Problem of the One over the Many, in the service of two central goals. The first is to situate Huayan's mereology within the context of Buddhism's historical development, showing its continuity with a broader tradition of philosophizing about part-whole relations. The second goal is to highlight the way in which Huayan's mereology combines the virtues of the Nyāya-Vaisheshika and Indian Buddhist solutions to the Problem of the One over the Many while avoiding their vices.
When comparing alternative courses of action, modern military decision makers often must consider both the military effectiveness and the ethical consequences of the available alternatives. The basis, design, calibration, and performance of a principles-based computational model of ethical considerations in military decision making are reported in this article. The relative ethical violation (REV) model comparatively evaluates alternative military actions based upon the degree to which they violate contextually relevant ethical principles. It is based on a set of specific ethical principles deemed by philosophers and ethicists to be relevant to military courses of action. A survey of expert and non-expert human decision makers regarding the relative ethical violation of alternative actions for a set of specially designed calibration scenarios was conducted to collect data that was used to calibrate the REV model. Perhaps unsurprisingly, the survey showed that people, even experts, disagreed greatly amongst themselves regarding the scenarios’ ethical considerations. Despite this disagreement, two significant results emerged. First, after calibration the REV model performed very well in terms of replicating the ethical assessments of human experts for the calibration scenarios. The REV model outperformed an earlier model that was based on tangible consequences rather than ethical principles; that earlier model performed comparably to human experts; the experts outperformed human non-experts; and the non-experts outperformed random selection of actions. All of these performance comparisons were measured quantitatively and confirmed with suitable statistical tests. Second, although humans tended to value some principles over others, none of the ethical principles involved—even the principle of not harming civilians—completely overshadowed all of the other principles.
In "Bayesian Confirmation of Theories that Incorporate Idealizations", Michael Shaffer argues that, in order to show how idealized hypotheses can be confirmed, Bayesians must develop a coherent proposal for how to assign prior probabilities to counterfactual conditionals. This paper develops a Bayesian reply to Shaffer's challenge that avoids the issue of how to assign prior probabilities to counterfactuals by treating idealized hypotheses as abstract descriptions. The reply allows Bayesians to assign non-zero degrees of confirmation to idealized hypotheses and to capture the intuition that less idealized hypotheses tend to be better confirmed than their more idealized counterparts.
Cyborg and prosthetic technologies frame prominent posthumanist approaches to understanding the nature of race. But these frameworks struggle to accommodate the phenomena of racial passing and racial travel, and their posthumanist orientation blurs useful distinctions between racialized humans and their social contexts. We advocate, instead, a humanist approach to race, understanding racial hierarchy as an industrial technology. Our approach accommodates racial passing and travel. It integrates a wide array of research across disciplines. It also helpfully distinguishes among grounds of racialization and conditions facilitating impacts of such racialization.
According to conciliatory views about the epistemology of disagreement, when epistemic peers have conflicting doxastic attitudes toward a proposition and fully disclose to one another the reasons for their attitudes toward that proposition (and neither has independent reason to believe the other to be mistaken), each peer should always change his attitude toward that proposition to one that is closer to the attitudes of those peers with which there is disagreement. According to pure higher-order evidence views, higher-order evidence for a proposition always suffices to determine the proper rational response to disagreement about that proposition within a group of epistemic peers. Using an analogue of Arrow's Impossibility Theorem, I shall argue that no conciliatory and pure higher-order evidence view about the epistemology of disagreement can provide a true and general answer to the question of what disagreeing epistemic peers should do after fully disclosing to each other the (first-order) reasons for their conflicting doxastic attitudes.
This document is a synopsis of discussions at the workshop prepared by Nicholaos Jones and Kevin Coffey, with remarks added by Chuang Liu, John D. Norton, John Earman, Gordon Belot, Mark Wilson, Bob Batterman and Margie Morrison. The program is included in an appendix.
This paper examines the Huayan teaching of the six characteristics as presented in the Rafter Dialogue from Fazang's Treatise on the Five Teachings. The goal is to make the teaching accessible to those with minimal training in Buddhist philosophy, and especially for those who aim to engage with the extensive question-and-answer section of the Rafter Dialogue. The method for achieving this goal is threefold: first, contextualizing Fazang's account of the characteristics with earlier Buddhist attempts to theorize the relationships between wholes and their parts; second, explicating the meaning Fazang likely attributes to each of the six characteristics; third, situating the characteristics as explicated within Fazang's broader metaphysical framework.
General Relativity and the Standard Model often are touted as the most rigorously and extensively confirmed scientific hypotheses of all time. Nonetheless, these theories appear to have consequences that are inconsistent with evidence about phenomena for which, respectively, quantum effects and gravity matter. This paper suggests an explanation for why the theories are not disconfirmed by such evidence. The key to this explanation is an approach to scientific hypotheses that allows their actual content to differ from their apparent content. This approach does not appeal to ceteris-paribus qualifiers or counterfactuals or similarity relations. And it helps to explain why some highly idealized hypotheses are not treated in the way that a thoroughly refuted theory is treated but instead as hypotheses with limited domains of applicability.
In this reply to Gregory Peterson's essay "Maintaining Respectability," which itself is a response to my "Is Theology Respectable as Metaphysics?," I elaborate upon my claims that theology treats God's existence as an absolute certainty immune to refutation and that modern science constitutes the canons of respectable reasoning for metaphysical disciplines. I conclude with some comments on Peterson's "In Praise of Folly? Theology and the University."
Analysis of Competing Hypotheses (ACH) promises a relatively objective and tractable methodology for ranking the plausibility of competing hypotheses. Unlike Bayesianism, it is computationally modest. Unlike explanationism, it appeals to minimally subjective judgments about relations between hypotheses and evidence. Yet the canonical procedures for ACH allow a certain kind of instability in applications of the methodology, by virtue of supporting competing rankings despite common evidential bases and diagnosticity assessments. This instability should motivate advocates of ACH to focus their efforts toward creating structured methods for individuating items of evidence.
The ontologies of scientific theories include a variety of objects: point-mass particles, rigid rods, frictionless planes, flat and curved spacetimes, perfectly spherical planets, continuous fluids, ideal gases, nonidentical but indistinguishable electrons, atoms, quarks and gluons, strong and weak nuclear forces, ideally rational agents, and so on. But the scientific community currently regards only some of these objects as real. According to Paul Teller, a group sometimes can be justified in regarding competing ontologies as real and the ontologies we are justified in regarding as real are inexact, because the theories that give those ontologies characterize what things are like rather than what they are. In this paper, I argue that Teller's view is incomplete and suggest that one way to remove this incompleteness is to adopt a criterion for when we are justified in regarding a theory's ontology as real that is based upon a theory's comparative degree of confirmation. I argue that this criterion is prima-facie plausible and that Teller's view is false if this criterion is correct.
Can contradictions be meaningful? How can one assert 'P soku not-P' or 'P and yet not-P' without sacrificing intelligibility? Expanding on previous attempts, mainly by Dilworth and Heisig, to demystify the soku connective, a formal system is presented here for the logic of soku. Through a formal distinction between internal and external negation, grammatical features of the soku connective are shown to be logically irrelevant, and the principle of non-contradiction is preserved. Disparities with traditional logic are noted, with a focus on negation rather than 'soku'. The formal examination of the logic of soku is intended to present the logic in a way acceptable to more analytically minded philosophers and thereby enhance East-West and Japanese-Anglo-American interaction and criticism.