In this article, we will present a number of technical results concerning Classical Logic, ST and related systems. Our main contribution consists in offering a novel identity criterion for logics in general and, therefore, for Classical Logic. In particular, we will first generalize the ST phenomenon, thereby obtaining a recursively defined hierarchy of strict-tolerant systems. Second, we will prove that the logics in this hierarchy are progressively more classical, although not entirely classical. We will claim that a logic is to be identified with an infinite sequence of consequence relations holding between increasingly complex relata: formulae, inferences, metainferences, and so on. As a result, the present proposal allows us to differentiate Classical Logic not only from ST, but also from other systems that share its valid metainferences. Finally, we show how these results have interesting consequences for some topics in the philosophical logic literature, among them the debate around Logical Pluralism. The reason is that the discussion of this topic is usually carried out employing a rivalry criterion for logics that will need to be modified in light of the present investigation, according to which two logics can be non-identical even if they share the same valid inferences.
A venerable tradition in the metaphysics of science commends ontological reduction: the practice of analyzing theoretical entities into further and further proper parts, with the understanding that the original entity is nothing but the sum of these. This tradition implicitly subscribes to the principle that all the real action of the universe (also referred to as its "causation") happens at the smallest scales, at the scale of microphysics. A vast majority of metaphysicians and philosophers of science, covering a wide swath of the spectrum from reductionists to emergentists, defend this principle. It provides one pillar of the most prominent theory of science, to the effect that the sciences are organized in a hierarchy, according to the scales of measurement occupied by the phenomena they study. On this view, the fundamentality of a science is reckoned inversely to its position on that scale. This venerable tradition has been justly and vigorously countered, most notably in physics: it is countered in quantum theory, in theories of radiation and superconduction, and most spectacularly in renormalization theories of the structure of matter. But these counters, and the profound revisions they prompt, lie just below the philosophical radar. This book illuminates these counters to the traditional principle, in order to assemble them in support of a vaster (and at its core Aristotelian) philosophical vision of sciences that are not organized within a hierarchy. In so doing, the book articulates the principle that the universe is active at absolutely all scales of measurement. This vision, as the book shows, is warranted by philosophical treatment of cardinal issues in the philosophy of science: fundamentality, causation, scientific innovation, dependence and independence, and the proprieties of explanation.
The question whether Frege’s theory of indirect reference enforces an infinite hierarchy of senses has been hotly debated in the secondary literature. Perhaps the most influential treatment of the issue is that of Burge (1979), who offers an argument for the hierarchy from rather minimal Fregean assumptions. I argue that this argument, endorsed by many, does not itself enforce an infinite hierarchy of senses. I conclude that whether or not the theory of indirect reference can avail itself of only finitely many senses is a question pending further theoretical development.
Contemporary evolutionary biology comprises a plural landscape of multiple co-existent conceptual frameworks and strenuous voices that disagree on the nature and scope of evolutionary theory. Since the mid-eighties, some of these conceptual frameworks have denounced the ontologies of the Modern Synthesis and of the updated Standard Theory of Evolution as unfinished or even flawed. In this paper, we analyze and compare two of those conceptual frameworks, namely Niles Eldredge’s Hierarchy Theory of Evolution (with its extended ontology of evolutionary entities) and the Extended Evolutionary Synthesis (with its proposal of an extended ontology of evolutionary processes), in an attempt to map some epistemic bridges (e.g. compatible views of causation; niche construction) and some conceptual rifts (e.g. extra-genetic inheritance; different perspectives on macroevolution; contrasting standpoints held in the “externalism–internalism” debate) that exist between them. This paper seeks to encourage theoretical, philosophical and historiographical discussions about pluralism or the possible unification of contemporary evolutionary biology.
The ubiquity of top-down causal explanations within and across the sciences is prima facie evidence for the existence of top-down causation. Much debate has been focused on whether top-down causation is coherent or in conflict with reductionism. Less attention has been given to the question of whether these representations of hierarchical relations pick out a single, common hierarchy. A negative answer to this question undermines a commonplace view that the world is divided into stratified ‘levels’ of organization and suggests that attributions of causal responsibility in different hierarchical representations may not have a meaningful basis for comparison. Representations used in top-down and bottom-up explanations are primarily ‘local’ and tied to distinct domains of science, illustrated here by protein structure and folding. This locality suggests that no single metaphysical account of hierarchy within which causal relations obtain emerges from the epistemology of scientific explanation. Instead, a pluralist perspective is recommended: many different kinds of top-down causation (explanation) can exist alongside many different kinds of bottom-up causation (explanation). Pluralism makes it plausible that different senses of top-down causation can be coherent and not in conflict with reductionism, thereby illustrating a productive interface between philosophical analysis and scientific inquiry.
We classify ultrafilters on ω with respect to sequential contours (see [4], [5]) of different ranks. In this way we obtain an ω1-sequence {Pα}1≤α≤ω1 of disjoint classes. We prove that non-emptiness of Pα for successor α ≥ 2 is equivalent to the existence of a P-point. We investigate relations between the P-hierarchy and ordinal ultrafilters (introduced by J. E. Baumgartner in [1]), and we prove that it is relatively consistent with ZFC that the successor classes (for α ≥ 2) of the P-hierarchy and ordinal ultrafilters intersect but are not the same.
Mechanistic explanation involves the attribution of functions to both mechanisms and their component parts, and function attribution plays a central role in the individuation of mechanisms. Our aim in this paper is to investigate the impact of a perspectival view of function attribution for the broader mechanist project, and specifically for realism about mechanistic hierarchies. We argue that, contrary to the claims of function perspectivalists such as Craver, one cannot endorse both function perspectivalism and mechanistic hierarchy realism: if functions are perspectival, then so are the levels of a mechanistic hierarchy. We illustrate this argument with an example from recent neuroscience, where the mechanism responsible for the phenomenon of ephaptic coupling cross-cuts the more familiar mechanism for synaptic firing. Finally, we consider what kind of structure there is left to be realist about for the function perspectivalist.
This article defines a hierarchy on the hereditarily finite sets which reflects the way sets are built up from the empty set by repeated adjunction, the addition to an already existing set of a single new element drawn from the already existing sets. The structure of the lowest levels of this hierarchy is examined, and some results are obtained about the cardinalities of levels of the hierarchy.
Some reasons to regard the cumulative hierarchy of sets as potential rather than actual are discussed. Motivated by this, a modal set theory is developed which encapsulates this potentialist conception. The resulting theory is equi-interpretable with Zermelo-Fraenkel set theory but sheds new light on the set-theoretic paradoxes and the foundations of set theory.
Few of Stephen Jay Gould’s accomplishments in evolutionary biology have received more attention than his hierarchical theory of evolution, which postulates a causal discontinuity between micro- and macroevolutionary events. But Gould’s hierarchical theory was his second attempt to supply a theoretical framework for macroevolutionary studies—and one he did not inaugurate until the mid-1970s. In this paper, I examine Gould’s first attempt: a proposed fusion of theoretical morphology, multivariate biometry and the experimental study of adaptation in fossils. This early “macroevolutionary synthesis” was predicated on the notion that parallelism and convergence dominate the history of higher taxa, and moreover, that they can be explained in terms of adaptation leading to mechanical improvement. In this paper, I explore the origins and contents of Gould’s first macroevolutionary synthesis, as well as the reasons for its downfall. In addition, I consider how various developments during the mid-1970s led Gould to identify hierarchy and constraint as the leading themes of macroevolutionary studies—and adaptation as a macroevolutionary red herring.
We introduce a new hierarchy of computably enumerable degrees. This hierarchy is based on computable ordinal notations measuring the complexity of approximation of $\Delta_2^0$ functions. The hierarchy unifies and classifies the combinatorics of a number of diverse constructions in computability theory. It does so along the lines of the high degrees and the array noncomputable degrees. The hierarchy also gives a number of natural definability results in the c.e. degrees, including a definable antichain.
In this paper, a way of constructing many-valued paraconsistent logics with weak double negation axioms is proposed. A hierarchy of weak double negation axioms is addressed in this way. The many-valued paraconsistent logics constructed are defined as Gentzen-type sequent calculi. The completeness and cut-elimination theorems for these logics are proved in a uniform way. The logics constructed are also shown to be decidable.
Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.
For the Euclidean space $\mathbb{R}^n$, let $L_n$ denote the modal logic of chequered subsets of $\mathbb{R}^n$. For every $n \geq 1$, we characterize $L_n$ using the more familiar Kripke semantics, thus implying that each $L_n$ is a tabular logic over the well-known modal system Grz of Grzegorczyk. We show that the logics $L_n$ form a decreasing chain converging to the logic $L$ of chequered subsets of $\mathbb{R}^\infty$. As a result, we obtain that $L$ is also a logic over Grz, and that $L$ has the finite model property. We conclude the paper by extending our results to the modal language enriched with the universal modality.
Various processes are often classified as both deterministic and random or chaotic. The main difficulty in analysing the randomness of such processes is the apparent tension between the notions of randomness and determinism: what type of randomness could exist in a deterministic process? Ergodic theory seems to offer a particularly promising theoretical tool for tackling this problem by positing a hierarchy, the so-called ergodic hierarchy (EH), which is commonly assumed to provide a hierarchy of increasing degrees of randomness. However, that notion of randomness requires clarification. The mathematical definition of EH does not make explicit appeal to randomness; nor does the usual way of presenting EH involve a specification of the notion of randomness that is supposed to underlie the hierarchy. In this paper we argue that EH is best understood as a hierarchy of random behaviour if randomness is explicated in terms of unpredictability. We then show that, contrary to common wisdom, EH is useful in characterising the behaviour of Hamiltonian dynamical systems.
Many areas of science develop by discovering mechanisms and role functions. Cummins' (1975) analysis of role functions, according to which an item's role function is a capacity of that item that appears in an analytic explanation of the capacity of some containing system, captures one important sense of "function" in the biological sciences and elsewhere. Here I synthesize Cummins' account with recent work on mechanisms and causal/mechanical explanation. The synthesis produces an analysis of specifically mechanistic role functions, one that uses the characteristic active, spatial, temporal, and hierarchical organization of mechanisms to add precision and content to Cummins' original suggestion. This synthesis also shows why the discovery of role functions is a scientific achievement. Discovering a role function (i) contributes to the interlevel integration of multilevel mechanisms, and (ii) provides a unique, contextual variety of causal/mechanical explanation.
Assuming the existence of a measurable cardinal, we define a hierarchy of Ramsey cardinals and a hierarchy of normal filters. We study some combinatorial properties of this hierarchy. We show that this hierarchy is absolute with respect to the Dodd-Jensen core model, extending a result of Mitchell which says that being Ramsey is absolute with respect to the core model.
This article argues that the dynamics behind the generation of social pathologies in modern society also undermine the social-relational framework for recognition. It therefore claims that the theory of recognition is impotent in the face of the kinds of normative power exerted by social hierarchies. The article begins by discussing the particular forms of social pathology and their relation to hierarchical forms of social structure that are based on domination, control and subordination, and then shows how the internalization of the norms that shape and hold together hierarchical social formations causes pathologies within the self. As a result of these processes, the recognitive aspects of social action that the theory of recognition posits are unable to overcome the pathologies; in fact, they reproduce and in many instances reinforce the pathologies themselves.
We consider Borel sets of finite rank $A \subseteq \Lambda^\omega$, where the cardinality of Λ is less than some uncountable regular cardinal K. We obtain a "normal form" of A by finding a Borel set Ω such that A and Ω continuously reduce to each other. In more technical terms: we define simple Borel operations which are homomorphic to ordinal sum, to multiplication by a countable ordinal, and to ordinal exponentiation of base K, under the map which sends every Borel set A of finite rank to its Wadge degree.
Human history has evidenced a great number of systems of hierarchy and power, and various manifestations of power and hierarchy relations in different spheres of social life, from politics to information networks, from culture to sexual life. A careful study of each particular case of such relations is very important, especially within the context of the contemporary multipolar and multicultural world. In the meantime, it is very important to see both the general features, typical of all or most of the hierarchy and power forms, and their variation. This set of issues has been treated by a series of international conferences titled ‘Hierarchy and Power in the History of Civilizations’ held in 2000–2006. Most articles of this volume were originally presented at the 4th conference of this series (Moscow, 2006). Needless to say, all those presentations have been substantially reworked for publication in this volume. Notwithstanding the fact that the relations of hierarchy and power are relevant for all spheres, as they penetrate the whole of social life, establishing a sort of framework for human agency, they are naturally most visible in the political sphere. They existed long before the formation of the earliest states: ethologists maintain that complex systems of hierarchy and power can be found among many highly organized animals. Yet it was with the formation of the state and civilization that power and hierarchy relations acquired their mature forms. Although ancient and medieval systems of government and domination always attract special attention, contemporary systems are much closer to every scholar. At the same time, relations of power and hierarchy in the modern political world system demonstrate a great number of variants, levels and dimensions. In the present edited volume we focus on only three aspects of this important subject.
These are revolutionary transformations (in the broad sense of this notion), violence, and globalization. Each volume section is devoted to one of those themes.
This research was partially supported by a Grant-in-Aid for Scientific Research, Ministry of Education, Science and Culture of Japan. Mathematics Subject Classification: 03E05. Following Carr's study on diagonal operations and normal filters on ${\cal P}_{\kappa}\lambda$ in [2], several weakenings of normality have been investigated. One of them is to consider normal filters without $\kappa$-completeness; see, for example, DiPrisco-Uzcategui [3]. The other is to weaken normality itself while keeping $\kappa$-completeness, as in Mignone [10] and Shioya [11]. We take the second approach, so that all filters are assumed to be $\kappa$-complete. In Sect. 1 a hierarchy of filters on ${\cal P}_{\kappa}\lambda$ is presented which corresponds to the length of the diagonal intersections under which the filters are closed. It turns out that many ranks exist between $FSF_{\kappa\lambda}$ and $CF_{\kappa\lambda}$. We consider seminormal ideals in Sect. 2 and determine the minimal seminormal ideal, extending Johnson's result in [6]. Its precise description changes according to $cf$, although we can write it in a single form as well. We also prove that a nonnormal seminormal ideal $I \supset NS_{\kappa\lambda}$ exists if and only if $\lambda$ is regular.
The so-called ergodic hierarchy (EH) is a central part of ergodic theory. It is a hierarchy of properties that dynamical systems can possess. Its five levels are ergodicity, weak mixing, strong mixing, Kolmogorov, and Bernoulli. Although EH is a mathematical theory, its concepts have been widely used in the foundations of statistical physics, accounts of randomness, and discussions about the nature of chaos. We introduce EH and discuss its applications in these fields.
The paper places the work of G. Gaus into the tradition of political thought experimenting. In particular, his strategy of modeling moral decision by the heuristic device of idealized Members of the Public is presented as an iterated thought experiment, which stands in marked contrast with more traditional devices like the veil of ignorance. The consequences are drawn, and issues of utopianism and realism briefly discussed.
The purpose of this work is to elaborate an empirically grounded mathematical model of the magnitude-of-consequences component of “moral intensity” (Jones, Academy of Management Review 16(2), 366, 1991) that can be used to evaluate different ethical situations. The model is built using the analytic hierarchy process (AHP) (Saaty, The Analytic Hierarchy Process, 1980) and empirical data from the legal profession. One contribution of our work is that it illustrates how AHP can be applied in the field of ethics. Following a review of the literature, we discuss the development of the model. We then illustrate how the model can be used to rank-order three well-known ethical reasoning cases in terms of the magnitude of consequences. The work concludes with implications for theory, practice, and future research. Specifically, we discuss how this work extends the previous work by Collins (Journal of Business Ethics 8(1), 1989) regarding the nature-of-harm variable. We also discuss the contribution this work makes to the development of ethical scenarios used to test hypotheses in the field of business ethics. Finally, we discuss how the model can be used for after-action review, contribute to organizational learning, train employees in ethical reasoning, and aid in the design and development of decision support systems that support ethical reasoning.
The concept of a generalized quantifier of a given similarity type was defined in [12]. Our main result says that on finite structures different similarity types give rise to different classes of generalized quantifiers. More exactly, for every similarity type t there is a generalized quantifier of type t which is not definable in the extension of first order logic by all generalized quantifiers of type smaller than t. This was proved for unary similarity types by Per Lindström [17] with a counting argument. We extend his method to arbitrary similarity types.
Game theory has played a critical role in elucidating the evolutionary origins of social behavior. Sober and Wilson model altruism as a prisoner's dilemma and claim that this model indicates that altruism arose from group selection pressures. Sober and Wilson also suggest that the prisoner's dilemma model can be used to characterize punishment; hence, punishment too originated from group selection pressures. However, empirical evidence suggests that a group selection model of the origins of altruistic punishment may be insufficient. I argue that examining dominance hierarchies and coalition formation in chimpanzee societies suggests that the origins of altruistic punishment may be best captured by individual selection models. I suggest that this shows the necessity of coupling game-theoretic models with a conception of what our actual social structure may have been like in order to best model the origins of our own behavior. ‡I would like to thank Zachary Ernst and Emma Marris for their many helpful comments which greatly improved this paper. †To contact the author, please write to: Department of Philosophy, University of Maryland, Skinner Building, College Park, MD 20742; e-mail: yrohwer@umd.edu.
I argue that dialetheists have a problem with the concept of logical consequence. The upshot of this problem is that dialetheists must appeal to a hierarchy of concepts of logical consequence. Since this hierarchy is akin to those invoked by more orthodox resolutions of the semantic paradoxes, its emergence would appear to seriously undermine the dialetheic treatments of these paradoxes. And since these are central to the case for dialetheism, this would represent a significant blow to the position itself.
In this paper, we characterize the strength of the predicative Frege hierarchy, , introduced by John Burgess in his book [J. Burgess, Fixing Frege, in: Princeton Monographs in Philosophy, Princeton University Press, Princeton, 2005]. We show that and are mutually interpretable. It follows that is mutually interpretable with Q. This fact was proved earlier by Mihai Ganea in [M. Ganea, Burgess’ PV is Robinson’s Q, The Journal of Symbolic Logic 72, 619–624] using a different proof. Another consequence of our main result is that is mutually interpretable with Kalmar Arithmetic . The fact that interprets EA was proved earlier by Burgess. We provide a different proof. Each of the theories is finitely axiomatizable. Our main result implies that the whole hierarchy taken together, , is not finitely axiomatizable. What is more: no theory that is mutually locally interpretable with is finitely axiomatizable.
The analytic hierarchy process (AHP) can be used to determine co-author responsibility for a scientific paper describing collaborative research. The objective is to deter scientific fraud by holding co-authors accountable for their individual contributions. A hierarchical model of the research presented in a paper can be created by dividing it into primary and secondary elements. The co-authors then determine the contributions of the primary and secondary elements to the work as a whole, as well as their own individual contributions. They can use the results to determine authorship order.
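The AHP weighting step that both AHP abstracts above rely on can be sketched as follows. This is a minimal illustration, not drawn from either paper: the co-author scenario and the pairwise-comparison matrix are hypothetical, and the priority weights are the normalized principal eigenvector of that matrix, approximated here by power iteration.

```python
def ahp_weights(matrix, iterations=200):
    """Approximate AHP priority weights: the normalized principal
    eigenvector of a pairwise-comparison matrix, via power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n  # start from uniform weights
    for _ in range(iterations):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]  # renormalize so weights sum to 1
    return w

# Hypothetical judgments on Saaty's 1-9 scale: co-author A is judged 3x as
# responsible as B and 5x as responsible as C; B is judged 2x as responsible
# as C. The matrix is reciprocal: matrix[j][i] == 1 / matrix[i][j].
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])  # roughly [0.648, 0.23, 0.122]
```

The resulting weights could then be read as responsibility shares and used to order authors; a full AHP application would also check Saaty's consistency ratio before trusting the judgments.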
The philosophical debate whether the epistemological and conceptual structure of science is better characterized as hierarchical or as holistic cannot be decided a priori. A case study on general relativity should help to clarify our representation of this section of physics. For this purpose, Sneed's model-theoretic approach is used to reconstruct the structure of relativity. The proposed axiomatization of general relativity takes into account approximations and utilizes local models for a realistic view of the functioning of the theory. A central objective of the paper is to give an explication of the approximative empirical claim of general relativity, which is designed to identify the empirical content of the theory in a weak hierarchical form.
We empirically tested Hemelrijk’s agent-based model, in which dyadic agonistic interaction between primate-group subjects determines their spatial distribution and whether or not the dominant subject has a central position with respect to the other subjects. We studied a group of captive red-capped mangabeys that met the optimal conditions for testing this model. We analyzed the spatial distribution of the subjects in relation to their rank in the dominance hierarchy, and the results confirmed the validity of this model. In accordance with Hemelrijk’s model, the group studied showed an ambiguity-reducing strategy that led to non-central spatial positioning on the part of the dominant subject, thus confirming the model indirectly. Nevertheless, for the model to be confirmed directly, the group has to adopt a risk-sensitive strategy so that observers can study whether dominant subjects develop spatial centrality. Our study also demonstrated that agent-based models are a good tool for the study of certain complex behaviors observed in primates, because these explanatory models can help formulate suggestive hypotheses for exploring new lines of research in primatology. Keywords: dominance-hierarchy rank; spatial distribution; Cercocebus torquatus; agent-based models.