Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

The Unity of Science

First published Thu Aug 9, 2007; substantive revision Thu May 16, 2013

The topic of unity in the sciences includes the following questions: Is there one privileged, most basic kind of material, and if not, how are the different kinds of material in the universe related? Can the various natural sciences (physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics) be unified? Does the unification of these parts of science involve only matters of fact or are matters of value involved as well? What about matters of method, material, institutional, ethical and other aspects of intellectual cooperation? Moreover, what kinds of unity in the sciences are there, and is unification merely a relation between concepts or terms (i.e., a matter of semantics), or is it also a relation between the theories, people, objects, or objectives that they are part of? And is the relation one of reduction, translation, explanation, logical inference, collaboration or something else?

These are the kinds of questions that will be addressed in this article. In addressing them, I will examine the often-assumed preference for physics as a privileged locus and consider whether anything follows about the unity of science from the fact that physics is the study of the most fundamental elements, such as matter and energy. We shall also consider biology, and the extent to which biological entities, from organisms to genes and processes, are really just chemical in nature. The question of unity also extends to explanatory concepts in psychology and the social sciences.

Finally, we consider a very different move, namely, to challenge the very hierarchy presupposed by the question of unity and to redraw the boundaries and lines of interaction and integration that best describe actual scientific practice. Global unification projects have been replaced by strategies of interdisciplinary research. How should we evaluate the evidence for disunity and pluralism in science? To what extent should we supplement the attention to logic and language with an interest in practices, images and objects? It is worth pointing out that positions about the unity of science have important consequences, and affect the way we formulate and solve problems in philosophy (e.g., questions of naturalism), science (e.g., design of education and research projects) and policy (e.g., allocation of resources).


1. Historical development in philosophy and science from Greek philosophy to Logical Empiricism in America

1.1 From Greek thought to Western science

Unity has a history as well as a logic. The general questions should be carefully distinguished from the different specific theses addressing them; they form the linking thread of a time-honored philosophical debate. The questions about unity belong to a tradition of thought that can be traced back to pre-Socratic Greek cosmology, in particular to the preoccupation with the question of the one and the many. In what senses are the world, and thereby our knowledge of it, one? A number of representations of the world emerged in terms of a few simple constituents that were considered fundamental: Parmenides' static substance, Heraclitus' flux of becoming, Empedocles' four elements, Democritus' atoms, Pythagoras' numbers, Plato's forms, and Aristotle's categories. The underlying question of the unity of our types of knowledge was explicitly addressed by Plato in the Sophist as follows: “Knowledge also is surely one, but each part of it that commands a certain field is marked off and given a special name proper to itself. Hence language recognizes many arts and many forms of knowledge” (Sophist, 257c). Aristotle asserted in On the Heavens that knowledge concerns what is primary, and different “sciences” know different kinds of causes; it is metaphysics that comes to provide knowledge of the underlying kind.

With the advent and expansion of Christian monotheism, the organization of knowledge reflected the idea of a world governed by the laws dictated by God, its creator and legislator. From this tradition emerged encyclopedic efforts such as the Etymologies, compiled in the early seventh century by the Andalusian Isidore, Bishop of Seville, the works of the Catalan Ramon Llull in the Middle Ages and those of the Frenchman Petrus Ramus in the Renaissance. Llull introduced iconic tree-diagrams and forest-encyclopedias representing the organization of different disciplines including law, medicine, theology and logic. He also introduced more abstract diagrams—not unlike some found in Cabbalistic and esoteric traditions—in an attempt to combinatorially encode the knowledge of God's creation in a universal language of basic symbols; their combination would then generate knowledge of the secrets of creation. Ramus introduced diagrams representing dichotomies and gave prominence to the view that the starting point of all philosophy is the classification of the arts and sciences. The search for a universal language would continue to be a driving force behind the project of unifying knowledge.

The emergence of a distinctive tradition of scientific thought addressed the question of unity through the designation of a privileged method, which involved a set of concepts and language. In the late 16th century Francis Bacon held that one unity of the sciences was the result of our organization of discovered material facts in the form of a pyramid with different levels of generalities; these would be classified in turn according to disciplines linked to human faculties. In accordance with at least three traditions—the Pythagorean tradition, the Bible's dictum in the Book of Wisdom and the Italian commercial tradition of bookkeeping—Galileo proclaimed at the turn of the 17th century that the Book of Nature had been written by God in the language of mathematical symbols and geometrical truths; and that in it the story of Nature's laws was told in terms of a reduced set of objective, quantitative primary qualities: extension, quantity of matter and motion. In the 17th century, mechanical philosophy and Newton's systematization from basic concepts and first laws of mechanics became the most promising framework for the unification of natural philosophy. After the demise of Laplacian molecular physics in the first half of the 19th century, this role was taken over by ether mechanics and energy physics.

1.2 Rationalism and Enlightenment

Descartes and Leibniz gave this tradition a rationalist twist centered on the powers of human reason; it became the project of a universal framework of exact categories and ideas, a mathesis universalis (Garber 1992 and Gaukroger 2002). Like Llull's, their conceptions of unity were determined by rules of analysis of ideas into elements, and of their synthesis into combinations. According to Descartes the science of geometry, with its demonstrative reasoning from the simplest and clearest thoughts, constitutes the paradigm for the goal of unifying all knowledge. Adapting the scholastic image of knowledge, Descartes's tree has metaphysics as the roots, physics as the trunk, and mechanics, medicine and morals as the branches. Leibniz proposed a general science in the form of a demonstrative encyclopedia. This would be based on a “catalogue of simple thoughts” and an algebraic language of symbols, characteristica universalis, which would render all knowledge demonstrative and allow disputes to be resolved by precise calculation. Both defended the program of founding much of physics on metaphysics and on ideas from life science (Smith 2011). (Leibniz's unifying ambitions with symbolic language and physics extended beyond science, to settling religious and political fractures in Europe.) By contrast, while sharing the model of a geometric, axiomatic structure of knowledge, Newton's project of natural philosophy was meant to be autonomous from any system of philosophy; in the new context it was still endorsed for its model of organization, its empirical reasoning, and its values of formal synthesis and ontological simplicity (see the entry on Newton and Janiak 2008).

Belief in the unity of science or knowledge, along with the universality of rationality, was at its strongest during the European Enlightenment. The most important expression of the encyclopedic tradition came in the mid-eighteenth century from Diderot and D'Alembert, editors of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (1751–1772). Following earlier classifications by Nichols and Bacon, their diagram presenting the classification of intellectual disciplines was organized in terms of a classification of human faculties. Diderot stressed in his own entry, “Encyclopaedia”, that the word Encyclopedia signifies the unification of the sciences. The function of the encyclopedia was to exhibit the unity of human knowledge. Diderot and D'Alembert, in contrast with Leibniz, made classification by subject the primary focus, and introduced cross-references instead of logical connections. The Enlightenment tradition in Germany culminated in Kant's critical philosophy.

1.3 German tradition since Kant

Kant saw one of the functions of philosophy as determining the precise unifying scope and value of each science. For Kant, the unity of science is not the reflection of a unity found in nature; rather, it has its foundations in the unifying character or function of concepts and of reason itself. Nature is precisely our experience of the world under the universal laws that include some such concepts. And science, as a system of knowledge, is “a whole of cognition ordered according to principles”, and the principles on which proper science is grounded are a priori (Preface to Metaphysical Foundations of Natural Science). A devoted but not exclusive follower of Newton's achievements and insights, Kant maintained through most of his life that mathematization and a priori universal laws given by the understanding were preconditions for genuine scientific character (like Galileo and Descartes earlier, and Carnap later, Kant believed that mathematical exactness constituted the main condition for the possibility of objectivity). Here Kant emphasized the role of mathematics in coordinating a priori cognition with its determined objects of experience. Thus, he contrasted the methods employed by the chemist, a “systematic art” organized by empirical regularities, with those employed by the mathematician or physicist, which were organized by a priori laws, and held that biology is not reducible to mechanics, since the former involves explanations in terms of final causes (see Critique of Pure Reason, Critique of Judgment and Metaphysical Foundations of Natural Science). By the end of his life, after having become acquainted with Lavoisier's achievements in chemistry, Kant thought of the unification of physics and chemistry not so much in terms of mathematization but, rather, in terms of a priori principles regarding the properties of a universal ether (Friedman 1992).
With regard to biology, insufficiently grounded in the fundamental forces of matter, its inclusion requires the introduction of the idea of purposiveness (McLaughlin 1991). More generally, for Kant unity was a regulative principle of reason, namely, an ideal guiding the process of inquiry toward a complete empirical science with its empirical concepts and principles grounded in the concepts and principles of the understanding that constitute and objectify empirical phenomena (on the systematicity in this ideal and its origin in reason see Kitcher 1986).

Kant's ideas set the frame of reference for discussions of the unification of the sciences in German thought throughout the nineteenth century (Wood and Hahn 2011). He gave philosophical currency to the notion of world-view (Weltanschauung) and, indirectly, world-picture (Weltbild), thereby establishing unity of science as an intellectual ideal among philosophers and scientists. In Great Britain this idealist unifying spirit (along with other notions from an idealist and romantic turn) took form in William Whewell's philosophy of science. Two unifying dimensions stand out: his notion of mind-constructed fundamental ideas, the basis for organizing axioms and phenomena and for classifying sciences—e.g., space (geometry), limit (analysis), symmetry (crystallography), cause (mechanics), life or similarity (biology), etc.—and his argument for the reality of explanatory causes in the form of the consilience of inductions, wherein a single cause is independently arrived at as the hypothesis explaining different kinds of phenomena.

This German intellectual current culminated in philosophers such as Windelband, Rickert and Dilthey. In their views and those of similar thinkers, a world-view often included elements of evaluation and life meaning. Kant had also distinguished between several types of judgments that, in turn, characterized different intellectual disciplines. In this way, he established the basis for the famous distinction between the natural sciences (Naturwissenschaften) and the cultural, or social, sciences (Geisteswissenschaften) introduced by Wilhelm Dilthey. According to Dilthey's Life-philosophy (Lebensphilosophie), science, philosophy, art and religion are on a par with world-views expressing different attitudes in life. Followers of Dilthey's distinction, such as Wilhelm Windelband, Heinrich Rickert and Max Weber (although the first two preferred Kulturwissenschaften, which excluded psychology), claimed that the difference in subject matter between the two kinds of sciences forced a distinctive difference between their respective methods. Their preoccupation with the historical dimension of human phenomena, along with the Kantian emphasis on the conceptual basis of knowledge, led to the suggestion that the natural sciences aimed at generalizations about abstract types and properties, whereas the human sciences studied concrete individuals and complexes. The human case suggested a different approach based on valuation and personal understanding (Weber's verstehen). In biology, Ernst Haeckel defended a monistic worldview (Richards 2008).

This approach stood in opposition to the empiricist tradition prevailing since the time of Hume, Comte and Mill, in which the moral or social sciences relied on conceptual and methodological analogies with the natural sciences: with Newtonian and statistical mechanics, as in Condorcet's “social mathematics”, and with biology, as in Saint-Simon's “social physiology” (see also the entry on Hume's Newtonianism). But then the question arose of how the human sciences—under the rubric “science of man”—were themselves organized, and how social sciences such as sociology and economics were related to, say, the psychology of individuals.

Empiricists assumed methodological individualism, but not without qualifications. Comte followed his Enlightenment predecessors in combining an analytical sense of conceptual order and a historical sense of progress. He emphasized a pyramidal hierarchy of disciplines in his “encyclopedic law” or order, from the most general sciences about the simplest phenomena to the most specific sciences about the most complex phenomena, each depending on knowledge from its more general antecedent: from inorganic physical sciences (arithmetic, geometry, mechanics, astronomy, physics and chemistry) to the organic physical ones, such as biology and the new “social physics”, soon to be renamed sociology (Comte 1830–1842).

Mill's approach to the relation between the sciences was methodological. He followed Comte as well as Newton in his project to formulate the logic of the different sciences, natural and human. In particular he adopted for the human sciences these authors' reductive individualism, law-orientedness and inductivism, and their interest in the history of the natural sciences. He noted a number of different methods of inference, even within the natural sciences: deductive (axiomatic or geometric) and inductive (the chemical method). On the path towards generalizations in the social sciences, Mill vacillated between Ricardo's—and his father's—geometric method and his subsequent choice of the a posteriori and more inductive methodology of “inverse deduction”, or the historical method, based on the primacy of generalizations of individual character (Mill 1843, Book VI). He eventually came to view political economy as an art, a tool for reform more than a system of knowledge, and diverged from deductivism toward Whewell's view favoring a more complex and integrated representation of men and their circumstances (Snyder 2006).

The generation of German social scientists in the second half of the 19th century recapitulated the British debates between a priori and a posteriori methods. Gustav Schmoller, representing the Historical School, defended an empirical causal methodology, sharing the empiricist predecessors' interest in the complexity of human phenomena and in social and political reform (Tribe 2005). Besides criticizing Dilthey and the Weltbild or Weltanschauung tradition, he opposed Carl Menger's deductive, a priori, quasi-Platonic, and ahistorical approach to economic concepts and truths.

The Weltbild tradition influenced the physicists Max Planck and Ernst Mach, who engaged in a heated debate about the precise character of the unified scientific world-picture, a debate that culminated in the first two decades of the twentieth century with the work of Albert Einstein (Holton 1998). Mach's more influential view was both phenomenological and Darwinian: the unification of knowledge took the form of an analysis of ideas into elementary sensations (neutral monism) and was ultimately a matter of adaptive economy of thought. Planck adopted a realist view that took science to approach gradually a complete truth about the world, and fundamentally adopted the thermodynamical principles of energy and entropy (on the Mach-Planck debate see Toulmin 1970). These world-pictures constituted some of the alternatives to a long-standing mechanistic view that, since the rise of mechanical philosophy with Descartes and Newton, had informed biology as well as most branches of physics.

In the same German tradition and amidst the proliferation of books on unity of science, the German energeticist Wilhelm Ostwald declared the 20th century the “Monistic century”. During the 1904 World's Fair in St. Louis, the German psychologist and Harvard professor Hugo Munsterberg organized a congress under the title “Unity of Knowledge”; invited speakers were Ostwald, Ludwig Boltzmann, Ernest Rutherford, Edward Leamington Nichols, Paul Langevin and Henri Poincaré. In 1911 the International Committee of Monism held its first meeting in Hamburg, with Ostwald presiding.[1] Two years later it published Ostwald's monograph, Monism as the Goal of Civilization. In 1912, Mach, Felix Klein, David Hilbert, Einstein, and others signed a manifesto aiming at the development of a comprehensive world-view. Unification remained a driving scientific ideal.

In the 1890s, Gottlob Frege and Hilbert had aimed at setting the mathematical sciences on rigorous foundations. The ideal had the form of an axiomatic system. Frege aimed at founding arithmetic on axioms of logic and Hilbert proposed that geometry be founded upon purely formal axioms. In 1920, Hilbert proposed his general formalist research project for the axiomatic formalization of mathematics, which he also extended to physics. He hoped that Einstein's General Theory of Relativity could be synthesized with the theory of electromagnetism to form a foundation for all of physics. Mathieu Leclerc du Sablon published his L'Unité de la Science (1919), exploring metaphysical foundations, and Johan Hjorst published The Unity of Science (1921), sketching out a history of philosophical systems and unifying scientific hypotheses.

All the while, the progressive specialization of scientific disciplines and specific technical challenges prompted educational and research support for interdisciplinary projects, away from the traditional intellectual ambitions of global integration, ranging from specific new projects to ambitious theories such as systems theory (Klein 1990). The 20th century was thus a century of competing drives: towards the ideal of a total synthesis of knowledge, towards the isolating specialization of disciplines, and towards the development of interdisciplinary projects.

1.4 Unity and reductionism in logical empiricism

The question of unity engaged science and philosophy alike. In the 20th century the unity of science became a distinctive theme of the scientific philosophy of logical empiricism. Logical empiricists—known controversially also as logical positivists—and most notably the founding members of the Vienna Circle in their Manifesto adopted the Machian banner of “unity of science without metaphysics”, a container-model of unity (see section 2, below) based on demarcation between science and metaphysics: the unity of method and language that included all the sciences, natural and social. Notice that a common method does not imply a more substantive unity of content involving theories and their concepts. For instance, around the same time, the emphasis on rules and uses of logical, scientific, and ordinary language was part of the more general so-called “linguistic turn”.

A stronger reductive model within the Vienna Circle was recommended by Rudolf Carnap in his Logical Construction of the World (1928). Bearing the Kantian connotation of the term “constitutive system”, it was inspired by Hilbert's axiomatic approach to formulating theories in the exact sciences and by Frege's and Russell's logical constructions in mathematics: it was predicated on the formal values of simplicity, neutrality and objectivity and was characterized by logical constructions out of basic concepts in axiomatic structures and by rigorous, reductive logical connections between concepts at different levels.

The logical connections were provided by biconditional statements, or constitution sentences (these changed to conditionals, or reduction sentences, when Carnap encountered the problem of dispositional predicates). Different constitutive systems or logical constructions would serve different purposes. In one system of unified science the construction connects concepts and laws of the different sciences at different levels, with physics—with its genuine laws—as fundamental, lying at the base of the hierarchy. Because of the emphasis on the formal and structural properties of our representations, the individuality of concepts, like that of nodes in a railway network, was determined by their place in the whole structure, and hence, presupposed connective unity. Objectivity and unity went hand in hand. The formal emphasis developed further in Logical Syntax of Language (1934).

Alternatively, all scientific concepts could be constituted or constructed in a different system in the protocol language out of classes of elementary, experiential concepts. The basic experiences do not provide a reductive analysis of theoretical concepts; nor are the basic empirical concepts (red, etc.) the outcome of an analysis of experience. They are not atomic in the Machian sense, but derived from the field of experience as a complex whole in the manner proposed by Gestalt psychology. This construction of scientific knowledge took into account the possibility of empirical grounding of theoretical concepts and testability of theoretical claims. Unity of science in this context was an epistemological project.

Carnap was influenced by the phenomenological tradition (through Husserl himself) and the empiricist tradition (especially Russell and Mach), as well as by the ideals of simplicity and reductive logical analysis in the early works of Russell and Wittgenstein. From the formalist point of view of the logicist and neo-Kantian traditions, Carnap's models of unity expressed his concern with the possibility of objectivity in scientific knowledge. The same concern was expressed in the subsequent idea of unity of science in the form of a physicalist language, the intersubjective language that translates the subjective language of experience into an objective and universal language. Carnap's pragmatic pluralism would extend to logic with his Principle of Tolerance—in Logical Syntax of Language (1934)—and subsequently—in “Empiricism, Semantics, and Ontology” (1950)—to the plurality of possible “linguistic frameworks”.

Otto Neurath, by contrast, favored a less idealized and less reductive model of unity predicated on the complexity of empirical reality. He spoke of an “encyclopedia-model”, instead of the classic ideal of the pyramidal, reductive “system-model”. The encyclopedia-model took into account the presence within science of uneliminable and imprecise terms from ordinary language and the social sciences, and it emphasized a unity of language and the local exchanges of scientific tools. Specifically, Neurath stressed the material-thing-language he called “physicalism”, not to be confused with an emphasis on the vocabulary of physics. His view was not constrained by Carnap's ideals of conceptual precision, deductive systematicity and logical rigor. Unified science, like a boat at sea, would not rest on firm foundations. This weaker model of unity emphasized empiricism and the normative unity of the natural and the human sciences.

Like Carnap's unified reconstructions, Neurath's had pragmatic motivations. Neurath's rejection of physics reductionism was based on considerations of descriptive relevance, and explanatory and predictive power involving, for instance, social phenomena. By the same token, his emphasis on the importance of unity—without reduction—was also epistemic and pragmatic. Unity was meant as a tool for cooperation and it was motivated by the need for successful treatment—prediction and control—of complex phenomena in the real world that involved properties studied by different theories or sciences (from real forest fires to social policy): unity of science at the point of action (Cat, Cartwright and Chang 1996). It is an argument from holism, the counterpart of Duhem's claim that only clusters of hypotheses are confronted with experience. Neurath spoke of a “boat”, a “mosaic”, an “orchestration”, and a “universal jargon”. In the wake of institutions such as the International Committee on Monism and the International Council of Scientific Unions, Neurath spearheaded a movement for Unity of Science in 1934 that encouraged international cooperation among scientists and launched the project of an International Encyclopedia of Unity of Science. Neurath wrote repeatedly on the connections between the project of unity of science and the movements for economic socialization, educational reform and the peaceful and cooperative internationalization and unification of mankind.

At the end of the Eighth International Congress of Philosophy held in Prague in September of 1934, Neurath proposed a series of International Congresses for the Unity of Science. These took place in Paris, 1935; Copenhagen, 1936; Paris, 1937; Cambridge, England, 1938; Cambridge, Massachusetts, 1939; and Chicago, 1941. For the organization of the congresses and related activities, Neurath founded the Unity of Science Institute in 1936, renamed in 1937 the International Institute for the Unity of Science, a special department of his Mundaneum Institute at The Hague. Neurath had founded the Mundaneum in 1934, after the fascist coup in Vienna caught him away in Moscow, and it already included the International Foundation for Visual Education, founded in 1933. The Institute's executive committee was composed of Neurath, Philip Frank and Charles Morris, and the Organization Committee for the International Congresses for the Unity of Science was composed of Neurath, Carnap, Frank, Joergen Joergensen, Morris, Louis Rougier and Susan Stebbing. Supporters of the movement were distinguished and widely distributed across Europe and the USA. Both Carnap and Neurath took the ideal of unified science to have deep social and political significance against metaphysics. At the same time Karl Popper was defending a methodological criterion to demarcate science from metaphysics based on the falsifiability of all genuinely scientific propositions.

After the Second World War a discussion of unity engaged philosophers and scientists in the Inter-Scientific Discussion Group in Cambridge, Massachusetts (first called the Science of Science Discussion Group, founded in October 1940 primarily by Philip Frank and Carnap, themselves founders of the Vienna Circle, together with Quine, Feigl, Bridgman, and the psychologists E. Boring and S.S. Stevens), which would later become the Unity of Science Institute. The group was joined by scientists from different disciplines, from quantum mechanics (Kemble and Van Vleck) and cybernetics (Wiener) to economics (Morgenstern), as part of what was both a self-conscious extension of the Vienna Circle and a reflection of local concerns in a culture of computers and nuclear power. The characteristic features of the new view of unity were the ideas of consensus and, subsequently, especially within the USI, cross-fertilization. These ideas were instantiated in the emphasis on scientific operations (operationalism) and the creation of war-boosted cross-disciplines such as cybernetics, computation, electro-acoustics, psycho-acoustics, neutronics, game theory, and biophysics (Galison 1998 and Hardcastle 2003).

In the late 1960s, Michael Polanyi and Marjorie Grene organized a series of conferences funded by the Ford Foundation on unity of science themes (Grene 1969a, 1969b, 1971). Their general character was interdisciplinary and anti-reductionist. The group was originally entitled “Study Group on Foundations of Cultural Unity,” but this was later changed to “Study Group on the Unity of Knowledge.” By then a number of American and international institutions were already promoting interdisciplinary projects in academic areas (Klein 1990).

2. Varieties of Unity

The historical introductory sections have aimed to show the intellectual centrality and significance of the concept of unity, which in both knowledge and the sciences has time-honored roots and a plurality of sources, motivations, interpretations and expressions, the last ranging from the conceptual to the iconic (trees, etc.). The rest of the entry presents a variety of modern views that stemmed from the development of the positions that established the topic in modern philosophy of science, and from the reactions to them. For that purpose it will be helpful to introduce a number of broad categories and distinctions that sort out the accounts and track some relations between them and other significant philosophical issues. (The categories are not mutually exclusive and sometimes partly overlap; therefore, while they help label and characterize different positions, they cannot provide a simple, neatly ordered linear conceptual map.)

The very problematics that established the project of modern philosophy of science around the works of Carnap, Hempel, Popper and Nagel, among others, implied a duality of external/internal unity, or equivalently, container/connective unity. External unity, such as container unity, is characterized by a boundary that brings science together from the outside, in this case a relation from the point of view of a criterion that demarcates science from non-science, or even the nonsensical. For Popper, the unity that identifies the scientific kind is methodological: the falsifiability of hypotheses. For Carnap it is formal, and it requires a place in the connected structure of logical relations that make up the system of science. External unity implies some sort of internal unity, but not vice versa. Internal or connective unity concerns the relations within a science or among different sciences.

Connective unity is a weaker notion than its specific ideal of reductive unity; this requires asymmetric relations of reduction, with assumptions about hierarchies of levels of description and the primacy—conceptual, ontological, epistemological, and so on—of a fundamental representation. The category of connective unity helps accommodate and bring attention to the diversity of non-reductive accounts.

Another distinction is between synchronic and diachronic unity. Synchronic accounts are ahistorical, assuming no meaningful temporal relations. Diachronic accounts, by contrast, introduce genealogical hypotheses involving asymmetric temporal and causal relations between entities or states of the systems described. Evolutionary models are of this kind; they may be reductive to the extent that the posited original entities are simpler and on a lower level of organization and size. Others simply emphasize connection without overall directionality.

In general, it is useful to distinguish between ontological unity and epistemological unity, even if many accounts bear both characteristics and fall under both rubrics. In some cases, one kind supports the other salient kind in the model. Ontological unity is here broadly understood as involving relations between conceptual elements; in some cases the concepts will describe entities, properties or relations and the models will focus on metaphysical aspects of the unifying connections such as holism, emergence, or downwards causation. Epistemological unity applies to relations or goals such as explanation. Methodological connections and formal (logical, mathematical, etc.) models belong in this kind. I will not draw any strict or explicit distinction between epistemological and methodological dimensions or modes of unity.

Additional categories and distinctions include the following. Vertical, or inter-level, unity is unity of elements attached to levels of analysis on a hierarchy, whether for a single science or more; horizontal, or intra-level, unity applies on one single level and its corresponding kind of system. Global unity is unity of any other variety quantified over all kinds of elements, aspects or descriptions associated with individual sciences, a kind of monism; local unity applies to a subset. Obviously, vertical and horizontal accounts of unity can be either global or local. Finally, the rejection of global unity has been associated both with isolationism, which keeps independent competing alternative representations of the same phenomena or systems, and with local integration, the local connective unity of the alternative perspectives.

3. Epistemological Unities

3.1 Reduction

Philosophy of science consolidated itself in the 1950s around a positivist orthodoxy roughly characterized as follows: a syntactic formal approach to theories, logical deductions and axiomatic systems, with a distinction between theoretical and observational vocabularies, and empirical generalizations. Unity and reduction may be introduced in terms of the following distinctions: epistemological and ontological, synchronic and diachronic. The specific elements of the dominant accounts will stand or fall with attitudes towards the elements of the orthodoxy just mentioned. Reductionism must be distinguished from reduction: reductionism is the adoption of reduction as the global ideal of the unified structure of scientific knowledge and a measure of its progress. As before, I will consider methodological aspects of unity as an extension of epistemological matters, insofar as methodology serves epistemology.

Two formulations by logical positivists in the United States about the ideal logical structure of science again placed the question of unity of science at the core of philosophy of science: Carl Hempel's deductive-nomological model of explanation and Ernst Nagel's model of reduction. Both were fundamentally epistemological models, and both were specifically explanatory. The emphasis on logical structure makes unity of explanation and reduction chiefly of the synchronic kind. Nagel's model of reduction is a model of scientific structure and explanation as well as of scientific progress. It is based on the problem of relating different theories as different sets of theoretical predicates.

Reduction poses two requirements: connectability and derivability. Connectability of laws of different theories requires meaning invariance in the form of extensional equivalence between descriptions, with bridge principles between coextensive but distinct terms in different theories.

Nagel envisaged two kinds of reductions: homogeneous and heterogeneous. When the two sets of terms overlap, the reduction is homogeneous. When the related terms are different, the reduction is heterogeneous. Derivability requires a deductive relation between the laws involved. In the quantitative sciences, the derivation often involves taking a limit. In this sense the reduced, older science is considered an approximation of the new, reducing one.

Neo-Nagelian accounts have attempted to solve Nagel's problem of reduction between putatively incompatible theories. Here are a few:

Schaffner modified Nagel's two-term relation with weaker conditions of analogy, requiring the relation to be satisfied not necessarily by the two original theories, T1 and T2 (respectively the newer and more general, and the older and less general), but by the modified theories T1′ and T2′. Explanatory reduction is strictly a four-term relation in which T1′ is “strongly analogous” to T1 and corrects the older theory, T2, with the insight that the more fundamental theory can offer, changing it to T2′. Schaffner also required that the bridge laws be synthetic identities, in the sense that they be factual, empirically discoverable and testable, rather than conventions (Schaffner 1967; Sarkar 1998).[2] The difficulty lay especially with the task of specifying or giving a non-contextual, transitive account of the relations between T and T′ (Wimsatt 1976).

An alternative set of semantic and syntactic conditions of reduction bears a counterfactual interpretation. For instance, syntactic conditions in the form of limit relations and ceteris paribus assumptions have the function of explaining why the reduced theory works where it does and fails where it does not (Glymour 1969).

A different approach to reductionism acknowledges a commitment to providing explanations but rejects the value of the focus on the role of laws. This approach typically draws a distinction between hard sciences such as physics and chemistry and historical sciences such as biology and the social sciences, and claims that laws that are in a sense operative in the hard sciences are not available in the latter, or play a more limited and weaker role. The rejection of empirical laws in biology has been argued on grounds of dependence on contingent initial conditions (Beatty 1995), and as a matter of supervenience (see the entry on supervenience) of spatio-temporally restricted functional claims on lower-level molecular ones, and the multiple realization (see the entry on multiple realizability) of the former by the latter (Rosenberg 1994; Rosenberg's argument from supervenience to reduction without laws must be contrasted with Fodor's physicalism about the special sciences, of laws without reduction (see below and the entry on physicalism); for a criticism of these views see Sober 1996). This non-Nagelian approach, moreover, rejects the assumption that explanation rests on identities between predicates and deductive derivations (reduction and explanation might be said to be justified by derivations, but not constituted by them; see Spector 1978, also for the articulation of this view in physics). Explanation is provided by lower-level mechanisms; their explanatory role is to replace final why-necessarily questions (functional) with proximate how-possibly questions (molecular). A similar line of argument has been proposed as a defense of the explanatory power of the lower level, the disjunction of supervening bases without reduction (understood in the Nagelian sense of requiring bridge laws with one-to-one mappings between predicates at different levels and the derivation of the reduced laws or theories from the reducing ones) (Kincaid 1997).
On this view lower-level explanations of, say, biological or sociological properties and events might fail to capture and discriminate causal properties and patterns, and might yield false causal inferences. Most, if not all, lower-level theories are no less localized or ceteris-paribus-qualified than higher-level ones. Lower-level descriptions can still claim explanatory relevance in the following limited form: given the heuristic asymmetry that makes the higher level valuable and ineliminable for answering higher-level questions centered on higher-level salient descriptions and groupings of phenomena, lower-level explanations would proceed by targeting specific token representations of events or properties, not the more general types that are realized by and supervene on the more specific tokens. The discussion here enters the conceptual metaphysical arena I discuss below. One suggestion for making sense of the possibility of supervening functional explanations without Nagelian reduction is a metaphysical picture of composition of powers in explanatory mechanisms (Gillett 2010). The reductive commitment to the lower level is compositional, proceeding from epistemological analysis and metaphysical synthesis, but not derivational; we infer what composes the higher level, but we cannot simply get all the relevant knowledge of the higher level from our knowledge of the lower level (see also Auyang 1998).

A more general characterization treats reductionism as a research strategy. On this methodological view reductionism can be characterized by a set of so-called heuristics (non-algorithmic, efficient, error-based, purpose-oriented, problem-solving tasks) (Wimsatt 2006): heuristics of conceptualization (descriptive localization of properties, system-environment interface determinism, level- and entity-dependence), heuristics of model-building and theory construction (intra-systemic localization in models, with emphasis on structural properties over functional ones; contextual simplification; external generalization) and heuristics of observation and experimental design (focused observation, environmental control, local scope of testing, abstract shared properties, behavioral regularity, context-independence of results).

3.2 Antireductionism

The focus since the 1930s had been on a syntactic approach with physics as the paradigm of science, deductive logical relations as the form of cognitive or epistemic goals such as explanation and prediction, and theory and empirical laws as the paradigmatic units of scientific knowledge (Suppe 1977; Grünbaum and Salmon 1988). The historicist turn of the 1960s, the semantic turn in philosophy of science in the 1970s and a renewed interest in the special sciences changed this focus. The very structure of a hierarchy of levels has lost its validity, even for those who believe in it as a model of the autonomy of levels rather than as an image of fundamentalism. The rejection of such models and their emendations has occupied the last four decades of philosophical discussion about unity in and of the sciences, especially in connection with psychology and biology, and more recently chemistry. A valuable consequence has been the strengthening of philosophical projects and communities devoting more sustained and sophisticated attention to special sciences other than physics. The same spirit, and often rhetoric, guiding the reactionary and the more progressive camps has been put to use to serve professional and institutional interests, especially when considerations of funding and commercial interest are involved; the most socially visible cases have been the (terminated) construction of the Superconducting Supercollider (SSC) and the Human Genome Project (Cat 1998; Kevles and Hood 1992).

The first target of antireductionist attacks has been Nagel's demand of extensional equivalence, as an inadequate demand of “meaning invariance” and approximation, and with it the possibility of deductive connections. In a mocking reversal of the positivist legacy of progress through unity, empiricism and anti-dogmatism, these constraints have been decried as intellectually dogmatic, conceptually weak and methodologically overly restrictive (Feyerabend 1962). The emphasis is placed, instead, on the merits of the new theses of incommensurability and methodological pluralism.

This rejection extends to the move of guaranteeing the deductive connection by “correcting” the old, reduced theory beforehand (Schaffner 1967). The evolution and the structure of scientific knowledge could then be neatly captured, in Schaffner's expression, by “layer-cake reduction.” The terms “length” and “mass” (or the symbols l and m), for instance, may be the same in Newtonian and relativistic mechanics, the term “electron” the same in classical physics and quantum mechanics, the term “atom” the same in quantum mechanics and in chemistry, or “gene” in Mendelian genetics and molecular genetics (see, for instance, Kitcher 1984). But the corresponding concepts, the critics argued, are not. Concepts or words are to be understood as getting their content or meaning within a holistic or organic structure, even if the organized wholes are the theories that include them. From this point of view, different wholes, whether theories or Kuhnian paradigms, manifest degrees of conceptual incommensurability. Therefore, the theories actually derived typically are not the allegedly reduced, old ones; and their derivation sheds no relevant insight into the relation between the original old theory and the new one (Feyerabend 1962; Sklar 1967).

From a historical point of view, the positivist model collapsed the distinction between synchronic and diachronic reduction, or between reductive models of the structure and the evolution or succession of science. The point of Kuhn and Feyerabend's historicism was to drive a wedge between the two dimensions and reject the linear model of scientific change in terms of accumulation and replacement. For Kuhn replacement becomes, then, partly continuous, partly non-cumulative change in which one world—or, less literally, one world-picture, one paradigm—replaces another (after a revolutionary episode of crisis and proliferation of alternative contenders) (Kuhn 1962). This image constitutes a form of pluralism, and, like the reductionism it is meant to replace, it can be either synchronic or diachronic. Here is where Kuhn and Feyerabend parted ways. For Kuhn synchronic pluralism only describes the situation of crisis and revolution between paradigms. For Feyerabend history is less monistic, and pluralism is and should remain a synchronic and diachronic feature of science and culture (Feyerabend, here, thought science and society inseparable, and followed Mill's philosophy of individualism and democracy).

A different kind of antireductionism addresses a more conceptual dimension, the problem of categorial reduction: Meta-theoretical categories of description and interpretation of mathematical formalisms (for instance, the use of the category of causality) block and replace full reduction; basic interpretative concepts that are not just variables in a theory or model are not reducible to counterparts in fundamental descriptions (Cat 2000; Cat 2006 and Cat forthcoming c; the categorial problem of individuality in quantum physics has been discussed in Healey 1991; Redhead and Teller 1991 and Auyang 1995; in psychology in Block 2003).

3.3 Container Unity and Connective Unities

Container models of unity have tried to establish the role of unity in demarcating science from non-science. The criteria are typically methodological and normative, not descriptive. Popper's original anti-metaphysics barrier was set by the criterion of empirical falsifiability of scientific statements, i.e., the formal, logical possibility of their relation to basic statements, linked to experience, that can prove them false deductively through the application of a modus tollens argument (Popper 1935/1951). Another criterion is explanatory unity, empirically grounded. Hempel's deductive-nomological model characterizes the scientific explanation of events as a logical argument that expresses their expectability in terms of their subsumption under an empirically testable generalization. Explanations in the historical sciences too must fit the model if they are to count as scientific. They could be brought into the fold as bona fide scientific explanations even if they could qualify only as explanation sketches.

The universal applicability and appropriateness of Hempel's model was challenged. Historical explanations have a genealogical form and logic of their own, or else they require the historian's conceptual judgment to bring together meaningfully a set of historical facts (Cleland 2002, Koster 2009). This reversal of fortune opened a debate about the nature of the historical sciences that remains unresolved. In the process, some have claimed that some of the natural sciences such as geology and biology are historical. It has been argued that Hempel's model, especially the requirement of empirically testable strict universal laws, is satisfied neither by the physical sciences nor the historical sciences, including biology (Ereshefsky 1992).

The focus has shifted to alternative units of scientific products, elements and modes of knowledge and scientific achievement. And different models of unity emphasizing different kinds of relations between different areas of inquiry have gained importance. More generally, then, many driving agendas and background assumptions have gradually been exposed, criticized, qualified, or replaced. The debate over unity has broadened along many new dimensions as a result.

Unification has been defended as an epistemic virtue, on the cognitive grounds that unification, measured as the number of independent explanatory laws or phenomena conjoined in a theoretical structure, contributes understanding and confirmation from the fewest basic kinds of phenomena, regardless of issues of derivation or explanatory power in terms of a few types of derivation or argument patterns (Friedman 1974; Kitcher 1981; Kitcher 1989; within a probabilistic framework, Myrvold 2003 and Sober 2003, see below). Physics, as a standard bearer of unity both material and formal, has been defended as the natural domain of validity of the explanatory interpretation (for instance, in Wayne 1996).

A weaker position argues that unification is not explanation on the grounds that unification is simply systematization of old beliefs and operates as a criterion of theory-choice (Halonen and Hintikka 1999).

The unification account of explanation has been defended within a more detailed cognitive pragmatist approach. The key is to think of explanations as question-answer episodes involving four elements: the explanation-seeking question about P, P?, the cognitive state C of the questioner/agent for whom P calls for explanation, the answer A, and the cognitive state C+A in which the need for explanation of P has disappeared. A related account models unity in the cognitive state in terms of comparative increase of coherence and elimination of spurious unity—such as circularity or redundancy (Schurz 1999). Unification is also based on information-theoretic transfer or inference relations. Unification of hypotheses is only a virtue if it unifies data. The last two conditions imply that unification yields also empirical confirmation. Explanations are global increases in unification in the cognitive state of the cognitive agent (Schurz 1999; Schurz and Lambert 1994).

The unification-explanation link can be defended on the grounds that laws make unifying similarity expectable (hence Hempel-explanatory) and this similarity becomes the content of a new belief (Weber and Van Dyck 2002 contra Halonen and Hintikka 1999). Unification is not the mere systematization of old beliefs. Contra Schurz they argue that scientific explanation is provided by novel understanding of facts and the satisfaction of our curiosity (Weber and Van Dyck 2002 contra Schurz 1999). In this sense, causal explanations, for instance, are genuinely explanatory and do not require an increase of unification.

A contextualist and pluralist account argues that understanding is a legitimate aim of science that is pragmatic and not necessarily formal, nor, contra Trout, a mere subjective psychological by-product of explanation (De Regt and Dieks 2005). In this view explanatory understanding is variable and can take diverse forms, such as causal-mechanical and unificationist, without conflict (De Regt and Dieks 2005).

The views on scientific explanation have evolved away from the formal and cognitive accounts of the epistemic categories. Accordingly, some argue that the source of understanding provided by scientific explanations has been misidentified (Barnes 1992). The genuine source often lies, instead, in causal explanation, or causal mechanism (Cartwright 1983; Cartwright 1989; see also Glennan 1996, Cat 2005 and Craver 2007). Mechanistic models of explanation have become entrenched in philosophical accounts of the life sciences (Darden 2006, Craver 2007). The challenge extends to the alleged extensional link between explanation on the one hand, and neat unity, truth and universality on the other (Cartwright 1983; Woodward 2003). In this sense, explanatory unity, which rests on metaphysical assumptions about components and their properties, also involves a form of ontological or metaphysical unity.

Criticisms extend to physicists' arguments along similar lines: unification, even at the fundamental level, fails to yield explanation in the formal scheme based on laws and their symmetries (Cat 1998; Cat 2005); unification and explanation conflict on the grounds that in biology and physics only causal-mechanical explanations answering why-questions yield understanding of the connections that contribute to “true unification” (Morrison 2000).[3] Morrison's choice of standard for evaluating the epistemic accounts of unity and explanation has not been without critics (e.g., Wayne 2002; Plutynski 2005; Karaca 2012).[4]

Unity can be understood as a methodological principle (Wimsatt 1976 and Wimsatt 2006 for the case of biology, and Cat 1998 for physics). One way of doing so is to see it as a simplicity or parsimony condition. But this kind of condition can receive at least two different interpretations: epistemological and ontological (Sober 2003). As a formal probabilistic principle of curve-fitting or average predictive accuracy, the relevance of unity is objective. Unity plays the role of an empirical background theory. The example of the Akaike curve-fitting method is meant to suggest that the connection between unity as parsimony and likelihood is not interest-relative, at least in the way that the connection between unity and explanation is (Sober 2003; Forster and Sober 1994).
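The Akaike trade-off can be illustrated with a minimal sketch (the data and models here are invented for illustration; this is not Forster and Sober's own example). Under Gaussian error assumptions, a model's AIC score, up to an additive constant, is n·ln(RSS/n) + 2k, trading residual fit against the number k of adjustable parameters:

```python
import math
import random

def aic(rss, n, k):
    # Gaussian-likelihood AIC, up to an additive constant:
    # AIC = n * ln(RSS / n) + 2k, where k counts adjustable parameters.
    return n * math.log(rss / n) + 2 * k

random.seed(0)
xs = [i / 29 for i in range(30)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]  # a line plus noise
n = len(xs)

# Model 1: a constant (one parameter), fitted by the sample mean.
mean_y = sum(ys) / n
rss_const = sum((y - mean_y) ** 2 for y in ys)

# Model 2: a straight line y = a + b*x (two parameters), ordinary least squares.
mean_x = sum(xs) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x
rss_line = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

aic_const = aic(rss_const, n, 1)
aic_line = aic(rss_line, n, 2)
# The line's far better fit outweighs its extra parameter: aic_line < aic_const.
```

On Sober's reading, the penalty term 2k is where parsimony, and thus a kind of unity, enters as an objective ingredient of estimated predictive accuracy rather than as an interest-relative preference.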

The probabilistic model dovetails with other recent formal discussions of unity and coherence within the framework of Bayesianism (Forster and Sober 1994, sect. 7; Schurz and Lambert 2005 is also a formal model, with an algebraic approach). In this approach, which involves a formal calculus of evidential support for hypotheses, the rational comparison and acceptance of probabilistic beliefs in the light of empirical data is constrained by Bayes' Theorem for conditional probabilities (where h and d are the hypothesis and the data respectively):

P(h | d) = P(d | h) · P(h) / P(d)
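A toy numerical update (the numbers are invented for illustration) shows how the theorem combines a prior with likelihoods:

```python
# Worked Bayes update with invented numbers.
p_h = 0.3                 # prior P(h)
p_d_given_h = 0.8         # likelihood P(d | h)
p_d_given_not_h = 0.2     # likelihood P(d | ~h)

# Total probability: P(d) = P(d|h)P(h) + P(d|~h)P(~h)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' Theorem: P(h|d) = P(d|h) * P(h) / P(d)
p_h_given_d = p_d_given_h * p_h / p_d
# The data raise the support for h from 0.3 to roughly 0.63.
```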

A recent attempt to provide an explicit Bayesian account of unification as an epistemic, methodological virtue, holds that the measure of unity is this: a hypothesis h unifies phenomena p and q to the degree that given h, p is statistically/probabilistically relevant to (or correlated with) q (Myrvold 2003; a probabilistically equivalent measure of unity in Bayesian terms in McGrew 2003; on the equivalence, Schupbach 2005). This measure of unity has been criticized as neither necessary nor sufficient (Lange 2004; Lange's criticism assumes the unification-explanation link; in a rebuttal, Schupbach has rejected this and other assumptions behind Lange's criticism; Schupbach 2005).
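The flavor of such a measure can be conveyed with a minimal sketch (this implements the generic conditional-relevance idea described above, not Myrvold's own formula, and the probabilities are invented): a hypothesis earns unification credit to the degree that, given it, the two phenomena are more strongly correlated than independence would predict.

```python
import math

def relevance(p_joint, p_first, p_second):
    # Log-ratio of the joint probability to the product of the marginals:
    # positive when the two phenomena are correlated, zero when independent.
    return math.log2(p_joint / (p_first * p_second))

# Invented probabilities. Conditional on hypothesis h, the phenomena p and q
# are correlated; unconditionally they look independent.
relevance_given_h = relevance(0.4, 0.5, 0.5)   # P(p&q|h), P(p|h), P(q|h)
relevance_prior = relevance(0.25, 0.5, 0.5)    # P(p&q), P(p), P(q)
# relevance_given_h = log2(1.6) > 0 while relevance_prior = 0: on this
# sketch, h unifies p and q by inducing the correlation between them.
```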

Finally, another kind of formal model for a different kind of unity straddles the boundary between formal epistemology and ontology: computational models of emergence or complexity. They are based on simulations of chaotic dynamical processes such as cellular automata (Wolfram 1984; Wolfram 2002). Their supposed superiority to combinatorial models based on aggregative functions of parts of wholes does not lack defenders (Crutchfield 1994; Crutchfield and Hanson 1997; Humphreys 2004, 2007 and 2008; Humphreys and Huneman 2008; Huneman 2008a and b and 2010).
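The kind of simulation at issue can be sketched with an elementary cellular automaton in Wolfram's rule-numbering scheme (a generic sketch, not Wolfram's own code; Rule 30 is one of his standard examples of complex behavior arising from a simple local rule):

```python
def step(cells, rule=30):
    # One generation of an elementary cellular automaton with periodic
    # boundaries. The rule number's binary digits give the successor value
    # for each of the 8 possible three-cell neighborhoods (Wolfram's scheme).
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value in 0..7
        nxt.append((rule >> neighborhood) & 1)
    return nxt

# A single live cell under Rule 30 develops an irregular pattern that is not
# a simple aggregate of the behavior of its parts.
row = [0] * 31
row[15] = 1
history = [row]
for _ in range(10):
    history.append(step(history[-1]))
```

Whether such simulations genuinely support claims of emergence over combinatorial, aggregative models of parts and wholes is exactly what the authors cited above debate.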

In anti-reductionist quarters, models of unification without reduction have readily appeared: for instance, the notion of interfield theories (Darden and Maull 1977; Darden 2006). Examples of such fields are genetics, biochemistry and cytology. Different levels of organization correspond in this view to different fields: Fields are individuated intellectually by a focal problem, a domain of facts related to the problem, explanatory goals, methods and a vocabulary. Fields import and transform terms and concepts from others. The model is based on the idea that theories and disciplines do not match neat levels of organization within a hierarchy; rather, many of them in their scope and development cut across different such levels. Reduction is a relation between theories within a field, not across fields. In general, the higher-level (for instance, cell physiology) and the lower-level theories (for instance, biochemistry) are ontologically and epistemologically inter-dependent on matters of informational content and evidential relevance; one cannot be developed without the other (Kincaid 1996; Kincaid 1997; Wimsatt 1976; Spector 1977).

The emergence and development of hybrid disciplines and theories is another instance of non-reductive cooperation or interaction between sciences. I noted above the post-war emergence of interdisciplinary areas of research, the so-called hyphenated sciences (Klein 1990, Galison 1997) such as neuro-acoustics, radioastronomy, biophysics, etc. On a smaller scale, within physics for instance, one can find semiclassical models in quantum physics, or models developed around phenomena where the limiting reduction relations are singular or catastrophic (caustic optics and quantum chaos) (Cat 1998; Batterman 2002; Belot 2005). The general form of pervasive cases of emergence has been characterized with the notion of contextual emergence (Bishop and Atmanspacher 2006): properties and behaviors and their laws on a restricted, lower-level, single-scale domain are necessary but not sufficient for the properties and behaviors of another, e.g., higher-level one, or even of itself; the latter are also determined by contingent contexts (contingent features of the state space of the relevant system). The interstitial formation of more or less stable small-scale syntheses and cross-boundary “alliances” has been common in most sciences since the early 20th century; indeed, it is crucial to developments in model building and empirical scope in fields ranging anywhere from biochemistry to cell ecology, or from econophysics to thermodynamical cosmology.

The conceptual dimension of cross-cutting has been developed in connection with the possibility of cross-cutting natural kinds. Categories of taxonomy and domains of description are interest-relative, as are rationality and objectivity (Khalidi 1998; his view shares positions and attitudes with Longino 1989; Elgin 1996 and 1997). Cross-cutting taxonomic boundaries, then, are not conceptually inconsistent. Moreover, cognitive theories of typification and language learning support the hierarchy models only for terms which are in principle applicable, but not for ones which are actually applicable. The same can be argued of science.

A development of the explanatory version of inter-theoretic connections involves the concepts of explanatory refinement and explanatory extension of the higher-level knowledge (for instance, Mendelian genetics) through elements of lower-level knowledge (for instance, molecular genetics) (Kitcher 1984).

Another, more general, unifying element of this kind is Holton's notion of themata. Themata are conceptual values that are a priori yet contingent (both individual and social), informing and organizing presuppositions that factor centrally in the evolution of the science: continuity/discontinuity, harmony, quantification, symmetry, conservation, mechanicism, hierarchy, etc. (Holton 1973). Unity of some kind is itself a thematic element. A more complex and comprehensive unit of organized scientific practice is the notion of the various styles of reasoning, such as statistical, analogical modeling, taxonomical, genetic/genealogical or laboratory styles; each is a cluster of epistemic standards, questions, tools, ontology, and self-authenticating or stabilizing protocols (Hacking 1996; see below for the relevance of this account to claims of global disunity; the account shares distinctive features of Kuhn's notion of paradigm).

A key element of scientific practice often ignored by philosophical analysis is expertise. Recent accounts of multidisciplinary collaboration as a human activity have focused on the dynamics of integrating different kinds of expertise around common systems or goals of research (Collins and Evans 2007, Gorman 2002).

Another model of non-reductive unification is historical and diachronic: it emphasizes the genealogical and historical identity of disciplines, which has become complex through interaction. The interaction extends to relations between specific sciences, philosophy and philosophy of science (Hull 1988). Hull's idea of science as a process models historical unity after a Darwinian-style pattern of evolution (developing an earlier suggestion by Popper). Hull's idea of disciplines as evolutionary historical individuals can be revised with the help of more recent ideas of biological individuality: hybrid unity as an external model of unity as integration or coordination of individual disciplines and disciplinary projects, e.g., characterized by a form of occurrence, evolution or development whose tracking and identification involves a conjunction with other disciplines, projects and domains of resources, from within science or outside science. This type of account can accommodate models of discovery, in which genealogical unity integrates a variety of resources that can be both theoretical and applied, or scientific and non-scientific (an example, from physics, the discovery of superconductivity, can be found in Holton, Chang and Jurkowitz 1996). Some models of unity below provide other examples.

A generalization of the notion of interfield theories is the idea that unity is interconnection: Fields are unified theoretically and practically (Grantham 2004). This is an extension of the original modes of unity or identity that single out individual disciplines. Theoretical unification involves conceptual, ontological and explanatory relations. Practical unification involves heuristic dependence, confirmational dependence and methodological integration. The social dimension of the epistemology of scientific disciplines relies on institutional unity. With regard to disciplines as professions, this kind of unity has rested on institutional arrangements such as professional organizations for self-identification and self-regulation, university mechanisms of growth and reproduction through certification, funding and training, and communication and record through journals.

Many examples of unity without reduction are local rather than global, and are not a phase in a global and linear project or tradition of unification (or integration); they are focused on science as a human activity. Unification is a piecemeal description and strategy of collaboration (on the distinction between global integration and local interdisciplinarity, see Klein 1990). Cases are restricted to specific models, phenomena or situations.

A recent approach to the connection between areas has focused on a material level of scientific practice through instruments and other material objects (Galison 1997, Bowker and Starr 1999). For instance, the material unity of natural philosophy in the 16th and 17th centuries relied on the circulation, transformation and application of objects, in their concrete and abstract representations (Bertoloni-Meli 2006). The latter correspond to the imaginary systems and their representations, which we call models. The evolution of objects and images across different theories and experiments and their developments in 19th-century natural philosophy provide a historical model of scientific development; but the approach is not meant to illustrate reductive materialism, since the same objects and models work and are perceived as vehicles for abstract ideas, institutions, cultures, etc. (Cat forthcoming b). Objects can be thought of as elements in so-called trading zones (see below) with shifting meanings in the evolution of 20th-century physics, such as the cloud chamber, which was relevant first to meteorology and later to particle physics (Galison 1997); they can also be thought of as boundary objects, which provide the opportunity for experts from different fields to collaborate through their respective understandings of the system in question and their respective goals (Bowker and Starr 1999).

At the concrete perceptual level, one may also focus on the role of visual representations in the sciences and assess what may be called graphic unification of the sciences. Their cognitive roles, methodological and rhetorical, include establishing and disseminating facts and their so-called virtual witnessing, revealing empirical relations, testing their fit with available patterns of more abstract theoretical relations (theoretical integration), suggesting new ones, aiding in computations, serving as aesthetic devices, etc. But these uses are not homogeneous across different sciences and make visible disciplinary differences. We may speak of graphic pluralism. The rates of use of diagrams in research publications appear to vary along the hard-soft axis of the pyramidal hierarchy, from physics through chemistry, biology, psychology and economics to sociology and political science (Smith et al. 2000): the highest use is found in physics, intuitively identified as the hardest science, where hardness is understood in terms of consensus, codification, theoretical integration and factual stability, and the lowest in the softer sciences, marked by greater interpretive flexibility and instability of results. A similar variation occurs among subdisciplines within each discipline. The kinds of images and their contents also vary across disciplines and within disciplines, ranging from hand-made images of particular specimens to hand-made or mechanically generated images of particulars standing in for types, to schematic images of geometric patterns in space or time, or to abstract diagrams representing quantitative relations. Importantly, graphic tools circulate like other cognitive tools between areas of research that they in turn connect (Galison 1997, Daston and Galison 2007, Lopes 2009; see also Lynch and Woolgar 1990; Baigrie 1996; Jones and Galison 1998; Cat 2001 and forthcoming; and Kaiser 2005).

A field of study has focused on disciplines broadly and their relations. Disciplines constitute a broader unit of analysis of connection in the sciences, characterized, for instance, by their domain of inquiry, cognitive tools and social structure (Bechtel 1987). Unification of disciplines, in that sense, can be interdisciplinary, multidisciplinary, crossdisciplinary and transdisciplinary (Klein 1990, Kellert 2008, Repko 2012). It might involve a researcher borrowing from different disciplines or the collaboration of different researchers. Neither modality of connection amounts to a straightforward generalization of, or reduction to, any single discipline, theory, etc. In either case, the strategic development is typically defended for its heuristic problem-solving or innovative powers, as it is prompted by a problem considered complex in that it does not arise or cannot be fully treated within the purview of one specific discipline unified or individuated around some potentially non-unique set of elements such as scope of empirical phenomena, rules, standards, techniques, conceptual and material tools, aims, social institutions, etc. Indicators of disciplinary unity may vary (Kuhn 1962, Klein 1990, Kellert 2008). Interdisciplinary research or collaboration creates a new discipline or project, such as interfield research, often leaving the existence of the original ones intact. Multidisciplinary work involves the juxtaposition of the treatments and aims of the different disciplines involved in addressing a common problem. Crossdisciplinary work involves borrowing resources from one discipline to serve the aims of a project in another. Transdisciplinary work is a synthetic creation that encompasses work from different disciplines (Klein 1990, Kellert 2008, Brigandt 2010, Hoffmann, Schmidt and Nersessian 2012, Osbeck et al. 2011, Repko 2012). These different modes of synthesis or connection are not mutually exclusive.

Models of interdisciplinary cooperation and their corresponding outcomes are often described using metaphors of different kinds: cartographic (domains, boundaries, trading zone, etc), linguistic (pidgin language, communication, translation, etc), architectural (building blocks, tiles, etc), socio-political (imperialism, hierarchy, republic, orchestration, negotiation, coordination, cooperation etc) or embodied (cross-training). Each highlights and neglects different aspects of scientific practice and properties of scientific products. Cartographic and architectural images, for instance, focus on spatial and static synchronic relations and simply connected, compatible elements. Socio-political and embodied images emphasize activity and non-propositional elements (Kellert 2008 defends the image of cross-training).

A general model of local interconnection which has acquired widespread attention and application in different sciences is the anthropological model of trading zone, where hybrid languages and meanings are developed that allow for interaction without straightforward extension of any party's original language or framework (Galison 1997). Galison applies this kind of anthropological analysis to the subcultures of experimentation. This strategy aims to explain the strength, coherence and continuity of science in terms of local coordinations of intercalated levels of symbolic procedures and meanings, instruments and arguments.

At the experimental level, instruments, as found objects, acquire new meanings, developments and uses as they bridge over the transitions between theories, observations or theory-laden observations. Instruments and experimental projects in the case of Big Science also bring together, synchronically and interactively, the skills, standards and other resources from different communities, and change each in turn (on interdisciplinary experimentation see also Osbeck et al. 2011). Patterns of laboratory research are shared by the different sciences, not just instruments but general strategies of reconfiguration of human researchers and natural entities researched (Knorr-Cetina 1992). At the same time, attention has been paid to the different ways in which experimental approaches differ among the sciences (Knorr-Cetina 1992, Guala 2005, Weber 2005).

Empirical work in sociology and cognitive psychology on scientific collaboration has led to a broader perspective including a number of dimensions of interdisciplinary cooperation, involving identification of conflicts and the setting of sufficient so-called common ground integrators: for instance, shared—pre-existing, revised and newly developed—concepts, terminology, standards, techniques, aims, information, tools, expertise, skills (abstract, dialectical, creative and holistic thinking), cognitive and social ethos (curiosity, tolerance, flexibility, humility, receptivity, reflexivity, honesty, team-play), social interaction, institutional structures and geography (Cummings and Kiesler 2005, Klein 1990, Kockelmans 1979, Repko 2012). Sociological studies of scientific collaboration can in principle place the connective models of unity within the more general scope of social epistemology, for instance, in relation to distributed cognition (beyond the focus on strategies of consensus within communities).

The broad and dynamical approach to processes of interdisciplinary integration may effectively be understood to describe the production of different sorts and degrees of epistemic emergence. The integrated accounts require shared (old or new) assumptions and may involve a case of ontological integration, for instance in causal models. Suggested kinds of interdisciplinary causal-model integration are the following: sequential causal order in a process or mechanism cutting across disciplinary divides; horizontal parallel integration of different causal models of different elements of a complex phenomenon; horizontal joint causal model of the same effect; and vertical or cross-level causal integration (see emergent or top-down causality, below) (Repko 2012, Kockelmans 1979).

Talk of cooperation and coordination for the purpose of forming hybrid cross-disciplines, emergent disciplines or projects and products revolves around the problem of conflicts, and the challenge of striking a balance between cooperation and autonomy. By extension of the discussion of value conflict in moral and political philosophy, one must acknowledge the extent to which scientific practice is based on accepting limited conflict over necessary commitments and making epistemic and/or non-epistemic compromises (a volitional, not just cognitive aspect; on this view against unity as social consensus, see Rescher 1993, Cat 2005 and 2010; van Bouwel 2009; comp Repko 2012; Hoffmann, Schmidt and Nersessian 2012).

4. Ontological unities

4.1 Ontological unities and reduction

Since Nagel's influential model of reduction by derivation, most discussions of unity of science have been cast in terms of reductions between concepts, between the entities they describe, and between theories incorporating the descriptive concepts. A distinctive ontological model is this: the hierarchy of levels of reduction is fixed by part-whole relations. The levels of aggregation of entities run all the way down to atomic particles and field parts, rendering microphysics the fundamental science.

A classic reference of this kind, away from the syntactic model, is Oppenheim and Putnam's “The Unity of Science as a Working Hypothesis” (Oppenheim and Putnam 1958; Oppenheim and Hempel had worked in the 1930s on taxonomy and typology, a question of broad intellectual, social and political relevance in Germany at the time). Oppenheim and Putnam intended to articulate an idea of science as a reductive unity of concepts and laws to those of the most elementary elements. They also defended it as an empirical hypothesis—not an a priori ideal, project or precondition—about science, and the claim that its evolution manifested a trend in that unified direction out of the smallest entities and lowest levels of aggregation. In an important sense, the evolution of science recapitulates, in reverse, the evolution of matter, from aggregates of elementary particles to the formation of complex organisms and species (we find a similar assumption in Weinberg's downward arrow of explanation). Unity, then, is manifested not just in mereological form, but also genealogically or historically.

A weaker form of ontological reduction has been advocated for the biomedical sciences under the causal notion of partial reductions: explanations of localized scope (focused on parts of higher-level systems only) laying out a causal mechanism connecting different levels in the hierarchy of composition and organization (Schaffner 1993; Schaffner 2006; Scerri has similarly discussed degrees of reduction in Scerri 1994). An extensional, domain-relative approach introduces the distinction between “domain preserving” and “domain combining” reductions. Domain-preserving reductions are intra-level reductions and occur between a theory T1 and its predecessor T2. In this parlance, however, T2 “reduces” to T1. This notion of “reduction” does not refer to any relation of explanation (Nickles 1973).

The claim that reduction, as a relation of explanation, needs to be a relation between theories, or even involve any theory, has also been challenged. One such challenge focuses on “inter-level” explanations in the form of compositional redescription and causal mechanisms (Wimsatt 1976). The role of biconditionals, or even Schaffner-type identities, as factual relations is heuristic (Wimsatt 1976). The heuristic value extends to the preservation of the higher-level, reduced concepts, especially for cognitive and pragmatic reasons, including reasons of empirical evidence. This amounts to rejecting the structural, formal approach to unity and reductionism. Reductionism is another example of the functional, purposive nature of scientific practice. The metaphysical view that follows is a pragmatic and non-eliminative realism (Wimsatt 2006). As a heuristic, this kind of non-eliminative pragmatic reductionism is a complex stance. It is integrative and intransitive, compositional, mechanistic and functionally localized, approximative and abstractive; it is bound to false idealizations focusing on regularities and stable common behavior, circumstances and properties, and it is constrained in its rational calculations and methods, tool-bound, and problem-relative. The heuristic value of eliminative inter-level reductions has been defended as well (Poirier 2006).

The appeal to formal laws and deductive relations is dropped in favor of sets of concepts or vocabularies in the replacement analysis (Spector 1978). This approach allows for talk of entity reduction or branch reduction, and even direct theory replacement without the operation of laws, and circumvents vexing difficulties raised by bridge principles and the deductive derivability condition (self-reduction, infinite regress, etc.). The formal relations only guarantee, but do not define, the reduction relation. Replacement functions are meta-linguistic statements. As Sellars had argued in the case of explanation, this account distinguishes between reduction and the testing of reduction, and the need of derivation for each. Finally, replacement can be in practice or in theory. Replacement in practice does not advocate elimination of the reduced or replaced entities or concepts (Spector 1978).

Note, however, the following: the compartmentalization of theories and their concepts or vocabulary into levels neglects the existence of empirically meaningful and causally explanatory relations between entities or properties at different levels. If these are neglected as theoretical knowledge and left outside as mere bridge principles, the possibility of completeness of knowledge is seriously jeopardized. Maximizing completeness of knowledge here requires unity of all phenomena at all levels and anything between these levels. Any bounded region or body of knowledge neglecting such cross-boundary interactions is radically incomplete, and not just confirmationally or evidentially so; we may refer to this as the problem of cross-boundary incompleteness, either intra-level (horizontal incompleteness) or, on a hierarchy, inter-level (vertical incompleteness) (Kincaid 1997; Cat 1998).

The most radical form of reduction as replacement is often called eliminativism. The position has made a considerable impact in philosophy of psychology and philosophy of mind (Churchland 1981; Churchland 1986). On this view the vocabulary of the reducing theory (neurobiology) eliminates and replaces that of the reduced one (psychology), leaving no substantive relation between them beyond a replacement rule (see also eliminative materialism).

In a general semantic account, Sarkar distinguishes different kinds of reduction in terms of four criteria, two epistemological and two ontological: fundamentalism, approximation, abstract hierarchy and spatial hierarchy. Fundamentalism implies that the features of a system can be explained in terms only of factors and rules from another realm. Abstract hierarchy is the assumption that the representation of a system involves a hierarchy of levels of organization with the explanatory factors being located at the lower levels. Spatial hierarchy is a special case of abstract hierarchy in which the criterion of hierarchical relation is a spatial part-whole or containment relation. Strong reduction satisfies the three “substantive” criteria, whereas weak reduction only satisfies fundamentalism. Approximate reductions—strong and hierarchical—are those which satisfy the criterion of fundamentalism only approximately (Sarkar 1998; the merit of Sarkar's proposal resides in its systematic attention to hierarchical conditions and, more originally, to different conditions of approximation; see also Ramsey 1995; Lange 1995; Cat 2005).

The semantic turn extends to more recent notions of models that do not fall under the strict semantic or model-theoretic notion of mathematical structures (Giere 1999; Morgan and Morrison 1999; Cat 2005). This is a more flexible framework regarding the relevant formal relations and the scope of relevant empirical situations, and it is implicitly or explicitly adopted by most accounts of unity without reduction. One may add the primacy of temporal representation and temporal parts, temporal hierarchy or temporal compositionality, first emphasized by Oppenheim and Putnam as a model of genealogical or diachronic unity. This framework applies to processes both of evolution and development (more recent versions in McGivern 2008 and Love and Hütteman 2011).

The shift in the accounts of scientific theory from syntactic to semantic approaches has changed conceptual perspectives and, accordingly, formulations and evaluations of reductive relations and reductionism. However, examples of the semantic approach focusing on mathematical structures and satisfaction of set-theoretic relations have focused on syntactic features—including the axiomatic form of a theory—in the discussion of reduction (Sarkar 1998, da Costa and French 2003). In this sense, the structuralist approach can be construed as a neo-Nagelian account, but another line of research has also championed the more traditional structuralist semantic approach (Balzer and Moulines 1996; Moulines 2006; Ruttkamp 2000; Ruttkamp and Heidema 2005).

4.2 Ontological unities and antireductionism

Headed in the opposite direction, arguments concerning new concepts such as multiple realizability and supervenience by Putnam, Kim, Fodor and others led to higher-level functionalism, a distinction between type-type and token-token reductions and the examination of its implications. The concepts of emergence, supervenience and downward causation are related metaphysical tools for generating and evaluating proposals about unity and reduction in the sciences. This literature has enjoyed its chief sources and developments in general metaphysics and in philosophy of mind and psychology (Davidson 1969; Putnam 1975; Fodor 1975; Kim 1993).

Supervenience, first introduced by Davidson in discussions of mental properties, is the notion that a system with properties on one level is composed of entities on a lower level and that its properties are determined by the properties of the lower-level entities or states. The relation of determination is such that no changes at the higher level occur without changes at the lower level. Like token-reductionism, supervenience has been adopted by many as the poor man's reductionism (see the entry on supervenience). A different case for the autonomy of the macrolevel is based on the notion of multiple supervenience (Kincaid 1997; Meyering 2000).

The autonomy of the special sciences from physics has been defended in terms of a distinction between type-physicalism and token-physicalism (Fodor 1974; Fodor countered Oppenheim and Putnam's hypothesis under the rubric “the disunity of science”; see the entry on physicalism). The key logical assumption is the type-token distinction: types are realized by more specific tokens; the type animal is instantiated by different species, and the type tiger or electron can be instantiated by multiple individual token tigers and electrons. Type-physicalism is characterized by a type-type identity between the predicates/properties in the laws of the special sciences and those of physics. By contrast, token-physicalism is based on a token-token identity between the predicates/properties of the special sciences and those of physics: every event under a special law falls under a law of physics, and bridge laws express contingent token-identities between events. Token-physicalism operates as a demarcation criterion for materialism. Fodor argued that the predicates of the special sciences correspond to infinite or open-ended disjunctions of physical predicates, and these disjunctions do not constitute natural kinds identified by an associated law. Token-physicalism is the only alternative. All special kinds of events are physical, but the special sciences are not physics (for criticisms based on the presuppositions in Fodor's argument, see Sober 1999).

The denial of remedial, weaker forms of reductionism is the basis for the concept of emergence (Humphreys 1997, Bedau and Humphreys 2008). Different accounts have attempted to articulate the idea of a whole being different from or more than the mere sum of its parts (see the entry on emergent properties). Emergence has been described beyond logical relations, synchronically as an ontological property and diachronically as a material process of fusion, in which the powers of the separate constituents lose their separate existence and effects (Humphreys 1997). This concept has been widely applied in discussions of complexity (see below). Unlike the earliest antireductionist models of complexity in terms of holism and cybernetic properties, more recent approaches track the role of constituent parts (Simon 1996). Weak emergence has been opposed to nominal and strong forms of emergence. The nominal kind simply represents that some macro-properties cannot be properties of micro-constituents; strong emergence is based on supervenience and irreducibility, with autonomous downward causation upon any constituents (see below). Weak emergence is linked to processes stemming from the states and powers of constituents, with a reductive notion of downward causation of the system as a resultant of the constituents' effects; yet the connection is not a matter of Nagelian formal derivation but of implementation through, for instance, computational aggregation and iteration. Weak emergence, then, can be defined in terms of simulation: a macro-property, state or fact is weakly emergent if and only if it can be derived from its micro-constituents only by simulation (Bedau 2008) (see the entry on simulations in science).

Connected to the concept of emergence is top-down or downward causation. It captures the autonomous and genuine causal power of higher-level entities or states, especially upon lower-level ones. The most extreme and most controversial versions include a violation of the laws that regulate the lower level (Meehl and Sellars 1956; Campbell 1974). Weaker forms require compatibility with the microlaws (for a brief survey and discussion see Robinson 2005; on downward causation without top-down causes, see Craver and Bechtel 2007, Bishop 2012). The very concept has become the subject of interdisciplinary interest in the sciences (Ellis, Noble and O'Connor 2012).

Another general argument for the autonomy of the macrolevel in the form of non-reductive materialism has been a cognitive type of functionalism, namely, cognitive pragmatism (Van Gulick 1992). This account links ontology to epistemology. It discusses four pragmatic dimensions of representations: the nature of the causal interaction between theory-user and the theory, the nature of the goals to whose realization the theory can contribute, the role of indexical elements in fixing representational content, and differences in the individuating principles applied by the theory to its types (Wimsatt and Spector's arguments above are of this kind). A more ontologically substantive account of functional reduction is Ramsey's bottom-up construction by reduction: transformation reductions streamline formulations of theories in such a way that they extend basic theories upwards by engineering their application to specific contexts or phenomena. As a consequence, they reveal, by construction, new relations and systems that are antecedently absent from a scientist's understanding of the theory—independently of a top or reduced theory (Ramsey 1995). A weaker framework of ontological unification is categorial unity, wherein abstract categories such as causality, information, etc., are attached to the interpretation of the specific variables and properties in models of phenomena (see Cat 2001 and 2006).

5. Disunity

A more radical departure is the recent criticism of the methodological values of reductionism and unification in science, and of their position in culture and society. From the descriptive standpoint, many views under the rubric of disunity are versions of positions mentioned above. The difference is mainly normative and a matter of emphasis, perspective, and stance. This view argues for the replacement of the emphasis on global unity—including unity of method—by an emphasis on disunity and epistemological and ontological pluralism.

5.1 The Stanford School

A picture of disunity comes from the members of the so-called Stanford School, e.g., John Dupré, Ian Hacking, Peter Galison, Patrick Suppes and Nancy Cartwright. Disunity is, in general terms, a rejection of universalism and uniformity, both methodological and metaphysical. While the view can be construed in terms of specific anti-reductionist claims and positions, its proponents share an emphasis on the rejection of restrictive accounts of unity. In this sense, the rubric of disunity has acquired a visibility parallel to the one once acquired by unity, as an inspiring philosophical rallying cry.

From a methodological point of view, disunity is simply the global negative expression of a model of local unity such as the trading zone, by contrast with globalist, formal models (Galison 1998), with an emphasis on a plurality of scientific methods (Suppes 1978) and causal indeterminism, a plurality of scientific styles with the function of establishing spaces of epistemic possibility, and a disunity of science in terms of a plurality of unities (Hacking 1996; Hacking follows the historian A.C. Crombie).

From a metaphysical point of view, the disunity of science can be given adequate metaphysical foundations that make pluralism compatible with realism (Dupré 1993). Dupré opposes a mechanistic paradigm of unity characterized by determinism, reductionism and essentialism. The paradigm spreads the values and methods of physics to other sciences, a spread that he thinks is scientifically and socially deleterious. Disunity is characterized by three pluralistic theses: against essentialism, there is always a plurality of classifications of reality into kinds; against reductionism, systems at different levels of description have equal reality and causal efficacy, that is, the microlevel is not causally complete, leaving room for downward causation; and against epistemological monism, there is no single methodology that supports a single criterion of scientificity, nor a universal domain of its applicability, only a plurality of epistemic and non-epistemic virtues. The unitary concept of science should be understood, following the later Wittgenstein, as a family-resemblance concept (for a criticism of Dupré's ideas, see Mitchell 2003 and Sklar 2003).

Against the universalism of explanatory laws, Cartwright has argued that laws cannot be both universal and true; there exist only patchworks of laws and local cooperation. Like Dupré, Cartwright adopts a kind of scientific realism but denies that there is a universal order, whether represented by a theory of everything or by a corresponding a priori metaphysical principle (Cartwright 1983). The empirical evidence, she argues, along the same lines as Wimsatt, suggests far more strongly the idea of a dappled world, best represented by a patchwork of laws, often in local cooperation (e.g., local identifications, causal interactions, joint actions and piecemeal corrections and correlations). Theories apply only where and to the extent that their interpretive models fit the phenomena studied (Cartwright 1999), not with the universal factual scope they allegedly possess; they hold only under special, ceteris paribus conditions. Cartwright's pluralism is opposed not just to vertical reductionism but also to horizontal imperialism, or universalism and globalism. She explains theories' more or less general domain of application in terms of causal capacities and arrangements she calls nomological machines (Cartwright 1989; Cartwright 1999). The regularities they bring about depend on a shielded environment. As a matter of empiricism, this is the reason that it is in the controlled environment of laboratories and experiments, where causal interference is shielded off, that factual regularities are manifested. The controlled, stable, regular world is an engineered world. Representation rests on intervention (comp. Hacking 1983). On these grounds, as a matter of holism, she rejects strong distinctions between the natural and social sciences and, like Neurath, between the natural and the social world. Whether as a hypothesis or as an ideal, the debates continue over the form, scope and significance of unification in the sciences.
Cartwright's theses and arguments are not without their critics (Winsberg et al. 2000; Hoefer 2003; Sklar 2003; Hohwy 2003; Teller 2004; McArthur 2006).

Disunity and the autonomy of levels have been associated, conversely, with antirealism, in the sense of instrumentalist or empiricist heuristics. This includes, for Fodor and Rosenberg, higher-level sciences such as biology and sociology (Fodor 1974; Rosenberg 1994; Huneman 2010). By contrast, Dupré's and Cartwright's attacks on uniformly global unity and reductionism, above, include an endorsement, in causal terms, of realism.[5] Rohrlich has defended a similar realist position about weaker, conceptual (cognitive) antireductionism, although on the grounds of the mathematical success of derivational explanatory reductions (Rohrlich 2001). Ruphy, however, has argued that antireductionism merely amounts to a methodological prescription and is too weak to yield uncontroversial metaphysical lessons (Ruphy 2005).

5.2 Pluralism

The question of the metaphysical significance of disunity and anti-reductionism takes one straight to the larger issue of the epistemology and metaphysics (and aesthetics, social culture and politics) of pluralism. And here one encounters the directly related issues of conceptual schemes, frameworks and worldviews, incommensurability, relativism, contextualism and perspectivalism (for a general discussion see Lynch 1998; on perspectivalism about scientific models see Giere 1999 and Rueger 2005). In connection with relativism and instrumentalism, pluralism has typically been associated with antirealism about taxonomical practices. But it has been defended from the standpoint of realism (for instance, Dupré 1993 and Chakravartty 2011). Pluralism about knowledge of mind-independent facts can be formulated in terms of different ways of distributing properties (sociability-based pluralism), with more specific commitments about the ontological status of the related elements and their plural contextual manifestations of powers or dispositions (Chakravartty 2011, Cartwright 2007).

Pluralism applies widely to concepts, explanations, virtues, goals, methods, models, kinds of representations (see above for graphic pluralism), etc. In this sense, pluralism has been defended as a general framework that rejects the ideal of consensus in cognitive, evaluative and practical matters, whether against pure skepticism (nothing goes) or indifferentism (anything goes), with a defense of preferential and contextual rationality that notes the role of contextual rational commitments, or by analogy with political forms of engagement (Rescher 1993, van Bouwel 2009, Cat 2012).

Consider at least three distinctions—they are formulated about concepts, facts, and descriptions, and they apply also to values, virtues, methods, etc:

5.3 Metapluralism

The preference for one kind of pluralism over another is typically motivated by epistemic virtues or constraints. Meta-pluralism, pluralism about pluralism, is obviously conceivable in similar terms, as it can be found in the formulation of the so-called pluralist stance (Kellert, Longino and Waters 2006). The pluralist stance replaces metaphysical principles with scientific, or empirical, methodological rules and aims that have been “tested”. Like Dupré's and Cartwright's metaphysical positions, its metascientific position must be empirically tested. Metascientific conclusions and assumptions cannot be considered universal or necessary, but local and contingent, relative to scientific interests and purposes. Thus, on this view, complexity does not always require interdisciplinarity (Kellert 2008); and in some situations the pluralist stance will defend reductions or specialization over interdisciplinary integration (Kellert, Longino and Waters 2006, Cat 2010 and 2012, Rescher 1993).

6. Conclusion: Why unity? And what difference does it really make?

Views on matters of unity and unification make more than a cardinality difference, and they do so in both science and philosophy. In science they provide strong heuristic or methodological guidance and even justification for hypotheses, projects, and specific goals. In this sense, different rallying cries and idioms such as simplicity, unity, disunity, emergence or interdisciplinarity have been endowed with a normative value. They also provide legitimacy, even if rhetorically, in social contexts with sources of funding and profit. They set the standard for what carries the authority and legitimacy of being scientific. They make a difference, as a result, through scientific application and extension, often merely rhetorical, to other domains such as healthcare and economic policy. For instance, complexity of causal structures challenges traditional deterministic or simple causal strategies of policy decision-making with known risks and unknown effects of known properties (Mitchell 2009). Last but not least is the influence that implicit assumptions about unification can and do have on science education (Klein 1990).

Philosophically, assumptions about unification help choose what sort of philosophical questions to pursue and what target areas to explore. For instance, fundamentalist assumptions typically lead one to address epistemological and metaphysical issues in terms of only the results and interpretations of the fundamental levels of disciplines. Assumptions of this sort help define what counts as scientific and shape scientistic or naturalized philosophical projects. In this sense, they determine, or at least strongly suggest, what relevant science carries authority in matters philosophical.

At the end of the day one should not lose sight of the larger context that sustains problems and projects in most disciplines and practices. We are as free to pursue them as Kant's dove is to fly, that is, not without the surrounding air resistance to flap its wings upon and against. Philosophy was once thought to stand for the systematic unity of the sciences. The foundational character of unity became the distinctive project of philosophy, in which conceptual unity played the role of the standard of intelligibility. In addition, the ideal of unity, frequently under the guise of harmony, has long been a standard of aesthetic virtue (this image has been eloquently challenged by, for instance, John Bayley and Iris Murdoch; Bayley 1976; Murdoch 1992). Unities and unifications help us meet cognitive and practical demands upon our life as well as cultural demands upon our self-images that are both cosmic and earthly. It is not surprising that talk of the many meanings of unity, namely, fundamental level, unification, system, organization, universality, simplicity, atomism, reduction, harmony, complexity or totality, can exert an urgent grip on our intellectual imagination.

Bibliography

Academic Tools

How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources

[Please contact the author with suggestions.]

Related Entries

adaptationism | Aristotle | atomism: 17th to 20th century | Bacon, Roger | biocomplexity | Carnap, Rudolf | chaos | Comte, Auguste | Condorcet, Marie-Jean-Antoine-Nicolas de Caritat, Marquis de | Democritus | Descartes, René | determinism: causal | Diderot, Denis | Dilthey, Wilhelm | economics, philosophy of | Einstein, Albert: philosophy of science | emergent properties | Empedocles | empiricism: logical | Feyerabend, Paul | Frege, Gottlob | Galileo Galilei | genetics: and genomics | Hempel, Carl | Heraclitus | Hilbert, David | Hume, David | Kant, Immanuel | Leibniz, Gottfried Wilhelm | logical positivism | Mach, Ernst | many, problem of | mereology | Mill, John Stuart | monism | multiple realizability | Neurath, Otto | Newton, Isaac | Parmenides | physicalism | physics: intertheory relations in | Plato | Pythagoras | quantum mechanics | quantum theory: quantum field theory | Ramus, Petrus | reduction, scientific: in biology | Rickert, Heinrich | supervenience | Weber, Max | Whewell, William | Wittgenstein, Ludwig