We employ data envelopment analysis on a series of experiments performed at Fermilab, one of the major high-energy physics laboratories in the world, in order to test their efficiency (as measured by publication and citation rates) in terms of variations of team size, number of teams per experiment, and completion time. We present the results and analyze them, focusing in particular on inherent connections between quantitative team composition and diversity, and discuss them in relation to other factors contributing to scientific production in a wider sense. Our results concur with those of other studies across the sciences showing that smaller research teams are more productive, and with the conjecture of a curvilinear dependence between team size and efficiency.
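For readers unfamiliar with the method, the following is a minimal, hedged sketch of the kind of input-oriented efficiency scoring that data envelopment analysis performs; the experiments, input and output figures, and variable names are invented for illustration and are not the Fermilab data analyzed in the paper.

```python
# Hedged sketch of input-oriented CCR data envelopment analysis (DEA).
# Inputs (e.g., team size, duration in years) and outputs (e.g., publications,
# citations) are invented placeholder numbers, not data from the study.
import numpy as np
from scipy.optimize import linprog

# rows = experiments (decision-making units); columns = inputs / outputs
inputs = np.array([[12, 4], [45, 6], [90, 8], [30, 5]], dtype=float)
outputs = np.array([[20, 350], [35, 420], [40, 500], [28, 400]], dtype=float)
n, m = inputs.shape          # number of units, number of inputs
s = outputs.shape[1]         # number of outputs

def ccr_efficiency(o):
    """Efficiency of unit o: minimize theta such that some convex combination
    of peers uses at most theta * inputs of o and produces at least its outputs."""
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.concatenate(([1.0], np.zeros(n)))
    # input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack((-inputs[o].reshape(-1, 1), inputs.T))
    # output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack((np.zeros((s, 1)), -outputs.T))
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -outputs[o]))
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"experiment {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Each experiment receives a score between 0 and 1; a score of 1 means no linear combination of the other experiments produces at least the same output with proportionally smaller inputs.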
Niels Bohr was a central figure in quantum physics, well known for his work on atomic structure and his contributions to the Copenhagen interpretation of quantum mechanics. In this book, philosopher of science Slobodan Perović explores the way Bohr practiced and understood physics, and analyzes its implications for our understanding of modern science. Perović develops a novel approach to Bohr’s understanding of physics and his method of inquiry, presenting an exploratory symbiosis of historical and philosophical analysis that uncovers the key aspects of Bohr’s philosophical vision of physics within a given historical context. To better understand the methods that produced Bohr’s breakthrough results in quantum phenomena, Perović clarifies the nature of Bohr’s engagement with the experimental side of physics and lays out the basic distinctions and concepts that characterize his approach. Rich and insightful, Perović’s take on the early history of quantum mechanics and its methodological ramifications sheds vital new light on one of the key figures of modern physics.
The organization of cutting-edge HEP laboratories has evolved at the intersection of academia, state agencies, and industry. As exponentially ever-larger and more complex knowledge-intensive operations, the laboratories have often faced the challenges of, and required organizational solutions similar to, those identified by a cluster of diverse theories falling under the larger heading of organization theory. The cluster has either shaped or accounted for the organization of industry and state administration. The theories also apply to HEP laboratories, as they have gradually and uniquely hybridized their principles and solutions. Yet scholarship has virtually ignored this linkage and has almost exclusively focused on the laboratories’ presumably unique egalitarian organizational aspects. Guided by the principles developed in the organization theory cluster, we identify the basic organizational features of HEP laboratories in relation to their pursuit of narrow and broad epistemic goals. We also provide a set of criteria and methods for assessing the efficiency of the identified organizational features in achieving such goals.
We argue that inductive analysis and operational assessment of the scientific process can be justifiably and fruitfully brought together, whereby the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure the research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability.
The success of particle detection in high energy physics colliders critically depends on the criteria for selecting a small number of interactions from an overwhelming number that occur in the detector. It also depends on the selection of the exact data to be analyzed and the techniques of analysis. The introduction of automation into the detection process has traded the direct involvement of the physicist at each stage of selection and analysis for the efficient handling of vast amounts of data. This tradeoff, in combination with the organizational changes in laboratories of increasing size and complexity, has resulted in automated and semi-automated systems of detection. Various aspects of the semi-automated regime were greatly diminished in more generic automated systems, but turned out to be essential to a number of surprising discoveries of anomalous processes that led to theoretical breakthroughs, notably the establishment of the Standard Model of particle physics. The automated systems are much more efficient at confirming specific hypotheses in narrow energy domains than at performing broad exploratory searches. Thus, in the main, detection processes relying excessively on automation are more likely to miss potential anomalies and impede potential theoretical advances. I suggest that putting substantially more effort into the study of electron–positron colliders and increasing their funding could minimize the likelihood of missing potential anomalies, because detection in such an environment can be handled by the semi-automated regime—unlike detection in hadron colliders. Despite the virtually unavoidable excessive reliance on automated detection in hadron colliders, their development has been deemed a priority because they can operate at the currently highest energy levels. I suggest, however, that a focus on collisions at the highest achievable energy levels diverts funds from searches for potential anomalies overlooked due to tradeoffs at the previous energy thresholds. I also note that even in the same collision environment, different research strategies will opt for different tradeoffs and thus achieve different experimental outcomes. Finally, I briefly discuss current searches for anomalous processes in the context of the previous analysis.
A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger's 1926 proof, a myth. Schrödinger supposedly failed to prove isomorphism, or even a weaker equivalence (“Schrödinger-equivalence”), of the mathematical structures of the two theories; developments in the early 1930s, especially the work of the mathematician von Neumann, provided sound proof of mathematical equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. In response, I argue that Schrödinger's proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism or a weaker mathematical equivalence. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and the energy-states of Bohr's Model that was discovered and published by Schrödinger in his first and second communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics are satisfied by assigning the auxiliary role to eigenfunctions in the derivation of matrices (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of both isomorphism and Schrödinger-equivalence of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr's atom. And although the mathematical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr's complementarity and the Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and the Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.
E. Schrödinger's ideas on interpreting quantum mechanics have recently been re-examined by historians and revived by philosophers of quantum mechanics. These recent re-evaluations have focused on Schrödinger's retention of space–time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not dogmatic pressure, which led him to give up pursuing his interpretation for seven years. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of the complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe–Geiger and Compton–Simon, may have been an additional motivation for the revival. Subsequent measurements proved the original experimental results accurate, and I argue that Schrödinger may have perceived even the reformulated wave-mechanical approach as too tenuous in light of Bohr's critique.
Niels Bohr’s complementarity principle is a tenuous synthesis of seemingly discrepant theoretical approaches based on a comprehensive analysis of relevant experimental results. Yet the role of complementarity, and the experimentalist-minded approach behind it, were not confined to a provisional best-available synthesis of well-established experimental results alone. They were also pivotal in discovering and explaining the phenomenon of quantum tunneling in its various forms. The core principles of Bohr’s method and the ensuing complementarity account of quantum phenomena remain highly relevant guidelines in the current controversial debate and in experimental work on quantum tunneling times.
We historically trace various non-conventional explanations for the origin of the cosmic microwave background and discuss their merit, while analyzing the dynamics of their rejection, as well as the relevant physical and methodological reasons for it. It turns out that there have been many such unorthodox interpretations; not only those developed in the context of theories rejecting the relativistic paradigm entirely, but also those coming from the camp of original thinkers firmly entrenched in the relativistic milieu. In fact, the orthodox interpretation has only incrementally won out against the alternatives over the course of the three decades of its multi-stage development. While, on the whole, none of the alternatives to the hot Big Bang scenario is persuasive today, we discuss the epistemic ramifications of establishing orthodoxy and eliminating alternatives in science, an issue recently discussed by philosophers and historians of science for other areas of physics. Finally, we single out some plausible and possibly fruitful ideas offered by the alternatives.
H. Collins has challenged the empiricist understanding of experimentation by identifying what he thinks constitutes the experimenter’s regress: an instrument is deemed good because it produces good results, and vice versa. The calibration of an instrument cannot alone validate the results: the regressive circling is broken by an agreement essentially external to experimental procedures. In response, A. Franklin has argued that calibration is a key reasonable strategy physicists use to validate production of results independently of their interpretation. The physicists’ arguments about the merits of calibration are not coextensive with the interpretation of results, and thus an objective validation of results is possible. I argue, however, that the in-situ calibrating and measurement procedures and parameters at the Large Hadron Collider are closely and systematically interrelated. This requires empiricists to question their insistence on the independence of calibration from the outcomes of the experiment and to rethink their position. Yet this does not leave the case of in-situ calibration open to the experimenter’s regress argument; that argument is predicated on too crude a view of the relationship between calibration and measurement, one that fails to capture crucial subtleties of the case.
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the “classical” nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account, as well as Bohr's response, both of which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.
Advancing the reductionist conviction that biology must be in agreement with the assumptions of reductive physicalism (the upward hierarchy of causal powers, the upward fixing of facts concerning biological levels), A. Rosenberg argues that downward causation is ontologically incoherent and that it comes into play only when we are ignorant of the details of biological phenomena. Moreover, in his view, a careful look at the relevant details of biological explanations will reveal the basic molecular level that characterizes biological systems, defined by wholly physical properties, e.g., geometrical structures of molecular aggregates (cells). In response, we argue that, contrary to his expectations, one cannot infer reductionist assumptions even from detailed biological explanations that invoke the molecular level, as interlevel causal reciprocity is essential to these explanations. Recent very detailed explanations that concern the structure and function of chromatin—the intricacies of the supposedly basic molecular level—demonstrate this. They show that what seem to be basic physical parameters extend into a more general biological context, thus rendering elusive the concepts of the basic level and causal hierarchy postulated by the reductionists. In fact, the relevant phenomena are defined across levels by entangled, extended parameters. Nor can the biological context be explained away by basic physical parameters defining the molecular level shaped by evolution as a physical process. Reductionists claim otherwise only because they overlook the evolutionary significance of initial conditions best defined in terms of extended biological parameters. Perhaps the reductionist assumptions (as well as assumptions that postulate any particular levels as causally fundamental) cannot be inferred from biological explanations because biology aims at manipulating organisms rather than producing explanations that meet the coherence requirements of general ontological models. Or possibly the assumptions of an ontology not based on the concept of causal powers stratified across levels can be inferred from biological explanations. The incoherence of downward causation is inevitable given reductionist assumptions, but an ontological alternative might avoid it. We outline desiderata for the treatment of levels and properties that realize interlevel causation in such an ontology.
The question of when to stop an unsuccessful experiment can be difficult to answer from an individual perspective. To help guide these decisions, we turn to the social epistemology of science and investigate knowledge acquisition within a group. We focus on the expensive and lengthy experiments in high energy physics, which are suitable for citation-based analysis because of the relatively quick and reliable consensus about the importance of results in the field. In particular, we test whether the time spent on a scientific project correlates with the project's output. Our results are based on data from the high energy physics laboratory Fermilab. They indicate that there is an epistemic saturation point in experimenting, after which the likelihood of obtaining major results drops. Over time the number of less significant publications does increase, but highly cited ones no longer get published. Since many projects continue to run after the epistemic saturation point, it becomes clear that decisions about continuing them are not always rational.
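As a minimal illustration of the kind of test described above (with invented per-year counts, not the Fermilab data), the sketch below contrasts how overall publication counts and highly cited publication counts relate to a project's running time, and reads off the last year in which a new highly cited paper appeared.

```python
# Illustrative sketch only: invented per-year counts for one hypothetical
# long-running experiment, not the Fermilab data analyzed in the study.
import numpy as np
from scipy.stats import spearmanr

years = np.arange(1, 13)                                    # years since project start
all_pubs = np.array([1, 2, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14])
highly_cited = np.array([0, 1, 2, 3, 3, 2, 1, 1, 0, 0, 0, 0])

# Less significant output keeps growing with time ...
rho_all, p_all = spearmanr(years, all_pubs)
# ... while highly cited papers stop appearing after a saturation point.
rho_top, p_top = spearmanr(years, highly_cited)
saturation_year = years[np.argmax(np.cumsum(highly_cited))]  # last year with a new top paper

print(f"all publications vs time: rho = {rho_all:.2f} (p = {p_all:.3f})")
print(f"highly cited vs time:     rho = {rho_top:.2f} (p = {p_top:.3f})")
print(f"last year with a new highly cited paper: {saturation_year}")
```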
S. Oyama’s prominent account of the Parity Thesis states that one cannot distinguish in a meaningful way between nature-based (i.e. gene-based) and nurture-based (i.e. environment-based) characteristics in development because the information necessary for the resulting characteristics is contained at both levels. Oyama, as well as P. E. Griffiths and K. Stotz, argue that the Parity Thesis has far-reaching implications for developmental psychology in that both nativist and interactionist developmental accounts of psychological capacities that presuppose a substantial nature/nurture dichotomy are inadequate. We argue that the well-motivated abandonment of the nature/nurture dichotomy, as advocated in converging versions of the Parity Thesis in biology, does not necessarily entail abandoning the distinction between biologically given abilities necessary for the development of higher psychological capacities and the learning process they enable. Thus, contrary to the claims of the aforementioned authors, developmental psychologists need not discard a substantial distinction between innate (biologically given) characteristics and those acquired by learning, even if they accept the Parity Thesis. We suggest a two-stage account of development: the first stage is maturational and involves the interaction of genetic, epigenetic and environmental causes, resulting in the endogenous biological ‘machinery’ (e.g. language acquisition device) responsible for learning in the subsequent stage of the developmental process by determining the organism’s responses to the environment. This account retains the crux of nativism (the endogenous biological structure determines the way the organism learns/responds to an environment) whilst adopting the developmentalist view of biology by characterizing environments as distinctly different in terms of structure and function in the two developmental stages.
The Modern Synthesis of Darwinism and genetics regards non-genetic factors as merely constraints on the genetic variations that result in the characteristics of organisms. Even though the environment (including social interactions and culture) is as necessary as genes in terms of selection and inheritance, it does not contain the information that controls the development of the traits. S. Oyama’s account of the Parity Thesis, however, states that one cannot conceivably distinguish in a meaningful way between nature-based (i.e., gene-based) and nurture-based (i.e., environment-based) characteristics in development because the information necessary for the resulting characteristics is contained at both levels. Oyama and others argue that the Parity Thesis has far-reaching implications for developmental psychology, in that both nativist and interactionist developmental accounts of motor, cognitive, affective, social, and linguistic capacities that presuppose a substantial nature/nurture dichotomy are inadequate. After considering these arguments, we conclude that either Oyama’s version of the Parity Thesis does not differ from the version advocated by liberal interactionists, or it renders precarious any analysis involving abilities present at birth (despite her claim to the contrary). More importantly, developmental psychologists need not discard the distinction between innate characteristics present at birth and those acquired by learning, even if they abandon genocentrism. Furthermore, we suggest a way nativists can disentangle the concept of maturation from a genocentric view of biological nature. More specifically, we suggest they can invoke the maturational segment of the developmental process (which involves genetic, epigenetic and environmental causes) that results in the biological “machinery” (e.g. language acquisition device) which is necessary for learning as a subsequent segment of the developmental process.
A long-standing debate on the causality of levels in biological explanations has divided philosophers into two camps. The reductionist camp insists on the causal primacy of lower, molecular levels, while the critics point out the inescapable shifting, reciprocity, and circularity of levels across biological explanations. We argue, however, that many explanations in biology do not exclusively draw their explanatory power from detailed insights into inter-level interactions; they predominantly require identifying the adequate levels of biological complexity to be explained. Moreover, the main explanatory strategies grounding both theoretical and experimental approaches to one of the central debates in contemporary biology, i.e., on the origin of life, are primarily and sometimes exclusively driven by issues concerning the levels of biochemical complexity, and these only subsequently frame more substantial and detailed accounts of inter-level biochemical interactions.
Ian Hacking has argued that the notions of experiment and observation are distinct, not even the opposite ends of a continuum. More recently, other authors have emphasised their continuity, saying...
Michael Strevens develops the kairetic account of causal explanation as a brand of explanatory reductionism. He argues that explanations in higher-level sciences are complete only because they can potentially be deepened—that is, supplemented with kernels of causal processes all the way down to the level of micro-physical relations. Thus, they are, in essence, the result of abstraction from deeper causal explanatory levels. I argue that Strevens’s discussion of the notion of depth in science is limited to a very narrow domain, the boundaries of which are determined by a simplistic amalgam of science-textbook and everyday cases analyzed by means of rational metaphysics. In contrast to his view, the history of scientific practice shows that scientific explanations are typically bounded within a level and do not draw their viability from their potential for lower-level explanatory deepening. Moreover, such deepening of higher-level explanations produces changes and refinements much more complex than Strevens’s account assumes.
A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger’s 1926 proof, a myth. Schrödinger supposedly failed to achieve the goal of proving isomorphism of the mathematical structures of the two theories, while only later developments in the early 1930s, especially the work of the mathematician John von Neumann (1932), provided sound proof of equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. If such analysis is correct, it provides considerable evidence that, in its critical moments, the foundations of scientific practice might not live up to the minimal standards of rigor, as such standards are established in the practice of logic, mathematics, and mathematical physics, thereby prompting one to question the rationality of the practice of physics. In response, I argue that Schrödinger’s proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and the energy-states of Bohr’s Model that was discovered and published by Schrödinger in his First and Second Communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics could be derived successfully from eigenfunctions as well (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of isomorphism of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr’s atom. And although the full-fledged mathematico-logical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr’s complementarity and the Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and the Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.
Modern physics encompasses theoretical and experimental research divided into subfields with specific features. For instance, high energy physics (HEP) attracts significant funding and has distinct organizational structures, i.e., large laboratories and cross-institutional collaborations. Expensive equipment and large experiments create a specific work atmosphere and specific human relations. Gender imbalance is characteristic of STEM fields, and early-career researchers are inherently dependent on their supervisors. This raises the question of how satisfied researchers are with working in physics and how different subgroups – female and early-career researchers – perceive their work environment. We empirically investigated job satisfaction and satisfaction with the academic system among physicists (N=123) working in large laboratories, universities, and independent institutes. The scale for measuring satisfaction with the academic system in physics yielded three factors: experience of research autonomy, opportunities to use one's knowledge, and appreciation of the research by the general public. The results show that physicists are less satisfied with the academic system than with their work environment. Moreover, female scientists and junior researchers experience their jobs more negatively. The results emphasize the need to improve work and research conditions for underprivileged groups in physics. Interestingly, no significant effect of the type of academic institution on general job satisfaction was found. Finally, physicists felt that their work is not well understood by the public.
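A hedged sketch of the kind of factor extraction mentioned above, run on simulated Likert-scale responses rather than the actual survey data; the respondent count, item count, and any structure in the output are placeholders for illustration only.

```python
# Hedged sketch: exploratory factor analysis with three factors on simulated
# Likert-scale responses; the data are random placeholders, not the survey data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 123, 9
# simulated 1-5 Likert responses to nine hypothetical scale items
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(responses)
loadings = fa.components_.T          # rows = items, columns = extracted factors
print(np.round(loadings, 2))
```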
The current debate on the theory-ladenness of observation in modern physics, focused on the discovery of weak currents, might be too narrow, as it concerns only the last stage of a complex experimental process and the statistical methods required to analyze data. The scope of the debate should be extended to include broader experimental conditions that concern the design of the apparatus and the different levels of the detection process. These neglected conditions often decisively delimit experiments long before the last stage has been reached, thus predetermining the extent to which data production depends on theory. I explain the nature of these conditions and the theory-ladenness tendencies they produce, noting how they affect the last stage of data analysis and providing some relevant examples.
Symmetry-based explanations using symmetry breaking (SB) as the key explanatory tool have complemented and replaced traditional causal explanations in various domains of physics. The process of spontaneous SB is now a mainstay of contemporary explanatory accounts of large chunks of condensed-matter physics, quantum field theory, nonlinear dynamics, cosmology, and other disciplines. A wide range of empirical research into various phenomena related to symmetries and SB across biological scales has accumulated as well. Led by these results, we identify and explain some common features of the emergence, propagation, and cascading of SB-induced layers across the biosphere. These features are predicated on the thermodynamic openness and intrinsic functional incompleteness of the systems at stake and have not been systematically analyzed from a general philosophical and methodological perspective. We also consider the possible continuity of SB across the physical and biological world and discuss the connection between Darwinism and SB-based analysis of the biosphere and its history.
Jaegwon Kim’s exclusion argument is a general ontological argument, applicable to any properties deemed supervenient on a microproperty basis, including biological properties. It implies that the causal power of any higher-level property must be reducible to the subset of the causal powers of its lower-level properties. Moreover, as Kim’s recent version of the argument indicates, a higher-level property can be causally efficient only to the extent of the efficiency of its micro-basis. In response, I argue that the ontology that aims to capture experimentally based explanations of metabolic control systems and morphogenetic systems must involve causally relevant contextual properties. Such an ontology challenges the exclusiveness of micro-based causal efficiency that grounds Kim’s reductionism, since configurations themselves are inherently causally efficient constituents. I anticipate and respond to the reductionist’s objection that the nonreductionist ontology’s account of causes and inter-level causal relations is incoherent. I also argue that such an ontology is not open to Kim’s overdetermination objection.
Weinert defends a distinctively anti-Kuhnian position on scientific revolutions, predicating his argument on a nuanced and clear case analysis. He also builds on his previous work on eliminative induction, which he sees as the central scientific method in the rise of revolutionary theories. The treatment of the social sciences as revolutionary offers the key elements of a promising, ambitious project. His botched attempt to portray the Darwinian view of mind as a brand of emergentism is the only weak point of this insightful book.
Identifying optimal ways of organizing exploration in particle physics mega-labs is a challenging task that requires a combination of case-based and formal epistemic approaches. Data-driven studies suggest that projects pursued by smaller master-teams are substantially more efficient than larger ones across sciences, including experimental particle physics. Smaller teams also seem to make better project choices than larger, centralized teams. Yet the epistemic requirement of small, decentralized, and diverse teams contradicts the often emphasized and allegedly inescapable logic of discovery that forces physicists pursuing the fundamental levels of the physical world to perform centralized experiments in mega-labs at high energies. We explain, however, that this epistemic requirement could be met, since the nature of theoretical and physical constraints in high energy physics and the technological obstacles stemming from them turn out to be surprisingly open-ended.
I discuss two uses of the concept of the morphogenetic field, a tool of nineteenth-century biology motivated by particular ontological views of the time, which has been re-emerging and is increasingly relevant in explaining microbiological phenomena. I also consider the relation of these uses to the Central Dogma of modern biology as well as to the Modern Synthesis of Darwinism and genetics. An induced morphogenetic field is determined by a physical field, or it acquires a physical field's characteristics. Such a morphogenetic field presents only a weak challenge to the Central Dogma of the Modern Synthesis by indirectly, albeit severely, constraining variability at the molecular level. I discuss explanations that introduce structural inheritance in ciliate protozoa, as well as the experimental evidence on which these arguments are based. The global cellular morphogenetic field is a unit of such inheritance. I discuss relevant cases of structural inheritance in ciliates that bring about internal cellular as well as functional changes, and point out that DNA is absent in the cortex and that RNA controls neither the intermediary nor the global level of the field. I go on to argue that utilizing knowledge of known physical fields may advance explanations and understanding of the morphogenetic field in ciliates as the unit of both development and inheritance.