Causal selection is the task of picking out, from a field of known causally relevant factors, some factors as elements of an explanation. The Causal Parity Thesis in the philosophy of biology challenges the usual ways of making such selections among different causes operating in a developing organism. The main target of this thesis is usually gene centrism, the doctrine that genes play some special role in ontogeny, which is often described in terms of information-bearing or programming. This paper is concerned with the attempt to confront the challenge coming from the Causal Parity Thesis by offering principles of causal selection that are spelled out in terms of an explicit philosophical account of causation, namely an interventionist account. I show that two such accounts that have been developed, although they contain important insights about causation in biology, nonetheless fail to provide an adequate reply to the Causal Parity challenge: Ken Waters's account of actual-difference making and Jim Woodward's account of causal specificity. A combination of the two doesn't do the trick either, nor does Laura Franklin-Hall's account of explanation (in this volume). We need additional conceptual resources. I argue that the resources we need consist in a special class of counterfactual conditionals, namely counterfactuals whose antecedents describe biologically normal interventions.
Causation has multiple distinct meanings in genetics. One reason for this is meaning slippage between two concepts of the gene: Mendelian and molecular. Another reason is that a variety of genetic methods address different kinds of causal relationships. Some genetic studies address causes of traits in individuals, which can only be assessed when single genes follow predictable inheritance patterns that reliably cause a trait. A second sense concerns the causes of trait differences within a population. Whereas some single genes can be said to cause population-level differences, most often these claims concern the effects of many genes. Polygenic traits can be understood using heritability estimates, which quantify the relative contributions of genetic and environmental differences to trait differences within a population. Attempts to understand the molecular mechanisms underlying polygenic traits have been developed, although causal inference based on these results remains controversial. Genetic variation has also recently been leveraged as a randomizing factor to identify environmental causes of trait differences. This technique, Mendelian randomization, offers some solutions to traditional epidemiological challenges, although it is limited to the study of environments with known genetic influences.
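The heritability estimates mentioned in this abstract can be illustrated with a minimal sketch: broad-sense heritability is the share of phenotypic variance attributable to genetic variance. The variance figures below are invented for illustration, and the toy function assumes away gene-environment covariance and interaction, which a real estimate must handle.

```python
# Sketch: heritability as a variance ratio, H^2 = V_G / V_P.
# The variance components below are hypothetical illustration values.
def broad_sense_heritability(v_genetic, v_environment):
    """H^2 = V_G / (V_G + V_E), assuming no gene-environment covariance."""
    v_phenotypic = v_genetic + v_environment
    return v_genetic / v_phenotypic

h2 = broad_sense_heritability(v_genetic=0.6, v_environment=0.4)
print(h2)  # 0.6: 60% of phenotypic variance is attributable to genetic variance
```

Note that such an estimate is a population-level statistic about variance in differences, not a statement about how much "genes cause" any individual's trait, which is precisely the distinction the abstract draws.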
In The Biopsychosocial Model of Health and Disease, Derek Bolton and Grant Gillett argue that a defensible updated version of the biopsychosocial model requires a metaphysically adequate account of disease causation that can accommodate biological, psychological, and social factors. The present paper offers a philosophical critique of their account of biopsychosocial causation. I argue that their account relies on claims about the normativity and the semantic content of biological information that are metaphysically contentious. Moreover, I suggest that these claims are unnecessary for a defence of biopsychosocial causation, as the roles of multiple and diverse factors in disease causation can be readily accommodated by a more widely accepted and less metaphysically contentious account of causation. I then raise the more general concern that they are misdiagnosing the problem with the traditional version of the biopsychosocial model. The challenge when developing an explanatorily valuable version of the biopsychosocial model, I argue, is not so much providing an adequate account of biopsychosocial causation, but providing an adequate account of causal selection. Finally, I consider how this problem may be solved to arrive at a more explanatorily valuable and clinically useful version of the biopsychosocial model.
In recent years the philosophy of information has emerged as an important area of research in philosophy. However, until now information’s philosophical history has been largely overlooked. Information and the History of Philosophy is the first comprehensive investigation of the history of philosophical questions around information, including work from before the Common Era to the twenty-first century. It covers scientific and technology-centred notions of information; views of human information processing; and socio-political topics such as the control and use of information in societies. Organised into five parts, nineteen chapters by an international team of contributors cover the following topics and more:
- Information before 500 CE, including ancient Chinese, Greek and Roman approaches to information
- Early theories of information processing, sources of information and cognition
- Information and computation in Leibniz, visualised scientific information, copyright and social reform
- The nineteenth century, including biological information, knowledge economies, and information’s role in empire and eugenics
- Recent and contemporary philosophy of information, including racialized information, Shannon information, and the very idea of an information revolution
Information and the History of Philosophy is a landmark publication in this emerging field. As such, it is essential reading for students and researchers in the history of philosophy, philosophy of science and technology, and library and information studies. It is also a valuable resource for those working in subjects such as the history of science, media and communication studies, and intellectual history.
An atom is characterized mathematically as an evolving superposition of possible values of properties and experimentally as an instantaneous phenomenon with a precise value of a measured property. Likewise, an organism is to itself a flux of experience and to an observer a tangible body in a distinct moment. Whereas the implicit atom is the stream of computation represented by the smoothly propagating wave function, the implicit organism is both the species from which the body individuates and the personal mind its behavior explicates. As with the wave computation that underlies the atom, the substance of the implicit organism is not matter but information. And like projection from a superposition of potential values to a single outcome in a precise and fleeting moment, the organism actualizes only one of the many possible behaviors calculated in the ongoing presence we know as consciousness.
The teleosemantic theory of representational content is held by some philosophers to imply that genes carry semantic information about whole-organism phenotypes. In this paper, I argue that this position is not supported by empirical findings. I focus on one of the most elaborate defenses of this position: Shea’s view that genes represent whole-organism phenotypes. I distinguish between two ways of individuating genes in contemporary biological science as possible vehicles of representational content—as molecular genes and as difference-maker genes. I show that given either of these ways of individuating genes, genes fail to meet conditions which the teleosemantic theory requires an entity to meet if that entity is to qualify as a representational vehicle that represents a whole-organism phenotype. The considerations I present against Shea’s view generalize to other attempts to use the teleosemantic theory in support of the claim that genes represent whole-organism phenotypes.
The development of synthetic biology calls for accurate understanding of the critical functions that allow construction and operation of a living cell. Besides coding for ubiquitous structures, minimal genomes encode a wealth of functions that dissipate energy in an unanticipated way. Analysis of these functions shows that they are meant to manage information under conditions when discrimination of substrates in a noisy background is preferred over a simple recognition process. We show here that many of these functions, including transporters and the ribosome construction machinery, behave like a material implementation of the information-managing agent theorized by Maxwell almost 150 years ago and commonly known as Maxwell’s demon (MxD). A core gene set encoding these functions belongs to the minimal genome required to allow the construction of an autonomous cell. These MxDs allow the cell to perform computations in an energy-efficient way that is vastly better than our contemporary computers.
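Energy-efficiency comparisons of this kind are usually made against the Landauer limit, k_B·T·ln 2, the minimum energy required to erase one bit of information at temperature T. The short sketch below computes this standard physical benchmark; it is background physics, not a figure from the paper.

```python
import math

# The thermodynamic benchmark behind Maxwell's-demon comparisons:
# the Landauer limit, k_B * T * ln 2, the minimum energy dissipated
# when one bit of information is erased at temperature T.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI definition)

def landauer_limit(temperature_kelvin):
    return K_B * temperature_kelvin * math.log(2)

print(landauer_limit(300.0))  # ~2.87e-21 J per bit at roughly room temperature
```

Biological sorting and proofreading machinery operates orders of magnitude above this floor, but still far closer to it than present-day silicon logic, which is the sense of the comparison in the abstract.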
Ongoing empirical discoveries in molecular biology have generated novel conceptual challenges and perspectives. Philosophers of biology have reacted to these trends when investigating the practice of molecular biology and contributed to scientific debates on methodological and conceptual matters. This article reviews some major philosophical issues in molecular biology. First, philosophical accounts of mechanistic explanation yield a notion of explanation in the context of molecular biology that does not have to rely on laws of nature and comports well with molecular discovery. Second, reductionism continues to be debated and is increasingly rejected by scientists. Philosophers have likewise moved away from reduction toward integration across fields or integrative explanations covering several levels of organization. Third, although the gene concept has undergone substantial transformation and even fragmentation, it still enjoys widespread use by molecular biologists, which has prompted philosophers to understand the empirical reasons for this. At the same time, it has been argued that the notion of ‘genetic information’ is largely an empty metaphor, which generates the illusion of explanatory understanding without offering an adequate explanation of molecular and developmental mechanisms.
In 1965, Konrad Lorenz grounded the innate–acquired distinction in what he believed were the only two possible sources of information that can underlie adaptedness: phylogenetic and individual experience. Phylogenetic experience accumulates in the genome by the process of natural selection. Individual experience is acquired ontogenetically through interacting with the environment during the organism’s lifetime. According to Lorenz, the adaptive information underlying innate traits is stored in the genome. Lorenz erred in arguing that genetic adaptation is the only means of accumulating information in phylogenetic experience. Cultural adaptation also occurs over a phylogenetic time scale, and cultural tradition is a third source from which adaptive information can be extracted. This paper argues that genetic adaptation can be distinguished from individual and cultural adaptation in a species like Homo sapiens, in which even adaptations with a genetic component require cultural inputs and scaffolding to develop and be expressed. Examination of the way in which innateness is used in science suggests that scientists use the term, as Lorenz suggested, to designate genetic adaptations. The search for innate traits plays an essential role in generating hypotheses in ethology and psychology. In addition, designating a trait as innate establishes important facts that apply at the information-processing level of description.
How can researchers use race, as they do now, to conduct health-care studies when its very definition is in question? The belief that race is a social construct without “biological authenticity”, though widely shared across disciplines in social science, is not subscribed to by traditional science. Yet with an interdisciplinary approach, the two horns of the social construct/genetics dilemma of race are not mutually exclusive. We can use traditional science to provide a rigorous framework and use a social-science approach so that “invisible” factors are used to adjust the design of studies on an as-needed basis. One approach is to first observe health-care outcomes and then categorize the outcomes, thus removing genetic differences as racial proxies from the design of the study. From the outcomes, we can then determine if there is a pattern of conceivable racial categories. If needed, we can apply dynamic notions of race to acknowledge bias without prejudice. We can use them constructively to improve outcomes and reduce racial disparities. Another approach is nearly identical but considers race not at all: While analyzing outcomes, we can determine if there are biological differences significant enough to identify classifications of humans. That is, we look for genetic patterns in the outcomes and classify only those patterns. There is no attempt to link those patterns to race.
Griffiths et al. (2015) have proposed a quantitative measure of causal specificity and used it to assess various attempts to single out genetic causes as being causally more specific than other cellular mechanisms, for example, alternative splicing. Focusing in particular on developmental processes, they have identified a number of important challenges for this project. In this discussion note, I would like to show how these challenges can be met.
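The quantitative measure proposed by Griffiths et al. is based on the mutual information between interventions on a cause variable and the resulting effect variable: a cause is maximally specific when distinct interventions map one-to-one onto distinct effects. A minimal sketch of that idea, with invented toy distributions:

```python
import math

# Sketch: causal specificity as mutual information I(C;E) between a cause
# variable C and an effect variable E, computed from a joint distribution
# over (cause, effect) pairs. The two toy distributions below are invented:
# a one-to-one mapping is maximally specific; a many-to-one mapping is not.
def mutual_information(joint):
    """joint: dict mapping (c, e) -> probability; returns I(C;E) in bits."""
    p_c, p_e = {}, {}
    for (c, e), p in joint.items():
        p_c[c] = p_c.get(c, 0.0) + p
        p_e[e] = p_e.get(e, 0.0) + p
    return sum(p * math.log2(p / (p_c[c] * p_e[e]))
               for (c, e), p in joint.items() if p > 0)

# Two equiprobable causes, each with its own effect: fully specific, 1 bit.
specific = {('c1', 'e1'): 0.5, ('c2', 'e2'): 0.5}
# Both causes produce the same effect: intervening on C tells us nothing, 0 bits.
unspecific = {('c1', 'e1'): 0.5, ('c2', 'e1'): 0.5}
print(mutual_information(specific), mutual_information(unspecific))  # 1.0 0.0
```

On this measure, debates about splicing and development become debates about which variable, under interventions, carries more bits about the outcome, which is what gives the challenges discussed in the note their quantitative bite.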
Identification of non-coding RNAs (ncRNAs) has improved significantly over the past decade. On the other hand, semantic annotation of ncRNA data faces critical challenges due to the lack of a comprehensive ontology to serve as common data elements and data exchange standards in the field. We developed the Non-Coding RNA Ontology (NCRO) to address this situation. By providing a formally defined ncRNA controlled vocabulary, the NCRO aims to fill a specific and highly needed niche in the semantic annotation of large amounts of ncRNA biological and clinical data.
The status of genes as bearers of semantic content remains very much in dispute among philosophers of biology. In a series of papers, Nicholas Shea has argued that his ‘infotel’ theory of semantics vindicates the claim that genes carry semantic content. On Shea’s account, each organism is associated with a ‘developmental system’ that takes genetic representations as inputs and produces whole-organism traits as outputs. Moreover, at least in his most recent work on the topic, Shea is explicit in claiming that these genetic representations are ‘read in ontogenetic time, in the course of individual development’. Here I argue that a close examination of the process of reading, in Shea’s sense, reveals that acts of reading do not actually occur over the course of developmental time at all. To make this vivid, I contrast the process of reading for Shea with another type of developmental process that is widely seen as a form of reading directed on inherited genes, and which certainly does occur over the course of developmental time, namely, gene expression. I suggest that this error in Shea’s thinking can be traced back to an equivocation on Shea’s part in the meaning of ‘reads’, and also to a reliance on an invalid principle regarding the transference of representational content from one token gene to another. The issues at play are bound up with questions about causation and in particular about causation over time. Thus, having first presented my arguments in a way that doesn’t depend on any particular theory of causation, I then make use of Kenneth Waters’ framework of difference-making causation to conceptually sharpen and shed further light on matters. I conclude by discussing a consequence of the fact that acts of reading do not occur in development.
1 Introduction
2 Reading for Shea
3 Gene Expression as Reading
4 Are Genetic Representations Read in Development? Part 1
5 The Manipulability Theory of Causation and Difference-Making Causation
6 Are Genetic Representations Read in Development? Part 2
7 Conclusion
The Protein Ontology provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
Parents influence the development of their offspring in many ways beyond the transmission of DNA. This includes the transfer of epigenetic states, nutrients, antibodies and hormones, and behavioural interactions after birth. While the evolutionary consequences of such non-genetic inheritance are increasingly well understood, less is known about how inheritance mechanisms evolve. Here, we present a simple but versatile model to explore the adaptive evolution of non-genetic inheritance. Our model is based on a switch mechanism that produces alternative phenotypes in response to different inputs, including genes and non-genetic factors transmitted from parents and the environment experienced during development. This framework shows how genetic and non-genetic inheritance mechanisms and environmental conditions can act as cues by carrying correlational information about future selective conditions. Differential use of these cues is manifested as different degrees of genetic, parental or environmental morph determination. We use this framework to evaluate the conditions favouring non-genetic inheritance, as opposed to genetic determination of phenotype or within-generation plasticity, by applying it to two putative examples of adaptive non-genetic inheritance: maternal effects on seed germination in plants and transgenerational phase shift in desert locusts. Our simulation models show how the adaptive value of non-genetic inheritance depends on its mechanism, the pace of environmental change, and life history characteristics.
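The switch mechanism described in this abstract can be caricatured as a weighted threshold over cues. The weights, threshold, and cue values below are hypothetical stand-ins rather than the authors' model, but they show how differential cue use appears as different degrees of genetic, parental or environmental morph determination.

```python
# Minimal sketch (not the authors' model) of a developmental switch that
# weighs three cues: a genetic factor, a non-genetically transmitted
# parental factor, and the environment experienced during development.
# All weights, the threshold, and the cue values are hypothetical.
def phenotype(genetic_cue, parental_cue, env_cue,
              w_g=0.2, w_p=0.6, w_e=0.2, threshold=0.5):
    liability = w_g * genetic_cue + w_p * parental_cue + w_e * env_cue
    return 'morph_A' if liability >= threshold else 'morph_B'

# With the parental weight dominant, offspring track the parental cue even
# when the genetic and environmental cues point the other way:
print(phenotype(genetic_cue=0, parental_cue=1, env_cue=0))  # morph_A
print(phenotype(genetic_cue=1, parental_cue=0, env_cue=1))  # morph_B
```

Setting w_g near 1 would correspond to genetic determination, w_e near 1 to within-generation plasticity, and w_p near 1 to parental (non-genetic) inheritance, which is the comparison the model is built to make.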
The Protein Ontology (PRO; http://proconsortium.org) formally defines protein entities and explicitly represents their major forms and interrelations. Protein entities represented in PRO corresponding to single amino acid chains are categorized by level of specificity into family, gene, sequence and modification metaclasses, and there is a separate metaclass for protein complexes. All metaclasses also have organism-specific derivatives. PRO complements established sequence databases such as UniProtKB, and interoperates with other biomedical and biological ontologies such as the Gene Ontology (GO). PRO relates to UniProtKB in that PRO’s organism-specific classes of proteins encoded by a specific gene correspond to entities documented in UniProtKB entries. PRO relates to the GO in that PRO’s representations of organism-specific protein complexes are subclasses of the organism-agnostic protein complex terms in the GO Cellular Component Ontology. The past few years have seen growth and changes to the PRO, as well as new points of access to the data and new applications of PRO in immunology and proteomics. Here we describe some of these developments.
Recent theoretical work has identified a tightly-constrained sense in which genes carry representational content. Representational properties of the genome are founded in the transmission of DNA over phylogenetic time and its role in natural selection. However, genetic representation is not just relevant to questions of selection and evolution. This paper goes beyond existing treatments and argues for the heterodox view that information generated by a process of selection over phylogenetic time can be read in ontogenetic time, in the course of individual development. Recent results in evolutionary biology, drawn both from modelling work and from experimental and observational data, support a role for genetic representation in explaining individual ontogeny: both genetic representations and environmental information are read by the mechanisms of development, in an individual, so as to lead to adaptive phenotypes. Furthermore, in some cases there appears to have been selection between individuals that rely to different degrees on the two sources of information. Thus, the theory of representation in inheritance systems like the genome is much more than just a coherent reconstruction of information talk in biology. Genetic representation is a property with considerable explanatory utility.
The concept of innateness is used to make inferences between various better-understood properties, like developmental canalization, evolutionary adaptation, heritability, species-typicality, and so on (‘innateness-related properties’). This article uses a recently-developed account of the representational content carried by inheritance systems like the genome to explain why innateness-related properties cluster together, especially in non-human organisms. Although inferences between innateness-related properties are deductively invalid, and lead to false conclusions in many actual cases, where some aspect of a phenotypic trait develops in reliance on a genetic representation it will tend, better than chance, to have many of the innateness-related properties. The account also shows why inferences between innateness-related properties sometimes fail and argues that such inferences are especially misleading when applied to human psychology and behaviour because human psychological development is especially reliant on non-genetic inherited representations.
In 1809--the year of Charles Darwin's birth--Jean-Baptiste Lamarck published Philosophie zoologique, the first comprehensive and systematic theory of biological evolution. The Lamarckian approach emphasizes the generation of developmental variations; Darwinism stresses selection. Lamarck's ideas were eventually eclipsed by Darwinian concepts, especially after the emergence of the Modern Synthesis in the twentieth century. The different approaches--which can be seen as complementary rather than mutually exclusive--have important implications for the kinds of questions biologists ask and for the type of research they conduct. Lamarckism has been evolving--or, in Lamarckian terminology, transforming--since Philosophie zoologique's description of biological processes mediated by "subtle fluids." Essays in this book focus on new developments in biology that make Lamarck's ideas relevant not only to modern empirical and theoretical research but also to problems in the philosophy of biology. Contributors discuss the historical transformations of Lamarckism from the 1820s to the 1940s, and the different understandings of Lamarck and Lamarckism; the Modern Synthesis and its emphasis on Mendelian genetics; theoretical and experimental research on such "Lamarckian" topics as plasticity, soft (epigenetic) inheritance, and individuality; and the importance of a developmental approach to evolution in the philosophy of biology. The book shows the advantages of a "Lamarckian" perspective on evolution. Indeed, the development-oriented approach it presents is becoming central to current evolutionary studies--as can be seen in the burgeoning field of Evo-Devo. Transformations of Lamarckism makes a unique contribution to this research.
This article is arranged around two general claims and a thought experiment. I begin by suggesting that the genome should be studied as a developmental system, and that genes supervene on genomes (rather than the other way around). I move on to present a thought experiment that illustrates the implications a dynamic view of the genome has for central concepts in biology, in particular the information content of the genome, and the notion of responses to stress.
To construct a synthetic cell we need to understand the rules that permit life. A central idea in modern biology is that in addition to the four entities making up reality, matter, energy, space and time, a fifth one, information, plays a central role. As a consequence of this central importance of the management of information, the bacterial cell is organised as a Turing machine, where the machine, with its compartments defining an inside and an outside and its metabolism, reads and expresses the genetic program carried by the genome. This highly abstract organisation is implemented using concrete objects and dynamics, and this comes at the cost of repeated incompatibilities (frustration), which need to be sorted out by appropriate «patches». After describing the organisation of the genome into the paleome (sustaining and propagating life) and the cenome (permitting life in context), we describe some chemical hurdles that the cell has to cope with, ending with the specific case of the methionine salvage pathway.
There is ongoing controversy as to whether the genome is a representing system. Although it is widely recognised that DNA carries information, both correlating with and coding for various outcomes, neither of these implies that the genome has semantic properties like correctness or satisfaction conditions (In the Scope of Logic, Methodology, and the Philosophy of Sciences, Vol. II, Kluwer, Dordrecht, pp. 387–400). Here a modified version of teleosemantics is applied to the genome to show that it does indeed have semantic properties – there is representation in the genome. The account differs in three respects from previous attempts to apply teleosemantics to genes. It emphasises the role of the consumer of representations. It rejects the standard assumption that genetic representation can be used to explain the course of an organism’s development. And it identifies the explanatory role played by representational properties of the genome. A striking consequence of this account is that other inheritance systems could also be representational. Thus, a version of the parity thesis is accepted. However, the criteria for being an inheritance system are demanding, so semantic properties are not ubiquitous.
I want to exhibit the deeper metaphysical reasons why some common ways of describing the causal role of genes in development and evolution are problematic. Specifically, I show why using the concept of information in an intentional sense in genetics is inappropriate, even given a naturalistic account of intentionality. Furthermore, I argue that descriptions that use notions such as programming, directing or orchestrating are problematic not for empirical reasons, but because they are not strictly causal. They are intentional. By contrast, other notions that are part of the received view in genetics and evolutionary theory are defensible if understood correctly, in particular the idea that genes are the main replicators in evolution. The paper concludes that dropping all intentional or intentionally laden concepts does not force us to accept the so-called causal parity thesis, at least not in its stronger form.
The first use of the term "information" to describe the content of nervous impulse occurs 20 years prior to Shannon’s (1948) work, in Edgar Adrian’s The Basis of Sensation (1928). Although, at least throughout the 1920s and early 30s, the term "information" does not appear in Adrian’s scientific writings to describe the content of nervous impulse, the notion that the structure of nervous impulse constitutes a type of message subject to certain constraints plays an important role in all of his writings throughout the period. The appearance of the concept of information in Adrian’s work raises at least two important questions: (i) what were the relevant factors that motivated Adrian’s use of the concept of information? (ii) What concept of information does Adrian appeal to, and how can it be situated in relation to contemporary philosophical accounts of the notion of information in biology? The first question involves an account of the application of communications technology in neurobiology as well as the historical and scientific background of Adrian’s major scientific achievement, which was the recording of the action potential of a single sensory neuron. The response to the second question involves an explication of Adrian’s concept of information and an evaluation of how it may be situated in relation to more contemporary philosophical explications of a semantic concept of information. I suggest that Adrian’s concept of information places limitations on the sorts of systems that are referred to as information carriers by causal and functional accounts of information.
The notion of innateness is widely used, particularly in philosophy of mind, cognitive science and linguistics. Despite this popularity, it remains a controversial idea. This is partly because of the variety of ways in which it can be explicated and partly because it appears to embody the suggestion that we can determine the relative causal contributions of genes and environment in the development of biological individuals. As these causes are not independent, the claim is metaphysically suspect. This paper argues that there is a plausible reconstruction of the notion of innateness. This involves defining it sufficiently broadly to cover most of the current usages as well as making it an informational rather than a causal property. This has two consequences. Firstly, innateness becomes a matter of degree. Secondly, we have to abandon the idea, originally proposed by ethologists, that innate traits are necessarily the products of genetic information.
An algorithmic procedure within a logical system to generate DNA chains through a formal rule up to the generation of a STOP codon signal. Work developed under the direction of the Mexican professor Hugo Padilla Chacón.
The short paper introduces the concept of possible branches of double-stranded DNA (later sometimes called palindromes): Certain sequences of nucleotides may be followed, after a short unpaired stretch, by a complementary sequence in reversed order, such that each DNA strand can fold back on itself, and the DNA assumes a cruciform or tree-like structure. This is postulated to interact with regulatory proteins.
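The fold-back structure described here can be made concrete with a short sketch that tests whether a sequence has the stem/spacer/stem form of an inverted repeat; the example sequence and stem length are hypothetical illustrations, not drawn from the paper.

```python
# Sketch: detect an inverted repeat (a DNA "palindrome"), i.e. a stem
# followed, after a short unpaired spacer, by the reverse complement of
# that stem, allowing each strand to fold back into a hairpin.
COMP = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def reverse_complement(seq):
    return ''.join(COMP[base] for base in reversed(seq))

def is_inverted_repeat(seq, stem, spacer):
    """True if seq has the form stem + spacer + reverse_complement(stem)."""
    return seq == stem + spacer + reverse_complement(stem)

# Hypothetical example: 6-base stem, 3-base unpaired spacer, then the
# reverse complement of the stem (GAATTC happens to be self-complementary).
print(is_inverted_repeat('GAATTC' + 'AAA' + 'GAATTC', 'GAATTC', 'AAA'))  # True
```

In the folded (cruciform) structure, the two stem copies on one strand pair with each other while the spacer forms the unpaired loop.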
Using mild methods of preparation, some of the ribosomes of rabbit reticulocytes are found in aggregates (later called polyribosomes) of up to six ribosomal units. Upon treatment with RNA-ase, they disintegrate into single ribosomes. The fast-sedimenting aggregates are found to be more active in protein synthesis in terms of incorporation of radioactive amino acids, whereas the single ribosomes are more receptive to stimulation by the artificial messenger RNA poly-U. The findings indicate that the linkage of ribosomes into aggregates is due to the messenger RNA. They support a tape-reading mechanism of protein synthesis whereby growth of the peptide chain is accompanied by shifting the active site of the ribosome from one coding group of nucleotides of the messenger RNA to the next.
The generation of viral mutants in vitro was demonstrated by treatment of the isolated RNA of Tobacco Mosaic Virus by nitrous acid. This agent causes deaminations converting cytosine into uracil, and adenine into hypoxanthine. Our assay for mutagenesis was the production of local lesions on a tobacco variety on which the untreated strain produces systemic infections only. A variety of different mutants are generated in this way. Quantitative analysis of the kinetics of mutagenesis leads to the conclusion that alteration of a single one of the 6000 nucleotides of the viral RNA is sufficient to cause a mutation.
Within the sedimentation diagram of infective RNA preparations isolated from Tobacco Mosaic Virus, undegraded molecules form a sharp peak with a molecular weight corresponding to the total RNA content of the virus particle. Degradation kinetics by ribonuclease is of the linear, single-target type, indicating that the RNA is single-stranded. The intact RNA of a virus particle thus forms one big single-stranded molecule. Quantitative evaluation of the effect of degradation by RNA-ase on the infectivity of the RNA shows that the integrity of the entire molecule is required for its biological activity.
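The "linear, single-target" kinetics referred to here is the standard single-hit model: if a single cleavage anywhere in the molecule abolishes infectivity, the surviving fraction declines exponentially with dose, so log-survival is linear in dose. The sketch below uses an illustrative rate constant and doses, not data from the paper.

```python
import math

# Sketch of single-target ("single-hit") inactivation kinetics: one RNase
# cleavage anywhere in the molecule destroys infectivity, so the surviving
# fraction is exp(-k * dose) and log-survival falls linearly with dose.
# The rate constant k and the dose values are illustrative only.
def surviving_fraction(dose, k=1.0):
    return math.exp(-k * dose)

doses = [0.0, 1.0, 2.0, 3.0]
log_survival = [math.log(surviving_fraction(d)) for d in doses]
# Equal dose steps give equal decrements in log-survival (the linearity
# that identifies single-target kinetics):
steps = [round(b - a, 10) for a, b in zip(log_survival, log_survival[1:])]
print(steps)  # [-1.0, -1.0, -1.0]
```

A multi-stranded or multi-target molecule would instead show a shouldered survival curve, since several hits would be needed before activity is lost; the observed linearity is what argues for a single strand.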
Upon separation of the protein from the nucleic acid component of tobacco mosaic virus by phenol, using a fast and gentle procedure, the nucleic acid is infective in assays on tobacco leaves. A series of qualitative and quantitative control experiments demonstrates that the biological activity cannot depend on residual proteins in the preparation, but is a property of the isolated nucleic acid, which is thus the genetic material of the virus.