New concepts may prove necessary to profit from the avalanche of sequence data on the genome, transcriptome, proteome and interactome and to relate this information to cell physiology. Here, we focus on the concept of large activity-based structures, or hyperstructures, in which a variety of types of molecules are brought together to perform a function. We review the evidence for the existence of hyperstructures responsible for the initiation of DNA replication, the sequestration of newly replicated origins of replication, cell division and for metabolism. The processes responsible for hyperstructure formation include changes in enzyme affinities due to metabolite-induction, lipid-protein affinities, elevated local concentrations of proteins and their binding sites on DNA and RNA, and transertion. Experimental techniques exist that can be used to study hyperstructures and we review some of the ones less familiar to biologists. Finally, we speculate on how a variety of in silico approaches involving cellular automata and multi-agent systems could be combined to develop new concepts in the form of an Integrated cell (I-cell) which would undergo selection for growth and survival in a world of artificial microbiology.
Counterfactuals all the way down? Journal article by Jim Woodward (History and Philosophy of Science, University of Pittsburgh), Barry Loewer (Department of Philosophy, Rutgers University), John W. Carroll (Department of Philosophy and Religious Studies, North Carolina State University), and Marc Lange (Department of Philosophy, University of North Carolina at Chapel Hill). Metascience, Volume 20, Number 1. DOI 10.1007/s11016-010-9437-9. Print ISSN 0815-0796; Online ISSN 1467-9981.
In Lange 2004a, I argued that 'scientific essentialism' [Ellis 2001] cannot account for the characteristic relation between laws and counterfactuals without undergoing considerable ad hoc tinkering. In recent papers, Brian Ellis 2005 and Toby Handfield 2005 have defended essentialism against my charge. Here I argue that Ellis's and Handfield's replies fail. Even in ordinary counterfactual reasoning, the 'closest possible world' where the electron's electric charge is 5% greater may have less overlap with the actual world in its fundamental natural kinds than a 'more distant possible world' where the electron's charge is 5% greater. But more importantly, essentialism's flexibility in being able to accommodate virtually any relation between laws and counterfactuals is a symptom of essentialism's explanatory impotence as far as that relation is concerned.
Scientific essentialism aims to account for the natural laws' special capacity to support counterfactuals. I argue that scientific essentialism can do so only by resorting to devices that are just as ad hoc as those that essentialists accuse Humean regularity theories of employing. I conclude by offering an account of the laws' distinctive relation to counterfactuals that portrays laws as contingent but nevertheless distinct from accidents by virtue of possessing a genuine variety of necessity.
After reviewing several failed arguments that laws cannot change, I use the laws' special relation to counterfactuals to show how temporary laws would have to differ from eternal but time-dependent laws. Then I argue that temporary laws are impossible and that neither Lewis's nor Armstrong's analyses of law nicely accounts for the laws' immutability. Received September 2006; revised September 2007.
Why should science be so interested in discovering whether p is a law over and above whether p is true? The answer may involve the laws' relation to counterfactuals: p is a law iff p would still have obtained under any counterfactual supposition that is consistent with the laws. But unless we already understand why science is especially concerned with the laws, we cannot explain why science is especially interested in what would have happened under those counterfactual suppositions consistent with the laws. It is argued that the laws form the only non-trivially "stable" set, where "stability" is invariance under a certain range of counterfactual suppositions not itself defined by reference to the laws. It is then explained why science should be so interested in identifying a non-trivially "stable" set: because of stability's relation to the best set of "inductive strategies".
Many philosophers have believed that the laws of nature differ from the accidental truths in their invariance under counterfactual perturbations. Roughly speaking, the laws would still have held had q been the case, for any q that is consistent with the laws. (Trivially, no accident would still have held under every such counterfactual supposition.) The main problem with this slogan (even if it is true) is that it uses the laws themselves to delimit q's range. I present a means of distinguishing the laws (and their logical consequences) from the accidents, in terms of their range of invariance under counterfactual antecedents, that does not appeal to physical modalities (or any cognate notion) in delimiting the relevant range of counterfactual perturbations. I then argue that this approach explicates the sense in which the laws possess a kind of necessity.
Ceteris-paribus clauses are nothing to worry about; a ceteris-paribus qualifier is not poisonously indeterminate in meaning. Ceteris-paribus laws teach us that a law need not be associated straightforwardly with a regularity in the manner demanded by regularity analyses of law and analyses of laws as relations among universals. This lesson enables us to understand the sense in which the laws of nature would have been no different under various counterfactual suppositions — a feature even of those laws that involve no ceteris-paribus qualification and are actually associated with exceptionless regularities. Ceteris-paribus generalizations of an 'inexact science' qualify as laws of that science in virtue of their distinctive relation to counterfactuals: they form a set that is stable for the purposes of that field. (Though an accident may possess tremendous resilience under counterfactual suppositions, the laws are sharply distinguished from the accidents in that the laws are collectively as resilient as they could logically possibly be.) The stability of an inexact science's laws may involve their remaining reliable even under certain counterfactual suppositions violating fundamental laws of physics. The ceteris-paribus laws of an inexact science may thus possess a kind of necessity lacking in the fundamental laws of physics. A nomological explanation supplied by an inexact science would then be irreducible to an explanation of the same phenomenon at the level of fundamental physics. Island biogeography is used to illustrate how a special science could be autonomous in this manner.
Hempel and Giere contend that the existence of provisos poses grave difficulties for any regularity account of physical law. However, Hempel and Giere rely upon a mistaken conception of the way in which statements acquire their content. By correcting this mistake, I remove the problem Hempel and Giere identify but reveal a different problem that provisos pose for a regularity account — indeed, for any account of physical law according to which the state of affairs described by a law-statement presupposes a Humean regularity. These considerations suggest a normative analysis of law-statements. On this view, law-statements are not distinguished from accidental generalizations by the kind of Humean regularities they describe because a law-statement need not describe any Humean regularity. Rather, a law-statement says that in certain contexts, one ought to regard the assertion of a given type of claim, if made with justification, as a proper way to justify a claim of a certain other kind.
This paper analyzes the logical truths as (very roughly) those truths that would still have been true under a certain range of counterfactual perturbations. What's nice is that the relevant range is characterized without relying (overtly, at least) upon the notion of logical truth. This approach suggests a conception of necessity that explains what the different varieties of necessity (logical, physical, etc.) have in common, in virtue of which they are all varieties of necessity. However, this approach places the counterfactual conditionals in an unfamiliar foundational role.
Suppose that unobtanium-346 is a rare radioactive isotope. Consider: (1) Every Un346 atom, at its creation, decays within 7 microseconds (µs). (50%) Every Un346 atom, at its creation, has a 50% chance of decaying within 7µs. (1) and (50%) can be true together, but (1) and (50%) cannot together be laws of nature. Indeed, (50%)'s mere (non-vacuous) truth logically precludes (1)'s lawhood. A satisfactory analysis of chance and lawhood should nicely account for this relation. I shall argue first that David Lewis's Humean picture accounts for this relation only by inserting this relation 'by hand'. Next, I shall argue that this relation between law and chance also threatens a radically non-Humean picture of laws and chances. Finally, I shall offer an account of natural law that nicely explains the relation between chancy facts and deterministic laws. This explanation is not ad hoc because it derives the relation from the very same features of lawhood that account for the laws' special relation to counterfactuals and explain how the laws (unlike the accidents) possess a variety of necessity. The reason that a chancy fact such as (50%) keeps (1) from being a law, without keeping (1) from being true, is ultimately that a chancy fact constrains the subjunctive facts and (1)'s lawhood, unlike (1)'s truth, depends upon the subjunctive facts.
The beauty of electricity, or of any other force, is not that the power is mysterious and unexpected, touching every sense at unawares in turn, but that it is under law... Michael Faraday, Wheatstone's Electric Telegraph's Relation to Science (being an argument in favour of the full recognition of Science as a branch of Education), 1854.
I identify the special sort of stability (invariance, resilience, etc.) that distinguishes laws from accidental truths. Although an accident can have a certain invariance under counterfactual suppositions, there is no continuum between laws and accidents here; a law's invariance is different in kind, not in degree, from an accident's. (In particular, a law's range of invariance is not "broader"--at least in the most straightforward sense.) The stability distinctive of the laws is used to explicate what it would mean for there to be multiple grades (or degrees) of physical necessity. Whether there are is for science to discover.
Although all mathematical truths are necessary, mathematicians take certain combinations of mathematical truths to be 'coincidental', 'accidental', or 'fortuitous'. The notion of a 'mathematical coincidence' has so far failed to receive sufficient attention from philosophers. I argue that a mathematical coincidence is not merely an unforeseen or surprising mathematical result, and that being a misleading combination of mathematical facts is neither necessary nor sufficient for qualifying as a mathematical coincidence. I argue that although the components of a mathematical coincidence may possess a common explainer, they have no common explanation; that two mathematical facts have a unified explanation makes their truth non-coincidental. I suggest that any motivation we may have for thinking that there are mathematical coincidences should also motivate us to think that there are mathematical explanations, since the notion of a mathematical coincidence can be understood only in terms of the notion of a mathematical explanation. I also argue that the notion of a mathematical coincidence plays an important role in scientific explanation. When two phenomenological laws of nature are similar, despite concerning physically distinct processes, it may be that any correct scientific explanation of their similarity proceeds by revealing their similarity to be no mathematical coincidence.
Among the niftiest arguments for scientific anti-realism is the 'pessimistic induction' (also sometimes called 'the disastrous historical meta-induction'). Although various versions of this argument differ in their details (see, for example, Poincare 1952: 160, Putnam 1978: 25, and Laudan 1981), the argument generally begins by recalling the many scientific theories that posit unobservable entities and that at one time or another were widely accepted. The anti-realist then argues that when these old theories were accepted, the evidence for them was quite persuasive – roughly as compelling as our current evidence is for our best scientific theories positing various unobservable entities. Nevertheless, the anti-realist argues, most of these old theories turned out to be incorrect in the unobservables they posited. Therefore, the anti-realist concludes that with regard to the theories we currently accept, we should believe that probably, most of them are likewise incorrect in the unobservable entities they posit. (This argument appeals to what our best current theories say about unobservables in order to show that the entities posited by some earlier theory are not real. So the argument takes the form of a reductio of the view that the apparent success of some scientific theory justifies our believing in its accuracy regarding unobservables.) Of course, this argument has been criticized on many grounds. Some have argued, for instance, that the scientific theories we currently accept are much better supported than were earlier scientific theories at the time they were accepted. In addition, some have argued that many scientific theories accepted justly in the past were in fact accurate.
It is often presumed that the laws of nature have special significance for scientific reasoning. But the laws' distinctive roles have proven notoriously difficult to identify--leading some philosophers to question whether they hold such roles at all. This study offers original accounts of the roles that natural laws play in connection with counterfactual conditionals, inductive projections, and scientific explanations, and of what the laws must be in order for them to be capable of playing these roles. Particular attention is given to laws of special sciences, levels of scientific explanation, natural kinds, ceteris-paribus clauses, and physically necessary non-laws.
Recently, biologists and computer scientists who advocate the "strong thesis of artificial life" have argued that the distinction between life and nonlife is important and that certain computer software entities could be alive in the same sense as biological entities. These arguments have been challenged by Sober (1991). I address some of the questions about the rational reconstruction of biology that are suggested by these arguments: What is the relation between life and the "signs of life"? What work (if any) might the concept of "life" (over and above the "signs of life") perform in biology? What turns on scientific disputes over the utility of this concept? To defend my answers to these questions, I compare "life" to certain other concepts used in science, and I examine historical episodes in which an entity's vitality was invoked to explain certain phenomena. I try to understand how these explanations could be illuminating even though they are not accompanied by any reductive definition of "life."
Myrvold (2003) has proposed an attractive Bayesian account of why theories that unify phenomena tend to derive greater epistemic support from those phenomena than do theories that fail to unify them. It is argued, however, that "unification" in Myrvold's sense is both too easy and too difficult for theories to achieve. Myrvold's account fails to capture what it is that makes unification sometimes count in a theory's favor.
I offer an argument regarding chances that appears to yield a dilemma: either the chances at time t must be determined by the natural laws and the history through t of instantiations of categorical properties, or the function ch(•) assigning chances need not satisfy the axioms of probability. The dilemma's first horn might seem like a remnant of determinism. On the other hand, this horn might be inspired by our best scientific theories. In addition, it is entailed by the familiar view that facts about chances at t are ontologically reducible to facts about the laws and the categorical history through t. However, that laws are ontologically prior to chances stands in some tension with the view that chances are governed by laws just as categorical-property instantiations are. The dilemma's second horn entails that if chances are in fact probabilities, then this is a matter of natural law rather than logical or conceptual necessity. I conclude with a suggestion for going between the horns of the dilemma. This suggestion involves a generalization of the notion that chances evolve by conditionalization. Sections: Introduction; 'Chances evolve by conditionalization'; How might the lawful magnitude principle be defended?; A historical interlude; What if chances failed to be determined by the laws and categorical facts?
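The schema that chances evolve by conditionalization, which the abstract above generalizes, can be illustrated concretely. The following is a minimal sketch (not from the paper itself): the chance function at a later time equals the earlier chance function conditionalized on the intervening history. The world labels and the coin-toss setup are purely illustrative assumptions.

```python
def conditionalize(chance, evidence):
    """Update a chance function by conditionalization: ch'(w) = ch(w | evidence).

    `chance` maps possible worlds to their chances at the earlier time;
    `evidence` is the set of worlds compatible with the history that has
    unfolded since then. Worlds ruled out by that history get chance 0."""
    total = sum(p for w, p in chance.items() if w in evidence)
    if total == 0:
        raise ValueError("the intervening history had zero chance")
    return {w: (p / total if w in evidence else 0.0)
            for w, p in chance.items()}

# Illustrative setup: two tosses of a fair coin, four equally likely worlds.
initial = {'HH': 0.25, 'HT': 0.25, 'TH': 0.25, 'TT': 0.25}

# After the first toss lands heads, only the H-initial worlds remain open.
later = conditionalize(initial, {'HH', 'HT'})
```

After the update, `later['HH']` and `later['HT']` are each 0.5, while the worlds incompatible with the history drop to chance 0 — the pattern that the paper's generalization is meant to relax.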
Philosophy of Science: An Anthology assembles some of the finest papers in the philosophy of science since 1945, showcasing enduring classics alongside important and innovative recent work. Introductions by the editor highlight connections between selections, and contextualize the articles. Nine sections address topics at the heart of philosophy of science, including realism and the character of scientific theories, scientific explanations and laws of nature, singular causation, and the metaphysical implications of modern physics. Provides an authoritative and accessible overview of the field.
Rosenberg has recently argued that explanations supplied by (what he calls) functional biology are mere promissory notes for macromolecular adaptive explanations. Rosenberg's arguments currently constitute one of the most substantial challenges to the autonomy, irreducibility, and indispensability of the explanations supplied by functional biology. My responses to Rosenberg's arguments will generate a novel account of the autonomy of functional biology. This account will turn on the relations between counterfactuals, scientific explanations, and natural laws. Crucially, in their treatment of the laws' relation to counterfactuals, Rosenberg's arguments beg the question against the autonomy of functional biology. This relation is considerably more subtle than is suggested by familiar slogans such as 'Laws support counterfactuals; accidents don't.'
We consider the model checking problem for Hybrid Logic. Known algorithms so far are global in the sense that they compute, inductively, in every step the set of all worlds of a Kripke structure that satisfy a subformula of the input. Hence, they always exploit the entire structure. Local model checking tries to avoid this by only traversing necessary parts of the input in order to establish or refute the satisfaction relation between a given world and a formula. We present a framework for local model checking of Hybrid Logic based on games. We show that these games are simple reachability games for ordinary Hybrid Logic and weak Büchi games for Hybrid Logic with operators interpreted over the transitive closure of the accessibility relation of the underlying Kripke frame, and show how to solve these games thus solving the local model checking problem. Since the first-order part of Hybrid Logic is inherently hard to localise in model checking, we give examples, in the end, of how global model checkers can be optimised in certain special cases using well-established techniques like fixpoint approximations and divide-and-conquer algorithms.
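The reachability games mentioned in the abstract above are solved by the standard backward attractor construction, which the following sketch illustrates. This is not the authors' implementation — just a generic solver for two-player reachability games on finite graphs; the node names in the example are invented for illustration.

```python
from collections import defaultdict, deque

def attractor(nodes, edges, owner, targets):
    """Compute player 0's attractor of `targets`: the set of nodes from
    which player 0 can force the play to reach `targets`, no matter how
    player 1 moves. `owner[v]` is 0 or 1; `edges` is a list of (u, v) pairs."""
    preds = defaultdict(list)
    out_count = {v: 0 for v in nodes}   # successors not yet known to be winning
    for u, v in edges:
        preds[v].append(u)
        out_count[u] += 1
    attr = set(targets)
    queue = deque(targets)
    while queue:
        v = queue.popleft()
        for u in preds[v]:
            if u in attr:
                continue
            if owner[u] == 0:
                # Player 0 needs only one edge into the attractor.
                attr.add(u)
                queue.append(u)
            else:
                out_count[u] -= 1
                if out_count[u] == 0:
                    # Every move of player 1 lands in the attractor.
                    attr.add(u)
                    queue.append(u)
    return attr

# Illustrative game: from p0 (a player-0 node) the target t is forceable,
# but from p1 (a player-1 node) the opponent can escape to the trap.
nodes = ['t', 'trap', 'p0', 'p1']
edges = [('p0', 't'), ('p0', 'trap'), ('p1', 't'), ('p1', 'trap')]
owner = {'t': 0, 'trap': 0, 'p0': 0, 'p1': 1}
winning = attractor(nodes, edges, owner, {'t'})
```

In a game-based local model checker, such a solver would be run on the (lazily constructed) game graph, so that only the parts of the Kripke structure touched by the game need ever be explored.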
In a recent paper replying to the inductive sceptic, Samir Okasha says that the Humean argument for inductive scepticism depends on mistakenly construing inductive reasoning as based on a principle of the uniformity of nature. I dispute Okasha's argument that we are entitled to the background beliefs on which (he says) inductive reasoning depends. Furthermore, I argue that the sorts of theoretically impoverished contexts to which a uniformity-of-nature principle has traditionally been restricted are exactly the contexts relevant to the inductive sceptic's argument, and (pace Okasha) are not at all remote from actual scientific practice. I discuss several scientific examples involving such contexts.
The concept of transliminality ("a hypothesized tendency for psychological material to cross thresholds into or out of consciousness") was anticipated by William James (1902/1982), but it was only recently given an empirical definition by Thalbourne in terms of a 29-item Transliminality Scale. This article presents the 17-item Revised Transliminality Scale (or RTS) that corrects age and gender biases, is unidimensional by a Rasch criterion, and has a reliability of .82. The scale defines a probabilistic hierarchy of items that address magical ideation, mystical experience, absorption, hyperaesthesia, manic experience, dream interpretation, and fantasy proneness. These findings validate the suggestions by James and Thalbourne that some mental phenomena share a common underlying dimension with selected sensory experiences (such as being overwhelmed by smells, bright lights, sights, and sounds). Low scores on transliminality remain correlated with "tough mindedness" on the Cattell 16PF test, as well as "self-control" and "rule consciousness," whereas high scores are associated with "abstractedness" and an "openness to change" on that test. An independent validation study confirmed the predictions implied by our definition of transliminality. Implications for test construction are discussed.
The anti-metaphysical intentions of naturalism can be respected without abandoning the project of a normative epistemology. The central assumptions of naturalism imply that (1) the distinction between action and behaviour is spurious, and (2) epistemology cannot continue to be a normative project. Difficulties with the second implication have been addressed by Normative Naturalism, but without violating the naturalistic consensus, it can only appreciate means-end rationality. However, this does not suffice to justify its own implicit normative pretensions. According to our diagnosis, naturalism succumbs to the lure of an absolute observer's stance and thereby neglects the need for participation in communal practice. By contrast, methodical culturalism ties down the concepts of epistemology to the success of such practice. Only from this perspective can the normative force of epistemology be appreciated. Also, the mind-body problem loosens its hold and the distinction between action and behaviour is reestablished. In the last section, the mutual relation between philosophy and science is reconsidered.
Maher has offered a lovely example to motivate the intuition that a successful prediction has a kind of confirmatory significance that an accommodation lacks. This paper scrutinizes Maher's example. It argues that once the example is tweaked, the intuitive difference there between prediction and accommodation disappears. This suggests that the apparent superiority of prediction to accommodation is actually a side effect of an important difference between the hypotheses that tend to arise in each case.
Glymour, Scheines, Spirtes, and Kelly argue for 'Spearman's Principle': one should (ceteris paribus) favour the theory whose 'free parameters' need assume no particular values for the theory to save the 'constraints' holding of the phenomena. I argue that the rationale they give for Spearman's Principle fails, but that (contra Cartwright) Spearman's Principle cannot be made to favour either of two theories depending on how they are expressed. I examine how one must motivate the demand for a scientific explanation of a parameter's value and how one justifies believing that a constraint should be explained independently of any parameter's particular value.
Head-driven phrase structure grammar (HPSG) is one of the most prominent theories employed in deep parsing of natural language. Many linguistic theories are arguably best formalized in extensions of modal or dynamic logic (Keller, Feature logics, infinitary descriptions and grammar, 1993; Kracht, Linguistics Philos 18:401–458, 1995; Moss and Tiede, In: Blackburn, van Benthem, and Wolter (eds.) Handbook of modal logic, 2006), and HPSG seems to be no exception. Adequate extensions of dynamic logic have not been studied in detail, however; the most important aspect is the reference to sets of substructures. In this paper, an adequate extension is identified, and some important results are established: Satisfiability is highly undecidable, and model checking is shown to be in EXPTIME and PSPACE-hard. A fragment with polynomial time model checking procedures is identified; it is shown to cover considerable fragments of HPSG.