In this paper we criticize the contradictions inherent in bone marrow donor registries in Spain. On the one hand, there is the need to register as many donors as possible in order to maximize patients' opportunities to access a bone marrow transplant; on the other, there is the ever-present economic issue of having to perform the relevant histocompatibility tests. These registries are coordinated globally with each other to provide a wide pool of potential donors and to make better use of existing resources. It should be noted that many patients who need a transplant but do not find sufficient donor support must follow medical treatments with high economic costs and, for many of them, tragic results. For example, patients suffering from certain types of leukemia who cannot find a compatible donor must undergo chemotherapy-based treatments. There are also cord blood unit banks, likewise coordinated at the international level, which allow these transplants to be performed. Therefore, under the principle of reciprocity, it is not justifiable that some countries make a substantial economic effort to maintain a large registry while others take advantage of it without making an equivalent effort relative to their economic and population size.
In the text under discussion, Tauber insists on the need to cultivate practical skills of a moral character so that physicians can fulfill their task. His central argument is that the practice of medicine should be conceived as an interaction between subjects, abandoning the current model, which has reduced that interaction to a relation between subject and object. Tauber considers this reduction to be due to the scientification of health care, which first views the patient as a neutral, impersonal object of study for a medicine conceived as a natural science, and subsequently tries to rescue the patient as a subject through bioethics and in the political-juridical sphere, granting the patient rights and duties on the grounds that, being essentially endowed with an autonomous identity, the patient is the absolute owner of his or her own decisions. But this recognition is, in Tauber's view, unsatisfactory, because it takes place outside the context of the physician-patient interaction that constitutes medicine. Hence the central theoretical demand of his text: the reintroduction of the moral dimension into medical practice itself, transforming the model of medical knowledge by means of a moral epistemology.
J. T. Ismael's monograph is an ambitious contribution to metaphysics and the philosophy of language and mind. She tackles a philosophical question whose origin goes back to Descartes: What am I? The self is not a mere thing among things. But if so, what is it, and what is its relationship to the world? Ismael is an original and creative thinker who tries to understand our problematic concepts about the self and how they are related to our use of language in particular.
In 1687 Isaac Newton ushered in a new scientific era in which laws of nature could be used to predict the movements of matter with almost perfect precision. Newton's physics also posed a profound challenge to our self-understanding, however, for the very same laws that keep airplanes in the air and rivers flowing downhill tell us that it is in principle possible to predict what each of us will do every second of our entire lives, given the early conditions of the universe. Can it really be that even while you toss and turn late at night in the throes of an important decision, and it seems like the scales of fate hang in the balance, your decision is a foregone conclusion? Can it really be that everything you have done and everything you ever will do is determined by facts that were in place long before you were born? This problem is one of the staples of philosophical discussion. It is discussed by everyone from freshmen in their first philosophy class to theoretical physicists in bars after conferences. And yet there is no topic that remains more unsettling, and less well understood. If you want to get behind the façade, past the bare statement of determinism, and really try to understand what physics is telling us in its own terms, read this book. The problem of free will raises all kinds of questions. What does it mean to make a decision, and what does it mean to say that our actions are determined? What are laws of nature? What are causes? What sorts of things are we, when viewed through the lenses of physics, and how do we fit into the natural order? Ismael provides a deeply informed account of what physics tells us about ourselves. The result is a vision that is abstract, alien, illuminating, and, Ismael argues, affirmative of most of what we all believe about our own freedom. Written in a jargon-free style, How Physics Makes Us Free provides an accessible and innovative take on a central question of human existence.
Quantum mechanics seems to portray nature as nonseparable, in the sense that it allows spatiotemporally separated entities to have states that cannot be fully specified without reference to each other. This is often said to implicate some form of “holism.” We aim to clarify what this means, and why this seems plausible. Our core idea is that the best explanation for nonseparability is a “common ground” explanation, which casts nonseparable entities in a holistic light, as scattered reflections of a more unified underlying reality.
In this paper, I defend an empiricist account of modality that keeps a substantive account of modal commitment, but throws out the metaphysics. I suggest that if we pair a deflationary attitude toward representation with a substantive account of how scientific models are constructed and put to use, the result is an account that deflates the metaphysics of modal commitment without deflating the content of modal claims.
Huw Price has argued that on an interventionist account of cause the distinction between cause and effect is perspectival. The claim has prompted some interesting responses from interventionists, in particular an exchange with Woodward that raises questions about what it means to say that one or another structure is perspectival. I’ll introduce his reasons for claiming that the distinction between cause and effect on an interventionist account is perspectival. Then I’ll introduce a distinction between different ways in which a class of concepts can be said to depend on facts about their users. Three importantly different forms of dependence will emerge from the discussion. Pragmatic dependence on us: truth conditions for x-beliefs can be given by a function f of more fundamental physical structures making no explicit reference to human agents; but there are any number of other functions ontologically on a par with f, and what explains the distinguished role f plays in our practical and epistemic lives are facts about us. Implicit relativization: truth conditions for x-beliefs are relative to agent or context; the context supplies the value of a hidden parameter that determines the truth of x-beliefs. Indexicals: like implicit relativization, except that the surface syntax contains a term whose semantic value is context-dependent. I suggest that Price’s insights are best understood in the first way. This will draw a crucial disanalogy with his central examples of perspectival concepts, but it will refine the thesis in a way that is more faithful to what his arguments show. The refined thesis will also support generalization to other concepts, and clarify the foundations of the quite distinctive research program that Price has been developing for a number of years.
The prefix “self” in self-organization and autopoiesis refers to the existence of an identity or agency involved in the order, organization, or production of a system that corresponds to the system itself, in contrast to design or influence of an external character. Self-organization (SO) studies the way in which the processes of a system spontaneously reach a complex order or organization, whether as an emergent structure or pattern, or as some kind of self-constructed purpose or identity. In this work we deal with the concept of SO in the context of the problem of the nature of life and of living organisms. This concept has been elaborated in different scientific and philosophical traditions, starting from its origin in Kantian philosophy. Cybernetics tries to emulate the organization of living beings and their teleology by building machines; it develops a perspective centered on regulation and on mutual causality among the components of a system. These works, sometimes complemented by systems theory and information theory, were fundamental to the development of twentieth-century science, especially computer science and biology. A second current arises from the thermodynamics of irreversible processes far from equilibrium, beginning with, among others, the work of the Brussels school, in which SO is explored as the spontaneous formation of dissipative structures of order. A third tradition, perhaps the most deeply Kantian, develops in the context of developmental biology and integrates the two mentioned previously, since it combines aspects of both in ontogenetic development. We may say that each of these conceptions of SO is associated with different paradigmatic models.
The notion of autopoiesis (AP), for its part, was proposed in the 1970s by the Chilean biologists Humberto Maturana and Francisco Varela to explain the individual organization of living beings as a dynamic process that generates an identity from the operations of the system (Maturana and Varela 1973). It can be said to inherit and reorganize ideas from the SO tradition, especially the Kantian and cybernetic ones, in order to propose an alternative biological theory. The autopoietic approach conceives the phenomenon of life and living beings very differently from evolutionary theory or molecular biology, which were the predominant lines of research in the biology of its time. The theory emphasizes, as the basic property of a living system, its dynamic self-constitution as a unity endowed with identity arising from interactions among its components. However, those properties of life considered primordial in the Darwinian approach, such as reproduction or evolution, are seen as secondary, since they require the prior existence of autopoietic systems. The aim of this entry is to examine different aspects that shape the self-organizational and autopoietic traditions, especially the internal conceptual tensions that allow us to understand the challenges both face within the philosophy and theory of biology, as well as the way their positions and intuitions contrast with other perspectives in biology.
The role of probability is one of the most contested issues in the interpretation of contemporary physics. In this paper, I’ll be reevaluating some widely held assumptions about where and how probabilities arise. Larry Sklar voices the conventional wisdom about probability in classical physics in a piece in the Stanford Encyclopedia of Philosophy when he writes that “Statistical mechanics was the first foundational physical theory in which probabilistic concepts and probabilistic explanation played a fundamental role.” And the conventional wisdom about quantum probabilities is that they are basic, not reducible to the types of probabilities we see in statistical mechanics. In the first section of this paper, I’ll argue that in fact classical physics was steeped in probability long before statistical mechanics came on the scene; specifically, that an objective measure over phase space is an indispensable component of any informative physical theory. In the next section, I’ll argue that this objective measure is the fundamental form of physical probability and that quantum probabilities can be defined in terms of it. In the last, I’ll raise some questions about the metaphysical status of the fundamental measure.
Before the 17th century, there was not much discussion, and little uniformity in conception, of natural laws. The rise of science in the 17th century, Newton’s mathematization of physics, and the provision of strict, deterministic laws that applied equally to the heavens and to the terrestrial realm had a profound impact in transforming the philosophical imagination. A philosophical conception of physical law built on the example of Newtonian mechanics quickly became entrenched. Between the 17th and 20th centuries, there was a great deal of philosophical interest in probabilities, but probabilities were mostly regarded as having something to do with the management of opinion, not as having a fundamental role in science. Probabilities made their first appearance in an evidently ineliminable way in the laws of a fundamental theory with the advent of quantum mechanics. Quantum probabilities have come to be called ‘chances’ in the philosophical literature, and their interpretation has been one of the central problems in philosophy of science for almost a century now. There continue to be hold-outs who insist that there must be an underlying probability-free replacement for quantum mechanics, and Bohmians have had some success in formulating a deterministic alternative to quantum mechanics, but most physicists accept that the probabilistic character of the quantum mechanical laws is likely to be retained in any successor theory. While physics has adjusted itself comfortably to the existence of ineliminably probabilistic laws, philosophy has not managed to arrive at a stable interpretation of quantum probability. The difficulty is that there are a number of constraints that an interpretation of chance must satisfy, constraints that appear to be partially definitive of the concept, and it proves extraordinarily difficult to meet them simultaneously.
Is there more than one "Curie's principle"? How far are different formulations legitimate? What are the aspects that make it so scientifically fruitful, independently of how it is formulated? The paper is devoted to exploring these questions. We start by examining Curie's original 1894 article and its focus. Then we consider the way the discussion of the principle took shape, from early commentators to its modern form. We say why we think that the modern focus on the inter-state version of the principle loses sight of some of the most significant applications of the principle. Finally, we address criticism of the principle put forward by Norton and purported counterexamples due to Roberts.
In a famous passage drawing implications from determinism, Laplace introduced the image of an intelligence who knew the positions and momenta of all of the particles of which the universe is composed, and asserted that in a deterministic universe such an intelligence would be able to predict everything that happens over its entire history. It is not, however, difficult to establish the physical possibility of a counterpredictive device, i.e., a device designed to act counter to any revealed prediction of its behavior. What would happen if a Laplacean intelligence were put into communication with such a device and forced to reveal its prediction of what the device would do on some occasion? On the one hand, it seems that the Laplacean intelligence should be able to predict the device's behavior. On the other hand, it seems that the device should be able to act counter to the prediction. An examination of the puzzle leads to clarification of what determinism does entail, with some insights about various other things along the way.
Definition

The authors’ definition of the autopoietic system has evolved through the years. One of them states that an autopoietic system is organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produces the components which: (1) through their interactions and transformations regenerate and realize the network of processes (relations) that produced them; and (2) constitute it (the machine) as a concrete unity in the space in which they exist by specifying the topological domain of its realization as such a network (Varela 1979, p. 13). Nearly the same formula was earlier used to define an autopoietic machine (Maturana and Varela 1973/1980, 1984/1987, p. 135).
I propose, in the context of Everett interpretations of quantum mechanics, a way of understanding how there can be genuine uncertainty about the future notwithstanding that the universe is governed by known, deterministic dynamical laws, and notwithstanding that there is no ignorance about initial conditions, nor anything in the universe whose evolution is not itself governed by the known dynamical laws. The proposal allows us to draw some lessons about the relationship between chance and determinism, and to dispel one source of the tendency among Everettians to introduce consciousness as a primitive element into physical description.
The most potentially powerful objection to the possibility of time travel stems from the fact that it can, under the right conditions, give rise to closed causal loops, and closed causal loops can be turned into self-defeating causal chains: folks killing their infant selves, setting out to destroy the world before they were born, and the like. It used to be thought that such chains present paradoxes; the received wisdom nowadays is that they give rise to physical anomalies in the form of inexplicably correlated events. I argue against the received wisdom. I can find nothing in such chains that argues against the possibility of time travel.
This chapter addresses the worry that the existence of causal antecedents to your choices means that you are causally compelled to act as you do. It begins with the folk notion of cause, leads the reader through recent developments in the scientific understanding of causal concepts, and argues that those developments undermine the threat from causal antecedents. The discussion is then used as a model for a kind of naturalistic metaphysics that takes its lead from science, letting everyday concepts be shaped and transformed by scientific developments.
The chance of a physical event is the objective, single-case probability that it will occur. In probabilistic physical theories like quantum mechanics, the chances of physical events play the formal role that the values of physical quantities play in classical (deterministic) physics, and there is a temptation to regard them, on the model of the latter, as describing intrinsic properties of the systems to which they are assigned. I argue that this understanding of chances in quantum mechanics, despite being part of the orthodox interpretation of the theory and the most prevalent view in the physics community, is incompatible with a very wide range of metaphysical views about the nature of chance. The options that remain are unlikely to be attractive to scientists and scientifically minded philosophers.
The outstanding stumbling blocks to any reductive account of phenomenal consciousness remain the subjectivity of phenomenal properties and the cognitive and epistemic gaps that plague the relationship between physical and phenomenal properties. I suggest that a deflationary interpretation of both is available to defenders of self-representational accounts.
Quantum mechanics is, at least at first glance and at least in part, a mathematical machine for predicting the behaviors of microscopic particles — or, at least, of the measuring instruments we use to explore those behaviors — and in that capacity, it is spectacularly successful: in terms of power and precision, head and shoulders above any theory we have ever had. Mathematically, the theory is well understood; we know what its parts are, how they are put together, and why, in the mechanical sense (i.e., in a sense that can be answered by describing the internal grinding of gear against gear), the whole thing performs the way it does, how the information that gets fed in at one end is converted into what comes out the other. The question of what kind of a world it describes, however, is controversial; there is very little agreement, among physicists and among philosophers, about what the world is like according to quantum mechanics. Minimally interpreted, the theory describes a set of facts about the way the microscopic world impinges on the macroscopic one: how it affects our measuring instruments, described in everyday language or the language of classical mechanics. Disagreement centers on the question of what a microscopic world, which affects our apparatuses in the prescribed manner, is, or even could be, like intrinsically; or how those apparatuses could themselves be built out of microscopic parts of the sort the theory describes.
In this paper we examine aspects of Canguilhem’s philosophy of biology concerning the knowledge of life and its consequences for science and vitalism. His concept of life stems from the idea of a living individual, endowed with creative subjectivity and norms, a Kantian view which “disconcerts logic”. In contrast, two different approaches grounded naturalistic perspectives exploring the logic of life and the logic of the living individual in the 1970s. Although Canguilhem is closer to the second, there are divergences; for example, unlike them, he does not dismiss vitalism, often referring to it in his work and even at times describing himself as a vitalist. The reason may lie in their different views of science.
The intuitive difference between a system that choreographs the motion of its parts in the service of goals of its own formulation and a system composed of a collection of parts doing their own thing without coordination has been shaken by now familiar examples of self-organization. There is a broad and growing presumption in parts of philosophy and across the sciences that the appearance of centralized information-processing and control in the service of system-wide goals is mere appearance, i.e., an explanatory heuristic we have evolved to predict behavior, but one that will eventually get swept away in the advancing tide of self-organization. I argue that there is a distinction of central importance here, and that no adequate science of complex systems can dispense with it.
A reading is given of Curie's Principle, that the symmetry of a cause is always preserved in its effects. The truth of the principle is demonstrated and its importance, under the proposed reading, is defended. "As far as I see, all a priori statements in physics have their origin in symmetry." (Weyl, Symmetry, p. 126).
I want to consider some features of the position put forward by Julian Barbour in The End of Time that seem to me of particular philosophical interest. At the level of generality at which I'll be concerned with it, the view is relatively easy to describe. It can be arrived at by thinking of time as decomposing in some natural way into linearly ordered atomic parts, ‘moments’, and combining an observation about the internal structure of moments with an epistemological doctrine about our access to the past. The epistemological doctrine, which I'll call ‘Presentism’, following Butterfield, is the view that our access to the past is mediated by records, or local representations, of it. The observation is that the state of the world at any moment has the structure of what Barbour calls a ‘time capsule’, which is to say that it constitutes a partial record of its past: it is pregnant with interrelated, mutually consistent representations of its own history.
The intuitive notion of cause carries with it the idea of compulsion. When we learn that the dynamical laws are deterministic, we give this a causal reading and imagine our actions compelled to occur by conditions laid down at the beginning of the universe. Hume famously argued that this idea of compulsion is borrowed from experience and illegitimately projected onto regularities in the world. Exploiting the interventionist analysis of causal relations, together with an insight about the degeneracy of one’s epistemic relations to one’s own actions, I defend a Humean position with regard to the idea of causal compulsion. Although I discuss only compulsion, a similar story could be told about the temporal directedness of causation.
In the evolutionary biology of the Modern Synthesis, the study of patterns concerns how to identify and systematise order in lineages, looking for hierarchies or for branching/splitting events in the tree of life, whereas the resulting order is supposed to be due to underlying processes or mechanisms. But patterns and processes play distinct roles in evo-devo. This paper reviews four different views on the role of patterns and processes in descriptions and explanations of development and evolution: A) transformational; B) generative; C) processual; and D) complex. This discussion is then related to two issues in evo-devo: homology and variation.
The Hard Problem of the mind is addressed, and it is argued that physical-phenomenal property identities have the same status as the identification of an ostended bit of physical space with the coordinates assigned to the spot on a map of the terrain. It is argued, that is to say, that such identities are, or follow from, stipulations which interpret the map.
Human beings think of themselves in terms of a privileged non-descriptive designator — a mental “I”. Such thoughts are called “de se” thoughts. The mind/body problem is the problem of deciding what kind of thing I am, and it can be regarded as arising from the fact that we think of ourselves non-descriptively. Why do we think of ourselves in this way? We investigate the functional role of “I” (and also “here” and “now”) in cognition, arguing that the use of such non-descriptive “reflexive” designators is essential for making sophisticated cognition work in a general-purpose cognitive agent. If we were to build a robot capable of similar cognitive tasks as humans, it would have to be equipped with such designators. Once we understand the functional role of reflexive designators in cognition, we will see that to make cognition work properly, an agent must use a de se designator in specific ways in its reasoning. Rather simple arguments based upon how “I” works in reasoning lead to the conclusion that it cannot designate the body or part of the body. If it designates anything, it must be something non-physical. However, for the purpose of making the reasoning work correctly, it makes no difference whether “I” actually designates anything. If we were to build a robot that more or less duplicated human cognition, we would not have to equip it with anything for “I” to designate, and general physicalist inclinations suggest that there would be nothing for “I” to designate in the robot. In particular, it cannot designate the physical contraption. So the robot would believe “I exist”, but it would be wrong. Why should we think we are any different?
There’s a long history of discussion of probability in philosophy, but objective chance separated itself off and came into its own as a topic with the advent of a physical theory - quantum mechanics - in which chances play a central, and apparently ineliminable, role. In 1980 David Lewis wrote a paper pointing out that a very broad class of accounts of the nature of chance apparently lead to a contradiction when combined with a principle that expresses the role of chance in guiding belief. There is still no settled agreement on the proper response to the Lewis problem. At the time he wrote the article, Lewis despaired of a solution, but, although he never achieved one that satisfied him completely, by 1994, due to work primarily by Thau and Hall, he had come to think the problem could be disarmed if we fudged a little on the meaning of ‘chance’. I’ll say more about this below. What I’m going to suggest, however, is that the qualification is unnecessary. The problem depends on an assumption that should be rejected, viz., that using information about chance to guide credence requires one to conditionalize on the theory of chance that one is using. I’m going to propose a general recipe for using information about chance to guide belief that does not require conditionalization on a theory of chance at any stage. Lewis’ problem doesn’t arise in this setting.
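The chance-credence principle the abstract refers to is the one Lewis called the Principal Principle. As a rough sketch (the notation here is mine, not the abstract's): where Cr is a rational initial credence function, A a proposition, and E any evidence admissible at time t, the principle requires

```latex
Cr\big(A \mid \langle ch_t(A) = x \rangle \wedge E\big) = x
```

that is, learning that the chance of A is x should set one's credence in A to x, whatever other admissible evidence one has.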
The riddle posed by the double nature of the ego certainly lies beyond [the limits of science]. On the one hand, I am a real individual man, born by a mother and destined to carry out real and psychical acts. On the other hand, I am "vision" open to reason, a self-penetrating light, immanent sense-giving consciousness, or however you may call it, and as such unique.
The work of Pere Alberch is crucial to the study of the early stages of evo-devo. In particular, it illustrates very persuasively why developmental systems have so much to say about the course of evolutionary change. In addition to important empirical work, he elaborated a stimulating framework of theoretical ideas on biological form, morphological variation, and how developmental processes establish possible evolutionary paths prior to the action of natural selection. In this framework, the study of development and that of evolution are related through the notion of possible morphologies. In his view, the morphology of organisms shows internal coherence and structure, emergent from complex nonlinear interactions among parts and with the environment.
We review and discuss the recent monograph by David Wallace on Everettian Quantum Mechanics. This book is a high point of two decades of work on Everett in both physics and philosophy. It is also a beautiful and welcome exemplar of a modern way of doing metaphysics. We discuss certain aspects more critically, and take the opportunity to sketch an alternative pragmatist approach to probability in Everett, to be fully developed elsewhere.
Dennett argues that the decentralized view of human cognitive organization finding increasing support in parts of cognitive science undermines talk of an inner self. On his view, the causal underpinnings of behavior are distributed across a collection of autonomous subsystems operating without any centralized supervision. Selves are fictions contrived to simplify description and facilitate prediction of behavior with no real correlate inside the mind. Dennett often uses an analogy with termite colonies whose behavior looks organized and purposeful to the external eye, but which is actually the emergent product of uncoordinated activity of separate components marching to the beat of their individual drums. I examine the cognitive organization of a system steering by an internal model of self and environment, and argue that it provides a model that lies between the image of mind as termite colony and a naïve Cartesianism that views the self as inner substance.
For most of the major philosophers of the seventeenth and eighteenth centuries, human cognition was understood as involving the mind’s reflexive grasp of its own contents. But other important figures have described the very idea of a reflexive thought as incoherent. Ryle notably likened the idea of a reflexive thought to an arm that grasps itself. Recent work in philosophy, psychology, and the cognitive sciences has greatly clarified the special epistemic and semantic properties of reflexive thought. This article is an attempt to give an explicit characterization of the structure of reflexive thoughts that explains those properties and avoids the complaints of its critics.
Bayesians take “definite” or “single-case” probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with “indefinite” or “general” probabilities (sometimes called “statistical probabilities”). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case “prob” and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such “finite frequency theories” are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances.
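The coin example above can be sketched in a few lines of Python (a hedged illustration, not from the paper; the function name, constant, and seed handling are ours): the indefinite probability of heads stays fixed at 0.5, while the relative frequency computed from a single flip can only ever be 0 or 1.

```python
import random

# Illustrative sketch: contrast an indefinite probability (the fair
# coin's chance of heads, fixed at 0.5) with the relative frequency
# observed in a one-flip "dataset", which is necessarily 0 or 1.

P_HEADS = 0.5  # indefinite probability: about coin flips in general


def one_flip_relative_frequency(seed):
    """Flip the coin once and return the relative frequency of heads
    in that single-flip sample."""
    rng = random.Random(seed)
    heads = rng.random() < P_HEADS
    return 1.0 if heads else 0.0


# Whatever the outcome, the single-flip frequency diverges from 0.5,
# yet averaging over many hypothetical flips recovers it.
freq = one_flip_relative_frequency(seed=42)
assert freq in (0.0, 1.0) and freq != P_HEADS
```

Averaging `one_flip_relative_frequency` over many seeds approaches 0.5, which is one way of reading the suggestion that indefinite probabilities concern possible instances of a property rather than the actual (possibly tiny) sample.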
In September 2008, 10 years after the untimely death of Pere Alberch (1954–1998), the 20th Altenberg Workshop in Theoretical Biology gathered a group of Pere’s students, collaborators, and colleagues (Figure 1) to celebrate his contributions to the origins of EvoDevo. Hosted by the Konrad Lorenz Institute for Evolution and Cognition Research (KLI) outside Vienna, the group met for two days of discussion. The meeting was organized in tandem with a congress held in May 2008 at the Cavanilles Institute for Biodiversity and Evolutionary Biology (ICBiBE) in Valencia, Spain. The talks at the KLI were equal parts nostalgic remembrance, excitement over new ways of thinking about old problems, and unrepressed vitriol against the resurgence of reductionist thinking in EvoDevo. Here we highlight some of the key aspects of Pere’s life and work that informed and infused the talks.
The aim of this article is to examine how the notion of biological autonomy may be linked to other notions of autonomy more common in philosophical discussions. Starting in the 1970s, the Chilean biologists Humberto Maturana and Francisco Varela developed a theory of life as autopoiesis, which gives rise to a new conception of autonomy: biological autonomy. The development of this concept implies recovering the notion of the organism in a scientific context in which biology and the philosophy of biology are focused on the study of the gene, through molecular biology, and of evolution by natural selection, through the so-called Modern Synthesis. Here we try to show some implications of the concept of life as autonomy for current biology, and how this concept can be related to other, more common notions of autonomy in philosophy.