This paper asks whether statutory social insurance programs, which provide contributory tax-based income support to people with disabilities, are compatible with the disability rights movement's ideas. Central to the movement that led to the Americans with Disabilities Act is the insight that physical or mental conditions do not disable; barriers created by the environment or by social attitudes keep persons with physical or mental differences from participating in society as equals. The conflict between the civil rights approach and insurance seems apparent. A person takes out insurance to deal with tragedy, such as premature death, or damage, such as accidental harm to an automobile or home. Social insurance, for example the United States Social Security old-age and disability programs, consists of government-run insurance to cover risks of advanced age and disability for which the private market has not provided affordable coverage. But the civil rights approach to disability posits that disability is not a risk, not a tragedy, and not a damage or defect. Instead it is a maladaptation of society to human variation. This paper argues that a justification remains for social insurance under the civil rights approach to disability, and further suggests that expansion of social insurance for disability is both compatible with disability rights principles and supported by wise public policy.
We advance a theory of inductive reasoning based on similarity, and test it on arguments involving mammal categories. To measure similarity, we quantified the overlap of neural activation in left Brodmann area 37 (LBA37) in response to pictures of different categories; the choice of LBA37 is motivated by previous literature. The theory was tested against probability judgments for 160 arguments generated from 16 mammal categories and a common predicate. The theory's predictions (based on neural similarity) correlate strongly with these estimates. Other brain regions previously implicated in semantic cognition yield similarities that also allow the model to predict inductive judgments accurately, whereas use of rated similarity in place of neural similarity is less successful.
We advance a theory of inductive inference designed to predict the conditional probability that certain natural categories satisfy a given predicate given that others do (or do not). A key component of the theory is the similarity of the categories to one another. We measure such similarities in terms of the overlap of metabolic activity in voxels of various posterior regions of the brain in response to viewing instances of the category. The theory and similarity measure are tested against averaged probability judgments elicited from a separate group of subjects. Fruit serve as categories in the present experiment; results are compared to earlier work with mammals.
Causal selection is the task of picking out, from a field of known causally relevant factors, some factors as the actual causes of an event or class of events, or as the causes that "make the difference". The Causal Parity Thesis in the philosophy of biology is basically the claim that there are no grounds for such a selection. The main target of this thesis is usually gene centrism, the doctrine that genes play some special role in ontogeny, which is often described in terms of information-bearing or programming. This paper is concerned with the attempt to refute the Causal Parity Thesis by offering principles of causal selection that are spelled out in terms of an explicit philosophical account of causation, namely an interventionist account. I show that two such accounts that have been developed, although they contain important insights about causation in biology, nonetheless fail to refute the Causal Parity Thesis: Ken Waters's account of actual difference-making and Jim Woodward's account of causal specificity. A combination of the two does not do the trick either, nor does David Lewis's original notion of influence. We need additional conceptual resources. I argue that the resources we need consist in a special class of counterfactual conditionals, namely counterfactuals whose antecedents describe biologically normal interventions.
This paper examines how experimental scientists choose theoretical frameworks as well as their experimental systems for doing research. I start out with Kuhn's claim that there are no (single) algorithms that could determine the choices made by individual scientists. Samir Okasha has recently provided an argument for this claim in terms of social choice theory, which I briefly discuss. Then, I show why this problem is not relevant in an experimental science. There are social mechanisms in place that make sure the community chooses the best framework and a matching experimental system. As historical evidence for this claim, I present the case of classical genetics.
This paper examines causal theories of reference with respect to how plausible an account they give of non-physical natural kind terms such as ‘gene’ as well as of the truth of the associated theoretical claims. I first show that reference fixism for ‘gene’ fails. By this, I mean the claim that the reference of ‘gene’ was stable over longer historical periods, for example, since the classical period of transmission genetics. Second, I show that the theory of partial reference does not do justice to some widely held realist intuitions about classical genetics. This result is at loggerheads with the explicit goal usually associated with partial theories of reference, which is to defend a realist semantics for scientific terms. Third, I show that, contrary to received wisdom and perhaps contrary to physics and chemistry, neither reference fixism nor partial reference is necessary in order to hold on to scientific realism about biology. I pinpoint the reasons for this in the nature of biological kinds, which do not even remotely resemble natural kinds (i.e., Lockean real essences) as traditionally conceived.
‘DU bist Radio’ (DBR) is an award-winning monthly radio format that goes on air on three Swiss radio stations. (DBR has received the “Catholic Media Award of the German Bishops Conference, Prädikat WERTvoll” (2011), the Swiss “Media Prize Aargau/Solothurn” (2010), and the German “Alternative Media Award” (2009), and was nominated for the “Prix Europa” (2009).) The purpose of this program, first broadcast in 2009, is the development of a new media format which, without applying any journalistic (or other) filter and influence, conveys authenticity of expression amongst society’s most vulnerable fellow citizens such as patients, clients and the socially deprived. So-called marginal groups are encouraged to speak for themselves, as a possible paradigm case for encouraging the inclusion of patients’ and relatives’ “unfiltered” voices in general and in clinical ethics as well. Before handing over the microphone to the groups in focus, a team of journalists trained in medical ethics teaches them on-site radio skills and craft over a period of four days. Once this task is completed and the actual production of the broadcast begins, the media crew does not exert any influence whatsoever on the content of the one-hour program. Thus, the final product is solely created and accounted for by the media-inexperienced participants, leading to unforeseen and often surprising results. We discuss whether the DBR approach of fostering authenticity of expression can serve as an enhancement to today’s respect- and autonomy-oriented field of medical ethics.
This paper outlines the often striking parallels of various approaches to ontic vagueness, as well as their even more striking differences. Though circling around the same idea, some of these approaches were developed to solve quite diverse theoretical problems and encounter different challenges. In addition to these difficulties, the frequently disregarded epistemological problems of all theories of ontic vagueness turn out to be even more serious under critical scrutiny. The same holds for the difficulties of deciding, for every case of vagueness, whether the vagueness involved is semantic or ontic.
This conception of natural kinds might be dubbed a 'structural kinds' view. It is the conception of kinds offered by ExtOSR within a Humean framework. To invoke structural kinds also means to invoke structural laws. For laws generalize over ...
For loss averse investors, a sequence of risky investments looks less attractive if it is evaluated myopically—an effect called myopic loss aversion (MLA). The consequences of this effect have been confirmed in several experiments and its robustness is largely undisputed. The effect’s causes, however, have not been thoroughly examined with regard to one important aspect. Due to the construction of the lotteries that were used in the experiments, none of the studies is able to distinguish between MLA and an explanation based on (myopic) loss probability aversion (MLPA). This distinction is important, however, in discussions of the practical relevance and the generalizability of the phenomenon. We designed an experiment that is able to disentangle lottery attractiveness and loss probabilities. Our analysis reveals that mere loss probabilities are not as important in this dynamic context as previous findings in other domains suggest. The results favor the MLA over the MLPA explanation.
We advance a theory of inductive reasoning based on similarity, and test it on arguments involving mammal categories. To measure similarity, we quantified the overlap of neural activation in left Brodmann area 19 and the left ventral temporal cortex in response to pictures of different categories; the choice of these regions is motivated by previous literature. The theory was tested against probability judgments for 40 arguments generated from 9 mammal categories and a common predicate. The results are interpreted in the context of Hume’s thesis relating similarity to inductive inference.
This paper intends to give a philosophical analysis of the concepts of consciousness and rationality, and particularly to display the correlation existing between what is usually called the “normal state of consciousness” and what should be called the “normal state of rationality”. Finally, it draws consequences for the correlation existing between “altered/aberrant states of consciousness” and “altered/aberrant rationality”. Although it argues from a broad phenomenological perspective, its grounding technicalities belong to the field of process thought, as fleshed out by the later Alfred North Whitehead (1861–1947).
The Introduction highlights the three main themes of the book: (1) the ontological and epistemological status of everyday human consciousness, (2) the distribution of consciousness in the natural world, and (3) panpsychism. The individual contributions to the book are summarized and related literature is briefly discussed.
This collection opens a dialogue between process philosophy and contemporary consciousness studies. Approaching consciousness from diverse disciplinary perspectives—philosophy, psychology, neuroscience, neuropathology, psychotherapy, biology, animal ethology, and physics—the contributors offer empirical and philosophical support for a model of consciousness inspired by the process philosophy of Alfred North Whitehead (1861–1947). Whitehead’s model is developed in ways he could not have anticipated to show how it can advance current debates beyond well-known sticking points. This has trenchant consequences for epistemology and suggests fresh and promising perspectives on such topics as the mind-body problem, the neurobiology of consciousness, animal consciousness, the evolution of consciousness, panpsychism, the unity of consciousness, epiphenomenalism, free will, and causation.
The authors argue that the consciousness debate inhabits the same problem space today as it did in the 17th century. They attribute the lack of progress to a mindset still polarized by Descartes’ real distinction between mind and body, resulting in a standoff between humanistic and scientistic approaches. They suggest that consciousness can be adequately studied only by a multiplicity of disciplines so that the paramount problem is how to integrate diverse disciplinary perspectives into a coherent metatheory. Process philosophy is well qualified to attempt such a synthesis. The rationale for the volume is summed up in the book's unifying thesis: normal, focal-attentive consciousness is not the sui generis phenomenon it is usually taken to be, but part of a wider spectrum of experience (including marginal, deviant, and non-human experience) that can only be studied by approaches as diverse as phenomenology, psycho- and neuropathology, biology, and zoology.
Although Whitehead’s particular style of philosophizing—looking at traditional philosophical problems in light of recent scientific advances—was part of a trend that began with the scientific revolutions in the early 20th century and continues today, he was marginalized in 20th century philosophy because of his outspoken defense of what he was doing as “metaphysics.” Metaphysics, for Whitehead, is a cross-disciplinary hermeneutic responsible for coherently integrating the perspectives of the special sciences with one another and with everyday experience. The program of such a meta-discipline is challenging to philosophical orthodoxy because it enlarges, rather than narrows, the range of empirical evidence that philosophy must acknowledge. This places Whitehead’s philosophy in a perennial tradition that seeks to resolve fundamental antinomies through synthesis and reconciliation rather than reduction or elimination.
Going back at least to Duhem, there is a tradition of thinking that crucial experiments are impossible in science. I analyse Duhem's arguments and show that they are based on the excessively strong assumption that only deductive reasoning is permissible in experimental science. This opens the possibility that some principle of inductive inference could provide a sufficient reason for preferring one among a group of hypotheses on the basis of an appropriately controlled experiment. To be sure, there are analogues to Duhem's problems that pertain to inductive inference. Using a famous experiment from the history of molecular biology as an example, I show that an experimentalist version of inference to the best explanation (IBE) does a better job in handling these problems than other accounts of scientific inference. Furthermore, I introduce a concept of experimental mechanism and show that it can guide inferences from data within an IBE-based framework for induction. Contents: 1 Introduction; 2 Duhem on the Logic of Crucial Experiments; 3 ‘The Most Beautiful Experiment in Biology’; 4 Why Not Simple Elimination?; 5 Severe Testing; 6 An Experimentalist Version of IBE (6.1 Physiological and experimental mechanisms; 6.2 Explaining the data; 6.3 IBE and the problem of untested auxiliaries; 6.4 IBE-turtles all the way down); 7 Van Fraassen's ‘Bad Lot’ Argument; 8 IBE and Bayesianism; 9 Conclusions.
It has been claimed that the intentional stance is necessary to individuate behavioral traits. This thesis, while clearly false, points to two interesting sets of problems concerning biological explanations of behavior. The first is a general problem in the philosophy of science: the theory-ladenness of observation. The second concerns the principles of trait individuation, a general problem in the philosophy of biology. After discussing some alternatives, I show that one way of individuating the behavioral traits of an organism is by a special use of the concept of biological function, as understood in an enriched causal role (not selected effect) sense. On this view, a behavioral trait is essentially a special kind of regularity, namely a regularity that is produced by some regulatory mechanism. Regulatory mechanisms always require goal states, which can only be provided by functional considerations. As an example from actual (as opposed to folk) science, I examine the case of social behavior in nematodes. I show that the attempt to explain this phenomenon actually transformed it. This supports the view that scientific explanation does not explain an explanandum phenomenon that is given prior to the explanation; rather, the explanandum is changed by the explanation. This means that there could be a plurality of stances that have some heuristic value initially, but which will be abandoned in favor of a functional characterization eventually.
This article examines the role of experimental generalizations and physical laws in neuroscientific explanations, using Hodgkin and Huxley’s electrophysiological model from 1952 as a test case. I show that the fact that the model was partly fitted to experimental data did not affect its explanatory status, nor did the false mechanistic assumptions made by Hodgkin and Huxley. The model satisfies two important criteria of explanatory status: it contains invariant generalizations and it is modular (both in James Woodward’s sense). Further, I argue that there is a sense in which the explanatory heteronomy thesis holds true for this case.
I defend the view that single experiments can provide a sufficient reason for preferring one among a group of hypotheses against the widely held belief that “crucial experiments” are impossible. My argument is based on the examination of a historical case from molecular biology, namely the Meselson-Stahl experiment. “The most beautiful experiment in biology”, as it is known, provided the first experimental evidence for the operation of a semi-conservative mechanism of DNA replication, as predicted by Watson and Crick in 1953. I use a mechanistic account of explanation to show that this case is best construed as an inference to the best explanation (IBE). Furthermore, I show how such an account can deal with Duhem's well-known arguments against crucial experiments as well as Van Fraassen's “bad lot” argument against IBE.
Larry Temkin has shown that Derek Parfit’s well-known Mere Addition Paradox suggests a powerful argument for the intransitivity of the relation “better than.” The crux of the argument is the view that equality is essentially comparative, according to which the same inequality can be evaluated differently depending on what it is being compared to. The comparative view of equality should be rejected, I argue, and hence so too this argument for intransitivity.
A number of neo-Kantians have suggested that an act may be morally worthy even if sympathy and similar emotions are present, so long as they are not what in fact motivates right action—so long as duty, and duty alone, in fact motivates. Thus, the ideal Kantian moral agent need not be a cold and unfeeling person, as some critics have suggested. Two objections to this view need to be answered. First, some maintain that motives cannot be present without in fact motivating. Such non-motivating reasons, it is claimed, are incoherent. Second, if such motives are not in fact motivating, then the moral agent's performance of right action will nonetheless be objectionably cold and unfeeling. The first objection is not compelling, since the alternative view, according to which all motives in fact motivate but differ in strength, suffers from the very same problems attributed to the neo-Kantian view. The second, however, has force, and any account of moral worth must make room for motives such as sympathy actually motivating right action.