In this paper I examine the prevailing assumption that there is a right to procreate and question whether there exists a coherent notion of such a right. I argue that we should question any and all procreative activities, not just alternative procreative means and contexts. I suggest that clinging to the assumption of a right to procreate prevents serious scrutiny of reproductive behavior and that, instead of continuing to embrace this assumption, attempts should be made to provide a proper foundation for it. I argue that the focus of procreative activities and of discourse on reproductive ethics should be on obligations instead of rights, as rights talk tends to obfuscate recognition of obligations toward others, particularly those who bear the most significant burdens of the procreative process. I examine some possible foundations of a right to procreate as well as John Robertson’s thoughtful account of “procreative liberty” but conclude that at the present time there exists no compelling account of a right to procreate. Finally, I conclude that in the absence of a satisfactory account of a right to procreate, we should refrain from grounding practices or policies on the assumption that there is such a right.
Academic-industry collaborations and the conflicts of interest (COI) arising out of them are not new. However, as industry funding for research in the life and health sciences has increased and scandals involving financial COI have been brought to the public’s attention, demands for disclosure have grown. In a March 2008 American Council on Science and Health report, Ronald Bailey argues that the focus on COI—especially financial COI—is obsessive and likely to be more detrimental to scientific progress and public health than COI themselves. In response, we argue that downplaying the potential negative impact of COI arising out of academic-industry relationships is no less harmful than overreacting to it.
In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.
While human genetic research promises to deliver a range of health benefits to the population, genetic research that takes place in Indigenous communities has proven controversial. Indigenous peoples have raised concerns, including a lack of benefit to their communities, a diversion of attention and resources from non-genetic causes of health disparities and racism in health care, a reinforcement of “victim-blaming” approaches to health inequalities, and possible misuse of blood and tissue samples. Drawing on the international literature, this article reviews the ethical issues relevant to genetic research in Indigenous populations and considers how some of these have been negotiated in a genomic research project currently under way in a remote Aboriginal community. We consider how the different levels of Indigenous research governance operating in Australia impacted on the research project and discuss whether specific guidelines for the conduct of genetic research in Aboriginal and Torres Strait Islander communities are warranted.
As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. By applying the capabilities approach as a guide to both the design and use of robot caregivers, we hope that this will maximize opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.
Erratum to: Journal of Bioethical Inquiry, DOI 10.1007/s11673-012-9391-x. Lobna Rouhani, University of Melbourne, is a co-author of the article “Genetic Research and Aboriginal and Torres Strait Islander Australians” (2012, 419–432) that was published in the Journal of Bioethical Inquiry’s 9(4) symposium “Cases and Culture.” Her name was omitted from the publication and she should be credited as the third author of this article.
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton’s footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Second, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
Chow pays lip service (but not much more!) to Type I errors and thus opts for a hard (all-or-none) .05 level of significance (Superego of Neyman/Pearson theory; Gigerenzer 1993). Most working scientists disregard Type I errors and thus utilize a soft .05 level (Ego of Fisher; Gigerenzer 1993), which lets them report gradations of significance (e.g., p < .05, p < .01, p < .001).
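To make the contrast above concrete, the following is a minimal sketch, entirely my own illustration rather than anything from Chow or Gigerenzer, of a hard all-or-none decision at the .05 level versus the soft practice of reporting gradations of significance; the data and function names are assumptions introduced only for the example.

```python
# Minimal sketch (illustrative only): hard all-or-none .05 decision vs.
# "soft" reporting of gradations of significance.
from scipy import stats

def hard_decision(p, alpha=0.05):
    # Neyman-Pearson-style fixed-level decision: reject or do not reject, nothing more.
    return "reject H0" if p <= alpha else "do not reject H0"

def soft_gradation(p):
    # Fisherian practice: report how small the p-value is.
    if p < 0.001:
        return "p < .001"
    if p < 0.01:
        return "p < .01"
    if p < 0.05:
        return "p < .05"
    return "n.s."

# Illustrative toy data for a two-sample t test (hypothetical values).
a = [5.1, 4.8, 5.6, 5.3, 4.9]
b = [4.2, 4.5, 4.1, 4.6, 4.4]
t_stat, p_value = stats.ttest_ind(a, b)
print(hard_decision(p_value), "|", soft_gradation(p_value))
```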
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
1. Introduction and overview
 1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
 1.2 Severity rationale: induction as severe testing
 1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
2. Error statistical tests from the severity perspective
 2.1 N–P test T(α): type I, II error probabilities and power
 2.2 Specifying test T(α) using p-values
3. Neyman's post-data use of power
 3.1 Neyman: does failure to reject H warrant confirming H?
4. Severe testing as a basic concept for an adequate post-data inference
 4.1 The severity interpretation of acceptance (SIA) for test T(α)
 4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
 4.3 Severity and power
5. Fallacy of rejection: statistical vs. substantive significance
 5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
 5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
 5.3 Principle for the severity interpretation of a rejection (SIR)
 5.4 Comparing significant results with different sample sizes in T(α): large n problem
 5.5 General testing rules for T(α), using the severe testing concept
6. The severe testing concept and confidence intervals
 6.1 Dualities between one- and two-sided intervals and tests
 6.2 Avoiding shortcomings of confidence intervals
7. Beyond the N–P paradigm: pure significance, and misspecification tests
8. Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
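To illustrate the pre-data versus post-data distinction at the heart of this abstract, here is a minimal numerical sketch, assuming a one-sided Normal test of H0: mu <= mu0 against H1: mu > mu0 with known sigma; the specific numbers and function names are my own and are not taken from the paper.

```python
# Minimal sketch (illustrative assumptions): pre-data error probabilities and a
# post-data severity assessment for a one-sided Normal test of H0: mu <= mu0.
from math import sqrt
from scipy.stats import norm

mu0, sigma, n, alpha = 0.0, 1.0, 25, 0.025
se = sigma / sqrt(n)
cutoff = mu0 + norm.ppf(1 - alpha) * se   # reject H0 when the sample mean exceeds this

def power(mu1):
    # Pre-data probability that the test rejects H0 when in fact mu = mu1.
    return 1 - norm.cdf(cutoff, loc=mu1, scale=se)

def severity_of_rejection(xbar_obs, mu1):
    # Post-data severity for inferring 'mu > mu1' after a rejection:
    # the probability of a less extreme sample mean had mu been only mu1.
    return norm.cdf(xbar_obs, loc=mu1, scale=se)

print(f"type I error = {alpha}, power at mu = 0.5: {power(0.5):.3f}")
print(f"severity of 'mu > 0.2' given xbar = 0.45: {severity_of_rejection(0.45, 0.2):.3f}")
```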
I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a deeper understanding of NPT tests than their typical formulation as accept-reject routines, against which criticisms of NPT are really directed. The Pearsonian view that emerges suggests how NPT tests may avoid these criticisms while still retaining what is central to these methods: the control of error probabilities.
In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which Seidenfeld considers, the 'best' NP confidence interval is not [CIλ] as Seidenfeld suggests, but rather a one-sided interval [CI0]; and since [CI0] never yields trivial intervals, NP theory escapes Seidenfeld's criticism entirely; (3) Seidenfeld's criterion of non-triviality is inadequate, for it leads him to judge an alternative confidence interval, [CIalt], superior to [CIλ] although [CIalt] results in counterintuitive inferences. I conclude that Seidenfeld has not shown that the NP theory of confidence intervals is inadequate for a theory of inductive inference.
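As background for the contrast between 'best' two-sided intervals and one-sided intervals discussed in this abstract, here is a minimal sketch under assumptions of my own choosing (a Normal mean with known variance; the numbers are illustrative and are not Seidenfeld's or the paper's example) of how each kind of interval is constructed.

```python
# Minimal sketch (illustrative assumptions): two-sided vs. one-sided NP-style
# confidence intervals for a Normal mean with known sigma.
from math import sqrt
from scipy.stats import norm

xbar, sigma, n, conf = 0.42, 1.0, 25, 0.95
se = sigma / sqrt(n)

# Two-sided 95% interval: xbar +/- z_{0.975} * se
z_two = norm.ppf(1 - (1 - conf) / 2)
two_sided = (xbar - z_two * se, xbar + z_two * se)

# One-sided 95% lower confidence bound: (xbar - z_{0.95} * se, infinity),
# which always excludes some parameter values and so is never tautologous here.
z_one = norm.ppf(conf)
one_sided_lower = (xbar - z_one * se, float("inf"))

print("two-sided interval:", two_sided)
print("one-sided lower-bound interval:", one_sided_lower)
```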
In the past, hypothesis testing in medicine has employed the paradigm of the repeatable experiment. In statistical hypothesis testing, an unbiased sample is drawn from a larger source population, and a calculated statistic is compared to a preassigned critical region, on the assumption that the comparison could be repeated an indefinite number of times. However, repeated experiments often cannot be performed on human beings, due to ethical or economic constraints. We describe a new paradigm for hypothesis testing which uses only rearrangements of data present within the observed data set. The token swap test, based on this new paradigm, is applied to three data sets from cardiovascular pathology, and computational experiments suggest that the token swap test satisfies the Neyman–Pearson condition.
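The token swap test itself is specific to the paper, but the general rearrangement idea it rests on can be sketched with a generic two-group permutation test; the code below is my own illustration of that paradigm, with hypothetical toy values, and is not an implementation of the token swap test.

```python
# Minimal sketch (illustrative only): a generic permutation test that, like the
# rearrangement paradigm described above, uses only relabelings of the observed data.
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    # Estimate a p-value for the difference in group means by repeatedly
    # reshuffling group labels within the pooled, observed data.
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_perm

# Hypothetical toy measurements from two small patient groups.
print(permutation_test([3.1, 2.9, 3.4, 3.6], [2.1, 2.4, 2.0, 2.3]))
```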
Although theoretical results for several algorithms in many application domains have been presented over the last decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards as other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. Statistical tools that can cope with problems such as small sample sizes, non-normal distributions, and noisy results are now being developed for the analysis of algorithms. Although there are adequate tools for assessing the statistical significance of experimental data, statistical significance is not scientifically meaningful per se. It is necessary to bridge the gap between the statistical significance of an experimental result and its scientific meaning. We propose some ideas on how to accomplish this task based on Mayo's learning model (NPT*).
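One simple way to operationalize the gap the abstract describes, sketched here purely as my own illustration (the data, margin, and test choice are assumptions, not the authors' procedure), is to report the statistical test result alongside a pre-specified, practically relevant margin of improvement.

```python
# Minimal sketch (illustrative assumptions): statistical significance alone vs.
# checking the observed improvement against a scientifically relevant margin.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical best-found objective values over 200 independent runs per algorithm.
algo_a = rng.normal(loc=10.00, scale=0.05, size=200)
algo_b = rng.normal(loc=10.02, scale=0.05, size=200)

relevant_margin = 0.1  # smallest improvement that would matter in practice (assumed)
t_stat, p_value = stats.ttest_ind(algo_b, algo_a)
observed_diff = algo_b.mean() - algo_a.mean()

print(f"p = {p_value:.4f}, observed difference = {observed_diff:.3f}")
if p_value < 0.05 and observed_diff < relevant_margin:
    print("statistically significant, yet below the scientifically relevant margin")
```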
The strong weak truth table (sw) reducibility was suggested by Downey, Hirschfeldt, and LaForte as a measure of relative randomness, alternative to the Solovay reducibility. It also occurs naturally in proofs in classical computability theory as well as in the recent work of Soare, Nabutovsky, and Weinberger on applications of computability to differential geometry. We study the sw-degrees of c.e. reals and construct a c.e. real which has no random c.e. real (i.e., Ω number) sw-above it.
G. E. Moore's ‘A Defence of Common Sense’ has generated the kind of interest and contrariety which often accompany what is new, provocative, and even important in philosophy. Moore himself reportedly agreed with Wittgenstein's estimate that this was his best article, while C. D. Broad has lamented its very great but largely unfortunate influence. Although the essay inspired Wittgenstein to explore the basis of Moore's claim to know many propositions of common sense to be true, A. J. Ayer judges its enduring value to lie in provoking a more sophisticated conception of the very type of metaphysics which disputes any such unqualified claim of certainty.
This article shows the meaning of the oneness of ethics and aesthetics in the Tractatus Logico-Philosophicus. First, it presents the main aspects of Tractarian ethics: that it does not rank facts hierarchically, that it is eudaimonistic, and that it does not propose any end external to the actions of the ethical subject. Second, it shows that the work of art is the expression of life from an ethical point of view, that is, the expression of the meaning of life from the point of view of eternity. In conclusion, it shows that this conception proposes an absolute delimitation that separates what is art from what is not art.
This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that the second-order property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.