“Sociosexuality from Argentina to Zimbabwe: A 48-nation study of sex, culture, and strategies of human mating” delivers on its title. By combining empiricism and careful hypothesis testing, it not only contributes to our current knowledge but also points the way to further advances.
I disagree with several of Chow's traditional descriptions and justifications of null hypothesis testing: (1) accepting the null hypothesis whenever p > .05; (2) random sampling from a population; (3) the frequentist interpretation of probability; (4) having the null hypothesis generate both a probability distribution and a complement of the desired conclusion; (5) assuming that researchers must fix their sample size before performing their study.
This commentary advocates an alternative to null-hypothesis testing that was originally presented by Rozeboom over three decades ago yet is not considered by Chow (1996). The central distinguishing feature of this interval-based approach is that it allows the scientist to conclude that the data are fit much better by those hypotheses whose values fall inside the interval than by those outside it.
Three experiments were used to investigate individuals' hypothesis-testing process as a function of moral perceived utilities, which in turn depend on perceived responsibility and fear of guilt. Moral perceived utilities are related to individuals' moral standards and specifically to people's attempt to face up to their own responsibilities, and to avoid feeling guilty of irresponsibility. The results showed that responsibility and fear of guilt in testing hypotheses involved a process defined as prudential mode, which entails focusing on and confirming the worst hypothesis, and then reiterating the testing process. In particular, the results showed that responsible and guilt-fearing individuals: (1) tended to search prudentially for examples confirming the worst hypothesis and to search for counter-examples falsifying the positive hypothesis; (2) focused on the worst alternative, and tended to confirm it; (3) prudentially kept up the testing process, even if faced with initial positive evidence. Our discussion of the results emphasises how people are largely pragmatic in their hypothesis testing, using efficient cognitive strategies that focus on error minimisation rather than on truth detection. In a context of responsibility and guilt, the errors are linked to people's failure to face up to their own responsibilities, and are thus moral errors.
In the past, hypothesis testing in medicine has employed the paradigm of the repeatable experiment. In statistical hypothesis testing, an unbiased sample is drawn from a larger source population, and a calculated statistic is compared to a preassigned critical region, on the assumption that the comparison could be repeated an indefinite number of times. However, repeated experiments often cannot be performed on human beings, due to ethical or economic constraints. We describe a new paradigm for hypothesis testing which uses only rearrangements of data present within the observed data set. The token swap test, based on this new paradigm, is applied to three data sets from cardiovascular pathology, and computational experiments suggest that the token swap test satisfies the Neyman-Pearson condition.
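The rearrangement paradigm described above is in the same family as the standard two-sample permutation test, which can be sketched as follows. This is a generic illustration with invented measurements, not the authors' token swap algorithm: the null distribution is built from reshufflings of the observed data rather than from an assumed repeatable sampling process.

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    The p-value is the fraction of random rearrangements of the pooled
    data whose mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Hypothetical measurements from two patient groups (invented numbers):
p = permutation_test([5.1, 4.8, 5.6, 5.3], [4.2, 4.0, 4.5, 4.1])
# p is small here: few rearrangements reproduce so large a gap.
```

Because only rearrangements of the observed values are used, no reference to a hypothetical infinite sequence of repeated experiments is needed.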
Although many researchers may perceive empirical hypothesis testing using inferential statistics to be a value-free process, I argue that any conclusion based on inferential statistics contains an important and intractable value judgment. Consequently, I conclude that researchers should use the same rationale for examining the ethical ramifications of committing errors in statistical inference that they use to examine the ethical parameters of a proposed research design.
In this commentary, I agree with Chow's treatment of null hypothesis significance testing as a noninferential procedure. However, I dispute his reconstruction of the logic of theory corroboration. I also challenge recent criticisms of NHSTP based on power analysis and meta-analysis.
Humans appear to follow normative rules of inductive reasoning in "premise diversity tasks"; that is, they know that dissimilar rather than similar evidence is better for generalising hypotheses. In three experiments, we use a "hypothesis limitation task" to compare a related inductive reasoning skill: knowing how to limit hypotheses by using a negative test strategy. Participants are told that one category member has some property (e.g. Dogs have a merocrine gland) and are asked what evidence they would test to ensure that either all (generalisation) or only (limitation) category members have that property (e.g. All/Only mammals have merocrine glands; tests: wolf, bull, crocodile). Despite participants' reluctance to use negative tests in the Wason 2-4-6 task and other reasoning tasks, participants do use normatively correct negative tests in the hypothesis limitation task as often as they use diverse positive tests in the premise diversity task. Moreover, when given a hypothesis limitation task before a rule evaluation task (similar to the 2-4-6 task), the use of negative tests increases. Thus, when testing hypotheses, people can and do use the right kind of test strategy for the task.
I agree with Gibbs that the message of the base rate literature reads differently depending on which null hypothesis is used to frame the issue. But I argue that the normative null hypothesis, H0: “People use base rates in a Bayesian manner,” is no longer appropriate. I also challenge Adler's distinction between unused and ignored base rates, and criticize Goodie's reluctance to shift research attention to the field. Macchi's arguments about textual ambiguities in traditional base rate problems suggest that empirical testing is needed to tease apart the effects of problem clarification and problem framing. Macdonald's, Fletcher's and Snow's skepticism about the value of Bayesian methods in real world judgment tasks is treated as a challenge for the next generation of empirical base rate studies.
We argue that Chow's defense of hypothesis-testing procedures attempts to restore an aura of objectivity to the core procedures, allowing these to take on the role of judgment that should be reserved for the researcher. We provide a brief overview of what we call the historical case against hypothesis testing and argue that the latter has led to a constrained and simplified conception of what passes for theory in psychology.
According to one theory, the brain is a sophisticated hypothesis tester: perception is Bayesian unconscious inference where the brain actively uses predictions to test, and then refine, models about what the causes of its sensory input might be. The brain’s task is simply continually to minimise prediction error. This increasingly popular theory holds great explanatory promise for a number of central areas of research at the intersection of philosophy and cognitive neuroscience. I show how the theory can help us understand striking phenomena at three cognitive levels: vision, sensory integration, and belief. First, I illustrate central aspects of the theory by showing how it provides a nice explanation of why binocular rivalry occurs. Then I suggest how the theory may explain the role of the unified sense of self in rubber hand and full body illusions driven by visuotactile conflict. Finally, I show how it provides an approach to delusion formation that is consistent with one-deficit accounts of monothematic delusions.
The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: (i) it targets the predictive performance of the null hypothesis in future experiments; (ii) it provides a proper decision-theoretic model for testing a point null hypothesis; and (iii) it convincingly accounts for Lindley's Paradox.
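The divergence at issue can be reproduced numerically. The sketch below is my own illustration of the paradox, not the paper's Bayesian Reference Criterion: it tests a point null H0: mu = 0 with known unit variance against an alternative under which mu ~ N(0, tau^2). Holding the z-statistic fixed at 2.5 while the sample size grows large, the frequentist p-value rejects H0 at the 5% level even as the Bayes factor comes to favour H0.

```python
import math

def two_sided_p(z):
    """Frequentist two-sided p-value for z = sqrt(n) * xbar under H0: mu = 0."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def bayes_factor_01(z, n, tau=1.0):
    """Bayes factor for the point null H0: mu = 0 against H1: mu ~ N(0, tau^2),
    given sample mean xbar = z / sqrt(n) with known unit variance."""
    xbar = z / math.sqrt(n)
    var0 = 1.0 / n              # sampling variance of xbar under H0
    var1 = tau ** 2 + 1.0 / n   # marginal variance of xbar under H1

    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

    return normal_pdf(xbar, var0) / normal_pdf(xbar, var1)

z, n = 2.5, 10 ** 6
p = two_sided_p(z)           # below 0.05: frequentist rejection of H0
bf = bayes_factor_01(z, n)   # well above 1: Bayesian support for H0
```

The same data thus license opposite verdicts: the p-value depends only on z, while the Bayes factor grows roughly like sqrt(n) for fixed z, which is the paradox in miniature.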
Several of Krueger & Funder's (K&F's) suggestions may promote more balanced social cognition research, but reconsidered null hypothesis statistical testing (NHST) is not one of them. Although NHST has primarily supported negative conclusions, this is simply because most conclusions have been negative. NHST can support positive, negative, and even balanced conclusions. Better NHST practices would benefit psychology, but would not alter the balance between positive and negative approaches.
The function of REM, or any other stage of sleep, can currently only be conjectured. A rational evaluation of the role of REM in memory processing requires systematic testing of hypotheses that are optimally derived from a complete synthesis of existing knowledge. Our view is that the large number of studies supporting a relationship between REM-related brain activity and memory is not easily explained away. [Vertes & Eastman].
The first of 3 objectives in this study was to address the major problem with Null Hypothesis Significance Testing (NHST) and 2 common misconceptions related to NHST that cause confusion for students and researchers. The misconceptions are (a) a smaller p indicates a stronger relationship and (b) statistical significance indicates practical importance. The second objective was to determine how this problem and the misconceptions were treated in 12 recent textbooks used in education research methods and statistics classes. The third objective was to examine how the textbooks’ presentations relate to current best practices and how much help they provide for students. The results show that almost all of the textbooks fail to acknowledge that there is controversy surrounding NHST. Most of the textbooks dealt, at least minimally, with the alleged misconceptions of interest, but they provided relatively little help for students.
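Misconception (a) is easy to demonstrate with a one-sample z-test, since the p-value conflates effect size with sample size. The sketch below is my own illustration (the effect sizes and sample sizes are invented): a tiny effect measured on a huge sample yields a smaller p than a large effect measured on a small sample.

```python
import math

def p_value(effect_size, n):
    """Two-sided p-value for a one-sample z-test with standardised
    effect size d and sample size n (known unit variance)."""
    z = effect_size * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same modest effect (d = 0.1) goes from "non-significant" to
# "highly significant" purely by increasing n:
p_small_n = p_value(0.1, 50)
p_large_n = p_value(0.1, 2000)

# A tiny effect at huge n produces a smaller p than a large effect
# at small n, so a smaller p does not indicate a stronger relationship:
p_tiny_effect = p_value(0.05, 10_000)
p_big_effect = p_value(0.8, 10)
```

This also bears on misconception (b): with a large enough sample, an effect of no practical importance whatsoever can still be statistically significant.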
Gangestad & Simpson's account of the role of good-gene sexual selection in conditional human mating strategies is reasonably convincing, but could be more so with a little more attention to (1) dropping unnecessary sub-hypotheses and, especially, (2) including alternative evolutionary explanations.
Conscious perception and attention are difficult to study, partly because their relation to each other is not fully understood. Rather than conceiving and studying them in isolation from each other, it may be useful to locate them in an independently motivated, general framework, from which a principled account of how they relate can then transpire. Accordingly, these mental phenomena are here reviewed through the prism of the increasingly influential predictive coding framework. On this framework, conscious perception can be seen as the upshot of prediction error minimisation and attention as the optimisation of precision expectations during such perceptual inference. This approach maps well onto a range of standard characteristics of conscious perception and attention, and can be used to explain a range of empirical findings on their relation to each other.
This paper investigates whether there is a discrepancy between the stated and actual aims in biomechanical research, particularly with respect to hypothesis testing. We present an analysis of one hundred papers recently published in The Journal of Experimental Biology and Journal of Biomechanics, and examine the prevalence of papers which (a) have hypothesis testing as a stated aim, (b) contain hypothesis testing claims that appear to be purely presentational (i.e. which seem not to have influenced the actual study), and (c) have exploration as a stated aim. We found that whereas no papers had exploration as a stated aim, 58% of papers had hypothesis testing as a stated aim. We had strong suspicions, at the bare minimum, that presentational hypotheses were present in 31% of the papers in this latter group.
We test conformity-related values applying the value-pragmatics hypothesis by evaluating how personal values related to compliance moderate the relationships between situational factors and unethical decisions. We examine the direct and indirect effects of the values of traditionalism, conformity, and stimulation, as they combine with the situational factors of rewards and punishments in the person–situation interaction model. We find strong support for the value-pragmatics view of ethical decision making and further build support for the person–situation interaction model.
Recent work by Joshua Knobe has established that people are far more likely to describe bad but foreseen side effects as intentionally performed than good but foreseen side effects (this is sometimes called the 'Knobe effect' or the 'side-effect effect'). Edouard Machery has proposed a novel explanation for this asymmetry: it results from construing the bad side effect as a cost that must be incurred to receive a benefit. In this paper, I argue that Machery's 'trade-off hypothesis' is wrong. I do this by reproducing the asymmetry between judgments about good and bad side effects in cases that cannot plausibly be construed as trade-offs.
A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed (relative to what else might have been observed). It is shown that the relation between this measure of surprise (the s-value) and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a logit or log-odds statistic. The s-value is always larger than the corresponding p-value, and is not uniformly distributed. Difficulties with the whole approach are discussed.
Frequentism and Bayesianism represent very different approaches to hypothesis testing, and this presents a skeptical challenge for Bayesians. Given that most empirical research uses frequentist methods, why (if at all) should we rely on them? While it is well known that there are conditions under which Bayesian and frequentist methods agree, without some reason to think these conditions are typically met, the Bayesian hasn’t shown why we are usually safe in relying on results reported by significance testers. In this article, I provide arguments that such conditions will usually be met; the Bayesian can maintain her theoretical disagreement with the frequentist while holding that her error is mostly harmless in practice.
Unfortunately, reading Chow's work is likely to leave the reader more confused than enlightened. My preferred solutions to the “controversy” about null-hypothesis testing are: (1) recognize that we really want to test the hypothesis that an effect is “small,” not null, and (2) use Bayesian methods, which are much more in keeping with the way humans naturally think than are classical statistical methods.
It is argued that current attempts to model human learning behavior commonly fail on one of two counts: either the model assumptions are artificially restricted so as to permit the application of mathematical techniques in deriving their consequences, or else the required complex assumptions are embedded in computer programs whose technical details obscure the theoretical content of the model. The first failing is characteristic of so-called mathematical models of learning, while the second is characteristic of computer simulation models. An approach to model building which avoids both these failings is presented under the title of a black-box theory of learning. This method permits the statement of assumptions of any desired complexity in a language which clearly exhibits their theoretical content. Section II of the paper is devoted to the problem of testing and comparing alternative learning theories. The policy advocated is to abandon attempts at hypothesis testing. It is argued that, in general, we not only lack sufficient data and sufficiently powerful techniques to test hypotheses, but that the truth of a model is not really the issue of basic interest. A given model may be true in the sense that on the basis of available evidence we cannot statistically reject it, but not interesting in the sense that it provides little information about the processes underlying behavior. Rather, we should accept or reject models on the basis of how much information they provide about the way in which subjects respond to environmental structure. This attitude toward model testing is made precise by introducing a formal measure of the information content of a model. Finally, it is argued that the statistical concept of degrees-of-freedom is misleading when used in the context of model testing and should be replaced by a measure of the information absorbed from the data in estimating parameters.
Analyzing three key cases that arose in 1993, I argue that the practice of sending in "testers" -- persons posing as job applicants -- to ferret out workplace discrimination is easier to defend from an ethical standpoint when an agency's investigation stems from an actual complaint. By contrast, defendants may rightfully challenge the legitimacy of the procedures used to "test" subjects when an investigation is based solely on the general goals of an antidiscrimination agency.
Some monothematic types of delusions may arise because subjects have unusual experiences. The role of this experiential component in the pathogenesis of delusion is still not understood. Focussing on delusions of alien control, we outline a model for reality testing competence on unusual experiences. We propose that nascent delusions arise when there are local failures of reality testing performance, and that monothematic delusions arise as normal responses to these. In the course of this we address questions concerning the tenacity with which delusions are maintained, their often bizarre content, the patients' inability to dismiss them, and their often circumscribed character.
Chow's endorsement of a limited role for null hypothesis significance testing is a needed corrective of research malpractice, but his decision to place this procedure in a hypothetico-deductive framework of Popperian cast is unwise. Various failures of this version of the hypothetico-deductive method have negative implications for Chow's treatment of significance testing, meta-analysis, and theory evaluation.
This book by one of the world's foremost philosophers in the fields of epistemology and logic offers an account of suppositional reasoning relevant to practical deliberation, explanation, prediction and hypothesis testing. Suppositions made 'for the sake of argument' sometimes conflict with our beliefs, and when they do, some beliefs are rejected and others retained. Thanks to such belief contravention, adding content to a supposition can undermine conclusions reached without it. Subversion can also arise because suppositional reasoning is ampliative. These two types of nonmonotonic logic are the focus of this book. A detailed comparison of nonmonotonicity appropriate to both belief contravening and ampliative suppositional reasoning reveals important differences that have been overlooked.
During the last few years a large number of companies have emerged offering DNA testing via the Internet “direct-to-consumer”. In this paper, I analyse the rhetorical appeal to personal identity put forward on the websites of some of these consumer genomics companies. The investigation is limited to non-health-related DNA testing and focuses on individualistic and communitarian—in a descriptive sense—visions of identity. The individualistic visions stress that each individual is unique and suggest that this uniqueness can be supported by, for example, DNA fingerprinting. The communitarian visions emphasise that individuals are members of communities, in this case genetic communities. It is suggested that these visions can be supported by, for example, various types of tests for genetic ancestry tracing. The main part of the paper is devoted to an analysis of these communitarian visions of identity and the DNA tests they refer to.
Broad genome-wide testing is increasingly finding its way to the public through the online direct-to-consumer marketing of so-called personal genome tests. Personal genome tests estimate genetic susceptibilities to multiple diseases and other phenotypic traits simultaneously. Providers commonly make use of Terms of Service agreements rather than informed consent procedures. However, to protect consumers from the potential physical, psychological and social harms associated with personal genome testing and to promote autonomous decision-making with regard to the testing offer, we argue that current practices of information provision are insufficient and that there is a place – and a need – for informed consent in personal genome testing, also when it is offered commercially. The increasing quantity, complexity and diversity of most testing offers, however, pose challenges for information provision and informed consent. Both specific and generic models for informed consent fail to meet its moral aims when applied to personal genome testing. Consumers should be enabled to know the limitations, risks and implications of personal genome testing and should be given control over the genetic information they do or do not wish to obtain. We present the outline of a new model for informed consent which can meet both the norm of providing sufficient information and the norm of providing understandable information. The model can be used for personal genome testing, but will also be applicable to other, future forms of broad genetic testing or screening in commercial and clinical settings.
In this article, I claim that at least some young people have the requisite capacity for political participation, and that the exclusion of these young people is in breach of the reasonable expectation that all capable citizens are included in democratic processes. I suggest implementing a capacity test for those under the current age of majority. I outline a system of capacity testing for the youth, distinguish this proposal from prior attempts to justify capacity testing and argue that a suitably constrained capacity testing regime is not simply defensible, but superior to the current system, which arbitrarily excludes some capable members of society from participation. Finally, I explain why only this limited capacity testing regime is acceptable.
Chow's (1996) Statistical significance is a defence of null-hypothesis significance testing (NHSTP). The most common and straightforward use of significance testing is for the statistical corroboration of general hypotheses. In this case, criticisms of NHSTP, at least those mentioned in the book, are unfounded or misdirected. This point is driven home by the author a bit too forcefully and meticulously. The awkward and cumbersome organisation and argumentation of the book make it even harder to read.
Two alternative solutions to the problem of computing the values of theoretical quantities and of testing theoretical hypotheses are Sneed’s structuralist eliminationism and Glymour’s bootstrapping. Sneed attempts to solve the problem by eliminating theoretical quantities by means of the so-called Ramsey-Sneed sentence that represents the global empirical claim of the given theory. Glymour proposes to solve the problem by deducing the values of the theoretical quantities from the hypothesis to be tested. In those cases where the theoretical quantities are not strongly Ramsey-eliminable, eliminationism does not succeed in computing the values of theoretical quantities, and it is compelled to use bootstrapping in this task. On the other hand, we see that a general notion of bootstrapping provides a formally correct procedure for computing theoretical quantities, and thus contributes to the solution to the problem of testing theoretical hypotheses involving these quantities.
In the present paper some formal aspects of the hypothesis-directed stage of medical diagnosis are studied and an algorithm of the diagnostic problem solving process is described. A given field of medical knowledge is represented by a pair of graphs. The sentences describing observed symptoms and signs constitute the data on which the algorithm is based. In the first step, the set of true judgments is determined and the hypotheses which are impossible in a given situation are rejected. In consequence, the model of knowledge is modified and working hypotheses are chosen. Further steps consist in the selection of hypotheses for testing. The set of expected symptoms is determined and these are classified according to their diagnostic value. The process ends when the conditions of arriving at a useful solution of a given problem are fulfilled. The description of the algorithm is based on a clinical example. The model aims at reflecting some of the most important structural features of the diagnosis and does not embrace the probability evaluation problems.
Several difficulties have been raised concerning the applicability of Glymour's model to developing and "un-natural" sciences, those contexts in which he claims it should be most clearly instantiated. An analysis of testing in such a field, archaeology, indicates that while bootstrapping may be realized in general outline, practice necessarily departs from the ideal in at least three important respects: (1) testing is not strictly theory contained; (2) the theory-mediated inference from evidence to test hypothesis is not exclusively deductive; and (3) structural considerations do not displace or take precedence over substantive considerations. These points of divergence reflect the fact that bootstrapping in developing and exploratory sciences is as much a process of theory construction as of theory testing.
Immanuel Kant’s three great Critiques stand among the bulkier monuments of Enlightenment thought. The first is best known; the last had until recently been rather less studied. But his final Critique contains, I contend, a remarkable development of Kant’s theory of how human beings use and create systems of knowledge. While Kant was not himself concerned with the neuronal substrates of cognition, I argue this development yields a novel empirical hypothesis susceptible of experimental investigation. Here I present the Kantian motivation and describe experimental work aimed at testing predictions arising from the new hypothesis.
Elliott Sober (1987, 1993) and Orzack and Sober (forthcoming) argue that adaptationism is a very general hypothesis that can be tested by testing various particular hypotheses that invoke natural selection to explain the presence of traits in populations of organisms. In this paper, I challenge Sober's claim that adaptationism is a hypothesis and I argue that it is best viewed as a heuristic (or research strategy). Biologists would still have good reasons for employing this research strategy even if it turns out that natural selection is not the most important cause of evolution.
In this paper we argue that it is often adaptive to use one's background beliefs when interpreting information that, from a normative point of view, is incomplete. In both of the experiments reported here participants were presented with an item possessing two features and were asked to judge, in the light of some evidence concerning the features, to which of two categories it was more likely that the item belonged. It was found that when participants received evidence relevant to just one of these hypothesised categories (i.e. evidence that did not form a Bayesian likelihood ratio) they used their background beliefs to interpret this information. In Experiment 2, on the other hand, participants behaved in a broadly Bayesian manner when the evidence they received constituted a completed likelihood ratio. We discuss the circumstances under which participants, when making their judgements, consider the alternative hypothesis. We conclude with a discussion of the implications of our results for an understanding of hypothesis testing, belief revision, and categorisation.
According to the knowledge argument, physicalism fails because when physically omniscient Mary first sees red, her gain in phenomenal knowledge involves a gain in factual knowledge. Thus not all facts are physical facts. According to the ability hypothesis, the knowledge argument fails because Mary only acquires abilities to imagine, remember and recognise redness, and not new factual knowledge. I argue that reducing Mary’s new knowledge to abilities does not affect the issue of whether she also learns factually: I show that gaining specific new phenomenal knowledge is required for acquiring abilities of the relevant kind. Phenomenal knowledge being basic to abilities, and not vice versa, it is left an open question whether someone who acquires such abilities also learns something factual. The answer depends on whether the new phenomenal knowledge involved is factual. But this is the same question we wanted to settle when first considering the knowledge argument. The ability hypothesis, therefore, has offered us no dialectical progress with the knowledge argument, and is best forgotten.
As drug testing has become increasingly used to maximize corporate profits by minimizing the economic impact of employee substance abuse, numerous arguments have been advanced which draw the ethical justification for such testing into question, including the position that testing amounts to a violation of employee privacy by attempting to regulate an employee's behavior in her own home, outside the employer's legitimate sphere of control. This article first proposes that an employee's right to privacy is violated when personal information is collected or used by the employer in a way which is irrelevant to the terms of employment. This article then argues that drug testing is relevant and therefore ethically justified within the terms of the employment agreement, and therefore does not amount to a violation of an employee's right to privacy. Arguments to the contrary, including the aforementioned appeal to the employer's limited sphere of control, do not account for reasonable constraints on employee privacy which are intrinsic to the demands of the workplace and implicit in the terms of the employment contract.
David Lewis (1983, 1988) and Laurence Nemirow (1980, 1990) claim that knowing what an experience is like is knowing-how, not knowing-that. They identify this know-how with the abilities to remember, imagine, and recognize experiences, and Lewis labels their view ‘the Ability Hypothesis’. The Ability Hypothesis has intrinsic interest. But Lewis and Nemirow devised it specifically to block certain anti-physicalist arguments due to Thomas Nagel (1974, 1986) and Frank Jackson (1982, 1986). Does it?
In psychiatry, pharmacological drugs play an important experimental role in attempts to identify the neurobiological causes of mental disorders. Besides being developed in applied contexts as potential treatments for patients with mental disorders, pharmacological drugs play a crucial role in research contexts as experimental instruments that facilitate the formulation and revision of neurobiological theories of psychopathology. This paper examines the various epistemic functions that pharmacological drugs serve in the discovery, refinement, testing, and elaboration of neurobiological theories of mental disorders. I articulate this thesis with reference to the history of antipsychotic drugs and the evolution of the dopamine hypothesis of schizophrenia in the second half of the twentieth century. I argue that interventions with psychiatric patients through the medium of antipsychotic drugs provide researchers with information and evidence about the neurobiological causes of schizophrenia. This analysis highlights the importance of pharmacological drugs as research tools in the generation of psychiatric knowledge and the dynamic relationship between practical and theoretical contexts in psychiatry.
On the grounds that rape is an act of violence, not a natural act of intercourse, Roman Catholic teaching traditionally has permitted women who have been raped to take steps to prevent pregnancy, while consistently prohibiting abortion even in the case of rape. Recent scientific evidence that emergency contraception (EC) works primarily by preventing ovulation, not by preventing implantation or by aborting implanted embryos, has led Church authorities to permit the use of EC drugs in the setting of rape. Doubts about whether an abortifacient effect of EC drugs has been completely disproven have led to controversy within the Church about whether it is sufficient to determine that a woman is not pregnant before using EC drugs or whether one must establish that she has not recently ovulated. This article presents clinical, epidemiological, and ethical arguments why testing for pregnancy should be morally sufficient for a faith community that is strongly opposed to abortion.
Sherri Roush and I have each argued independently that the most significant challenge to scientific realism arises from our inability to consider the full range of serious alternatives to a given hypothesis we seek to test, but we diverge significantly concerning the range of cases in which this problem becomes acute. Here I argue against Roush's further suggestion that the atomic hypothesis represents a case in which scientific ingenuity has enabled us to overcome the problem, showing how her general strategy is undermined by evidence I have already offered in support of what I have called the 'problem of unconceived alternatives'. I then go on to show why her strategy will not generally (if ever) allow us to formulate and test exhaustive spaces of hypotheses in cases of fundamental scientific theorizing.
If an experiment on a small number of animals can cure a disease that affects tens of thousands, it could be justifiable. Whether this is really the case in Professor Aziz’s experiments, about which I was asked in the BBC2 documentary Monkeys, Rats and Me: Animal Testing, is a question I have not studied sufficiently to offer an opinion about. Certainly it has been disputed. In my book Animal Liberation I propose asking experimenters who use animals if they would be prepared to carry out their experiments on human beings at a similar mental level — say, those born with irreversible brain damage.
According to the Ability Hypothesis, knowing what it is like to have experience E is just having the ability to imagine or recognize or remember having experience E. I examine various versions of the Ability Hypothesis and point out that they all face serious objections. Then I propose a new version that is not vulnerable to these objections: knowing what it is like to experience E is having the ability to discriminate imagining or having experience E from imagining or having any other experience. I argue that if we replace the ability to imagine or recognize with the ability to discriminate, the Ability Hypothesis can be salvaged.
The Perceptual Hypothesis is that we sometimes see, and thereby have non-inferential knowledge of, others' mental features. The Perceptual Hypothesis opposes Inferentialism, which is the view that our knowledge of others' mental features is always inferential. The claim that some mental features are embodied is the claim that some mental features are realised by states or processes that extend beyond the brain. The view I discuss here is that the Perceptual Hypothesis is plausible if, but only if, the mental features it claims we see are suitably embodied. Call this Embodied Perception Theory. I argue that Embodied Perception Theory is false. It doesn't follow that the Perceptual Hypothesis is implausible. The considerations which serve to undermine Embodied Perception Theory serve equally to undermine the motivations for assuming that others' mental lives are always imperceptible.
Routine testing is a practice whereby medical professionals ask all patients whether they would like an HIV test, regardless of whether there is anything unique to a given patient that suggests the presence of HIV. In three respects I aim to offer a fresh perspective on the debate about whether a developing country with a high rate of HIV infection morally ought to adopt routine testing. First, I present a neat framework that organises the moral issues at stake, bringing out the basic principles involved and exhibiting their logical relationships. Second, appealing to the Kantian principle of respect for the dignity of persons, I offer a thorough justification for routine testing when it serves as a gateway to anti-retroviral treatment (ART). Third, I present a respect-based defence of the controversial and novel thesis that routine testing is morally justified even if ART is unaffordable or otherwise unavailable.
Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis (IBH) in order to help map the spectrum of possible relations between social interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organization of interaction processes that characterize the dynamics of social engagement. The patterns and synergies of this self-organization help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea links interactive factors to more classical observational scenarios.
As knowledge increases about the human genome, prenatal genetic testing will become cheaper, safer and more comprehensive. It is likely that there will be a great deal of support for making prenatal testing for a wide range of genetic disorders a routine part of antenatal care. Such routine testing is necessarily coercive in nature and does not involve the same standard of consent as is required in other health care settings. This paper asks whether this level of coercion is ethically justifiable in this case, or whether pregnant women have a right to remain in ignorance of the genetic make-up of the fetus they are carrying. While information gained by genetic testing may be useful for pregnant women when making decisions about their pregnancy, it does not prevent harm to future children. It is argued that as this kind of testing provides information in the interests of the pregnant women and not in the interests of any future child, the same standards of consent that are normally required for genetic testing should be required in this instance.
This paper introduces a new family of cases where agents are jointly morally responsible for outcomes over which they have no individual control, a family that resists standard ways of understanding outcome responsibility. First, the agents in these cases do not individually facilitate the outcomes and would not seem individually responsible for them if the other agents were replaced by non-agential causes. This undermines attempts to understand joint responsibility as overlapping individual responsibility; the responsibility in question is essentially joint. Second, the agents involved in these cases are not aware of each other's existence and do not form a social group. This undermines attempts to understand joint responsibility in terms of actual or possible joint action or joint intentions, or in terms of other social ties. Instead, it is argued that intuitions about joint responsibility are best understood given the Explanation Hypothesis, according to which a group of agents are seen as jointly responsible for outcomes that are suitably explained by their motivational structures: something bad happened because they didn’t care enough; something good happened because their dedication was extraordinary. One important consequence of the proposed account is that responsibility for outcomes of collective action is a deeply normative matter.
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
1 Introduction and overview
 1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
 1.2 Severity rationale: induction as severe testing
 1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
2 Error statistical tests from the severity perspective
 2.1 N–P test T(): type I, II error probabilities and power
 2.2 Specifying test T() using p-values
3 Neyman's post-data use of power
 3.1 Neyman: does failure to reject H warrant confirming H?
4 Severe testing as a basic concept for an adequate post-data inference
 4.1 The severity interpretation of acceptance (SIA) for test T()
 4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
 4.3 Severity and power
5 Fallacy of rejection: statistical vs. substantive significance
 5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
 5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
 5.3 Principle for the severity interpretation of a rejection (SIR)
 5.4 Comparing significant results with different sample sizes in T(): large n problem
 5.5 General testing rules for T(), using the severe testing concept
6 The severe testing concept and confidence intervals
 6.1 Dualities between one and two-sided intervals and tests
 6.2 Avoiding shortcomings of confidence intervals
7 Beyond the N–P paradigm: pure significance, and misspecification tests
8 Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
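The severity interpretation of acceptance listed in the outline above can be illustrated numerically: for a one-sided test of a normal mean, a nonsignificant result warrants the claim "mu <= mu1" only to the degree that a larger observed mean would very probably have occurred were mu actually mu1. The following sketch is illustrative only; the test setup, numbers, and function name are assumptions, not taken from the paper:

```python
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def severity_of_acceptance(xbar, mu1, sigma, n):
    """Severity with which a nonsignificant sample mean xbar warrants
    the claim mu <= mu1: the probability of observing a sample mean
    larger than xbar if mu were actually equal to mu1."""
    se = sigma / sqrt(n)
    return 1 - norm_cdf((xbar - mu1) / se)

# Example: sigma = 10, n = 100, observed mean 0.5 (not significant at .05
# for H0: mu <= 0). The claim "mu <= 2" passes with high severity, while
# "mu <= 0.5" passes with severity only 0.5 and so is poorly warranted.
print(round(severity_of_acceptance(0.5, 2.0, 10, 100), 3))  # 0.933
print(round(severity_of_acceptance(0.5, 0.5, 10, 100), 3))  # 0.5
```

The asymmetry is the point: a single insignificant result rules out large discrepancies from the null with high severity but says little about small ones, which is how the severity criterion blocks the fallacy of acceptance.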
Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure (NHSTP) is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations (1968b). The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between (1) substantive and statistical hypotheses, (2) statistical alternative and conceptual alternative hypotheses, and (3) making statistical decisions and drawing theoretical conclusions. These distinctions make it easier to show that (1) H0 can be true, (2) the effect size is irrelevant to theory corroboration, and (3) “strong” hypotheses make no difference to NHSTP. Reservations about statistical power, meta-analysis, and the Bayesian approach are still warranted.
Epidemiologists and geneticists claim that genetics has an increasing role to play in public health policies and programs in the future. Within this perspective, genetic testing and screening are instrumental in avoiding the birth of children with serious, costly or untreatable disorders. This paper discusses genetic testing and screening within the framework of eugenics in the health care context of India. Observations are based on literature review and empirical research using qualitative methods. I distinguish ‘private’ from ‘public’ eugenics. I refer to the practice of prenatal diagnosis as an aspect of private eugenics, when the initiative to test comes from the pregnant woman herself. Public eugenics involves testing initiated by the state or medical profession through (more or less) obligatory testing programmes. To illustrate these concepts I discuss the management of thalassaemia, which I see as an example of private eugenics that is moving into the sphere of public eugenics. I then discuss the recently launched newborn screening programme as an example of public eugenics. I use Foucault’s concepts of power and governmentality to explore the thin line separating individual choice and overt or covert coercion, and between private and public eugenics. We can expect that the use of genetic testing technology will have serious and far-reaching implications for cultural perceptions regarding health and disease and women’s experience of pregnancy, besides creating new ethical dilemmas and new professional and parental responsibilities. Therefore, culturally sensitive health literacy programmes to empower the public and sensitise professionals need attention.
Larry Laudan has challenged the realist to come up with a program that submits realism to "those stringent empirical demands which the realist himself minimally insists on when appraising scientific theories." This paper shows how the realist can go about taking up Laudan on this challenge; and, in such a way that the realist hypothesis actually ends up being confirmed, by any empirical standards. In other words, it is shown that we can test for convergent realism, just as readily as Laudan can test for a connection between theories that are controlled by the canons of science and their subsequent reliability.
The debate over the genetic testing of minors has developed into a major bioethical topic. Although several controversial questions remain unanswered, a degree of consensus has been reached regarding the policies on genetic testing of minors. Recently, several commentators have suggested that these policies are overly restrictive, too narrow in focus, and even in conflict with the limited empirical evidence that exists on this issue. We respond to these arguments in this paper, by first offering a clarification of three key concepts—autonomy of the minor, future autonomy, and parental authority—which must be disentangled. We then respond to the arguments by noting the uncertainty of the value of predictive genetic information, and by assessing the psychosocial risks still involved in genetic testing of minors, which are also largely unknown. We conclude that the current consensus position is justified at this stage, in light of the predictions of harm resulting from genetic testing of minors that have not been adequately proved to be unwarranted.
The ‘Knobe effect’ is the name given to the empirical finding that judgments about whether an action is intentional or not seem to depend on the moral valence of this action. To account for this phenomenon, Scaife and Webber have recently advanced the ‘Consideration Hypothesis’, according to which people’s ascriptions of intentionality are driven by whether they think the agent took the outcome into consideration when making his decision. In this paper, I examine Scaife and Webber’s hypothesis and conclude that it is supported neither by the existing literature nor by their own experiments, whose results I did not replicate, and that the ‘Consideration Hypothesis’ is not the best available account of the ‘Knobe Effect’.
Genetic testing is currently subject to little oversight, despite the significant ethical issues involved. Repeated recommendations for increased regulation of the genetic testing market have led to little progress in the policy arena. A 2005 Internet search identified 13 websites offering health-related genetic testing for direct purchase by the consumer. Further examination of these sites showed that overall, biotech companies are not providing enough information for consumers to make well-informed decisions; they are not consistently offering genetic counseling services; and some sites even offer tests with little evidence of clinical value. This article aims to raise company and consumer awareness about the ethical concerns surrounding the direct-to-consumer marketing of health-related genetic tests. It also suggests ways that biotech companies can bring their services to the public in an ethically responsible manner, without increased regulatory oversight.
McDonald and Kreitman (1991) propose a test of the neutral mutation-random drift (NM-RD) hypothesis, the central claim of the neutral theory of molecular evolution. The test involves generating predictions from the NM-RD hypothesis about patterns of molecular substitutions. Alternative selection hypotheses predict that the data will deviate from the predictions of the NM-RD hypothesis in specifiable ways. To conduct the test McDonald and Kreitman examine the evolutionary dynamics of the alcohol dehydrogenase (Adh) gene in three species of Drosophila. The test compares the number of DNA sequence changes between species and within species. The number of DNA differences is an indicator of the evolutionary rate of the Adh gene. Based on the test they conclude that there is strong evidence for adaptive protein evolution at particular sites in the gene. Understanding the test requires some basic knowledge about molecular terms and the predictions of neutral theory. The two important terms are fixed differences and polymorphisms. These are determined by comparing DNA sequences made up of thousands of individual nucleotide sites. A site that is unchanged within a species but different from a related species counts as a fixed difference. These are mutations that occur in some common ancestor of the lineage such that all descendants inherit the change. A site that differs within a species counts as a polymorphism. Determining the number of fixed differences and polymorphisms requires placing each individual gene sequence onto a phylogenetic tree. A coalescent tree charts the ancestral relationships for a set of individual gene sequences. Sequences sampled from within a species form a within-species tree. The common ancestors of each within-species tree form a between-species tree. A detected difference counts as a polymorphism or a fixed difference depending on where it occurs in the phylogenetic tree (cf. Table 1). The test uses the numbers of polymorphisms and fixed differences as indicators of evolutionary rates.
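The site-counting logic described above can be sketched in a few lines of code. The toy alignment and function name below are illustrative assumptions, not real Adh data from McDonald and Kreitman's study:

```python
def classify_sites(species_a, species_b):
    """Count fixed differences and polymorphisms across aligned sites.

    species_a, species_b: lists of equal-length aligned DNA sequences
    sampled from two related species (hypothetical toy data).
    A site varying within either species is a polymorphism; a site
    monomorphic in each species but different between them is a fixed
    difference, matching the counting rule described in the text.
    """
    n_sites = len(species_a[0])
    fixed, polymorphic = 0, 0
    for i in range(n_sites):
        alleles_a = {seq[i] for seq in species_a}
        alleles_b = {seq[i] for seq in species_b}
        if len(alleles_a) > 1 or len(alleles_b) > 1:
            polymorphic += 1          # varies within a species
        elif alleles_a != alleles_b:
            fixed += 1                # monomorphic but divergent
    return fixed, polymorphic

# Toy alignment: three sampled sequences per species.
a = ["ACGTA", "ACGTA", "ACGTG"]  # last site varies within species A
b = ["ACCTA", "ACCTA", "ACCTA"]  # third site fixed at C, vs. G in A
print(classify_sites(a, b))      # (1, 1): one fixed difference, one polymorphism
```

In the full test these counts are tallied separately for different classes of sites and compared (e.g., in a contingency table): the neutral hypothesis predicts the ratio of fixed differences to polymorphisms should be the same across classes, and a significant departure is taken as evidence of selection.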
Modern medicine emphasizes treatment of the sick. It is often said that the widespread genetic testing soon to follow the completion of the Human Genome Project will usher in a new era of preventive medicine. Such changes require new ways of thinking, however. For example, there may be nothing clinically wrong with a healthy patient who requests genetic testing, even if the tests reveal disease genes. Since all individuals have genetic skeletons in their closets, it is important to be careful not to confuse having disease genes with having the diseases that they cause. Unfortunately, many in the public have adopted a kind of genetic determinism that sees genes as destiny: for example, having the gene associated with colon cancer means they will develop colon cancer. Physicians tend to be more careful, yet even they are not immune to subtle versions of genetic determinism. One example of this is the uncritical categorization of certain diseases as “genetic”. In fact, an adequate concept of genetic disease is extremely difficult to come by. The simplest notion would require a 1:1 correspondence between a disease and its genes, but this is the exception rather than the rule. For example, cystic fibrosis (CF) is often put forward as a good example of a genetic disease, since it seems to result from mutations in a single gene, CFTR. Even in this case, however, the exact relationship between CFTR mutations and disease is not clear, as virtually every possible combination of sweat chloride test results, genetic test results, and symptoms has been observed. If a patient presents with the classic symptoms of CF and is found to have a mutation in the CFTR gene, the physician might understandably infer that the mutation caused the disease. But if an asymptomatic patient is tested and it is discovered that he or she has a CFTR mutation, it is unclear what this means. The doctor might tell the patient the gene is abnormal and that he or she is likely to develop pulmonary problems, etc., but it’s not really known whether even this qualified prognosis is true.
This paper deals with the conflict between the desire of an employer to test employees for honesty and chemical dependency, and the right of the employee to privacy. Not only is the physical privacy of the employee infringed upon, but the psychic privacy of the individual as well. It is the conclusion of the paper that such an invasion of privacy is not justified without serious and compelling reason, and not the mere chance that testing will reveal problems among some percentage of the tested persons.
We postulate the Testing Principle: that individuals 'act like statisticians' when they face uncertainty in a decision problem, ranking alternatives to the extent that available evidence allows. The Testing Principle implies that completeness of preferences, rather than the sure-thing principle, is violated in the Ellsberg Paradox. In the experiment, subjects chose between risky and uncertain acts in modified Ellsberg-type urn problems, with sample information about the uncertain urn. Our results show, consistent with the Testing Principle, that the uncertain urn is chosen more often when the sample size is larger, holding constant a measure of ambiguity (proportion of balls of unknown colour in the urn). The Testing Principle rationalises the Ellsberg Paradox. Behaviour consistent with the principle leads to a reduction in Ellsberg-type violations as the statistical quality of sample information is improved, holding ambiguity constant. The Testing Principle also provides a normative rationale for the Ellsberg paradox that is consistent with procedural rationality.
Scientists, the medical profession, philosophers, social scientists, policy makers, and the public at large have been quick to embrace the accomplishments of genetic science. The enthusiasm for the new biotechnologies is not unrelated to their worthy goal. The belief that the new genetic technologies will help to decrease human suffering by improving the public’s health has been a significant influence in the acceptance of technologies such as genetic testing and screening. But accepting this end should not blind us to the need for an evaluation of whether a particular means is adequate to achieve it. Lack of such evaluation notwithstanding, discussions of the ethical, legal, and social implications have tended to presuppose that the development and implementation of genetic testing will be an appropriate means to reduce human suffering in significant ways. I argue here that such an assumption is mistaken. In part this is the case because human biology is more complex than sometimes it is made to appear in these debates. But, the idea that human suffering resulting from disease can be reduced in significant ways with the use of genetic testing also ignores the social contexts in which these technologies are being developed and implemented.
Despite recent advances in ways to prevent transmission of HIV from a mother to her child during pregnancy, infants continue to be born and become infected with HIV, particularly in southern Africa where HIV prevalence is the highest in the world. In this region, emphasis has shifted from voluntary HIV counselling and testing to routine testing of women during pregnancy. There have also been proposals for mandatory testing. Could mandatory testing ever be an option, even in high-prevalence settings? Many previous examinations of mandatory testing have dealt with it in the context of low HIV prevalence and a well-resourced health care system. In this discussion, different assumptions are made. Within this context, where mandatory testing may be a strategy of last resort, the objections to it are reviewed. Special attention is paid in the discussion to the entrenched vulnerability of women in much of southern Africa and how this contributes to both HIV prevalence and ongoing challenges for preventing HIV transmission during pregnancy. While mandatory testing is ethically plausible, particularly when coupled with guaranteed access to treatment and care, the discussion argues that the moment to employ this strategy has not yet come. Many barriers remain for pregnant women in terms of access to testing, treatment and care, most acutely in the southern African setting, despite the presence of national and international human rights instruments aimed at empowering women and removing such barriers. While this situation persists, mandatory HIV testing during pregnancy cannot be justified.
By some estimates one-third of American corporations now require their employees to be tested for drug use. These requirements are compatible with general employment law while promoting the public's interest in fighting drug use. Moreover, the United States Supreme Court has ruled that drug testing programs are constitutionally permissible within both the public and the private sectors. It appears mandatory drug testing is a permanent fixture of American corporate life. (Bakaly, C. G., Grossman, J. M. 1989).
Many philosophers and psychologists now argue that emotions play a vital role in reasoning. This paper explores one particular way of elucidating how emotions help reason which may be dubbed 'the search hypothesis of emotion'. After outlining the search hypothesis of emotion and dispensing with a red herring that has marred previous statements of the hypothesis, I discuss two alternative readings of the search hypothesis. It is argued that the search hypothesis must be construed as an account of what emotions typically do, rather than as a definition of emotion. Even as an account of what emotions typically do, the search hypothesis can only be evaluated in the context of a specific theory of what emotions are. 1 Introduction 2 The search hypothesis of emotion 3 A red herring: the frame problem 4 The search problem 5 Two readings of the search hypothesis 6 Two final remarks 7 Conclusion.
Decisions about funding health services are crucial to controlling costs in health care insurance plans, yet they encounter serious challenges from intellectual property protection—e.g., patents—of health care services. Using Myriad Genetics' commercial genetic susceptibility test for hereditary breast cancer (BRCA testing) in the context of the Canadian health insurance system as a case study, this paper applies concepts from social contract theory to help develop more just and rational approaches to health care decision making. Specifically, Daniels's and Sabin's "accountability for reasonableness" is compared to broader notions of public consultation, demonstrating that expert assessments in specific decisions must be transparent and accountable and supplemented by public consultation.
The physician-patient relationship has changed over the last several decades, requiring a systematic reevaluation of the competing demands of patients, physicians, and families. In the era of genetic testing, using a model of patient care known as the family covenant may prove effective in accounting for these demands. The family covenant articulates the roles of the physician, patient, and the family prior to genetic testing, as the participants consensually define them. The initial agreement defines the boundaries of autonomy and benefit for all participating family members. The physician may then serve as a facilitator in the relationship, working with all parties in resolving potential conflicts regarding genetic information. The family covenant promotes a fuller discussion of the competing ethical claims that may come to bear after genetic test results are received.
Genetic testing in the workplace is a technology both full of promise and fraught with ethical peril. Though not yet common, it is likely to become increasingly so. We survey the key arguments in favour of such testing, along with the most significant ethical worries. We further propose a set of pragmatic criteria, which, if met, would make it permissible for employers to offer (but not to require) workplace genetic testing.
Most of the debate about drug testing in the workplace has focused on the right to privacy. Proponents of testing have had to tackle difficult questions concerning the nature, extent, and weight of the privacy rights of employees. This paper examines a different kind of argument — the claim that because corporations are responsible for harms committed by employees while under the influence of drugs, they are entitled to test for drug use. This argument has considerable intuitive appeal, because it seems, at least at first glance, to bypass the issue of privacy rights altogether. The argument turns, not on rights, but on the nature and conditions of responsibility. We may therefore call it an ought implies can argument. In spite of its initial appeal, however, the argument does not succeed in circumventing the claims of privacy rights. Even responsibility for the actions of others does not entitle us to do anything at all to control their behavior; we must look to rights, among other things, to determine what sorts of controls are morally permissible. In addition, the argument rests on unjustified assumptions about the connection between drug testing and the prevention of drug-related harm.
This paper provides an empirical account of commercial genetic predisposition testing in mainland China, based on interviews with company managers, regulators and clients, and literature research during fieldwork in mainland China from July to September 2006. This research demonstrates that the commercialization of genetic testing and the lack of adequate regulation have created an environment in which dubious advertising practices and misleading and unprofessional medical advice are commonplace. The consequences of these ethically problematic activities for the users of predictive tests are, as yet, unknown. The paper concludes with a bioethical and social science perspective on the social and ethical issues raised by the dissemination and utilization of genetic testing in mainland China.
The diagnosis of HIV infection is the point of entry for treatment and prevention services, yet many infected persons in both developed and developing countries remain undiagnosed. To reduce the number of undiagnosed infections, a variety of expanded testing policies have been recommended, including opt-out testing. This testing model assumes that in populations of increased HIV prevalence, voluntary testing should be offered to all patients seen in healthcare settings and performed unless patients specifically decline. While this approach raises ethical issues concerning “voluntariness”, access to care, and stigma, the potential benefits of opt-out testing far outweigh its potential adverse effects.
Predictive genetic testing may confront those affected with difficult life situations that they have not experienced before. These life situations may be interpreted as ‘absurd’. In this paper we present a case study of a predictive test situation, showing the perspective of a woman going through the process of deciding for or against taking the test, and struggling with feelings of alienation. To interpret her experiences, we refer to the concept of absurdity, developed by the French philosopher Albert Camus. Camus' writings on absurdity appear to resonate with patients' stories when they talk about their body and experiences of illness. In this paper we draw on Camus' philosophical essay ‘The Myth of Sisyphus’ (1942), and compare the absurd experiences of Sisyphus with the interviewee's story. This comparison opens up a field of ethical reflection. We demonstrate that Camus' concept of absurdity offers a new and promising approach to understanding the fragility of patients' situations, especially in the field of predictive testing. We show that people affected might find new meaning through narratives that help them to reconstruct the absurd without totally overcoming it. In conclusion, we will draw out some normative consequences of our narrative approach.
Despite decades of prevention efforts, millions of persons worldwide continue to become infected by the human immunodeficiency virus (HIV) every year. This urgent problem of global epidemic control has recently led to significant changes in HIV testing policies. Provider-initiated approaches to HIV testing, such as those that routinely inform persons that they will be tested for HIV unless they explicitly refuse ('opt out'), have been embraced by the Centers for Disease Control and Prevention and the World Health Organization. While these policies appear to increase uptake of testing, they raise a number of ethical concerns that have been debated in journals and at international AIDS conferences. However, one special form of 'provider-initiated' testing is being practiced and promoted in various parts of the world, and has advocates within international health agencies, but has received little attention in the bioethical literature: mandatory premarital HIV testing. This article analyses some of the key ethical issues related to mandatory premarital HIV testing in resource-poor settings with generalized HIV epidemics. We will first briefly mention some mandatory HIV premarital testing proposals, policies and practices worldwide, and offer a number of conceptual and factual distinctions to help distinguish different types of mandatory testing policies. Using premarital testing in Goma (Democratic Republic of Congo) as a point of departure, we will use influential public health ethics principles to evaluate different forms of mandatory testing. We conclude by making concrete recommendations concerning the place of mandatory premarital testing in the struggle against HIV/AIDS.
In this article I attempt to examine the justification for the mandatory drug testing of employees. The justification commonly assumes the form of the productivity argument, which states that an employer has a proprietary right to regulate the purchased time of the employee. Since the employer may be rightfully concerned with the employee's productive output, so this argument goes, the employer retains the right to motivate production. By extension, the employee's behavior outside of the workplace which affects his or her productive capacity may also be regulated, including drug use which may affect this capacity. Thus it is claimed that the employer has the right to test employees for drug use and to impose sanctions when it is discovered. I argue that the implications of the productivity argument lead to unacceptable consequences and thus must be rejected. The productivity argument can be examined in light of a thought-experiment in which the reader is asked to imagine the discovery of two drugs, both of which enhance employee productivity. Calling these drugs hedonine and pononine, I imagine the first to be pleasurable to the employee while the second is accompanied by a degree of pain and discomfort. Since the mandated use of both of these imagined drugs would be consistent with the productivity argument, I maintain that the productivity argument thereby fails and so must be rejected. As employee drug testing is justified by this argument, it must also be rejected.
There is a general consensus in the medical and medical ethics communities against predictive genetic testing of children for late onset conditions, but minimal consideration is given to predictive testing of asymptomatic children for disorders that present later in childhood when presymptomatic treatment cannot influence the course of the disease. In this paper, I examine the question of whether it is ethical to perform predictive testing and screening of newborns and young children for conditions that present later in childhood. I consider the risks and benefits of (1) predictive testing of children from high-risk families; (2) predictive population screening for conditions that are untreatable; and (3) predictive population screening for conditions in which the efficacy of presymptomatic treatment is equivocal. I conclude in favor of parental discretion for predictive genetic testing, but against state-sponsored predictive screening for conditions that do not fulfill public health screening criteria.
This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with the false positive/negative rates that constitute deductive calculations based on known probabilities among events. As a result, the ampliative dimension of frequentist induction—learning from data about the underlying data-generating mechanism—is missing.
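The contrast the abstract draws can be made concrete: in examples like the Harvard Medical School test, the posterior probability follows from the known rates by a purely deductive application of Bayes' rule, with no learning from data involved. A minimal Python sketch, using the figures commonly quoted for that problem (prevalence 1/1000, false-positive rate 5%, and an assumed perfect sensitivity); the function name and numbers are illustrative, not taken from the article:

```python
# Bayes' rule applied to a diagnostic test: given known prevalence and
# error rates, the positive predictive value is a deductive consequence
# of fixed probabilities, not an inductive inference about a
# data-generating mechanism.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Figures commonly quoted for the Harvard Medical School test problem
# (illustrative assumptions): prevalence 1/1000, 5% false positives,
# perfect sensitivity.
ppv = positive_predictive_value(0.001, 1.0, 0.05)
print(f"P(disease | positive) = {ppv:.3f}")  # about 0.020
```

Despite a positive result, the probability of disease stays near 2%, which is the intuition-defying answer the classic example is built around.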
There is consensus that children have questionable decisional capacity and, therefore, in general a parent or a guardian must give permission to enroll a child in a research study. Moreover, freedom from duress and coercion, the cardinal rule in research involving adults, is even more important for children. This principle is embodied prominently in the Nuremberg Code (1947) and in various federal human-research-protection regulations. In a program named "SATURN" (Student Athletic Testing Using Random Notification), each school in the Oregon public-school system may implement a mandatory drug-testing program for high school student athletes. A prospective study to identify drug use among student-athletes, SATURN is designed both to evaluate the influence of random drug testing and to validate the survey data through identification of individuals who do not report drug use. The enrollment of students in the drug-testing study is a requirement for playing a school sport. In addition to the coercive nature of this study design, there were ethically questionable practices in recruitment, informed consent, and confidentiality. This article concerns the question of whether research can be conducted with high school students in conjunction with a mandatory drug-testing program, while adhering to prevailing ethical standards regarding human-subjects research and specifically the participation of children in research.
If the NHSTP procedure is essential for controlling for chance, why is there little, if any, discussion of the nature of chance by Chow and other advocates of the procedure? Also, many criticisms that Chow takes to be aimed against the NHSTP (null-hypothesis significance-test procedure) are actually directed against the kind of theory that is tested by the procedure.
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific claims based upon NPT has been called into question. The purpose of this paper is first to clarify this question by examining the conceptions of (I) the function served by NPT in science, and (II) the requirements of an objective theory of statistics upon which attacks on NPT's objectivity are based. Our grounds for rejecting these conceptions suggest altered conceptions of (I) and (II) that might avoid such attacks. Second, we propose a reformulation of NPT, denoted by NPT*, based on these altered conceptions, and argue that it provides an objective theory of statistics. The crux of our argument is that by being able to objectively control error frequencies NPT* is able to objectively evaluate what has or has not been learned from the result of a statistical test.
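The idea of objectively controlling error frequencies can be illustrated with a standard one-sided z-test of a normal mean: fixing the Type I error rate α determines the rejection region, and the power against any specific alternative then follows deductively. A minimal sketch (all numbers and names here are illustrative assumptions, not from the paper):

```python
# For a one-sided z-test of H0: mu = mu0 vs H1: mu = mu1 > mu0 with known
# sigma, choosing alpha fixes the critical value, and the Type II error
# rate (hence power) against mu1 follows from the sampling distribution.
from math import sqrt
from statistics import NormalDist

def error_probabilities(mu0, mu1, sigma, n, alpha):
    """Return (critical value, power) for the one-sided z-test."""
    se = sigma / sqrt(n)                              # standard error of xbar
    crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se  # reject H0 if xbar > crit
    power = 1 - NormalDist(mu1, se).cdf(crit)          # P(reject H0 | H1 true)
    return crit, power

# Illustrative numbers: test mu = 0 against mu = 0.5, sigma = 1, n = 25.
crit, power = error_probabilities(mu0=0.0, mu1=0.5, sigma=1.0, n=25, alpha=0.05)
print(f"reject if xbar > {crit:.3f}; power against mu = 0.5 is {power:.3f}")
```

The point of the sketch is that both error rates are pinned down in advance of the data, which is what grounds the claim that such a procedure can be evaluated objectively.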
This paper considers whether game theory can be tested, what difficulties experimenters face in testing it, and what can be learned from attempts to test it. I emphasize that tests of game theory rely on fallible assumptions concerning particular features of the strategic situation and of the players. These do not render game theory untestable in principle, but they create serious problems. In coping with these problems, experimenters may use game theory to learn what games experimental subjects are playing.
Testing the other. It is nowadays a commonplace of academic discourse on the social sciences, especially when it comes to such disciplines as anthropology and semiotics, to oppose the old (and old-fashioned) methods of the “structuralists” to post-modern and post-structural epistemological attitudes. Structuralism, it is said, was based on the idea that it is possible to apprehend the meaning of cultural productions from an exterior and therefore objective standpoint, just by making explicit their immanent principles of organization. Today, on the contrary, a totally distinct approach to cultural productions would stem from the consciousness of a strict interdependence, or even of an identity in nature between subject and object at all levels of the process of knowledge, at least in the area of the humanities. However, such a crude opposition proves insufficient when one observes the effective practices of current research. The example here analysed is the account given by the American anthropologist Paul Rabinow of his first mission abroad: Reflections on Fieldwork in Morocco. The analysis, based on the use of a semiotic modelling of interaction, consists in exploring the variety of positions respectively adopted by the anthropologist and his informants according to circumstances and contexts. Four regimes are in principle distinguishable: programmation, based on the regularity and predictability of the actors’ behaviour; manipulation, based on some kind of contractualization of their relationships; adjustment, based upon reciprocal sensitivity and various strategies permitting both partners in the interaction to test one another; and a regime of consent to the unexpected or the unforeseeable. The main result of the analysis resides in the possibility of showing that to each of these styles of pragmatic interaction there corresponds a specific regime at the cognitive level as well.
This leads to stressing the complexity, if not heterogeneity, of the strategies of knowledge involved at various stages of anthropological research, from the collection of data to the cooperative production of new forms of understanding. Taking the risk of generalization, one might also consider the interactional device, which is here tested through the reading of P. Rabinow’s report, as a metatheoretical model describing the various epistemological stances at work and at stake in the practices of research in the social sciences at large.
In this essay, I indicate how social-science approaches can throw light on predictive genetic testing (PGT) in various societal contexts. In the first section, I discuss definitions of various forms of PGT, and point out their inherent ambiguity and inappropriateness when taken out of an ideal–typical context. In section two, I argue further that an ethics approach proceeding from the point of view of the abstract individual in a given society should be supplemented by an approach that regards bioethics as inherently ambiguous, contested, changeable and context-dependent. In the last section, I place these bioethical discussions of PGT in the context of Asian communities. Here, a critical view of what constitutes a community and culture proves necessary to understand the role of bioethical debates and the empirical manifestations of PGT in Asian societies. A discussion of the concepts of family and kinship in relation to PGT indicates that any bioethical analysis has to take into account that bioethical values are not just reflections of a cultural community, but embody both bioethical ideals and prevalent political rhetoric which is exhibited, propagated and manipulated by individuals and collectives for a variety of purposes. I end by summarising the contributions that social science could make to the understanding of the bioethics of PGT.
Traditional attempts to delineate the distinctive rationality of modern science have taken it for granted that the purpose of empirical research is to test judgments. The choice of concepts to use in those judgments is therefore seen either as a matter of indifference (Popper) or as an important choice which must be made, so to speak, in advance of all empirical research (Carnap). I argue that scientific method aims precisely at empirical testing of concepts, and that even the simplest scientific experiment or observation results in conceptual change.
Even in a theory-corroboration context, attention to effect size is called for if significance testing is to be of any value. I sketch a Popperian construal of significance tests that better fits into scientific inference as a whole. Because of its many errors, Chow's book cannot be recommended to the novice.
Chow's (1996) defense of the null-hypothesis significance-test procedure (NHSTP) is thoughtful and compelling in many respects. Nevertheless, techniques such as meta-analysis, power analysis, effect size estimation, and confidence intervals can be useful supplements to NHSTP in furthering the cumulative nature of behavioral research, as illustrated by the history of research on the spontaneous recovery of verbal learning.
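One of the supplements this commentary mentions, effect-size estimation with a confidence interval, can be sketched briefly. The code below computes Cohen's d with a large-sample normal-approximation interval (the Hedges–Olkin variance approximation); the data and function name are illustrative assumptions, not drawn from the verbal-learning literature:

```python
# Cohen's d compares two group means in pooled-standard-deviation units;
# reporting it with a confidence interval conveys both the size and the
# precision of an effect, rather than a bare accept/reject decision.
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d_ci(group1, group2, conf=0.95):
    """Cohen's d with an approximate large-sample confidence interval."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = sqrt(((n1 - 1) * stdev(group1) ** 2 +
                      (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    d = (mean(group1) - mean(group2)) / pooled_sd
    # Large-sample standard error of d (Hedges-Olkin approximation)
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)

# Illustrative data: two small groups whose means differ by one unit.
d, (lo, hi) = cohens_d_ci([1.0, 2.0, 3.0, 4.0, 5.0],
                          [0.0, 1.0, 2.0, 3.0, 4.0])
print(f"d = {d:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

For these tiny samples the interval spans zero, which shows at a glance what a bare p-value conceals: a moderate point estimate can coexist with severe imprecision.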
Prenatal care and the practice of prenatal genetic testing are about to change fundamentally. Owing to several ground-breaking technological developments, prenatal screening and diagnosis (PND) will soon be offered earlier in gestation, with fewer procedure-related risks and for a profoundly enlarged variety of targets. In this paper it is argued that the existing normative framework for prenatal screening and diagnosis cannot respond adequately to these new developments. By concentrating on issues of informed consent and the reproductive autonomy of the pregnant woman, the ethical debate misses problems related to the clinical pathway as a whole and to implicit normative attributions to clinical actions or to the function of health-care professionals. If, however, the ethical debate were to focus to a larger extent on the clinical context and on the ends of PND, it could provide a more comprehensive analysis of the ethical challenges, especially those of the new technologies, and so be more adequately prepared for their implementation.
Background: As genetic technology advances, practices of genetic testing have become more heterogeneous: many different types of tests are finding their way to the public in different settings and for a variety of purposes. This diversification is relevant to the discourse on ethical, legal and societal issues (ELSI) surrounding genetic testing, which must evolve to encompass these differences. One important development is the rise of personal genome testing on the basis of genetic profiling: the testing of multiple genetic variants simultaneously for the prediction of common multifactorial diseases. Currently, an increasing number of companies are offering personal genome tests directly to consumers and are spurring ELSI-discussions, which stand in need of clarification. This paper presents a systematic approach to the ELSI-evaluation of personal genome testing for multifactorial diseases along the lines of its test characteristics. Discussion: This paper addresses four test characteristics of personal genome testing: its being a non-targeted type of testing, its high analytical validity, low clinical validity and problematic clinical utility. These characteristics raise their own specific ELSI, for example: non-targeted genetic profiling poses serious problems for information provision and informed consent. Questions about the quantity and quality of the necessary information, as well as about moral responsibilities with regard to the provision of information, are therefore becoming central themes within ELSI-discussions of personal genome testing. Further, the current low level of clinical validity of genetic profiles raises questions concerning societal risks and regulatory requirements, whereas simultaneously it causes traditional ELSI-issues of clinical genetics, such as psychological and health risks, discrimination, and stigmatization, to lose part of their relevance.
Also, classic notions of clinical utility are challenged by the newer notion of 'personal utility.' Summary: Consideration of test characteristics is essential to any valuable discourse on the ELSI of personal genome testing for multifactorial diseases. Four key characteristics of the test (targeted/non-targeted testing, analytical validity, clinical validity and clinical utility) together determine the applicability and the relevance of ELSI to specific tests. The paper identifies and discusses four areas of interest for the ELSI-debate on personal genome testing: informational problems, risks, regulatory issues, and the notion of personal utility.