Keller & Miller's (K&M's) conclusion appears to be correct; namely, that common, harmful, heritable mental disorders are largely maintained at present frequencies by mutation-selection balance at many different loci. However, their “paradox” is questionable. (Published Online November 9 2006).
The images from wars in the Middle East that haunt us are those of young women killing and torturing. Their media-circulated stories share a sense of shock. They have both galvanized and confounded debates over feminism and women's equality. And, as Oliver argues in this essay, they share, perhaps subliminally, the problematic notion of women as both offensive and defensive weapons of war, a notion that is symptomatic of fears of women's "mysterious" powers.
In the post-Newtonian world motion is assumed to be a simple category which relates to the locomotion of bodies in space, and is usually associated only with physics. Philosophy, God and Motion shows that this is a relatively recent understanding of motion and that prior to the scientific revolution motion was a much broader and more mysterious category, applying to moral as well as physical movements. Simon Oliver presents fresh interpretations of key figures in the history of western thought including Plato, Aristotle, Aquinas and Newton, examining the thinkers' handling of the concept of motion. Through close readings of seminal texts in ancient and medieval cosmology and early modern natural philosophy, the book moves from antique to modern times investigating how motion has been of great significance within theology, philosophy and science. Particularly important is the relation between motion and God: following Aristotle, traditional doctrines of God have understood the divine as the 'unmoved mover', while post-Holocaust theologians have suggested that in order to be compassionate God must undergo the motion of suffering. Philosophy, God and Motion suggests that there may be an authentically theological, as well as a natural scientific, understanding of motion.
In Womanizing Nietzsche, Kelly Oliver uses an analysis of the position of woman in Nietzsche's texts to open onto the larger question of philosophy's relation to the feminine and the maternal. Offering readings from Nietzsche, Derrida, Irigaray, Kristeva, Freud and Lacan, Oliver builds an innovative foundation for an ontology of intersubjective relationships that suggests a new approach to ethics. Oliver argues that while Freud, Nietzsche and Derrida, in particular, attempt to open up philosophy to its other--the unconscious, the body, difference, even the feminine--their attempts depend on closing off the possibility of a specifically feminine other. In this regard, Oliver maintains that none of these theorists have escaped the Hegelian model of intersubjectivity at the level of Lordship and Bondage. She suggests that the recent talk of the death of philosophy is a symptom of the exclusion of woman, the feminine and the maternal. By problematizing and reformulating the traditional philosophical association between the maternal and nature, Oliver presents an alternative model for intersubjectivity and ethics.
Bernard Mayo, who died in 2000, was Professor of Moral Philosophy at the University of St Andrews from 1967–1983. He chose his 19th century predecessor J F Ferrier as the subject of his inaugural lecture delivered on 26th November 1969. Copies of the lecture were printed and distributed, but it was never published. Mayo's choice of subject for his inaugural shows remarkable and at the time highly unusual insight into the value of Ferrier's philosophical writings, and rising current interest in Ferrier warrants its publication now, in a lightly edited version that has eliminated references to the specific occasion on which it was delivered. Mayo explores Ferrier's version of the contrast between the human and the natural as a means of illuminating the 20th century debate between realists and relativists in moral philosophy.
A valuable intervention in Kristevan scholarship and a significant and exciting contribution in its own right to post-structuralist discussions of ethical and political agency and practice. Contributors: Judith Butler, Tina Chanter, Marilyn Edelstein, Jean Graybeal, Suzanne Guerlac, Alice Jardine, Lisa Lowe, Noelle McAfee, Norma Claire Moruzzi, Kelly Oliver, Tilottma Rajan, Jacqueline Rose, Allison Weir, Mary Bittner Wiseman, Ewa Ziarek.
This enterprising book, written in the spirit of William James, urges our appreciation of the intensely personal character of spiritual transcendence. Phil Oliver's work has important implications for specialists concerned with the Jamesian concept of "pure experience," and it illuminates significant interdisciplinary ties among philosophy, literature, and other intellectual domains.
Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuhn on demarcation, my recasting of normal science is seen to tell against Kuhn's view of revolutionary science.
I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a deeper understanding of NPT tests than their typical formulation as accept-reject routines, against which criticisms of NPT are really directed. The Pearsonian view that emerges suggests how NPT tests may avoid these criticisms while still retaining what is central to these methods: the control of error probabilities.
The concepts of animal, human, and rights are all part of a philosophical tradition that trades on foreclosing the animal, animality, and animals. Rather than looking to qualities or capacities that make animals the same as or different from humans, I investigate the relationship between the human and the animal. To insist, as animal rights and welfare advocates do, that our ethical obligations to animals are based on their similarities to us reinforces the type of humanism that leads to treating animals—and other people—as subordinates. But, if recent philosophies of difference are any indication, we can acknowledge difference without acknowledging our dependence on animals, or without including animals in ethical considerations. Animal ethics requires rethinking both identity and difference by focusing on relationships and responsivity. My aim is not only to suggest an animal ethics but also to show how ethics itself is transformed by considering animals.
My essay is framed by Hypatia's first special issue on Motherhood and Sexuality at one end, and by the most recent special issue (as of this writing) on the work of Iris Young, whose work on pregnant embodiment has become canonical, at the other. The questions driving this essay are: When we look back over the last twenty-five years, what has changed in our conceptions of pregnancy and maternity, both in feminist theory and in popular culture? What aspects of feminist debates from the 1970s and 1980s are still relevant today? And, how might what appear to be radical shifts in popular perceptions of pregnancy actually continue traditional values that objectify and "abjectify" the maternal body? Here, I will focus on three central elements of the revaluation of pregnancy and maternity as they show up in feminist philosophy and in popular culture: (1) the relationship between pregnancy and sexuality, both in terms of pregnant sexuality and in terms of the pregnant body as sexual object; (2) the "choice" to become a mother as a "feminist choice"; and (3) the temporality of pregnancy and birth as marking something like "women's time."
Yah boo sucks to the grammer wot we lernt in skool! Grammar (and the bad old traditional logic) says that quantifier phrases such as 'nobody', 'everyone', 'all women', 'some men' and 'a man' are in the same category as names such as 'Milly', 'Molly' and 'Mandy'. So, prior to their first corrective lessons, students are awfully muddled, the first and fundamental problem being the Woozle hunt for somebody called 'nobody'. Hoorah for modern logic and logic teachers! The story used to justify our current logics is entirely fictional. The claims about names and quantifier phrases in English are wildly false. Two of the heroes of modern logic, Russell and Hilbert, make the very mistakes which are falsely blamed on traditional logic. The villain, Meinong, turns out to have been working a different patch. Ideas ascribed to traditional grammar are modern inventions. Neither logicians nor grammarians can be trusted to tell the history of either grammar or logic.
Suicide has been condemned in our culture in one way or another since Augustine offered theological arguments against it in the fifth century. More recently, theological condemnation has given way to the view that suicidal behavior must always be symptomatic of emotional disturbance and mental illness. However, suicide has not always been viewed so negatively. In other times and cultures, it has been held that circumstances might befall a person in which suicide would be a perfectly rational course of action, in the same sense that any other course of action could be rational: that it could be sensible, i.e., defensible by good reasons, or that it could be in keeping with the agent's fundamental interests. Indiscriminate use of modern life-sustaining technologies has renewed interest in the possibility of rational suicide. Today proponents of rational suicide tend to equate the rationality of suicide with the competence of the decision to commit suicide. Keywords: suicide, rational, euthanasia.
We present a plural logic that is as expressively strong as it can be without sacrificing axiomatisability, axiomatise it, and use it to chart the expressive limits set by axiomatisability. To the standard apparatus of quantification using singular variables our object-language adds plural variables, a predicate expressing inclusion (is/are/is one of/are among), and a plural definite description operator. Axiomatisability demands that plural variables only occur free, but they have a surprisingly important role. Plural description is not eliminable in favour of quantification; on the contrary, quantification is definable in terms of it. Predicates and functors (function signs) can take plural as well as singular terms as arguments, and both many-valued and single-valued functions are expressible. The system accommodates collective as well as distributive predicates, and the condition for a predicate to be distributive is definable within it; similarly for functors. An essential part of the project is to demonstrate the soundness and completeness of the calculus with respect to a semantics that does without set-theoretic domains and in which the use of set-theoretic extensions of predicates and functors is replaced by the sui generis relations and functions for which the extensions were at best artificial surrogates. Our metalanguage is designed to solve the difficulties involved in talking plurally about individuals and about the semantic values of plural items.
The growing acceptance and success of experimental economics has increased the interest of researchers in tackling philosophical and methodological challenges to which their work increasingly gives rise. I sketch some general issues that call for the combined expertise of experimental economists and philosophers of science, of experiment, and of inductive‐statistical inference and modeling. †To contact the author, please write to: 235 Major Williams, Virginia Tech, Blacksburg, VA 24061‐0126; e‐mail: email@example.com.
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
1 Introduction and overview
1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
1.2 Severity rationale: induction as severe testing
1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
2 Error statistical tests from the severity perspective
2.1 N–P test T(α): type I, II error probabilities and power
2.2 Specifying test T(α) using p-values
3 Neyman's post-data use of power
3.1 Neyman: does failure to reject H warrant confirming H?
4 Severe testing as a basic concept for an adequate post-data inference
4.1 The severity interpretation of acceptance (SIA) for test T(α)
4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
4.3 Severity and power
5 Fallacy of rejection: statistical vs. substantive significance
5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
5.3 Principle for the severity interpretation of a rejection (SIR)
5.4 Comparing significant results with different sample sizes in T(α): large n problem
5.5 General testing rules for T(α), using the severe testing concept
6 The severe testing concept and confidence intervals
6.1 Dualities between one- and two-sided intervals and tests
6.2 Avoiding shortcomings of confidence intervals
7 Beyond the N–P paradigm: pure significance and misspecification tests
8 Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
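The error probabilities and severity assessments this abstract contrasts lend themselves to a small numerical illustration. The sketch below uses a one-sided Normal test with known sigma; the particular means, sample size, and significance level are my own illustrative assumptions, not figures from the paper:

```python
# A minimal sketch of Neyman-Pearson error probabilities and post-data
# severity for a one-sided Normal test, H0: mu <= mu0 vs H1: mu > mu0,
# with known sigma. All numbers below are illustrative assumptions.
from statistics import NormalDist

Z = NormalDist()  # standard Normal

def cutoff(mu0, sigma, n, alpha):
    """Sample-mean threshold beyond which the level-alpha test rejects H0."""
    return mu0 + Z.inv_cdf(1 - alpha) * sigma / n ** 0.5

def power(mu0, sigma, n, alpha, mu1):
    """Pre-data probability of rejecting H0 when the true mean is mu1."""
    se = sigma / n ** 0.5
    return 1 - Z.cdf((cutoff(mu0, sigma, n, alpha) - mu1) / se)

def severity_of_rejection(sigma, n, xbar, mu1):
    """Post-data severity for inferring mu > mu1 from an observed rejection:
    the probability of a result less extreme than the observed xbar
    were the true mean only mu1."""
    return Z.cdf((xbar - mu1) / (sigma / n ** 0.5))

# sigma = 1, n = 25 (standard error 0.2), alpha = 0.05
print(round(cutoff(0, 1, 25, 0.05), 3))                  # rejection threshold, ~0.329
print(round(power(0, 1, 25, 0.05, 0.4), 3))              # power against mu = 0.4, ~0.639
print(round(severity_of_rejection(1, 25, 0.4, 0.2), 3))  # severity for mu > 0.2, ~0.841
```

On these assumed numbers the post-data reading distinguishes what a rejection licenses: with observed mean 0.4, inferring mu > 0.2 passes with severity about 0.84, while the stronger claim mu > 0.4 earns only severity 0.5 from the same data.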
The history of the idea of predicate is the history of its emancipation. The lesson of this paper is that there are two more steps to take. The first is to recognize that predicates need not have a fixed degree, the second that they can combine with plural terms. We begin by articulating the notion of a multigrade predicate: one that takes variably many arguments. We counter objections to the very idea posed by Peirce, Dummett's Frege, and Strawson. We show that the arguments of a multigrade predicate must be grouped into places, with perhaps several arguments occupying positions at a place. Variability may relate to places or positions. Russell's multiple judgement predicate turns out to be just one example of a family—‘is necessarily true of’, ‘is said of’, ‘is instantiated by’ and so on—of predicates with variably many places. Our main concern, however, is lists. Any adequate account of lists must include plural as well as singular terms. On one account, lists are mere strings of separate arguments, which occupy variably many positions within a place of a multigrade predicate. A quite different account takes the list itself to be a compound plural term. We compare these rival conceptions, and reach some surprising conclusions. As a coda, we deploy the conceptual apparatus developed in the paper to assess Morton's pioneer system of multigrade logic.
Introduction: The role of animals in philosophies of man -- Part I: What's wrong with animal rights? -- The right to remain silent -- Part II: Animal pedagogy -- You are what you eat : Rousseau's cat -- Say the human responded : Herder's sheep -- Part III: Difference worthy of its name -- Hair of the dog : Derrida's and Rousseau's good taste -- Sexual difference, animal difference : Derrida's sexy silkworm -- Part IV: It's our fault -- The beaver's struggle with species-being : De Beauvoir and the praying mantis -- Answering the call of nature : Lacan walking the dog -- Part V: Estranged kinship -- The abyss between humans and animals : Heidegger puts the bee in being -- Strange kinship : Merleau-Ponty's sensuous stickleback -- Stopping the anthropological machine : Agamben's tick-tocking tick -- Psychoanalysis and the science of kinship -- Psychoanalysis as animal by-product : Freud's zoophilia -- Animal abjects, maternal abjects : Kristeva's strays -- Conclusion: Sustainable ethics.
Paulo Freire : the educator, his oeuvre, and changing contexts -- Holistic interpretations of Freire's work : a critical review -- Critical literacy, praxis, and emancipatory politics -- "Remaining on the same side of the river" : neo-liberalism, party movements, and the struggle for greater coherence -- Reinventing Freire in a Southern context : the Mediterranean -- Engaging with practice : a Freirean reflection on different pedagogical sites.
In this essay, I argue that the contemporary notion of law has been reduced to regulations and disciplinary codes that do not and cannot give meaning to our emotional lives and moral sensibilities. As a result, we have increasing numbers of what I call “abysmal individuals” who suffer from a split between law—broadly conceived as that which gives form and structure to social life—and personal embodied sensations of pain and pleasure. My attempt to understand the place of Abu Ghraib within American culture leads to an analysis of our valorization of innocence and ignorance that not only becomes the grounds on which we morally (if not legally) excuse abusive behavior as “fun,” but also becomes part of the justification for condoning some forms of violence while condemning others. In addition, I argue that the distinction between legitimate and illegitimate violence trades on underlying assumptions about the relationship between culture and nature, technology and bodies, wherein bodies are imagined as natural and outside of the realm of law.
The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophical scrutiny can help disentangle 'practical' problems of model validation, and conversely, a methodology of statistical model validation can shed light on a number of issues of interest to philosophers of science.
We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach for meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, experimental design, and related fields) that affords forward looking procedures for learning from error and for controlling error. The resulting “error statistical” appraisal is empirical—methods are appraised by examining their capacities to control error. At the same time, this account is normative, in that the strategies that pass muster are claims about how actually to proceed in given contexts to reach reliable inferences from limited data.
I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary hypotheses from that of appraising a primary hypothesis. By discriminating patterns of error, this approach can at least block, if not also severely test, attempted explanations of an anomaly. I illustrate how this approach directs progress with Duhemian problems and explains how scientists actually grapple with them.
Russell had two theories of definite descriptions: one for singular descriptions, another for plural descriptions. We chart its development, in which ‘On Denoting’ plays a part but not the part one might expect, before explaining why it eventually fails. We go on to consider many-valued functions, since they too bring in plural terms—terms such as ‘√4’ or the descriptive ‘the inhabitants of London’ which, like plain plural descriptions, stand for more than one thing. Logicians need to take plural reference seriously if only because mathematicians take many-valued functions seriously. We assess the objection (by Russell, Frege and others) that many-valued functions are illegitimate because the corresponding terms are ambiguous. We also assess the various methods proposed for getting rid of them. Finding the objection ill-founded and the methods ineffective, we introduce a logical framework that admits plural reference, and use it to answer some earlier questions and to raise some more.
We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist" label precludes the vitally important role philosophers of science may be able to play as bioevidentialists. The goal of this paper is to give a brief and partial sketch of how a metascientific scrutiny of risk evidence might work.
While philosophers have studied probability and induction, statistics has not received the kind of philosophical attention mathematics and physics have. Despite increasing use of statistics in science, statistical advances have been little noted in the philosophy of science literature. This paper shows the relevance of statistics to both theoretical and applied problems of philosophy. It begins by discussing the relevance of statistics to the problem of induction and then discusses the reasoning that leads to causal generalizations and how statistics elucidates the structure of science as it is actually practiced. In addition to being relevant for building an adequate theory of scientific inference, it is argued that statistics provides a link between philosophy, science and public policy.
This paper, drawing on original sources, provides an overview of and a discussion on those writings and ideas, in Antonio Gramsci's huge corpus of work, that are relevant to the education of adults. This should provide a fitting tribute to this major social theorist of the 20th century on the 70th anniversary of his death. Among the topics discussed are those of adult education for industrial democracy, adult education and cultural preparation, adult literacy, prison education, adult education and the Southern Question with specific reference to immigration, and, most important of all, adult education in the context of an intellectual and moral reform.
In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly appeal (e.g., error probabilities). Building on my co-symposiasts' contributions, I suggest some directions in which a new and more adequate philosophy of evidence can move.
The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober question whether this 'severity criterion' can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as a springboard, I elucidate some of the central examples that have long been controversial, and clarify how the severity criterion is properly applied to them.
1 Severity and Use-Constructing: Four Points (and Some Clarificatory Notes)
1.1 Point 1: Getting beyond all-or-nothing standpoints
1.2 Point 2: The rationale for prohibiting double-counting is the requirement that tests be severe
1.3 Point 3: Evaluate severity of a test T by its associated construction rule R
1.4 Point 4: The ease of passing vs. ease of erroneous passing: statistical vs. definitional probability
2 The False Dilemma: Hitchcock and Sober
2.1 Marsha measures her desk reliably
2.2 A false dilemma
3 Canonical Errors of Inference
3.1 How construction rules may alter the error-probing performance of tests
3.2 Rules for accounting for anomalies
3.3 Hunting for statistically significant differences
4 Concluding Remarks
Our stereotypes of maternity and paternity as manifest in the history of philosophy and psychoanalysis interfere with the ability to imagine loving relationships. The associations of maternity with antisocial nature and paternity with disembodied culture are inadequate to set up primary love relationships. Analyzing the conflicts in these associations, I reformulate the maternal body as social and lawful, and I reformulate the paternal function as embodied, which enables imagining our primary relationships as loving.
This essay argues that Hegel's discussion of the family in "The Ethical Order" section of Phenomenology of Spirit undermines the entire project of that text. Hegel's project demands that every element of consciousness be conceptualizable, and yet, woman, an essential unconscious element of consciousness, is in principle unconceptualizable. The end of the essay attempts to relate Hegel's discussion of the family to contemporary discussions of family values.
In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which Seidenfeld considers, the 'best' NP confidence interval is not [CIλ] as Seidenfeld suggests, but rather a one-sided interval [CI0]; and since [CI0] never yields trivial intervals, NP theory escapes Seidenfeld's criticism entirely; (3) Seidenfeld's criterion of non-triviality is inadequate, for it leads him to judge an alternative confidence interval, [CI alt. ], superior to [CIλ] although [CI alt. ] results in counterintuitive inferences. I conclude that Seidenfeld has not shown that the NP theory of confidence intervals is inadequate for a theory of inductive inference.
While many philosophers of science have accorded special evidential significance to tests whose results are "novel facts", there continues to be disagreement over both the definition of novelty and why it should matter. The view of novelty favored by Giere, Lakatos, Worrall and many others is that of use-novelty: An accordance between evidence e and hypothesis h provides a genuine test of h only if e is not used in h's construction. I argue that what lies behind the intuition that novelty matters is the deeper intuition that severe tests matter. I set out a criterion of severity akin to the notion of a test's power in Neyman-Pearson statistics. I argue that tests which are use-novel may fail to be severe, and tests that are severe may fail to be use-novel. I discuss the 1919 eclipse data as a severe test of Einstein's law of gravity.
An important theme to have emerged from the new experimentalist movement is that much of actual scientific practice deals not with appraising full-blown theories but with the manifold local tasks required to arrive at data, distinguish fact from artifact, and estimate backgrounds. Still, no program for working out a philosophy of experiment based on this recognition has been demarcated. I suggest why the new experimentalism has come up short, and propose a remedy appealing to the practice of standard error statistics. I illustrate a portion of my proposal using Galison's (1987) experimental narrative on neutral currents.
Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific claims based upon NPT has been called into question. The purpose of this paper is first to clarify this question by examining the conceptions of (I) the function served by NPT in science, and (II) the requirements of an objective theory of statistics upon which attacks on NPT's objectivity are based. Our grounds for rejecting these conceptions suggest altered conceptions of (I) and (II) that might avoid such attacks. Second, we propose a reformulation of NPT, denoted by NPT*, based on these altered conceptions, and argue that it provides an objective theory of statistics. The crux of our argument is that by being able to objectively control error frequencies NPT* is able to objectively evaluate what has or has not been learned from the result of a statistical test.
The key problem in the controversy over group selection is that of defining a criterion of group selection that identifies a distinct causal process that is irreducible to the causal process of individual selection. We aim to clarify this problem and to formulate an adequate model of irreducible group selection. We distinguish two types of group selection models, labeling them type I and type II models. Type I models are invoked to explain differences among groups in their respective rates of production of contained individuals. Type II models are invoked to explain differences among groups in their respective rates of production of distinct new groups. Taking Elliott Sober's model as an exemplar, we argue that although type I models have some biological importance--they force biologists to consider the role of group properties in influencing the fitness of organisms--they fail to identify a distinct group-level causal selection process. Type II models, if properly framed, however, do identify a group-level causal selection process that is not reducible to individual selection. We propose such a type II model and apply it to some of the major candidates for group selection.
Carter and Leslie's Doomsday Argument maintains that reflection upon the number of humans born thus far, when that number is viewed as having been uniformly randomly selected from amongst all humans, past, present and future, leads to a dramatic rise in the probability of an early end to the human experiment. We examine the Bayesian structure of the Argument and find that the drama is largely due to its oversimplification.
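The Bayesian structure the abstract refers to can be sketched in a toy two-hypothesis form (the figures are the standard illustrative ones, not necessarily the paper's): if one's birth rank is treated as uniformly drawn from all humans who will ever live, a low rank strongly favors the hypothesis that fewer humans will ever exist.

```python
# Hypothetical two-hypothesis Doomsday update: either N = 200e9 humans ever
# exist ("doom soon") or N = 200e12 ("doom late"), with prior 0.5 each.
# Under uniform random selection, P(rank r | N) = 1/N whenever r <= N.

def posterior(priors, totals, rank):
    likes = [(1 / n if rank <= n else 0.0) for n in totals]  # P(rank | N)
    joint = [p * l for p, l in zip(priors, likes)]
    z = sum(joint)  # normalizing constant
    return [j / z for j in joint]

post = posterior([0.5, 0.5], [200e9, 200e12], rank=60e9)
print(post)  # doom-soon posterior = 1000/1001, roughly 0.999
```

The dramatic posterior shift comes entirely from collapsing all futures into two point hypotheses with equal priors; the paper's diagnosis is that the drama is largely an artifact of this kind of oversimplification.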
I begin to suggest an alternative to the notion of vision based in alienation and hostility put forth by Jean-Paul Sartre, Sigmund Freud, and Jacques Lacan. I diagnose this alienating vision as a result of a particular alienating notion of space presupposed by their theories. I develop Irigaray's comments about light and air to suggest an alternative notion of space that opens up the possibility that vision connects us to others rather than alienates us from them.
The error statistical account of testing uses statistical considerations, not to provide a measure of probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
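The contrast between the two approaches can be made concrete with a small hypothetical example (the data and prior are illustrative assumptions, not taken from the paper): the same 15 successes in 20 trials yield an error probability of the test procedure on the error-statistical account, and a posterior probability of the hypothesis on the subjective Bayesian account.

```python
from math import comb

def binom_ge(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

x, n = 15, 20  # hypothetical data: 15 successes in 20 Bernoulli trials

# Error-statistical summary: the p-value is an error probability of the
# procedure -- how often data at least this extreme would arise under H0: p = 0.5.
p_value = binom_ge(n, 0.5, x)

# Subjective Bayesian summary: a uniform Beta(1,1) prior gives posterior
# Beta(16, 6); P(p > 0.5 | data) via the binomial/incomplete-beta identity
# I_x(a, b) = P(Binomial(a + b - 1, x) >= a).
post_gt_half = 1 - binom_ge(21, 0.5, 16)

print(f"p-value            = {p_value:.4f}")       # ~0.0207
print(f"P(p > 0.5 | data)  = {post_gt_half:.4f}")  # ~0.9867
```

The numbers answer different questions: the first reports how reliably the test procedure controls error, the second reports a degree of belief in the hypothesis given a prior; this difference in aim is the core of the contrast the abstract describes.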
Troubled times in education mean that philosophers of education, who seem never to stop defending our field, must do so with more flexibility and a greater understanding of how peripheral we may have become. The only thing worse than a defensive philosopher is a confident and certain philosopher, so it may be that our very marginality will give us renewed energies for problematizing education. Occupying our marginal position carefully, and in concert with other marginal inquiries, will, I think, do our field good. Because of its attention to what it takes to be willing to learn and to approach theoretical and real-world obstacles with open if cautious interest, philosophy of education is about holding concepts and movements in tension, bending the implications of commonplace, commonsensical ideas about education, and carefully examining all of these maneuvers for the exclusions they wittingly and unwittingly produce. Problematizing the certainties derived from majoritarian positions, be they whiteness, Westernness, or any other dominant perspective, can provide us with a diversity of claims to scrutinize and epistemological positions to be wary of.
What is the nature of the decision-related personal values of corporate management? Managers' attitudes and behaviors are built upon their personal value systems (PVS). Knowledge about the structure of management's PVS assists in understanding the attributes of corporate decision making. Utilizing a survey instrument developed and used by England (1967, 1975), this article updates this research into corporate managers' personal value systems. England's PVS consists of sixty-six pre-tested values clustered into five groups. As one would expect with personal values, statistical tests reveal that, even with dramatic changes in the business environment, the overall personal values structure has not changed over the intervening three decades. The results also reveal that corporate managers retain their pragmatic value orientation as discussed by England.
Julia Kristeva is known for rejecting feminism; nonetheless, her work is useful for feminist theory. I reconsider Kristeva's rejection of feminism and her theories of difference, identity, and maternity, elaborating on her contributions to debates over the necessity of identity politics, indicating how her theory suggests the cause of, and possible solutions to, women's oppression in Western culture, and, using her theory, setting up a framework for a feminist rethinking of politics and ethics.
In this article, I argue that the liberal framework, with its autonomous individuals bearing equal rights, allows judges to justify enforcing surrogacy contracts. More importantly, even where judges do not enforce surrogacy contracts, the liberal framework conceals gender and class issues which ensure that the surrogate will lose custody of her child. I suggest that Marx's analysis of estranged labor can reveal the class and gender issues which the liberal framework conceals.
The Centers for Disease Control and Prevention (CDC) has recommended that HIV testing be routinely offered to certain patients in hospitals with a high prevalence of HIV infection and to all pregnant women. The CDC does not, however, offer implementation-level guidelines for obtaining informed consent. We provide a moral justification for requiring informed consent for HIV testing and propose guidelines for securing such consent. In particular, we argue that genuine informed consent can be secured without elaborate counseling, such as that currently used at Counseling and Testing Sites, provided that sufficient written notice is given to patients before testing and that they are specifically asked for permission.
This essay examines media images of women in recent conflicts in the Middle East. From the Abu Ghraib prison abuses to protests in Iran, women have become the public face of violence, carried out and suffered. Women’s bodies are figured as sexual and violent, a potent combination that stirs public imagination and feeds into stereotypes of women as femmes fatales or “bombshells.”
Although much of the growing literature on organizational identity implicitly recognizes the normative nature of identity, the ethical implications of organizational identity work and talk have not yet been explored in depth. Working from a meta-ethical perspective, we claim that the dynamic, processual, and temporal activities recently associated with organizational identity always have an ethical dimension, whether “good” or “bad.” In order to describe the ethical dimensions of organizational identity, we introduce the balance theory of practical wisdom as a theoretical framework, and connect this theory to existing organizational identity concepts. We present an empirical case focused on an international paint company to illustrate the relevance of this theory for empirical organizational identity research. Our intention is to expand existing theory by bringing an aspect of organizational identity that has been tangentially acknowledged to the forefront, and by identifying it as a fruitful avenue for future theory development as well as empirical research.
Between 1983 and 1993 the authors published a series of articles and a book promulgating and explicating "Critical Mass Theory," a theory of public goods provision in groups. In this article we seek to trace the growth, change, or decline of the theory, primarily through an analysis of all journal citations of the theory. We find that the majority of citations are essentially gratuitous or pick a single point from the theory, which may or may not be central to the theory. However, we identify four lines of theorizing that creatively use substantial parts of Critical Mass Theory in their own development: (1) theories relevant to issues in communication studies such as interaction media and shared databases; (2) Macy's work on adaptive learning models; (3) Heckathorn's models of sanctioning systems; and (4) theories that are centrally concerned with issues of influence in collective goods processes. A few additional, less-developed lines of work are also discussed. None of this work identifies itself as "Critical Mass Theory," but many of the theory's innovations and assertions serve as important bases for these later developments.