Do we violate human rights when we cooperate with and impose a global institutional order that engenders extreme poverty? Thomas Pogge argues that by shaping and enforcing the social conditions that foreseeably and avoidably cause global poverty we are violating the negative duty not to cooperate in the imposition of a coercive institutional order that avoidably leaves human rights unfulfilled. This article argues that Pogge's argument fails to distinguish between harms caused by the global institutions themselves and harms caused by the domestic policies of particular states and by collective action problems for which collective responsibility cannot be assigned. The article also argues that his position relies on questionable factual and theoretical claims about the impact of global institutions on poverty, and about the benefits and harms of certain features of these institutions. Participation in, and benefit from, global institutions is unlikely to constitute a violation of our negative duties towards the poor. Key Words: justice, international regimes, institutions, human rights, trade.
This document collects discussion and commentary on issues raised in the workshop by its participants. Contributors are: Greg Frost-Arnold, David Harker, P. D. Magnus, John Manchak, John D. Norton, J. Brian Pitts, Kyle Stanford, Dana Tulodziecki.
Descartes' place in history, by L. J. Lafleur.--A central ambiguity in Descartes, by S. Rosen.--Doubt, common sense and affirmation in Descartes and Hume, by H. J. Allen.--Some remarks on logic and the cogito, by R. N. Beck.--The cogito, an ambiguous performance, by J. B. Wilbur.--The modalities of Descartes' proofs for the existence of God, by B. Magnus.--Descartes and the phenomenological problem of the embodiment of consciousness, by J. M. Edie.--The person and his body: critique of existentialist responses to Descartes, by P. A. Bertocci.
The no-miracles argument and the pessimistic induction are arguably the main considerations for and against scientific realism. Recently these arguments have been accused of embodying a familiar, seductive fallacy. In each case, we are tricked by a base rate fallacy, one much-discussed in the psychological literature. In this paper we consider this accusation and use it as an explanation for why the two most prominent `wholesale' arguments in the literature seem irresolvable. Framed probabilistically, we can see very clearly why realists and anti-realists have been talking past one another. We then formulate a dilemma for advocates of either argument, answer potential objections to our criticism, discuss what remains (if anything) of these two major arguments, and then speculate about a future philosophy of science freed from these two arguments. In so doing, we connect the point about base rates to the wholesale/retail distinction; we believe it hints at how to distinguish profitable from unprofitable realism debates. In short, we offer a probabilistic analysis of the feeling of ennui afflicting contemporary philosophy of science.
There are two senses of ‘what scientists know’: An individual sense (the separate opinions of individual scientists) and a collective sense (the state of the discipline). The latter is what matters for policy and planning, but it is not something that can be directly observed or reported. A function can be defined to map individual judgments onto an aggregate judgment. I argue that such a function cannot effectively capture community opinion, especially in cases that matter to us.
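The worry about aggregation functions can be made concrete with a standard illustration from the judgment-aggregation literature (the "doctrinal paradox"); this is a generic sketch, not the paper's own example, and the scientists and propositions are hypothetical:

```python
# Hypothetical illustration: proposition-wise majority voting over individual
# judgments can produce an "aggregate opinion" that no coherent individual holds.
# Three scientists each hold a logically consistent view about p, q, and (p and q).
judgments = {
    "scientist_1": {"p": True,  "q": True,  "p_and_q": True},
    "scientist_2": {"p": True,  "q": False, "p_and_q": False},
    "scientist_3": {"p": False, "q": True,  "p_and_q": False},
}

def majority(judgments, prop):
    """Aggregate individual judgments on a single proposition by majority vote."""
    votes = [view[prop] for view in judgments.values()]
    return votes.count(True) > len(votes) / 2

aggregate = {prop: majority(judgments, prop) for prop in ("p", "q", "p_and_q")}
print(aggregate)
# The majority accepts p, accepts q, yet rejects their conjunction -- the
# aggregated "community opinion" is logically incoherent even though every
# individual judgment set was consistent.
```

Results of this kind are one way of seeing why a simple function from individual judgments to a collective judgment can fail to capture community opinion.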
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty’s suggestion that the NI is a ‘red herring’, I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be antirealist in some respects and about some theories. It will not be a sweeping antirealism about all or most of science.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of "the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories". This paper examines Stanford's New Induction and argues that it -- like the other forms of underdetermination that he criticizes -- merely recapitulates familiar philosophical conundra.
Homeostatic property clusters (HPCs) are offered as a way of understanding natural kinds, especially biological species. I review the HPC approach and then discuss an objection by Ereshefsky and Matthen, to the effect that an HPC qua cluster seems ill-fitted as a description of a polymorphic species. The standard response by champions of the HPC approach is to say that all members of a polymorphic species have things in common, namely dispositions or conditional properties. I argue that this response fails. Instances of an HPC kind need not all be similar in their exhibited properties. HPCs should instead be understood as unified by the underlying causal mechanism that maintains them. The causal mechanism can both produce and explain some systematic differences between a kind’s members. An HPC kind is best understood not as a single cluster of properties maintained in stasis by causal forces, but as a complex of related property clusters kept in relation by an underlying causal process. This approach requires recognizing that taxonomic systems serve both explanatory and inductive purposes.
These are indispensable for successful science in some domain; in short, they are natural kinds. This book gives a general account of what it is to be a natural kind. It untangles philosophical puzzles surrounding natural kinds.
Relying on interviews and fieldwork observations, the article investigates the choice of signs made by guide dogs and their visually impaired handlers while the team is on the move. It also explores the dependence of the choice of signs on specific functions of communication and examines the changes and development of sign usage throughout the team’s work. A significant part of the team’s communication appears to be related to retaining the communicative situation itself: to the establishment of intrateam contact; to keeping the other prone to receive messages and to establish adequate sign relations; to giving and receiving feedback. The signs used for the purpose of retaining contact are analyzed in the article mainly with the handler in the role of the addresser. Signs also vary according to the character and aim of the team’s referential communication. Searching for objects and places, orientation and avoidance of obstacles can be discerned as three major functional frames that determine the choice of signs. As the team’s cooperation evolves, so also do the means of communication. The analysis shows that intrateam communication becomes less segmented and the signs used in referential communication shift from symbolic to symptomatic signs and become harder to detect for an outside observer.
The accepted narrative treats John Stuart Mill’s Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill’s nineteenth-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill’s two notions offer separate answers to the two questions: natural groups for taxonomy and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative that treats our natural kinds just as a development of Mill’s Kinds.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely.
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not to what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is "composite" (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
Background theories in science are used both to prove and to disprove that theory choice is underdetermined by data. The alleged proof appeals to the fact that experiments to decide between theories typically require auxiliary assumptions from other theories. If this generates a kind of underdetermination, it shows that standards of scientific inference are fallible and must be appropriately contextualized. The alleged disproof appeals to the possibility of suitable background theories to show that no theory choice can be timelessly or noncontextually underdetermined: Foreground theories might be distinguished against different backgrounds. Philosophers have often replied to such a disproof by focussing their attention not on theories but on Total Sciences. If empirically equivalent Total Sciences were at stake, then there would be no background against which they could be differentiated. I offer several reasons to think that Total Science is a philosophers' fiction. No respectable underdetermination can be based on it.
There is a long tradition of trying to analyze art either by providing a definition (essentialism) or by tracing its contours as an indefinable, open concept (anti-essentialism). Both art essentialists and art anti-essentialists share an implicit assumption of art concept monism. This article argues that this assumption is a mistake. Species concept pluralism—a well-explored position in philosophy of biology—provides a model for art concept pluralism. The article explores the conditions under which concept pluralism is appropriate, and argues that they obtain for art. Art concept pluralism allows us to recognize that different art concepts are useful for different purposes, and what were once feuding definitions can be seen as characterizations of specific art concepts.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the available reasons for valuing prediction. A third path is presented: Pluralist Instrumental Predictivism, or PIP for short.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
One approach to science treats science as a cognitive accomplishment of individuals and defines a scientific community as an aggregate of individual inquirers. Another treats science as a fundamentally collective endeavor and defines a scientist as a member of a scientific community. Distributed cognition has been offered as a framework that could be used to reconcile these two approaches. Adam Toon has recently asked if the cognitive and the social can be friends at last. He answers that they probably cannot, posing objections to the would-be rapprochement. We clarify both the animosity and the tonic proposed to resolve it, ultimately arguing that worries raised by Toon and others are uncompelling.
Institutional ethics consultation services for biomedical scientists have begun to proliferate, especially for clinical researchers. We discuss several models of ethics consultation and describe a team-based approach used at Stanford University in the context of these models. As research ethics consultation services expand, there are many unresolved questions that need to be addressed, including what the scope, composition, and purpose of such services should be, whether core competencies for consultants can and should be defined, and how conflicts of interest should be mitigated. We make preliminary recommendations for the structure and process of research ethics consultation, based on our initial experiences in a pilot program.
This paper will address the translation of basic stem cell research into clinical research. While the term “stem cell” trial is sometimes used to describe established practices of bone marrow transplantation or transplantation of primary cells derived from bone marrow, for the purposes of this paper I am primarily focusing on stem cell trials which are far less established, including those using hESC-derived stem cells. The central ethical challenges in stem cell clinical trials arise in frontier research, not in standard, well-established areas of research.
Judith Butler's Kritik der ethischen Gewalt represents a significant refinement of her position on the relationship between the construction of the subject and her social subjection. While Butler's earlier texts reflect a somewhat restricted notion of agency, her Adorno Lectures formulate a notion of agency that extends beyond mere resistance. This essay traces the development of Butler's account of agency and evaluates it in light of feminist projects of social transformation.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many various phenomena pass for `underdetermination' that it's tempting to think that it is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: A claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination, to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find them to either be so specialized that they cannot underwrite conclusions about science in general or not be underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
Philip Kitcher develops the Galilean Strategy to defend realism against its many opponents. I explore the structure of the Galilean Strategy and consider it specifically as an instrument against constructive empiricism. Kitcher claims that the Galilean Strategy underwrites an inference from success to truth. We should resist that conclusion, I argue, but the Galilean Strategy should lead us by other routes to believe in many things about which the empiricist would rather remain agnostic.
The significance of Friedrich Nietzsche for twentieth century culture is now no longer a matter of dispute. He was quite simply one of the most influential of modern thinkers. The opening essay of this 1996 Companion provides a chronologically organised introduction to and summary of Nietzsche's published works, while also providing an overview of their basic themes and concerns. It is followed by three essays on the appropriation and misappropriation of his writings, and a group of essays exploring the nature of Nietzsche's philosophy and its relation to the modern and post-modern world. The final contributions consider Nietzsche's influence on the twentieth century in Europe, the USA, and Asia. New readers and non-specialists will find this the most convenient, accessible guide to Nietzsche currently available. Advanced students and specialists will find a conspectus of recent developments in the interpretation of Nietzsche.
On the basis of a comparative analysis of the biosemiotic work of Jakob von Uexküll and of various theories on biological holism, this article takes a look at the question: what is the status of a semiotic approach in respect to a holistic one? The period from 1920 to 1940 was the peak-time of holistic theories, despite the fact that agreement on a unified and accepted set of holistic ideas was never reached. A variety of holisms, dependent on the cultural and disciplinary contexts, is sketched here from the works of Jan Smuts, Adolf Meyer-Abich, John Scott Haldane, Kurt Goldstein, Alfred North Whitehead and Wolfgang Köhler. In contrast with his contemporary holists, who used the model of an organism as a unifying explanatory tool for all levels of reality, Jakob von Uexküll confined himself to disciplinary organicism by extending the borders of the definition of “organism” without any intention to surpass the borders of biology itself. The comparison also reveals a significant difference in the perspectives of Uexküll and his contemporary holists, a difference between a view from a subjective centre in contrast with an all-encompassing structural view. Uexküll’s theories are fairly near to J. S. Haldane’s interpretation of an organism as a coordinative centre, but even here their models do not coincide. Although biosemiotics and holistic biology have different theoretical starting points and research-goals, it is possible nonetheless to place them under one and the same doctrinal roof.
When we ask what natural kinds are, there are two different things we might have in mind. The first, which I’ll call the taxonomy question, is what distinguishes a category which is a natural kind from an arbitrary class. The second, which I’ll call the ontology question, is what manner of stuff there is that realizes the category. Many philosophers have systematically conflated the two questions. The confusion is exhibited both by essentialists and by philosophers who pose their accounts in terms of similarity. It also leads to misreading philosophers who do make the distinction. Distinguishing the questions allows for a more subtle understanding of both natural kinds and their underlying metaphysics.
Debates about the underdetermination of theory by data often turn on specific examples. Cases invoked often enough become familiar, even well worn. Since Helen Longino’s discussion of the case, the connection between prenatal hormone levels and gender-linked childhood behaviour has become one of these stock examples. However, as I argue here, the case is not genuinely underdetermined. We can easily imagine a possible experiment to decide the question. The fact that we would not perform this experiment is a moral, rather than epistemic, point. Finally, I suggest that the ‘underdetermination’ of the case may be inessential for Longino to establish her central claim about it.
This paper argues against the common, often implicit view that theories are some specific kind of thing. Instead, I argue for theory concept pluralism: There are multiple distinct theory concepts which we legitimately use in different domains and for different purposes, and we should not expect this to change. The argument goes by analogy with species concept pluralism, a familiar position in philosophy of biology. I conclude by considering some consequences for philosophy of science if theory concept pluralism is correct.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: `Where scientific standards cannot guide us, we should believe nothing.' Another response, which we can call the fideist response, is to believe whatever we would like to believe: `If science cannot speak to the question, then we may believe anything without science ever contradicting us.' C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
Biologists often define evolution as a change in allele frequencies. Consideration of the evolution of the pocket mouse will show that it is possible to have evolution without any change in the allele frequencies in a population (through change in the genotype frequencies). The implications of this for genic selectionism are then discussed. Sober and Lewontin (1982) have constructed an example to demonstrate the blindness of genic selectionism in certain cases. Sterelny and Kitcher (1988) offer a defense against these arguments which assumes a conventionalist approach to populations. The example considered here will be shown to offer a more plausible and far-reaching argument against the view that alleles can always be seen as the units of selection.
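The central claim -- that genotype frequencies can shift while allele frequencies stay constant -- can be checked with simple arithmetic. The numbers below are illustrative only (they are not the paper's pocket mouse data):

```python
# Illustrative check with hypothetical numbers: a population's genotype
# frequencies (AA, Aa, aa) change between generations, yet the frequency
# of allele A is identical in both, so a definition of evolution as
# "change in allele frequencies" would miss this change.

def allele_freq_A(genotype_freqs):
    """Frequency of allele A given frequencies of genotypes (AA, Aa, aa).
    Each AA individual carries two A alleles; each Aa carries one."""
    AA, Aa, aa = genotype_freqs
    return AA + 0.5 * Aa

gen1 = (0.25, 0.50, 0.25)  # e.g. Hardy-Weinberg proportions
gen2 = (0.40, 0.20, 0.40)  # e.g. after non-random mating shifts genotypes

print(allele_freq_A(gen1))  # 0.5
print(allele_freq_A(gen2))  # 0.5 -- unchanged, despite different genotype frequencies
```

Any pair of genotype distributions with the same value of AA + Aa/2 makes the same point.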