Suppose that groups have reasons to act. Do the members of a group “inherit” the group’s reason? Alexander Dietz has recently argued that they do so in some circumstances. Dietz considers two principles. The first – which I call the “Simple Principle” – claims that the members of a group always inherit the group’s reason. The second – which I call “Dietz’s Principle,” and which Dietz himself advocates – claims that the members of a group inherit the group’s reason when they cooperate. Although Dietz thinks that the Simple Principle is intuitively appealing, he argues that it has to be rejected because it has – in contrast to his own principle – counterintuitive implications. In this article, I shall try to show that Dietz’s Principle also has counterintuitive implications. Furthermore, I shall consider some revisions of Dietz’s Principle, but conclude that they are unattractive. Finally, I shall suggest that Dietz’s Principle is ad hoc.
This document collects discussion and commentary on issues raised in the workshop by its participants. Contributors are: Greg Frost-Arnold, David Harker, P. D. Magnus, John Manchak, John D. Norton, J. Brian Pitts, Kyle Stanford, Dana Tulodziecki.
Descartes' place in history, by L. J. Lafleur.--A central ambiguity in Descartes, by S. Rosen.--Doubt, common sense and affirmation in Descartes and Hume, by H. J. Allen.--Some remarks on logic and the cogito, by R. N. Beck.--The cogito, an ambiguous performance, by J. B. Wilbur.--The modalities of Descartes' proofs for the existence of God, by B. Magnus.--Descartes and the phenomenological problem of the embodiment of consciousness, by J. M. Edie.--The person and his body: critique of existentialist responses to Descartes, by P. A. Bertocci.
The no-miracles argument and the pessimistic induction are arguably the main considerations for and against scientific realism. Recently these arguments have been accused of embodying a familiar, seductive fallacy. In each case, we are tricked by a base rate fallacy, one much discussed in the psychological literature. In this paper we consider this accusation and use it as an explanation for why the two most prominent 'wholesale' arguments in the literature seem irresolvable. Framed probabilistically, we can see very clearly why realists and anti-realists have been talking past one another. We then formulate a dilemma for advocates of either argument, answer potential objections to our criticism, discuss what remains (if anything) of these two major arguments, and then speculate about a future philosophy of science freed from these two arguments. In so doing, we connect the point about base rates to the wholesale/retail distinction; we believe it hints at an answer to the question of how to distinguish profitable from unprofitable realism debates. In short, we offer a probabilistic analysis of the feeling of ennui afflicting contemporary philosophy of science.
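To make the probabilistic framing concrete – a minimal sketch in notation of my own, not taken from the paper – let $T$ be the event that a theory is true and $S$ the event that it enjoys empirical success. Bayes' theorem gives

$$P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}.$$

Even with illustrative likelihoods that strongly favor realism, say $P(S \mid T) = 0.9$ and $P(S \mid \neg T) = 0.1$, a low base rate of true theories such as $P(T) = 0.05$ yields $P(T \mid S) \approx 0.32$. Arguing from success to truth while ignoring the base rate $P(T)$ is precisely a base rate fallacy; on this diagnosis, the wholesale arguments turn on unargued assumptions about that base rate.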
Some scientific categories seem to correspond to genuine features of the world and are indispensable for successful science in some domain; in short, they are natural kinds. This book gives a general account of what it is to be a natural kind and puts the account to work illuminating numerous specific examples.
Empirical work has shown that patients and physicians have markedly divergent understandings of treatability statements in the context of serious illness. Patients often understand treatability statements as conveying good news for prognosis and quality of life. In contrast, physicians often do not intend treatability statements to convey improvement in prognosis or quality of life, but merely that a treatment is available. Similarly, patients often understand treatability statements as conveying encouragement to hope and pursue further treatment, though this may not be intended by physicians. This radical divergence in understandings may lead to severe miscommunication. This paper seeks to better understand this divergence through linguistic theory—in particular, H.P. Grice’s notion of conversational implicature. This theoretical approach reveals three levels of meaning of treatability statements: the literal meaning, the physician’s intended meaning, and the patient’s received meaning. The divergence between the physician’s intended meaning and the patient’s received meaning can be understood to arise from the lack of shared experience between physicians and patients, and the differing assumptions that each party makes about conversations. This divergence in meaning raises new and largely unidentified challenges to informed consent and shared decision making in the context of serious illness, which indicates a need for further empirical research in this area.
Institutional ethics consultation services for biomedical scientists have begun to proliferate, especially for clinical researchers. We discuss several models of ethics consultation and describe a team-based approach used at Stanford University in the context of these models. As research ethics consultation services expand, there are many unresolved questions that need to be addressed, including what the scope, composition, and purpose of such services should be, whether core competencies for consultants can and should be defined, and how conflicts of interest should be mitigated. We make preliminary recommendations for the structure and process of research ethics consultation, based on our initial experiences in a pilot program.
forall x: Calgary is a full-featured textbook on formal logic. It covers key notions of logic such as consequence and validity of arguments, the syntax of truth-functional propositional logic TFL and truth-table semantics, the syntax of first-order (predicate) logic FOL with identity (first-order interpretations), translating (formalizing) English in TFL and FOL, and Fitch-style natural deduction proof systems for both TFL and FOL. It also deals with some advanced topics such as truth-functional completeness and modal logic. Exercises with solutions are available. It is provided in PDF (for screen reading, printing, and a special version for dyslexics) and in LaTeX source code.
Homeostatic property clusters (HPCs) are offered as a way of understanding natural kinds, especially biological species. I review the HPC approach and then discuss an objection by Ereshefsky and Matthen, to the effect that an HPC qua cluster seems ill-fitted as a description of a polymorphic species. The standard response by champions of the HPC approach is to say that all members of a polymorphic species have things in common, namely dispositions or conditional properties. I argue that this response fails. Instances of an HPC kind need not all be similar in their exhibited properties. Instead, HPCs should be understood as unified by the underlying causal mechanism that maintains them. The causal mechanism can both produce and explain some systematic differences between a kind’s members. An HPC kind is best understood not as a single cluster of properties maintained in stasis by causal forces, but as a complex of related property clusters kept in relation by an underlying causal process. This approach requires recognizing that taxonomic systems serve both explanatory and inductive purposes.
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty’s suggestion that the NI is a ‘red herring’, I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be antirealist in some respects and about some theories. It will not be a sweeping antirealism about all or most of science.
There are two senses of ‘what scientists know’: An individual sense (the separate opinions of individual scientists) and a collective sense (the state of the discipline). The latter is what matters for policy and planning, but it is not something that can be directly observed or reported. A function can be defined to map individual judgments onto an aggregate judgment. I argue that such a function cannot effectively capture community opinion, especially in cases that matter to us.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of "the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories". This paper examines Stanford's New Induction and argues that it – like the other forms of underdetermination that he criticizes – merely recapitulates familiar philosophical conundra.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views, such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the available reasons for valuing prediction. A third path is presented: Pluralist Instrumental Predictivism, or PIP for short.
This is the second article in a series of review articles addressing biosemiotic terminology. The biosemiotic glossary project is designed to integrate views of members within the biosemiotic community based on a standard survey and related publications. The methodology section describes the format of the survey, conducted July–August 2014 in preparation for the current review and targeted on Jakob von Uexküll’s term ‘Umwelt’. Next, we summarize denotation, synonyms and antonyms, with special emphasis on the denotation of this term in current biosemiotic usage. The survey findings include ratings of eight citations defining or making use of the term Umwelt. We provide a summary of respondents’ own definitions and suggested term usage. Further sections address etymology, relevant contexts of use, and related terms in English and other languages. A section on the notion’s Uexküllian meaning and its later biosemiotic meaning is followed by an attempt at synthesis and a conclusion. We conclude that the Umwelt is a centerpiece phenomenon, one around which other phenomena in the living realm are organized. To sum up Uexküll’s view, we can characterize an Umwelt as the subjective world of an organism, enveloping a perceptual world and an effector world; it is always part of the organism itself and a key component of nature, which is held together by functional cycles connecting different Umwelten. In order to pay respect to Uexküll’s work, we must move from notion to model, from mention of Uexküll’s Umwelt term to actual application of it.
It is now commonly held that values play a role in scientific judgment, but many arguments for that conclusion are limited. First, many arguments do not show that values are, strictly speaking, indispensable. The role of values could in principle be filled by a random or arbitrary decision. Second, many arguments concern scientific theories and concepts which have obvious practical consequences, thus suggesting or at least leaving open the possibility that abstruse sciences without such a connection could be value-free. Third, many arguments concern the role values play in inferring from evidence, thus taking evidence as given. This paper argues that these limitations do not hold in general. There are values involved in every scientific judgment. They cannot even conceivably be replaced by a coin toss, they arise as much for exotic as for practical sciences, and they are at issue as much for observation as for explicit inference.
When we ask what natural kinds are, there are two different things we might have in mind. The first, which I’ll call the taxonomy question, is what distinguishes a category which is a natural kind from an arbitrary class. The second, which I’ll call the ontology question, is what manner of stuff there is that realizes the category. Many philosophers have systematically conflated the two questions. The confusion is exhibited both by essentialists and by philosophers who pose their accounts in terms of similarity. It also leads to misreading philosophers who do make the distinction. Distinguishing the questions allows for a more subtle understanding of both natural kinds and their underlying metaphysics.
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
There is a long tradition of trying to analyze art either by providing a definition (essentialism) or by tracing its contours as an indefinable, open concept (anti-essentialism). Both art essentialists and art anti-essentialists share an implicit assumption of art concept monism. This article argues that this assumption is a mistake. Species concept pluralism—a well-explored position in philosophy of biology—provides a model for art concept pluralism. The article explores the conditions under which concept pluralism is appropriate, and argues that they obtain for art. Art concept pluralism allows us to recognize that different art concepts are useful for different purposes, and what were once feuding definitions can be seen as characterizations of specific art concepts.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many different phenomena pass for ‘underdetermination’ that it is tempting to think that it is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: a claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find that they are either so specialized that they cannot underwrite conclusions about science in general or not underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
The accepted narrative treats John Stuart Mill’s Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill’s nineteenth-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill’s two notions offer separate answers to the two questions: natural groups for taxonomy and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative that treats our natural kinds just as a development of Mill’s Kinds.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely.
1 Introduction
2 A brief preliminary
3 Around the universe in 80 days
4 Some attempts at resolving the problem
4.1 Indexicality
4.2 Simplicity
4.3 Empirical equivalence
4.4 Is this just a philosophers' fantasy?
5 Move along...
6 ...nothing to see here
6.1 Rules of repetition
6.2 Some possible replies
7 Conclusion
Cover versions form a loose but identifiable category of tracks and performances. We distinguish four kinds of covers and argue that they mark important differences in the modes of evaluation that are possible or appropriate for each: mimic covers, which aim merely to echo the canonical track; rendition covers, which change the sound of the canonical track; transformative covers, which diverge so much as to instantiate a distinct, albeit derivative song; and referential covers, which not only instantiate a distinct song, but for which the new song is in part about the original song. In order to allow for the very possibility of transformative and referential covers, we argue that a cover is characterized by relation to a canonical track rather than merely by being a new instance of a song that had been recorded previously.
This paper will address the translation of basic stem cell research into clinical research. While the term “stem cell trial” is sometimes used to describe established practices of bone marrow transplantation or transplantation of primary cells derived from bone marrow, for the purposes of this paper I am primarily focusing on stem cell trials that are far less established, including the use of hESC-derived stem cells. The central ethical challenges in stem cell clinical trials arise in frontier research, not in standard, well-established areas of research.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is "composite" (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
Philip Kitcher develops the Galilean Strategy to defend realism against its many opponents. I explore the structure of the Galilean Strategy and consider it specifically as an instrument against constructive empiricism. Kitcher claims that the Galilean Strategy underwrites an inference from success to truth. We should resist that conclusion, I argue, but the Galilean Strategy should lead us by other routes to believe in many things about which the empiricist would rather remain agnostic.
1 Target: empiricism
2 The Galilean Strategy
3 Strengthening the argument
4 Success and truth
5 Conclusion
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein’s arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton’s view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that tell for and against the response; we conclude by suggesting two of them.
In late 2014, the jazz combo Mostly Other People Do the Killing released Blue—an album that is a note-for-note remake of Miles Davis's 1959 landmark album Kind of Blue. This is a thought experiment made concrete, raising metaphysical puzzles familiar from discussion of indiscernible counterparts. It is an actual album, rather than merely a concept, and so poses the aesthetic puzzle of why one would ever actually listen to it.
Based on interviews with guide dog users from Sweden, Estonia and Germany and participatory observation of the teams’ work, the article discusses three kinds of semiotic challenges encountered by the guide dog teams: perceptual, sociocultural and communicative challenges. Perceptual challenges stem from a mismatch between affordances of the urban environment and perceptual and motoric abilities of the team. Sociocultural challenges pertain to the conflicting meanings that are attributed to dogs in different social contexts and to incompatible social norms. Challenges related to intrateam communication and interpretation of the other counterpart’s behavior are mostly tied to the difficulties of placing the other’s activities in the right context. Germany, Estonia and Sweden differ in their history of guide dog institutions and the organisation of guide dog work, but the challenges of the guide dog users appear to be fairly similar. However, differences appear in the stress laid on one or another type of challenge as well as in the explanations provided by the informants for the background of the challenges. The challenges, as analysed in the article, reflect not only the existing problems of guide dog users, but also their expectations for a social and physical environment, in which the teams would feel welcome.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
Judith Butler's Kritik der ethischen Gewalt represents a significant refinement of her position on the relationship between the construction of the subject and her social subjection. While Butler's earlier texts reflect a somewhat restricted notion of agency, her Adorno Lectures formulate a notion of agency that extends beyond mere resistance. This essay traces the development of Butler's account of agency and evaluates it in light of feminist projects of social transformation.
If two theory formulations are merely different expressions of the same theory, then any problem of choosing between them cannot be due to the underdetermination of theories by data. So one might suspect that we need to be able to tell distinct theories from mere alternate formulations before we can say anything substantive about underdetermination, that we need to solve the problem of identical rivals before addressing the problem of underdetermination. Here I consider two possible solutions: Quine proposes that we call two theories identical if they are equivalent under a reconstrual of predicates, but this would mishandle important cases. Another proposal is to defer to the particular judgements of actual scientists. Consideration of an historical episode – the alleged equivalence of wave and matrix mechanics – shows that this second proposal also fails. Nevertheless, I suggest, the original suspicion is wrong; there are ways to enquire into underdetermination without having solved the problem of identical rivals.
In this paper, I explore and defend the idea that musical works are historical individuals. Guy Rohrbaugh (2003) proposes this for works of art in general. Julian Dodd (2007) objects that the whole idea is outré metaphysics, that it is too far beyond the pale to be taken seriously. Their disagreement could be seen as a skirmish in the broader war between revisionists and reactionaries, a conflict about which of metaphysics and art should trump the other when there is a conflict. That dispute is a matter of philosophical methodology as much as it is a dispute about art. I argue that the ontology of works as individuals need not be dunked in that morass. My primary strategy is to show, contra Dodd's accusation, that historical individuals are familiar parts of the world. Although the ontological details are open to debate, it is the standard opinion of biologists that biological species are historical individuals. So there is no conflict here between fidelity to art and respectable metaphysics. What suits species will fit musical works as well.
Peter Baumann offers the tantalizing suggestion that Thomas Reid is almost, but not quite, a pragmatist. He motivates this claim by posing a dilemma for common sense philosophy: Will it be dogmatism or scepticism? Baumann claims that Reid points to but does not embrace a pragmatist third way between these unsavory options. If we understand ‘pragmatism’ differently than Baumann does, however, we need not be so equivocal in attributing it to Reid. Reid makes what we could call an argument from practical commitment, and this is plausibly an instance of what William James calls the pragmatic method.