The no-miracles argument and the pessimistic induction are arguably the main considerations for and against scientific realism. Recently these arguments have been accused of embodying a familiar, seductive fallacy. In each case, we are tricked by a base rate fallacy, one much-discussed in the psychological literature. In this paper we consider this accusation and use it as an explanation for why the two most prominent `wholesale' arguments in the literature seem irresolvable. Framed probabilistically, we can see very clearly why realists and anti-realists have been talking past one another. We then formulate a dilemma for advocates of either argument, answer potential objections to our criticism, discuss what remains (if anything) of these two major arguments, and then speculate about a future philosophy of science freed from these two arguments. In so doing, we connect the point about base rates to the wholesale/retail distinction; we believe it hints at an answer of how to distinguish profitable from unprofitable realism debates. In short, we offer a probabilistic analysis of the feeling of ennui afflicting contemporary philosophy of science.
Homeostatic property clusters (HPCs) are offered as a way of understanding natural kinds, especially biological species. I review the HPC approach and then discuss an objection by Ereshefsky and Matthen, to the effect that an HPC qua cluster seems ill-fitted as a description of a polymorphic species. The standard response by champions of the HPC approach is to say that all members of a polymorphic species have things in common, namely dispositions or conditional properties. I argue that this response fails. Instances of an HPC kind need not all be similar in their exhibited properties. Instead, HPCs should be understood as unified by the underlying causal mechanism that maintains them. The causal mechanism can both produce and explain some systematic differences between a kind’s members. An HPC kind is best understood not as a single cluster of properties maintained in stasis by causal forces, but as a complex of related property clusters kept in relation by an underlying causal process. This approach requires recognizing that taxonomic systems serve both explanatory and inductive purposes.
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty’s suggestion that the NI is a ‘red herring’, I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be antirealist in some respects and about some theories. It will not be a sweeping antirealism about all or most of science.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of "the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories". This paper examines Stanford's New Induction and argues that it -- like the other forms of underdetermination that he criticizes -- merely recapitulates familiar philosophical conundra.
It is now commonly held that values play a role in scientific judgment, but many arguments for that conclusion are limited. First, many arguments do not show that values are, strictly speaking, indispensable. The role of values could in principle be filled by a random or arbitrary decision. Second, many arguments concern scientific theories and concepts which have obvious practical consequences, thus suggesting or at least leaving open the possibility that abstruse sciences without such a connection could be value-free. Third, many arguments concern the role values play in inferring from evidence, thus taking evidence as given. This paper argues that these limitations do not hold in general. There are values involved in every scientific judgment. They cannot even conceivably be replaced by a coin toss, they arise as much for exotic as for practical sciences, and they are at issue as much for observation as for explicit inference.
There are two senses of ‘what scientists know’: An individual sense (the separate opinions of individual scientists) and a collective sense (the state of the discipline). The latter is what matters for policy and planning, but it is not something that can be directly observed or reported. A function can be defined to map individual judgments onto an aggregate judgment. I argue that such a function cannot effectively capture community opinion, especially in cases that matter to us.
Some scientific categories seem to correspond to genuine features of the world and are indispensable for successful science in some domain; in short, they are natural kinds. This book gives a general account of what it is to be a natural kind and puts the account to work illuminating numerous specific examples.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the available reasons for valuing prediction. A third path is presented, Pluralist Instrumental Predictivism; PIP for short.
When we ask what natural kinds are, there are two different things we might have in mind. The first, which I’ll call the taxonomy question, is what distinguishes a category which is a natural kind from an arbitrary class. The second, which I’ll call the ontology question, is what manner of stuff there is that realizes the category. Many philosophers have systematically conflated the two questions. The confusion is exhibited both by essentialists and by philosophers who pose their accounts in terms of similarity. It also leads to misreading philosophers who do make the distinction. Distinguishing the questions allows for a more subtle understanding of both natural kinds and their underlying metaphysics.
The accepted narrative treats John Stuart Mill’s Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill’s nineteenth-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill’s two notions offer separate answers to the two questions: natural groups for taxonomy and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative that treats our natural kinds just as a development of Mill’s Kinds.
There is a long tradition of trying to analyze art either by providing a definition (essentialism) or by tracing its contours as an indefinable, open concept (anti-essentialism). Both art essentialists and art anti-essentialists share an implicit assumption of art concept monism. This article argues that this assumption is a mistake. Species concept pluralism—a well-explored position in philosophy of biology—provides a model for art concept pluralism. The article explores the conditions under which concept pluralism is appropriate, and argues that they obtain for art. Art concept pluralism allows us to recognize that different art concepts are useful for different purposes, and what had been feuding definitions can be seen as characterizations of specific art concepts.
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many various phenomena pass for `underdetermination' that it's tempting to think that it is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: A claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination, to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find them to either be so specialized that they cannot underwrite conclusions about science in general or not be underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
Cover versions form a loose but identifiable category of tracks and performances. We distinguish four kinds of covers and argue that they mark important differences in the modes of evaluation that are possible or appropriate for each: mimic covers, which aim merely to echo the canonical track; rendition covers, which change the sound of the canonical track; transformative covers, which diverge so much as to instantiate a distinct, albeit derivative song; and referential covers, which not only instantiate a distinct song, but for which the new song is in part about the original song. In order to allow for the very possibility of transformative and referential covers, we argue that a cover is characterized by relation to a canonical track rather than merely by being a new instance of a song that had been recorded previously.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not to what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is "composite" (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
In late 2014, the jazz combo Mostly Other People Do the Killing released Blue—an album that is a note-for-note remake of Miles Davis's 1959 landmark album Kind of Blue. This is a thought experiment made concrete, raising metaphysical puzzles familiar from discussion of indiscernible counterparts. It is an actual album, rather than merely a concept, and so poses the aesthetic puzzle of why one would ever actually listen to it.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
Peter Baumann offers the tantalizing suggestion that Thomas Reid is almost, but not quite, a pragmatist. He motivates this claim by posing a dilemma for common sense philosophy: Will it be dogmatism or scepticism? Baumann claims that Reid points to but does not embrace a pragmatist third way between these unsavory options. If we understand `pragmatism' differently than Baumann does, however, we need not be so equivocal in attributing it to Reid. Reid makes what we could call an argument from practical commitment, and this is plausibly an instance of what William James calls the pragmatic method.
Philip Kitcher develops the Galilean Strategy to defend realism against its many opponents. I explore the structure of the Galilean Strategy and consider it specifically as an instrument against constructive empiricism. Kitcher claims that the Galilean Strategy underwrites an inference from success to truth. We should resist that conclusion, I argue, but the Galilean Strategy should lead us by other routes to believe in many things about which the empiricist would rather remain agnostic.
Background theories in science are used both to prove and to disprove that theory choice is underdetermined by data. The alleged proof appeals to the fact that experiments to decide between theories typically require auxiliary assumptions from other theories. If this generates a kind of underdetermination, it shows that standards of scientific inference are fallible and must be appropriately contextualized. The alleged disproof appeals to the possibility of suitable background theories to show that no theory choice can be timelessly or noncontextually underdetermined: Foreground theories might be distinguished against different backgrounds. Philosophers have often replied to such a disproof by focussing their attention not on theories but on Total Sciences. If empirically equivalent Total Sciences were at stake, then there would be no background against which they could be differentiated. I offer several reasons to think that Total Science is a philosophers' fiction. No respectable underdetermination can be based on it.
One approach to science treats science as a cognitive accomplishment of individuals and defines a scientific community as an aggregate of individual inquirers. Another treats science as a fundamentally collective endeavor and defines a scientist as a member of a scientific community. Distributed cognition has been offered as a framework that could be used to reconcile these two approaches. Adam Toon has recently asked if the cognitive and the social can be friends at last. He answers that they probably cannot, posing objections to the would-be rapprochement. We clarify both the animosity and the tonic proposed to resolve it, ultimately arguing that worries raised by Toon and others are uncompelling.
Some philosophers think that there is a gap between is and ought which necessarily makes normative enquiry a different kind of thing than empirical science. This position gains support from our ability to explicate our inferential practices in a way that makes it impermissible to move from descriptive premises to a normative conclusion. But we can also explicate them in a way that allows such moves. So there is no categorical answer as to whether there is or is not a gap. The question of an is-ought gap is a practical and strategic matter rather than a logical one, and it may properly be answered in different ways for different questions or at different times.
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein’s arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton’s view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that suggest for and against the response; we conclude by suggesting two of them.
It seems obvious that a community of one thousand scientists working together to make discoveries and solve puzzles should arrange itself differently than would one thousand scientist-hermits working independently. Because of limited time, resources, and attention, an independent scientist can explore only some of the possible approaches to a problem. Working alone, each hermit would explore the most promising approaches. They would needlessly duplicate the work of others and would be unlikely to develop approaches which look unpromising but really have tremendous potential. Contrariwise, a large community can more rigorously explore the space of possible approaches. Most scientists should work on the most promising approaches, but a smaller number can be committed to approaches that initially look less promising. Exploratory work can reveal if one of those initially unpromising approaches has unrealized potential, and more scientists can adopt it once its potential becomes more apparent.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: "Where scientific standards cannot guide us, we should believe nothing". Another response, which we can call the fideist response, is to believe whatever we would like to believe: "If science cannot speak to the question, then we may believe anything without science ever contradicting us". C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
If two theory formulations are merely different expressions of the same theory, then any problem of choosing between them cannot be due to the underdetermination of theories by data. So one might suspect that we need to be able to tell distinct theories from mere alternate formulations before we can say anything substantive about underdetermination, that we need to solve the problem of identical rivals before addressing the problem of underdetermination. Here I consider two possible solutions: Quine proposes that we call two theories identical if they are equivalent under a reconstrual of predicates, but this would mishandle important cases. Another proposal is to defer to the particular judgements of actual scientists. Consideration of an historical episode (the alleged equivalence of wave and matrix mechanics) shows that this second proposal also fails. Nevertheless, I suggest, the original suspicion is wrong; there are ways to enquire into underdetermination without having solved the problem of identical rivals.
A discussion and qualified defense of Philip Kitcher on scientific significance and ‘well-ordered science.’ (Qualified because I argue that Kitcher’s position is made unstable by his reliance on the largely unanalyzed notion of natural curiosity.)
This paper argues against the common, often implicit view that theories are some specific kind of thing. Instead, I argue for theory concept pluralism: There are multiple distinct theory concepts which we legitimately use in different domains and for different purposes, and we should not expect this to change. The argument goes by analogy with species concept pluralism, a familiar position in philosophy of biology. I conclude by considering some consequences for philosophy of science if theory concept pluralism is correct.
Debates about the underdetermination of theory by data often turn on specific examples. Cases invoked often enough become familiar, even well worn. Since Helen Longino’s discussion of the case, the connection between prenatal hormone levels and gender-linked childhood behaviour has become one of these stock examples. However, as I argue here, the case is not genuinely underdetermined. We can easily imagine a possible experiment to decide the question. The fact that we would not perform this experiment is a moral, rather than epistemic, point. Finally, I suggest that the ‘underdetermination’ of the case may be inessential for Longino to establish her central claim about it.
This document collects discussion and commentary on issues raised in the workshop by its participants. Contributors are: Greg Frost-Arnold, David Harker, P. D. Magnus, John Manchak, John D. Norton, J. Brian Pitts, Kyle Stanford, Dana Tulodziecki.
The Bare Theory was offered by David Albert as a way of standing by the completeness of quantum mechanics in the face of the measurement problem. This paper surveys objections to the Bare Theory that recur in the literature: what will here be called the oddity objection, the coherence objection, and the context-of-the-universe objection. Critics usually take the Bare Theory to have unacceptably bizarre consequences, but to be free from internal contradiction. Bizarre consequences need not be decisive against the Bare Theory, but a further objection—dubbed here the calibration objection—has been underestimated. This paper argues that the Bare Theory is not only odd but also inconsistent. We can imagine a successor to the Bare Theory—the Stripped Theory—which avoids the objections and fulfills the original promise of the Bare Theory, but at the cost of amplifying the bizarre consequences. The Stripped Theory is either a stunning development in our understanding of the world or a reductio disproving the completeness of quantum mechanics.
Christy Mag Uidhir has recently argued (a) that there is no in principle aesthetic difference between a live performance and a recording of that performance, and (b) that the proper aesthetic object is a type which is instantiated by the performance and potentially repeatable when recordings are played back. This paper considers several objections to (a) and finds them lacking. I then consider improvised music, a subject that Mag Uidhir explicitly brackets in his discussion. Improvisation reveals problems with (b), because the performance-event and the performance-type are distinct but equally proper aesthetic objects.
Typical discussions of virtual reality (VR) fixate on technology for providing sensory stimulation of a certain kind. They thus fail to understand reality as the place wherein we live and work, misunderstanding it instead as merely a sort of presentation. The first half of the paper examines popular conceptions of VR. The most common conception is a shallow one according to which VR is a matter of simulating appearances. Yet there is, even in popular depictions, a second, more subtle conception according to which VR is a matter of facilitating new kinds of interaction. The latter half of the paper turns to questions about the contemporary technology of Internet chatrooms. The fact that chatrooms can be used in certain ways suggests something about the prospects for VR. The penultimate section asks whether chatrooms may legitimately be thought of as places. (In a sense, they may.) The final section asks whether cybersex may legitimately be thought of as sex. (Again, yes.) Chatroom technology thus provides an argument for the second conception of VR over its much ballyhooed rival.
The accepted narrative treats John Stuart Mill's Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill's 19th-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill's two notions offer separate answers to the two questions: natural groups for taxonomy, and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative which treats our natural kinds just as a development of Mill's Kinds.
Eric Barnes’ The Paradox of Predictivism is concerned primarily with two facts: predictivism and pluralism. In the middle part of the book, he peers through these two lenses at the tired realist scarecrow of the no-miracles argument. He attempts to reanimate this weatherworn realist argument, contra suggestions by people like me that it should be abandoned. In this paper, I want to get clear on Barnes’ contribution to the debate. He focuses on what he calls the miraculous endorsement argument, which explains not the success of a specific theory but instead the history of successes for an entire research program. The history of successes is explained by reliable and improving methods, which are the flipside of approximately true background theories. Yet, as Barnes notes, the whole story must begin with methods that are at least minimally reliable. Barnes demands that the realist explain the origin of the minimally reliable take-off point, and he suggests a way that the realist might do so. I contend that his explanation still relies on contingent developments and so fails to completely explain the development of take-off theories. However, this line of argument digs into familiar details of the no-miracles argument and overlooks what’s new in Barnes’ approach. By calling attention to pluralism, he reminds us that we need an account of scientific expertise. This is important, I suggest, because expertise is not indefinite. We do not trust specific experts for everything, but only for things within the bounds of their expertise. Drawing these boundaries relies on our own background theories and is only likely to be reliable if our background theories are approximately true. I argue, then, that pluralism gives us reason to be realists.
Philosophy of science in the past half century can be seen as a reaction against logical empiricism's focus on modern logic as the format in which debates should be expressed and on physics as the canonical science. These reactions have resulted in a fragmentation of the field. Although this provides ways forward for disparate philosophies of various sciences, it threatens the very possibility of general philosophy of science. The debate that most obviously continues to be conducted at the general level—the debate about scientific realism—only does so because of a dangerous naïveté. Nevertheless, this article suggests that there is a place for general work not by starting at the highest level of abstraction but instead by abstracting general lessons from actual science.
Wikipedia is a free encyclopedia that is written and edited entirely by visitors to its website. I argue that we are misled when we think of it in the same epistemic category with traditional general encyclopedias. An empirical assessment of its reliability reveals that it varies widely from topic to topic. So any particular claim found in it cannot be trusted merely on the basis of its source. I survey some methods that we use in assessing specific claims and argue that the structure of the Wikipedia frustrates them.