Critical Theory and Animal Liberation is the first collection to look at the human relationship with animals from the critical or 'left' tradition in political and social thought. The contributions in this volume highlight connections between our everyday treatment of animals and other forms of oppression, violence, and domination. Breaking with past treatments that have framed the problem as one of 'animal rights,' the authors instead depict the exploitation and killing of other animals as a political question of the first order.
I argue in this paper that animal biotechnology constitutes a dangerous ontological collapse between animals and the technical-economic apparatus. By ontological collapse, I mean the elimination of fundamental ontological tensions between embodied subjects and the principles of scientific, technological, and economic rationalization. Biotechnology imposes this collapse in various ways: by genetically “reprogramming” animals to serve as uniform commodities, by abstracting them into data and code, and, in some cases, by literally manipulating their movements with computer technologies. These and other forms of ontological violence not only lead to profound physical suffering for the animals involved, but also distort the phenomenological basis of their existence, especially their perceptual experience and expression of subjective time and space. In subordinating nonhuman animals to the logic of “technological rationality” or “technique,” to borrow Herbert Marcuse and Jacques Ellul’s respective terms, biotechnology perpetuates the productive extermination of animals. Biotech animals are exterminated in the sense of being “drive[n] beyond the boundaries” of meaningful existence and “destroyed completely” or “completely wiped out” as subjects. But they are also exterminated in the sense of being “overproduced” and “overgenerated,” both quantitatively and qualitatively. I go on to argue that the collapse of the ontological is accompanied by a collapse of the ethical. This ethical collapse is characterized by the internalization of the logic of technique and the corresponding failure, both within technoscientific culture itself and within some scholarly discourses about biotechnology, to evaluate from a genuinely critical vantage point the fundamental ethical issues that animal biotechnology raises. The aim of this paper is to offer an alternative analysis of the ontological and ethical implications of biotechnology from the standpoint of Marcuse and Ellul’s critical theory of technology. To explore other ramifications of animal biotechnology, I draw on Theodor Adorno and Max Horkheimer’s insights into ideologies of extermination and Maurice Merleau-Ponty’s phenomenology of embodiment.
Theorizing in ecology and evolution often proceeds via the construction of multiple idealized models. To determine whether a theoretical result actually depends on core features of the models and is not an artifact of simplifying assumptions, theorists have developed the technique of robustness analysis, the examination of multiple models looking for common predictions. A striking example of robustness analysis in ecology is the discovery of the Volterra Principle, which describes the effect of general biocides in predator-prey systems. This paper details the discovery of the Volterra Principle and the demonstration of its robustness. It considers the classical ecology literature on robustness and introduces two individual-based models of predation, which are used to further analyze the Volterra Principle. The paper also introduces a distinction between parameter robustness, structural robustness, and representational robustness, and demonstrates that the Volterra Principle exhibits all three kinds of robustness.
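For reference, the Volterra Principle can be read off the classical Lotka-Volterra equations. The following is a minimal sketch in standard textbook notation, not drawn from the paper itself:

% Lotka-Volterra predator-prey model: x = prey, y = predators,
% with parameters a, b, c, d > 0.
\begin{align*}
\frac{dx}{dt} &= ax - bxy, \\
\frac{dy}{dt} &= -cy + dxy.
\end{align*}
% The nontrivial equilibrium is
\[ x^* = \frac{c}{d}, \qquad y^* = \frac{a}{b}. \]
% A general biocide removes both species at rate \delta, sending
% a \to a - \delta and c \to c + \delta, so the equilibrium shifts to
\[ x^*_\delta = \frac{c + \delta}{d}, \qquad y^*_\delta = \frac{a - \delta}{b}, \]
% that is, prey increase and predators decrease: the Volterra Principle.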
Ned Block argues that the higher-order (HO) approach to explaining consciousness is ‘defunct’ because a prominent objection (the ‘misrepresentation objection’) exposes the view as ‘incoherent’. What’s more, a response to this objection that I’ve offered elsewhere (Weisberg 2010) fails because it ‘amounts to abusing the notion of what-it’s-like-ness’ (xxx). In this response, I wish to plead guilty as charged. Indeed, I will continue herein to abuse Block’s notion of what-it’s-like-ness. After doing so, I will argue that the HO approach accounts for the sense of what-it’s-like-ness that matters in a theory of consciousness. I will also argue that the only incoherence present in the HO theory is that generated by embracing Block’s controversial notion of what-it’s-like-ness, something no theorist of any stripe ought to do. Block is famous for (among other things) having introduced the notion of ‘phenomenal consciousness’ into contemporary philosophy of mind (Block 1995). This term is widely employed in the philosophical literature and it even appears in the empirical literature. But widespread usage has brought about divergent interpretations of the term. We can distinguish a ‘moderate’ and a ‘zealous’ reading of ‘phenomenal consciousness’. On the moderate reading, ‘phenomenal consciousness’ just means ‘experience’. Many people have embraced this sense of the term and use it to roughly pick out conscious experience involving sensory quality (states like conscious visual experiences or conscious pains, for example). On the zealous reading, however, phenomenal consciousness is held to be ‘distinct from any cognitive, intentional, or functional property’ (Block 1995: 234). That is, any explanation of phenomenal consciousness in exclusively cognitive, intentional, or functional terms will fail to capture, without remainder, what is really distinctive about phenomenal consciousness. Block, of course, is fully clear about embracing the zealous reading; indeed, his initial introduction of the notion is in those terms. The same ambiguity occurs with the much-used (and abused) idea of ‘what-it’s-like-ness’.
I argue that the rationale behind the fine-tuning argument for design is self-undermining, refuting the argument’s own premise that fine-tuning is to be expected given design. In (Weisberg 2010) I argued on informal grounds that this premise is unsupported. White (2011) countered that it can be derived from three plausible assumptions. But White’s third assumption is based on a fallacious rationale, and is even objectionable by the design theorist’s own lights. The argument that shows this, the argument from divine indifference, simultaneously exposes the fine-tuning argument’s self-undermining character. The same argument also answers Bradley’s (forthcoming) reply to my earlier objection.
The covalent bond, a difficult concept to define precisely, plays a central role in chemical predictions, interventions, and explanations. I investigate the structural conception of the covalent bond, which says that bonding is a directional, submolecular region of electron density, located between individual atomic centers and responsible for holding the atoms together. Several approaches to constructing molecular models are considered in order to determine which features of the structural conception of bonding, if any, are robust across these models. Key components of the structural conception are absent in all but the simplest quantum mechanical models of molecular structure, seriously challenging the conception’s viability.
Each of us, right now, is having a unique conscious experience. Nothing is more basic to our lives as thinking beings and nothing, it seems, is better known to us. But the ever-expanding reach of natural science suggests that everything in our world is ultimately physical. The challenge of fitting consciousness into our modern scientific worldview, of taking the subjective “feel” of conscious experience and showing that it is just neural activity in the brain, is among the most intriguing explanatory problems of our times.

In this book, Josh Weisberg presents the range of contemporary responses to the philosophical problem of consciousness. The basic philosophical tools of the trade are introduced, including thought experiments featuring Mary the color-deprived super scientist and fearsome philosophical “zombies”. The book then systematically considers the space of philosophical theories of consciousness. Dualist and other “non-reductive” accounts of consciousness hold that we must expand our basic physical ontology to include the intrinsic features of consciousness. Functionalist and identity theories, by contrast, hold that with the right philosophical stage-setting, we can fit consciousness into the standard scientific picture. And “mysterians” hold that any solution to the problem is beyond such small-minded creatures as us.

Throughout the book, the complexity of current debates on consciousness is handled in a clear and concise way, providing the reader with a fine introductory guide to the rich philosophical terrain. The work makes an excellent entry point to one of the most exciting areas of study in philosophy and science today.
Nobel laureate Roald Hoffmann's contributions to chemistry are well known. Less well known, however, is that over a career that spans nearly fifty years, Hoffmann has thought and written extensively about a wide variety of other topics, such as chemistry's relationship to philosophy, literature, and the arts, including the nature of chemical reasoning, the role of symbolism and writing in science, and the relationship between art and craft and science. In Roald Hoffmann on the Philosophy, Art, and Science of Chemistry, Jeffrey Kovac and Michael Weisberg bring together twenty-eight of Hoffmann's most important essays. Gathered here are Hoffmann's most philosophically significant and interesting essays and lectures, many of which are not widely accessible. In essays such as "Why Buy That Theory," "Nearly Circular Reasoning," "How Should Chemists Think," "The Metaphor, Unchained," "Art in Science," and "Molecular Beauty," we find the mature reflections of one of America's leading scientists. Organized under the general headings of Chemical Reasoning and Explanation, Writing and Communicating, Art and Science, Education, and Ethics, these stimulating essays provide invaluable insight into the teaching and practice of science.
Weisberg identifies the risks, across a 2,000-year span of Western history, of overly flexible responses to crises and perceived emergencies. So entrenched is the norm of infinite openness to ideas and changing circumstances that, he argues, readers must work hard to resist the tendency of others to fold their tents and betray their own deepest and soundest values when challenged to do so by "new" conditions.
one takes to be the most salient, any pair could be judged more similar to each other than to the third. Goodman uses this second problem to show that there can be no context-free similarity metric, either in the trivial case or in a scientifically ...
Philosophers of science increasingly recognize the importance of idealization: the intentional introduction of distortion into scientific theories. Yet this recognition has not yielded consensus about the nature of idealization. The literature of the past thirty years contains disparate characterizations and justifications, but little evidence of convergence towards a common position.
Many standard philosophical accounts of scientific practice fail to distinguish between modeling and other types of theory construction. This failure is unfortunate because there are important contrasts among the goals, procedures, and representations employed by modelers and other kinds of theorists. We can see some of these differences intuitively when we reflect on the methods of theorists such as Vito Volterra and Linus Pauling on the one hand, and Charles Darwin and Dmitri Mendeleev on the other. Much of Volterra's and Pauling's work involved modeling; much of Darwin's and Mendeleev's did not. In order to capture this distinction, I consider two examples of theory construction in detail: Volterra's treatment of post-WWI fishery dynamics and Mendeleev's construction of the periodic system. I argue that modeling can be distinguished from other forms of theorizing by the procedures modelers use to represent and to study real-world phenomena: indirect representation and analysis. This differentiation between modelers and non-modelers is one component of the larger project of understanding the practice of modeling, its distinctive features, and the strategies of abstraction and idealization it employs.
Because of its complexity, contemporary scientific research is almost always tackled by groups of scientists, each of which works in a different part of a given research domain. We believe that understanding scientific progress thus requires understanding this division of cognitive labor. To this end, we present a novel agent-based model of scientific research in which scientists divide their labor to explore an unknown epistemic landscape. Scientists aim to climb uphill in this landscape, where elevation represents the significance of the results discovered by employing a research approach. We consider three different search strategies scientists can adopt for exploring the landscape. In the first, scientists work alone and do not let the discoveries of the community as a whole influence their actions. This is compared with two social research strategies, which we call the follower and maverick strategies. Followers are biased towards what others have already discovered, and we find that pure populations of these scientists do less well than scientists acting independently. However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform both of the other strategies. Finally, we show that in mixed populations, mavericks stimulate followers to greater levels of epistemic production, making polymorphic populations of mavericks and followers ideal in many research domains.
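The abstract describes the model only in outline. The Python sketch below is a toy rendering of an epistemic-landscape model in this spirit; the grid size, significance function, and movement rules are invented for illustration and are not the authors' actual specification:

import random

SIZE = 50  # grid dimension (assumed)

def significance(x, y):
    # A smooth "hill" of epistemic significance (assumed form).
    return max(0.0, 1.0 - ((x - 25)**2 + (y - 35)**2) / 500.0)

class Scientist:
    def __init__(self, strategy):
        self.strategy = strategy  # 'control', 'follower', or 'maverick'
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)

    def step(self, visited):
        neighbors = [((self.x + dx) % SIZE, (self.y + dy) % SIZE)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
        if self.strategy == 'follower':
            # Bias toward approaches others have already taken.
            pool = [n for n in neighbors if n in visited] or neighbors
        elif self.strategy == 'maverick':
            # Avoid approaches that have already been taken.
            pool = [n for n in neighbors if n not in visited] or neighbors
        else:
            pool = neighbors
        # Hill-climb: move only if the new approach is at least as significant.
        nx, ny = max(pool, key=lambda n: significance(*n))
        if significance(nx, ny) >= significance(self.x, self.y):
            self.x, self.y = nx, ny
        visited.add((self.x, self.y))

def run(strategy, n_agents=20, steps=200):
    agents = [Scientist(strategy) for _ in range(n_agents)]
    visited = {(a.x, a.y) for a in agents}
    for _ in range(steps):
        for a in agents:
            a.step(visited)
    # Epistemic production: total significance of approaches explored.
    return sum(significance(*v) for v in visited)

if __name__ == "__main__":
    for s in ('control', 'follower', 'maverick'):
        print(s, round(run(s), 2))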
Modelers often rely on robustness analysis, the search for predictions common to several independent models. Robustness analysis has been characterized and championed by Richard Levins and William Wimsatt, who see it as central to modern theoretical practice. The practice has also been severely criticized by Steven Orzack and Elliott Sober, who claim that it is a nonempirical form of confirmation, effective only under unusual circumstances. This paper addresses Orzack and Sober's criticisms by giving a new account of robustness analysis and showing how the practice can identify robust theorems. Once the structure of robust theorems is clearly articulated, it can be shown that such theorems have a degree of confirmation, despite the lack of direct empirical evidence for their truth.
Inference to the Best Explanation (IBE) and Bayesianism are our two most prominent theories of scientific inference. Are they compatible? Van Fraassen famously argued that they are not, concluding that IBE must be wrong since Bayesianism is right. Writers since then, from both the Bayesian and explanationist camps, have usually considered van Fraassen’s argument to be misguided, and have plumped for the view that Bayesianism and IBE are actually compatible. I argue that van Fraassen’s argument is actually not so misguided, and that it causes more trouble for compatibilists than is typically thought. Bayesianism, in its dominant subjectivist form, can only be made compatible with IBE if IBE is made subservient to conditionalization in a way that robs IBE of much of its substance and interest. If Bayesianism and IBE are to be fit together, I argue, a strongly objective Bayesianism is the preferred option. I go on to sketch this objectivist, IBE-based Bayesianism, and offer some preliminary suggestions for its development.
The study of insight in problem solving and creative thinking has seen an upsurge of interest in the last 30 years. Current theorising concerning insight has taken one of two tacks. The special-process view, which grew out of the Gestalt psychologists’ theorising about insight, proposes that insight is the result of a dedicated set of processes that is activated by the individual's reaching impasse while trying to deal with a problematic situation. In contrast, the business-as-usual view argues that insight is brought about by the same processes that underlie ordinary thinking. Although those two views are typically treated as being in opposition, it has recently been proposed that a complete understanding of insight will require bringing together aspects of both views. The present paper carries that proposal further. Critical analysis of those two viewpoints demonstrates that each has a positive contribution to make to our understanding of insight, but also is…
Despite their best efforts, scientists may be unable to construct models that simultaneously exemplify every theoretical virtue. One explanation for this is the existence of tradeoffs: relationships of attenuation that constrain the extent to which models can have such desirable qualities. In this paper, we characterize three types of tradeoffs theorists may confront. These characterizations are then used to examine the relationships between parameter precision and two types of generality. We show that several of these relationships exhibit tradeoffs and discuss what consequences those tradeoffs have for theoretical practice.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
The bootstrapping problem poses a general challenge, afflicting even strongly internalist theories. Even if one must always know that one’s source is reliable to gain knowledge from it, bootstrapping is still possible. I survey some solutions internalists might offer and defend the one I find most plausible: that bootstrapping involves an abuse of inductive reasoning akin to generalizing from a small or biased sample. I also argue that this solution is equally available to the reliabilist. The moral is that the issues raised by bootstrapping are orthogonal to questions about internalism and basic knowledge, having more to do with the nature of good inductive reasoning.
Conditionalization and Jeffrey Conditionalization cannot simultaneously satisfy two widely held desiderata on rules for empirical learning. The first desideratum is confirmational holism, which says that the evidential import of an experience is always sensitive to our background assumptions. The second desideratum is commutativity, which says that the order in which one acquires evidence shouldn't affect what conclusions one draws, provided the same total evidence is gathered in the end. (Jeffrey) Conditionalization cannot satisfy either of these desiderata without violating the other. This is a surprising problem, and I offer a diagnosis of its source. I argue that (Jeffrey) Conditionalization is inherently anti-holistic in a way that is just exacerbated by the requirement of commutativity. The dilemma is thus a superficial manifestation of (Jeffrey) Conditionalization's fundamentally anti-holistic nature.
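For reference, the two update rules at issue can be stated as follows (standard formulations in the usual notation, not quoted from the paper):

% Strict Conditionalization: upon learning E with certainty,
\[ P_{\mathrm{new}}(H) = P(H \mid E). \]
% Jeffrey Conditionalization: when experience shifts credence over a
% partition {E_i} to new values P_new(E_i), without certainty,
\[ P_{\mathrm{new}}(H) = \sum_i P(H \mid E_i)\, P_{\mathrm{new}}(E_i). \]
% Commutativity requires that successive updates on the same total
% evidence yield the same final credences regardless of order; holism
% requires that the import of each E_i, encoded in P(H | E_i), remain
% sensitive to background assumptions.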
This paper is an interpretation and defense of Richard Levins’ “The Strategy of Model Building in Population Biology,” which has been extremely influential among biologists since its publication 40 years ago. In this article, Levins confronted some of the deepest philosophical issues surrounding modeling and theory construction. By way of interpretation, I discuss each of Levins’ major philosophical themes: the problem of complexity, the brute-force approach, the existence and consequence of tradeoffs, and robustness analysis. I argue that Levins’ article is concerned, at its core, with justifying the use of multiple, idealized models in population biology.
Clark and Shackel have recently argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Whatever the merits of these solutions, they go beyond accepted decision theory, even contradicting it in the peeking case. Thus if we are to take their solutions seriously, we must understand Clark and Shackel to be proposing a revision of standard decision theory. Understood as such, we will argue, their proposal is both implausible and unnecessary.
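For reference, the expected-value calculation that generates the paradox runs as follows (the standard presentation, not the authors' own formulation):

% Your envelope contains x; the other contains 2x or x/2, each assigned
% probability 1/2 by the naive reasoning:
\[ E[\text{swap}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\bigl(\tfrac{x}{2}\bigr) = \tfrac{5}{4}x > x, \]
% so swapping appears favored and, by symmetry, so does swapping back.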
Recent proposals that frame norms of action in terms of knowledge have been challenged by Bayesian decision theorists. Bayesians object that knowledge-based norms conflict with the highly successful and established view that rational action is rooted in degrees of belief. I argue that the knowledge-based and Bayesian pictures are not as incompatible as these objectors have made out. Attending to the mechanisms of practical reasoning exposes space for both knowledge and degrees of belief to play their respective roles.
Bootstrapping is a suspicious form of reasoning that verifies a source's reliability by checking it against itself. Theories that endorse such reasoning face the bootstrapping problem. This article considers which theories face the problem, and surveys potential solutions. The initial focus is on theories like reliabilism and dogmatism, which allow one to gain knowledge from a source without knowing that it is reliable. But the discussion quickly turns to a more general version of the problem that does not depend on this allowance. Five potential solutions to the general problem are evaluated, and some implications for the literature on peer disagreement are considered.
Sometimes appearances provide epistemic support that gets undercut later. In an earlier paper I argued that standard Bayesian update rules are at odds with this phenomenon because they are ‘rigid’. Here I generalize and bolster that argument. I first show that the update rules of Dempster–Shafer theory and ranking theory are rigid too, hence also at odds with the defeasibility of appearances. I then rebut three Bayesian attempts to solve the problem. I conclude that defeasible appearances pose a more difficult and pervasive challenge for formal epistemology than is currently thought.
Van Fraassen famously endorses the Principle of Reflection as a constraint on rational credence, and argues that Reflection is entailed by the more traditional principle of Conditionalization. He draws two morals from this alleged entailment. First, that Reflection can be regarded as an alternative to Conditionalization – a more lenient standard of rationality. And second, that commitment to Conditionalization can be turned into support for Reflection. Van Fraassen also argues that Reflection implies Conditionalization, thus offering a new justification for Conditionalization. I argue that neither principle entails the other, and thus neither can be used to motivate the other in the way van Fraassen says. There are ways to connect Conditionalization to Reflection, but these connections depend on poor assumptions about our introspective access, and are not tight enough to draw the sorts of conclusions van Fraassen wants. Upon close examination, the two principles seem to be getting at two quite independent epistemic norms.
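For reference, the two principles can be stated as follows (standard formulations, not van Fraassen's exact wording):

% Reflection: current credence defers to anticipated future credence,
\[ P_t\bigl(A \mid P_{t'}(A) = r\bigr) = r, \qquad t' > t. \]
% Conditionalization: the credence function after learning exactly E is
% the prior conditioned on E,
\[ P_{t'}(\cdot) = P_t(\cdot \mid E). \]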
Forty years ago, Bayesian philosophers were just catching a new wave of technical innovation, ushering in an era of scoring rules, imprecise credences, and infinitesimal probabilities. Meanwhile, down the hall, Gettier’s 1963 paper was shaping a literature with little obvious interest in the formal programs of Reichenbach, Hempel, and Carnap, or their successors like Jeffrey, Levi, Skyrms, van Fraassen, and Lewis. And how Bayesians might accommodate the discourses of full belief and knowledge was but a glimmer in the eye of Isaac Levi. Forty years later, scoring rules, imprecise credences, and infinitesimal probabilities are all the rage. And the formal and “informal” traditions are increasingly coming together as Bayesian arguments spill over into debates about the foundations of empirical knowledge, skepticism, and more. Relatedly, Bayesian interest in full belief and knowledge has never been greater. Much more besides has happened in the last forty years of Bayesian philosophy…
Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Philip Kitcher and Michael Strevens have pioneered efforts to understand this division of cognitive labor by proposing models of how scientists make decisions about which project to work on. For such models to be useful, they must be simple enough for us to understand their dynamics, but faithful enough to reality that we can use them to analyze real scientific communities. To satisfy the first requirement, we must employ idealizations to simplify the model. The second requirement demands that these idealizations not be so extreme that we lose the ability to describe real-world phenomena. This paper investigates the status of the assumptions that Kitcher and Strevens make in their models, by first inquiring whether they are reasonable representations of reality, and then by checking the models' robustness against weakenings of these assumptions. To do this, we first argue against the reality of the assumptions, and then develop a series of agent-based simulations to systematically test their effects on model outcomes. We find that the models are not robust against weakenings of these idealizations. In fact we find that under certain conditions, this can lead to the model predicting outcomes that are qualitatively opposite of the original model outcomes. (shrink)
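The abstract leaves the models' mechanics implicit. The toy Python sketch below illustrates the general shape of a Kitcher-style allocation model: a project's success probability grows with diminishing returns in the number of workers, and each scientist joins the project maximizing her expected share of credit. The functional form and parameters are assumptions for illustration, not Kitcher's or Strevens's actual specifications:

import math

def success_prob(intrinsic, n):
    # Probability a project succeeds given n workers; diminishing
    # returns in n (assumed functional form).
    return intrinsic * (1 - math.exp(-0.5 * n))

def allocate(intrinsics, n_scientists):
    # Scientists arrive one by one and pick the project that maximizes
    # their expected personal share of the credit.
    counts = [0] * len(intrinsics)
    for _ in range(n_scientists):
        payoff = [success_prob(q, n + 1) / (n + 1)
                  for q, n in zip(intrinsics, counts)]
        counts[payoff.index(max(payoff))] += 1
    return counts

if __name__ == "__main__":
    # Two rival approaches, one intrinsically more promising: the
    # credit-sharing scheme still spreads workers across both.
    print(allocate([0.9, 0.6], n_scientists=10))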
ABSTRACT: In this commentary, I criticize Metzinger's interdisciplinary approach to fixing the explanandum of a theory of consciousness and I offer a commonsense alternative in its place. I then re-evaluate Metzinger's multi-faceted working concept of consciousness, and argue for a shift away from the notion of "global availability" and towards the notions of "perspectivalness" and "transparency." This serves to highlight the role of Metzinger's "phenomenal model of the intentionality relation" (PMIR) in explaining consciousness, and it helps to locate Metzinger's theory in relation to other naturalistic theories of consciousness.
An important objection to the "higher-order" theory of consciousness turns on the possibility of higher-order misrepresentation. I argue that the objection fails because it illicitly assumes a characterization of consciousness explicitly rejected by HO theory. This in turn raises the question of what justifies an initial characterization of the data a theory of consciousness must explain. I distinguish between intrinsic and extrinsic characterizations of consciousness, and I propose several desiderata a successful characterization of consciousness must meet. I then defend the particular extrinsic characterization of the HO theory, the "transitivity principle," against its intrinsic rivals, thereby showing that the misrepresentation objection conclusively falls short.
This paper examines a series of Schelling-like models of residential segregation, in which agents prefer to be in the minority. We demonstrate that as long as agents care about the characteristics of their wider community, they tend to end up in a segregated state. We then investigate the process that causes this, and conclude that the result hinges on the similarity of informational states amongst agents of the same type. This is quite different from Schelling-like behavior, and suggests (in his terms) that segregation is an instance of macro behavior which can arise from a wide variety of micro motives.
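The Python sketch below is a toy rendering of such a model; the grid size, neighborhood radius, and relocation rule are invented for illustration and are not the paper's actual specification. Agents of two types relocate whenever their own type forms the local majority in their wider neighborhood:

import random

SIZE, RADIUS = 20, 2  # grid dimension and neighborhood radius (assumed)

def neighborhood(grid, i, j):
    # Types of the occupied cells within the wider neighborhood.
    return [grid[(i + di) % SIZE][(j + dj) % SIZE]
            for di in range(-RADIUS, RADIUS + 1)
            for dj in range(-RADIUS, RADIUS + 1)
            if (di, dj) != (0, 0)
            and grid[(i + di) % SIZE][(j + dj) % SIZE] is not None]

def unhappy(grid, i, j):
    # Minority-seeking agents: unhappy when their own type is the
    # local majority.
    hood = neighborhood(grid, i, j)
    same = sum(1 for t in hood if t == grid[i][j])
    return same > len(hood) / 2

def step(grid):
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
               if grid[i][j] is None]
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE)
              if grid[i][j] is not None and unhappy(grid, i, j)]
    random.shuffle(movers)
    for i, j in movers:
        # Relocate each unhappy agent to a random empty cell.
        ni, nj = empties.pop(random.randrange(len(empties)))
        grid[ni][nj], grid[i][j] = grid[i][j], None
        empties.append((i, j))

if __name__ == "__main__":
    cells = ['A'] * 150 + ['B'] * 150 + [None] * (SIZE * SIZE - 300)
    random.shuffle(cells)
    grid = [cells[k * SIZE:(k + 1) * SIZE] for k in range(SIZE)]
    for _ in range(50):
        step(grid)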
We have known for a long time that there is complex, intelligent life. More recently we have discovered that the physics of our universe is fine-tuned so as to allow for the existence of such life. Call these two observations the Old Datum and the New Datum, respectively. Our question here is: once we know the Old Datum, does the New Datum provide additional evidence for the design hypothesis? I argue that it does not. Thus, there is an important sense in which the much-touted fine-tuning of physics is irrelevant to debates about design.
This article reviews the recent literature on idealization, specifically idealization in the course of scientific modeling. We argue that idealization is not a unified concept and that there are three different types of idealization: Galilean, minimalist, and multiple models, each with its own justification. We explore the extent to which idealization is a permanent feature of scientific representation and discuss its implications for debates about scientific realism.
The same-order representation theory of consciousness holds that conscious mental states represent both the world and themselves. This complex representational structure is posited in part to avoid a powerful objection to the more traditional higher-order representation theory of consciousness. The objection contends that the higher-order theory fails to account for the intimate relationship that holds between conscious states and our awareness of them--the theory 'divides the phenomenal labor' in an illicit fashion. This 'failure of intimacy' is exposed by the possibility of misrepresentation by higher-order states. In this paper, I argue that despite appearances, the same-order theory fails to avoid the objection, and thus also has troubles with intimacy.
Risk-weighted expected utility theory is motivated by small-world problems like the Allais paradox, but it is a grand-world theory by nature. And, at the grand-world level, its ability to handle the Allais paradox is dubious. The REU model described in Risk and Rationality turns out to be risk-seeking rather than risk-averse on one natural way of formulating the Allais gambles in the grand-world context. This result illustrates a general problem with the case for REU theory, we argue. There is a tension between the small-world thinking marshaled against standard expected utility theory, and the grand-world thinking inherent to the risk-weighted alternative.
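For reference, the small-world Allais gambles mentioned above are standardly presented as follows (the usual textbook payoffs, not necessarily the book's own formulation):

% Choice 1: A = $1M for certain, versus
%           B = $5M with prob .10, $1M with prob .89, $0 with prob .01.
% Choice 2: C = $1M with prob .11, $0 with prob .89, versus
%           D = $5M with prob .10, $0 with prob .90.
% The modal (risk-averse) pattern is A and D, yet no expected-utility
% function rationalizes both: preferring A to B requires
\[ 0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0), \]
% which is precisely what preferring D to C denies.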
Elliott Sober has recently argued that the cosmological design argument is unsound, since our observation of cosmic fine-tuning is subject to an observation selection effect (OSE). I argue that this view commits Sober to rejecting patently correct design inferences in more mundane scenarios. I show that Sober's view, that there are OSEs in those mundane cases, rests on a confusion about what information an agent ought to treat as background when evaluating likelihoods. Applying this analysis to the design argument shows that our observation of fine-tuning is not rendered uninformative by an OSE.
Young children spend a large portion of their time pretending about non-real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative representation of reality, and keeping this representation separate from reality. In turn, according to causal models accounts, counterfactual reasoning is a crucial tool that children need to plan for the future and learn about the world. Both planning with causal models and learning about them require the ability to create false premises and generate conclusions from these premises. We argue that pretending allows children to practice these important cognitive skills. We also consider the prevalence of unrealistic scenarios in children's play and explain how they can be useful in learning, despite appearances to the contrary.
Roald Hoffmann and other theorists claim that we ought to use highly idealized chemical models (“qualitative models”) in order to increase our understanding of chemical phenomena, even though other models are available which make more highly accurate predictions. I assess this norm by examining one of the tradeoffs faced by model builders and model users—the tradeoff between precision and generality. After arguing that this tradeoff obtains in many cases, I discuss how the existence of this tradeoff can help us defend Hoffmann's norm for modelling.
Simulation and Similarity: Using Models to Understand the World is an account of modeling in contemporary science. Modeling is a form of surrogate reasoning where target systems in the natural world are studied using models, which are similar to these targets. My book develops an account of the nature of models, the practice of modeling, and the similarity relation that holds between models and their targets. I also analyze the conceptual tools that allow theorists to identify the trustworthy aspects of models. Taken as a whole, I try to account for the ways that modeling is actually practiced by theorists, while abstracting sufficiently to understand the similarities and differences among examples of concrete, mathematical, and computational modeling. I am grateful to Wendy Parker, Jay Odenbaugh, and Bill Wimsatt for their careful and interesting reading of my book, as well as their constructive criticisms. Although I naturally disagree with some of their critiques, I have learned much…
In defending semantic externalism, philosophers of language have often assumed that there is a straightforward connection between scientific kinds and the natural kinds recognized by ordinary language users. For example, the claim that water is H2O assumes that the ordinary language kind water corresponds to a chemical kind, which contains all the molecules with molecular formula H2O as its members. This assumption about the coordination between ordinary language kinds and scientific kinds is important for the externalist program, because it is what allows us to discover empirically the extensions of ordinary language kind terms.
As Gibson (1982) correctly points out, despite Quine’s brief flirtation with a “mitigated phenomenalism” (Gibson’s phrase) in the late 1940’s and early 1950’s, Quine’s ontology of 1953 (“On Mental Entities”) and beyond left no room for non-physical sensory objects or qualities. Anyone familiar with the contemporary neo-dualist qualia-freak-fest might wonder why Quinean lessons were insufficiently transmitted to the current generation.
Some left-nested indicative conditionals are hard to interpret while others seem fine. Some proponents of the view that indicative conditionals have No Truth Values (NTV) use their view to explain why some left-nestings are hard to interpret: the embedded conditional does not express the truth conditions needed by the embedding conditional. Left-nestings that seem fine are then explained away as cases of ad hoc, pragmatic interpretation. We challenge this explanation. The standard reasons for NTV about indicative conditionals (triviality results, Gibbardian standoffs, etc.) extend naturally to NTV about biconditionals. So NTVers about conditionals should also be NTVers about biconditionals. But biconditionals embed much more freely than conditionals. If NTV explains why some left-nested conditionals are hard to interpret, why do biconditionals embed successfully in the very contexts where conditionals do not embed?
Most materialist responses to the zombie argument against materialism take either a ‘type-A’ or ‘type-B’ approach: they either deny the conceivability of zombies or accept their conceivability while denying their possibility. However, a ‘type-Q’ materialist approach, inspired by Quinean suspicions about a priority and modal entailment, rejects the sharp line between empirical and conceptual truths needed for the traditional responses. In this paper, I develop a type-Q response to the zombie argument, one stressing the theory-laden nature of our conceivability and possibility intuitions. I argue that our first-person access to the conscious mind systematically misleads us into thinking that the distinctive qualities of conscious experience, qualia, are nonfunctional. Qualia, I contend, are functional, even though they do not seem to be. To support my claim, I introduce the ‘meditations’ of René Descartes’ zombie twin. This establishes the plausibility of an appearance/reality distinction for consciousness and it undermines various anti-materialist objections based on privileged first-person access. I conclude that the best overall theory posits an appearance/reality distinction for qualia, and this, for the type-Q materialist, is decisive.