Each of us, right now, is having a unique conscious experience. Nothing is more basic to our lives as thinking beings, and nothing, it seems, is better known to us. But the ever-expanding reach of natural science suggests that everything in our world is ultimately physical. The challenge of fitting consciousness into our modern scientific worldview, of taking the subjective “feel” of conscious experience and showing that it is just neural activity in the brain, is among the most intriguing explanatory problems of our time.

In this book, Josh Weisberg presents the range of contemporary responses to the philosophical problem of consciousness. The basic philosophical tools of the trade are introduced, including thought experiments featuring Mary the color-deprived super-scientist and fearsome philosophical “zombies”. The book then systematically considers the space of philosophical theories of consciousness. Dualist and other “non-reductive” accounts of consciousness hold that we must expand our basic physical ontology to include the intrinsic features of consciousness. Functionalist and identity theories, by contrast, hold that with the right philosophical stage-setting, we can fit consciousness into the standard scientific picture. And “mysterians” hold that any solution to the problem is beyond such small-minded creatures as us.

Throughout the book, the complexity of current debates on consciousness is handled in a clear and concise way, providing the reader with a fine introductory guide to the rich philosophical terrain. The work makes an excellent entry point to one of the most exciting areas of study in philosophy and science today.
ABSTRACT: In this commentary, I criticize Metzinger's interdisciplinary approach to fixing the explanandum of a theory of consciousness, and I offer a commonsense alternative in its place. I then re-evaluate Metzinger's multi-faceted working concept of consciousness, and argue for a shift away from the notion of "global availability" and towards the notions of "perspectivalness" and "transparency." This serves to highlight the role of Metzinger's "phenomenal model of the intentionality relation" (PMIR) in explaining consciousness, and it helps to locate Metzinger's theory in relation to other naturalistic theories of consciousness.
An important objection to the "higher-order" theory of consciousness turns on the possibility of higher-order misrepresentation. I argue that the objection fails because it illicitly assumes a characterization of consciousness explicitly rejected by HO theory. This in turn raises the question of what justifies an initial characterization of the data a theory of consciousness must explain. I distinguish between intrinsic and extrinsic characterizations of consciousness, and I propose several desiderata a successful characterization of consciousness must meet. I then defend the particular extrinsic characterization of the HO theory, the "transitivity principle," against its intrinsic rivals, thereby showing that the misrepresentation objection conclusively falls short.
The same-order representation theory of consciousness holds that conscious mental states represent both the world and themselves. This complex representational structure is posited in part to avoid a powerful objection to the more traditional higher-order representation theory of consciousness. The objection contends that the higher-order theory fails to account for the intimate relationship that holds between conscious states and our awareness of them--the theory 'divides the phenomenal labor' in an illicit fashion. This 'failure of intimacy' is exposed by the possibility of misrepresentation by higher-order states. In this paper, I argue that despite appearances, the same-order theory fails to avoid the objection, and thus also has troubles with intimacy.
As Gibson (1982) correctly points out, despite Quine’s brief flirtation with a “mitigated phenomenalism” (Gibson’s phrase) in the late 1940s and early 1950s, Quine’s ontology of 1953 (“On Mental Entities”) and beyond left no room for non-physical sensory objects or qualities. Anyone familiar with the contemporary neo-dualist qualia-freak-fest might wonder why Quinean lessons were insufficiently transmitted to the current generation.
Most materialist responses to the zombie argument against materialism take either a “type-A” or “type-B” approach: they either deny the conceivability of zombies or accept their conceivability while denying their possibility. However, a “type-Q” materialist approach, inspired by Quinean suspicions about a priority and modal entailment, rejects the sharp line between empirical and conceptual truths needed for the traditional responses. In this paper, I develop a type-Q response to the zombie argument, one stressing the theory-laden nature of our conceivability and possibility intuitions. I argue that our first-person access to the conscious mind systematically misleads us into thinking that the distinctive qualities of conscious experience, qualia, are nonfunctional. Qualia, I contend, are functional, even though they do not seem to be. To support my claim, I introduce the “meditations” of René Descartes’ zombie twin. This establishes the plausibility of an appearance/reality distinction for consciousness, and it undermines various anti-materialist objections based on privileged first-person access. I conclude that the best overall theory posits an appearance/reality distinction for qualia, and this, for the type-Q materialist, is decisive.
Some theorists approach the Gordian knot of consciousness by proclaiming its inherent tangle and mystery. Others draw out the sword of reduction and cut the knot to pieces. Philosopher Thomas Metzinger, in his important new book, Being No One: The Self-Model Theory of Subjectivity, instead attempts to disentangle the knot one careful strand at a time. The result is an extensive and complex work containing almost 700 pages of philosophical analysis, phenomenological reflection, and scientific data. The text offers a sweeping and comprehensive tour through the entire landscape of consciousness studies, and it lays out Metzinger's rich and stimulating theory of the subjective mind. Metzinger's skilled integration of philosophy and neuroscience provides a valuable framework for interdisciplinary research on consciousness. Metzinger's overall goal in Being No One is to defend a representational theory of subjectivity, one that reduces subjective mental processes to representational mental processes. Subjective experiences take place when there is a conscious perspective, an active first-person point of view.
What happens when a psychologist who’s spent the last 30 years developing a method of introspective sampling and a philosopher whose central research project is casting skeptical doubt on the accuracy of introspection write a book together? The result, Hurlburt & Schwitzgebel’s thought-provoking Describing Inner Experience?, is both encouraging and disheartening. Encouraging, because the book is a fine example of fruitful and open-minded interdisciplinary engagement; disheartening, because it makes clear just how difficult it is to justify the accuracy of introspective methods in psychology and philosophy. And since debates in consciousness studies largely turn on fine points of introspective detail, this is no minor methodological stumbling block.
… the subjective appearance of unity, but respects the actual and potential disunity of the brain processes that underwrite consciousness. … unity can be adequately dealt with by the theory. I will close by briefly considering some worries about eliminativism that often accompany discussions of unity and consciousness.
When you have ruled everything else out, then what you are left with, no matter how improbable, must be the truth. This adage from Doyle describes the path taken by Leopold Stubenberg in his book, Consciousness and Qualia. He spends most of the work critically examining and then discarding potential explications of consciousness before finally, in the last chapter, offering his own theory, carefully selected to avoid the pitfalls that did in rival accounts. He delivers a bold and simple slogan that distills the essence of his view: “To be conscious is to have qualia” (262).
Over the last quarter century or so, no one has done more to shape debate in the philosophy of mind and cognitive science than Jerry Fodor. He is best known for championing the Computational Theory of Mind (CTM), the view that thinking consists of computations over syntactically structured mental representations (Fodor, 1975). He has also developed the idea that the mind is partially made up of isolated mechanisms called “modules” that employ innate databases informationally encapsulated from the rest of the mind (Fodor, 1983).
Theorizing in ecology and evolution often proceeds via the construction of multiple idealized models. To determine whether a theoretical result actually depends on core features of the models and is not an artifact of simplifying assumptions, theorists have developed the technique of robustness analysis, the examination of multiple models looking for common predictions. A striking example of robustness analysis in ecology is the discovery of the Volterra Principle, which describes the effect of general biocides in predator-prey systems. This paper details the discovery of the Volterra Principle and the demonstration of its robustness. It considers the classical ecology literature on robustness and introduces two individual-based models of predation, which are used to further analyze the Volterra Principle. The paper also introduces a distinction between parameter robustness, structural robustness, and representational robustness, and demonstrates that the Volterra Principle exhibits all three kinds of robustness. *Received September 2006; revised May 2007. ‡Earlier versions of this paper were presented at the Australasian Association of Philosophy, the London School of Economics, and the University of Bristol. The authors wish to thank those audiences as well as Patrick Forber, Ken Waters, Deena Skolnick Weisberg, Uri Wilensky, and Bill Wimsatt for many helpful comments. Special thanks to Giacomo Sillari for his assistance in translating Volterra's original paper and his insightful thoughts about Volterra's aims and methods. Some of the research in this paper was supported by NSF grant SES-0620887 to MW.
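The Volterra Principle mentioned in this abstract can be illustrated with the classical Lotka-Volterra equations. The following is a minimal sketch with hypothetical parameter values, not the authors' individual-based models: a general biocide kills both species indiscriminately, which lowers the prey growth rate and raises the predator death rate, shifting the equilibrium in favor of the prey.

```python
# Classical Lotka-Volterra system:
#   dx/dt = a*x - b*x*y   (prey x),   dy/dt = -c*y + d*x*y   (predator y).
# Its nonzero equilibrium is x* = c/d, y* = a/b.

def equilibrium(a, b, c, d):
    """Return the nonzero (prey, predator) equilibrium of Lotka-Volterra."""
    return c / d, a / b

# Hypothetical parameter values, chosen only for illustration.
a, b, c, d = 1.0, 0.1, 0.5, 0.02
prey0, pred0 = equilibrium(a, b, c, d)

# A general biocide of intensity k lowers prey growth (a -> a - k) and
# raises predator mortality (c -> c + k).
k = 0.2
prey1, pred1 = equilibrium(a - k, b, c + k, d)

# Volterra Principle: the biocide raises the prey equilibrium and lowers
# the predator equilibrium.
print(prey1 > prey0, pred1 < pred0)  # True True
```

The principle is "structural" in the sense the abstract describes: it follows from the equilibrium expressions themselves, not from the particular numbers chosen here.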
Ned Block argues that the higher-order (HO) approach to explaining consciousness is ‘defunct’ because a prominent objection (the ‘misrepresentation objection’) exposes the view as ‘incoherent’. What’s more, a response to this objection that I’ve offered elsewhere (Weisberg 2010) fails because it ‘amounts to abusing the notion of what-it’s-like-ness’ (xxx). In this response, I wish to plead guilty as charged. Indeed, I will continue herein to abuse Block’s notion of what-it’s-like-ness. After doing so, I will argue that the HO approach accounts for the sense of what-it’s-like-ness that matters in a theory of consciousness. I will also argue that the only incoherence present in the HO theory is that generated by embracing Block’s controversial notion of what-it’s-like-ness, something no theorist of any stripe ought to do. Block is famous for (among other things) having introduced the notion of ‘phenomenal consciousness’ into contemporary philosophy of mind (Block 1995). This term is widely employed in the philosophical literature, and it even appears in the empirical literature. But widespread usage has brought about divergent interpretations of the term. We can distinguish a ‘moderate’ and a ‘zealous’ reading of ‘phenomenal consciousness’. On the moderate reading, ‘phenomenal consciousness’ just means ‘experience’. Many people have embraced this sense of the term and use it to roughly pick out conscious experience involving sensory quality (states like conscious visual experiences or conscious pains, for example). On the zealous reading, however, phenomenal consciousness is held to be ‘distinct from any cognitive, intentional, or functional property’ (Block 1995: 234). That is, any explanation of phenomenal consciousness in exclusively cognitive, intentional, or functional terms will fail to capture, without remainder, what is really distinctive about phenomenal consciousness. Block, of course, is fully clear about embracing the zealous reading; indeed, his initial introduction of the notion is in those terms. The same ambiguity occurs with the much-used (and abused) idea of ‘what-it’s-like-ness’.
I argue that the rationale behind the fine-tuning argument for design is self-undermining, refuting the argument’s own premise that fine-tuning is to be expected given design. In (Weisberg 2010) I argued on informal grounds that this premise is unsupported. White (2011) countered that it can be derived from three plausible assumptions. But White’s third assumption is based on a fallacious rationale, and is even objectionable by the design theorist’s own lights. The argument that shows this, the argument from divine indifference, simultaneously exposes the fine-tuning argument’s self-undermining character. The same argument also answers Bradley’s (forthcoming) reply to my earlier objection.
This is my reply to Josh Weisberg, Robert Van Gulick, and William Seager, published in JCS vol. 20, 2013. This symposium grew out of an author-meets-critics session at the Central APA conference in 2013 on my 2012 book The Consciousness Paradox (MIT Press). Topics covered include higher-order thought (HOT) theory, my own "wide intrinsicality view," the problem of misrepresentation, targetless HOTs, conceptualism, introspection, and the transitivity principle.
The covalent bond, a difficult concept to define precisely, plays a central role in chemical predictions, interventions, and explanations. I investigate the structural conception of the covalent bond, which says that bonding is a directional, submolecular region of electron density, located between individual atomic centers and responsible for holding the atoms together. Several approaches to constructing molecular models are considered in order to determine which features of the structural conception of bonding, if any, are robust across these models. Key components of the structural conception are absent in all but the simplest quantum mechanical models of molecular structure, seriously challenging the conception’s viability.
I have learned a lot from Josh Weisberg’s substantial criticism in his well-crafted and systematic commentary. Unfortunately, I have to concede many of the points he intelligently makes. But I am also flattered by the way he ultimately uses his criticism to emphasize some of those aspects of the theory that can perhaps count as the core of my own genuine contribution to the problem, and nicely turns them back against me. And I am certainly grateful for a whole range of helpful clarifications.
Nobel laureate Roald Hoffmann's contributions to chemistry are well known. Less well known, however, is that over a career that spans nearly fifty years, Hoffmann has thought and written extensively about a wide variety of other topics, such as chemistry's relationship to philosophy, literature, and the arts, including the nature of chemical reasoning, the role of symbolism and writing in science, and the relationship between art and craft and science. In Roald Hoffmann on the Philosophy, Art, and Science of Chemistry, Jeffrey Kovac and Michael Weisberg bring together twenty-eight of Hoffmann's most important essays. Gathered here are Hoffmann's most philosophically significant and interesting essays and lectures, many of which are not widely accessible. In essays such as "Why Buy That Theory," "Nearly Circular Reasoning," "How Should Chemists Think," "The Metaphor, Unchained," "Art in Science," and "Molecular Beauty," we find the mature reflections of one of America's leading scientists. Organized under the general headings of Chemical Reasoning and Explanation, Writing and Communicating, Art and Science, Education, and Ethics, these stimulating essays provide invaluable insight into the teaching and practice of science.
Weisberg identifies the risks, across a two-thousand-year span of Western history, of overly flexible responses to crises and perceived emergencies. So ensconced is the norm of infinite openness to ideas and changing circumstances that, he argues, his readers need to work hard to resist the tendency of others to fold their tents and betray their own deepest and soundest values when challenged to do so by "new" conditions.
… one takes to be the most salient, any pair could be judged more similar to each other than to the third. Goodman uses this second problem to show that there can be no context-free similarity metric, either in the trivial case or in a scientifically …
Philosophers of science increasingly recognize the importance of idealization: the intentional introduction of distortion into scientific theories. Yet this recognition has not yielded consensus about the nature of idealization. The literature of the past thirty years contains disparate characterizations and justifications, but little evidence of convergence towards a common position.
Many standard philosophical accounts of scientific practice fail to distinguish between modeling and other types of theory construction. This failure is unfortunate because there are important contrasts among the goals, procedures, and representations employed by modelers and other kinds of theorists. We can see some of these differences intuitively when we reflect on the methods of theorists such as Vito Volterra and Linus Pauling on the one hand, and Charles Darwin and Dmitri Mendeleev on the other. Much of Volterra's and Pauling's work involved modeling; much of Darwin's and Mendeleev's did not. In order to capture this distinction, I consider two examples of theory construction in detail: Volterra's treatment of post-WWI fishery dynamics and Mendeleev's construction of the periodic system. I argue that modeling can be distinguished from other forms of theorizing by the procedures modelers use to represent and to study real-world phenomena: indirect representation and analysis. This differentiation between modelers and non-modelers is one component of the larger project of understanding the practice of modeling, its distinctive features, and the strategies of abstraction and idealization it employs.
Because of its complexity, contemporary scientific research is almost always tackled by groups of scientists, each of which works in a different part of a given research domain. We believe that understanding scientific progress thus requires understanding this division of cognitive labor. To this end, we present a novel agent-based model of scientific research in which scientists divide their labor to explore an unknown epistemic landscape. Scientists aim to climb uphill in this landscape, where elevation represents the significance of the results discovered by employing a research approach. We consider three different search strategies scientists can adopt for exploring the landscape. In the first, scientists work alone and do not let the discoveries of the community as a whole influence their actions. This is compared with two social research strategies, which we call the follower and maverick strategies. Followers are biased towards what others have already discovered, and we find that pure populations of these scientists do less well than scientists acting independently. However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform both of the other strategies. Finally, we show that in mixed populations, mavericks stimulate followers to greater levels of epistemic production, making polymorphic populations of mavericks and followers ideal in many research domains.
Modelers often rely on robustness analysis, the search for predictions common to several independent models. Robustness analysis has been characterized and championed by Richard Levins and William Wimsatt, who see it as central to modern theoretical practice. The practice has also been severely criticized by Steven Orzack and Elliott Sober, who claim that it is a nonempirical form of confirmation, effective only under unusual circumstances. This paper addresses Orzack and Sober's criticisms by giving a new account of robustness analysis and showing how the practice can identify robust theorems. Once the structure of robust theorems is clearly articulated, it can be shown that such theorems have a degree of confirmation, despite the lack of direct empirical evidence for their truth.
The study of insight in problem solving and creative thinking has seen an upsurge of interest in the last 30 years. Current theorising concerning insight has taken one of two tacks. The special-process view, which grew out of the Gestalt psychologists’ theorising about insight, proposes that insight is the result of a dedicated set of processes that is activated by the individual's reaching impasse while trying to deal with a problematic situation. In contrast, the business-as-usual view argues that insight is brought about by the same processes that underlie ordinary thinking. Although those two views are typically treated as being in opposition, it has recently been proposed that a complete understanding of insight will require bringing together aspects of both views. The present paper carries that proposal further. Critical analysis of those two viewpoints demonstrates that each has a positive contribution to make to our understanding of insight, but also is …
The bootstrapping problem poses a general challenge, afflicting even strongly internalist theories. Even if one must always know that one’s source is reliable to gain knowledge from it, bootstrapping is still possible. I survey some solutions internalists might offer and defend the one I find most plausible: that bootstrapping involves an abuse of inductive reasoning akin to generalizing from a small or biased sample. I also argue that this solution is equally available to the reliabilist. The moral is that the issues raised by bootstrapping are orthogonal to questions about internalism and basic knowledge, having more to do with the nature of good inductive reasoning.
Inference to the Best Explanation (IBE) and Bayesianism are our two most prominent theories of scientific inference. Are they compatible? Van Fraassen famously argued that they are not, concluding that IBE must be wrong since Bayesianism is right. Writers since then, from both the Bayesian and explanationist camps, have usually considered van Fraassen’s argument to be misguided, and have plumped for the view that Bayesianism and IBE are actually compatible. I argue that van Fraassen’s argument is actually not so misguided, and that it causes more trouble for compatibilists than is typically thought. Bayesianism, in its dominant, subjectivist form, can only be made compatible with IBE if IBE is made subservient to conditionalization in a way that robs IBE of much of its substance and interest. If Bayesianism and IBE are to be fit together, I argue, a strongly objective Bayesianism is the preferred option. I go on to sketch this objectivist, IBE-based Bayesianism, and offer some preliminary suggestions for its development.
Despite their best efforts, scientists may be unable to construct models that simultaneously exemplify every theoretical virtue. One explanation for this is the existence of tradeoffs: relationships of attenuation that constrain the extent to which models can have such desirable qualities. In this paper, we characterize three types of tradeoffs theorists may confront. These characterizations are then used to examine the relationships between parameter precision and two types of generality. We show that several of these relationships exhibit tradeoffs and discuss what consequences those tradeoffs have for theoretical practice.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
In this paper, I argue against the claim recently defended by Josh Weisberg that a certain version of the self-representational approach to phenomenal consciousness cannot avoid a set of problems that have plagued higher-order approaches. These problems arise specifically for theories that allow for higher-order misrepresentation or—in the domain of self-representational theories—self-misrepresentation. In response to Weisberg, I articulate a self-representational theory of phenomenal consciousness according to which it is contingently impossible for self-representations tokened in the context of a conscious mental state to misrepresent their objects. This contingent infallibility allows the theory to both acknowledge the (logical) possibility of self-misrepresentation and avoid the problems of self-misrepresentation. Expanding further on Weisberg’s work, I consider and reveal the shortcomings of three other self-representational models—put forward by Kriegel, Van Gulick, and Gennaro—in order to show that each indicates the need for this sort of infallibility. I then argue that contingent infallibility is in principle acceptable on naturalistic grounds only if we attribute (1) a neo-Fregean kind of directly referring, indexical content to self-representational mental states and (2) a certain ontological structure to the complex conscious mental states of which these indexical self-representations are a part. In these sections I draw on ideas from the work of Perry and Kaplan to articulate the context-dependent semantic structure of inner-representational states.
Clark and Shackel have recently argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Whatever the merits of these solutions, they go beyond accepted decision theory, even contradicting it in the peeking case. Thus if we are to take their solutions seriously, we must understand Clark and Shackel to be proposing a revision of standard decision theory. Understood as such, we will argue, their proposal is both implausible and unnecessary.
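The expected-value calculation at the heart of the paradox can be made concrete with a toy enumeration. This sketch uses hypothetical amounts and is not drawn from Clark and Shackel's paper: the naive argument ("the other envelope holds 2X or X/2, each with probability 1/2, so swapping is worth 5X/4") conflicts with the fact that, under any definite prior over envelope pairs, keeping and swapping have the same expected value.

```python
from fractions import Fraction

# Hypothetical prior: the smaller amount x is 2, 4, or 8, equally likely.
# Each pair of envelopes holds (x, 2x), and you are equally likely to be
# holding either one.
smaller_amounts = [Fraction(2), Fraction(4), Fraction(8)]

keep_total = Fraction(0)
swap_total = Fraction(0)
n_cases = 0
for x in smaller_amounts:
    for held, other in [(x, 2 * x), (2 * x, x)]:  # either envelope in hand
        keep_total += held
        swap_total += other
        n_cases += 1

expected_keep = keep_total / n_cases
expected_swap = swap_total / n_cases
print(expected_keep == expected_swap)  # True: no gain from swapping
```

Exact rational arithmetic (`Fraction`) is used so the equality is literal rather than approximate.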
This paper is an interpretation and defense of Richard Levins’ “The Strategy of Model Building in Population Biology,” which has been extremely influential among biologists since its publication 40 years ago. In this article, Levins confronted some of the deepest philosophical issues surrounding modeling and theory construction. By way of interpretation, I discuss each of Levins’ major philosophical themes: the problem of complexity, the brute-force approach, the existence and consequence of tradeoffs, and robustness analysis. I argue that Levins’ article is concerned, at its core, with justifying the use of multiple, idealized models in population biology.
Forty years ago, Bayesian philosophers were just catching a new wave of technical innovation, ushering in an era of scoring rules, imprecise credences, and infinitesimal probabilities. Meanwhile, down the hall, Gettier’s 1963 paper was shaping a literature with little obvious interest in the formal programs of Reichenbach, Hempel, and Carnap, or their successors like Jeffrey, Levi, Skyrms, van Fraassen, and Lewis. And how Bayesians might accommodate the discourses of full belief and knowledge was but a glimmer in the eye of Isaac Levi. Forty years later, scoring rules, imprecise credences, and infinitesimal probabilities are all the rage. And the formal and “informal” traditions are increasingly coming together as Bayesian arguments spill over into debates about the foundations of empirical knowledge, skepticism, and more. Relatedly, Bayesian interest in full belief and knowledge has never been greater. Much more besides has happened in the last forty years of Bayesian philosophy …
Recent proposals that frame norms of action in terms of knowledge have been challenged by Bayesian decision theorists. Bayesians object that knowledge-based norms conflict with the highly successful and established view that rational action is rooted in degrees of belief. I argue that the knowledge-based and Bayesian pictures are not as incompatible as these objectors have made out. Attending to the mechanisms of practical reasoning exposes space for both knowledge and degrees of belief to play their respective roles.
Conditionalization and Jeffrey Conditionalization cannot simultaneously satisfy two widely held desiderata on rules for empirical learning. The first desideratum is confirmational holism, which says that the evidential import of an experience is always sensitive to our background assumptions. The second desideratum is commutativity, which says that the order in which one acquires evidence shouldn't affect what conclusions one draws, provided the same total evidence is gathered in the end. (Jeffrey) Conditionalization cannot satisfy either of these desiderata without violating the other. This is a surprising problem, and I offer a diagnosis of its source. I argue that (Jeffrey) Conditionalization is inherently anti-holistic in a way that is just exacerbated by the requirement of commutativity. The dilemma is thus a superficial manifestation of (Jeffrey) Conditionalization's fundamentally anti-holistic nature.
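The commutativity failure for Jeffrey Conditionalization can be seen in a minimal toy example with hypothetical numbers, not taken from the paper: when two experiences bear on the same partition, the later Jeffrey update simply installs its new partition probabilities, so the order of the experiences changes the final credence.

```python
def jeffrey(likelihood, q):
    """Jeffrey Conditionalization: P'(A) = sum_i P(A|E_i) * q_i.
    Rigidity: the conditional credences P(A|E_i) are held fixed."""
    return sum(l * qi for l, qi in zip(likelihood, q))

likelihood = [0.9, 0.2]        # P(A|E), P(A|not-E), fixed by rigidity

# Two experiences bearing on the same partition {E, not-E}:
exp1 = [0.7, 0.3]              # experience 1 sets P(E) = 0.7
exp2 = [0.3, 0.7]              # experience 2 sets P(E) = 0.3

# Order 1: update on exp1, then exp2; the second update overrides the first.
after_1_then_2 = jeffrey(likelihood, exp2)

# Order 2: update on exp2, then exp1; now exp1 wins.
after_2_then_1 = jeffrey(likelihood, exp1)

print(after_1_then_2, after_2_then_1)  # roughly 0.41 vs 0.69: order matters
```

Whether this counts as "the same total evidence gathered in the end" is exactly what is contested in the literature; the sketch only exhibits the mechanical order-dependence of successive Jeffrey updates.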
Van Fraassen famously endorses the Principle of Reflection as a constraint on rational credence, and argues that Reflection is entailed by the more traditional principle of Conditionalization. He draws two morals from this alleged entailment. First, that Reflection can be regarded as an alternative to Conditionalization – a more lenient standard of rationality. And second, that commitment to Conditionalization can be turned into support for Reflection. Van Fraassen also argues that Reflection implies Conditionalization, thus offering a new justification for Conditionalization. I argue that neither principle entails the other, and thus neither can be used to motivate the other in the way van Fraassen says. There are ways to connect Conditionalization to Reflection, but these connections depend on poor assumptions about our introspective access, and are not tight enough to draw the sorts of conclusions van Fraassen wants. Upon close examination, the two principles seem to be getting at two quite independent epistemic norms.
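For reference, the two principles at issue can be stated schematically; this is a standard textbook rendering, not necessarily van Fraassen's exact formulation.

```latex
% Reflection: one's credence at t in A, conditional on one's credence in A
% at a later time t' being r, should itself be r.
P_t\bigl(A \mid P_{t'}(A) = r\bigr) = r, \qquad t' > t.

% Conditionalization: upon learning exactly E (and nothing stronger), one's
% new credence in A is one's old credence in A conditional on E.
P_{\mathrm{new}}(A) = P_{\mathrm{old}}(A \mid E).
```

Reflection constrains credences at a single time via deference to one's anticipated future self; Conditionalization constrains how credences change across time, which is one way of seeing why neither obviously entails the other.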