According to a prominent claim in recent epistemology, people are less likely to ascribe knowledge to a high stakes subject for whom the practical consequences of error are severe than to a low stakes subject for whom the practical consequences of error are slight. We offer an opinionated "state of the art" on experimental research about the role of stakes in knowledge judgments. We draw on a first wave of empirical studies--due to Feltz & Zarpentine (2010), May et al. (2010), and Buckwalter (2010)--which cast doubt on folk stakes sensitivity, and a second wave of empirical studies--due to Pinillos (2012) and Sripada & Stanley (2012)--said to vindicate it, as well as new studies of our own. We conclude that the balance of evidence to date best supports folk stakes insensitivity, or that, all else equal, stakes do not affect knowledge ascription.
There is conflicting experimental evidence about whether the “stakes” or importance of being wrong affects judgments about whether a subject knows a proposition. To date, judgments about stakes effects on knowledge have been investigated using binary paradigms: responses to “low” stakes cases are compared with responses to “high stakes” cases. However, stakes or importance are not binary properties—they are scalar: whether a situation is “high” or “low” stakes is a matter of degree. So far, no experimental work has investigated the scalar nature of stakes effects on knowledge: do stakes effects increase as the stakes get higher? Do stakes effects only appear once a certain threshold of stakes has been crossed? Does the effect plateau at a certain point? To address these questions, we conducted experiments that probe for the scalarity of stakes effects using several experimental approaches. We found evidence of scalar stakes effects using an “evidence seeking” experimental design, but no evidence of scalar effects using a traditional “evidence-fixed” experimental design. In addition, using the evidence-seeking design, we uncovered a large, but previously unnoticed framing effect on whether participants are skeptical about whether someone can know something, no matter how much evidence they have. The rate of skeptical responses and the rate at which participants were willing to attribute “lazy knowledge”—that someone can know something without having to check—were themselves subject to a stakes effect: participants were more skeptical when the stakes were higher, and more prone to attribute lazy knowledge when the stakes were lower. We argue that the novel skeptical stakes effect provides resources to respond to criticisms of the evidence-seeking approach that argue that it does not target knowledge.
In the remainder of this article, we will disarm an important motivation for epistemic contextualism and interest-relative invariantism. We will accomplish this by presenting a stringent test of whether there is a stakes effect on ordinary knowledge ascription. Having shown that, even on a stringent way of testing, stakes fail to impact ordinary knowledge ascription, we will conclude that we should take another look at classical invariantism. Here is how we will proceed. Section 1 lays out some limitations of previous research on stakes. Section 2 presents our study and concludes that there is little evidence for a substantial stakes effect. Section 3 responds to objections. The conclusion clears the way for classical invariantism.
Defenses of pragmatic encroachment commonly rely on two thoughts: first, that the gap between one’s strength of epistemic position on p and perfect strength sometimes makes a difference to what one is justified in doing, and second, that the higher the stakes, the harder it is to know. It is often assumed that these ideas complement each other. This chapter shows that these ideas are far from complementary. Along the way, a variety of strategies for regimenting the somewhat inchoate notion of stakes are indicated, and some troubling cases for pragmatic encroachment raised.
Several authors have recently endorsed the thesis that there is what has been called pragmatic encroachment on knowledge—in other words, that two people who are in the same situation with respect to truth-related factors may differ in whether they know something, due to a difference in their practical circumstances. This paper aims not to defend this thesis, but to explore how it could be true. What I aim to do is to show how practical factors could play a role in defeating knowledge by defeating epistemic rationality—the very kind of rationality that is entailed by knowledge, and in which Pascalian considerations do not play any role—even though epistemic rationality consists in having adequate evidence.
In their chapter “Knowledge, Practical Adequacy, and Stakes,” Charity Anderson and John Hawthorne present several challenges to the doctrine of pragmatic encroachment. This brief reply to their chapter aims at two things. First, it argues that there is a sense in which their case against pragmatic encroachment is a bit weaker, and another sense in which that case is much stronger, than Anderson and Hawthorne’s own argument would suggest. Second, it highlights and then builds upon their extremely interesting reflections on one sort of practical matter that has not received proper attention in the literature: the epistemic significance of double-checking. This is done with an eye towards pointing in the direction of further work.
What can rational deliberation indicate about belief? Belief clearly influences deliberation. The principle that rational belief is stake-invariant rules out at least one way that deliberation might influence belief. The principle is widely, if implicitly, held in work on the epistemology of categorical belief, and it is built into the model of choice-guiding degrees of belief that comes to us from Ramsey and de Finetti. Criticisms of subjective probabilism include challenges to the assumption of additive values (the package principle) employed by defenses of probabilism. But the value-interaction phenomena often cited in such challenges are excluded by stake-invariance. A comparison with treatments of categorical belief suggests that the appeal to stake-invariance is not ad hoc. Whether or not to model belief as stake-invariant is a question not settled here.
The term “know” is one of the ten most common verbs in English, and yet a central aspect of its usage remains mysterious. Our willingness to ascribe knowledge depends not just on epistemic factors such as the quality of our evidence. It also depends on seemingly non-epistemic factors. For instance, we become less inclined to ascribe knowledge when it’s important to be right, or once our attention is drawn to possible sources of error. Accounts of this phenomenon proliferate, but no consensus has been achieved, decades of research notwithstanding. Alexander Dinges offers a fresh examination of this ongoing debate. After reviewing and complementing relevant data from both armchair and experimental philosophy, he assesses extant accounts of this data including semantic, metaphysical, pragmatic, doxastic as well as more recent psychological accounts. Against this background, he offers a novel psychological account based on the idea that non-epistemic factors affect estimates of probability.
The idea that beliefs may be stake-sensitive is explored. This is the idea that the strength with which a single, persistent belief is held may vary and depend upon what the believer takes to be at stake. The stakes in question are tied to the truth of the belief—not, as in Pascal’s wager and other cases, to the belief’s presence. Categorical beliefs and degrees of belief are considered; both kinds of account typically exclude the idea and treat belief as stake-invariant, though an exception is briefly described. The role of the assumption of stake-invariance in familiar accounts of degrees of belief is also discussed, and morals are drawn concerning finite and countable Dutch book arguments.
I distinguish two different kinds of practical stakes associated with propositions. The W-stakes track what is at stake with respect to whether the proposition is true or false. The A-stakes track what is at stake with respect to whether an agent believes the proposition. This poses a dilemma for those who claim that whether a proposition is known can depend on the stakes associated with it. Only the W-stakes reading of this view preserves intuitions about knowledge-attributions, but only the A-stakes reading preserves the putative link between knowledge and practical reasoning that has motivated it.
It is widely accepted that our initial intuitions regarding knowledge attributions in stakes-shifting cases are best explained by standards variantism, the view that the standards for knowledge may vary with contexts in an epistemically interesting way. Against standards variantism, I argue that no prominent account of the standards for knowledge can explain our intuitions regarding stakes-shifting cases. I argue that the only way to preserve our initial intuitions regarding such cases is to endorse position variantism, the view that one’s epistemic position may vary with contexts in an epistemically interesting way. Some have argued that position variantism is incompatible with intellectualism. In reply, I point out that position variantism and intellectualism are compatible, if one’s truth-relevant factors with respect to p can vary with contexts in an epistemically interesting way.
Trust is a kind of risky reliance on another person. Social scientists have offered two basic accounts of trust: predictive expectation accounts and staking accounts. Predictive expectation accounts identify trust with a judgment that performance is likely. Staking accounts identify trust with a judgment that reliance on the person's performance is worthwhile. I argue that these two views of trust are different; that the staking account is preferable to the predictive expectation account on grounds of intuitive adequacy and coherence with plausible explanations of action; and that there are counterexamples to both accounts. I then set forward an additional necessary condition on trust, according to which trust implies a moral expectation. When A trusts B to do x, A ascribes to B an obligation to do x, and holds B to this obligation. This Moral Expectation view throws new light on some of the consequences of misplaced trust. I use the example of physicians’ defensive behavior/defensive medicine to illustrate this final point.
Trust is a kind of risky reliance on another person. Social scientists have offered two basic accounts of trust: predictive expectation accounts and staking (betting) accounts. Predictive expectation accounts identify trust with a judgment that performance is likely. Staking accounts identify trust with a judgment that reliance on the person’s performance is worthwhile. I argue (1) that these two views of trust are different, (2) that the staking account is preferable to the predictive expectation account on grounds of intuitive adequacy and coherence with plausible explanations of action, and (3) that there are counterexamples to both accounts. I then set forward an additional necessary condition on trust, according to which trust implies a moral expectation. The content of the moral expectation is this: When A trusts B to do x, A ascribes an obligation to B to do x, and holds B to this obligation. This moral expectation account throws new light on some of the consequences of misplaced trust. I use the example of physicians’ defensive behavior to illustrate this final point.
Historical patterns of discrimination seem to present us with conflicts between what morality requires and what we epistemically ought to believe. I will argue that these cases lend support to the following nagging suspicion: that the epistemic standards governing belief are not independent of moral considerations. We can resolve these seeming conflicts by adopting a framework wherein standards of evidence for our beliefs to count as justified can shift according to the moral stakes. On this account, holding a paradigmatically racist belief reflects a failure not only to attend to the epistemic risk of being wrong, but also to attend to the distinctively moral risk of wronging others given what we believe.
Contextualists and Subject Sensitive Invariantists often cite the knowledge norm of assertion as part of their argument. They claim that the knowledge norms, in conjunction with our intuitions about when a subject is properly asserting in low or high stakes contexts, provide strong evidence that what counts as knowledge depends on practical factors. In this paper, I present new data to suggest they are mistaken in the way they think about cases involving high and low stakes, and I show how insensitive invariantists can explain the data. I exploit recent work done on the distinction between flouting a norm and being blamed for that violation to formulate a rigorous theory of rational expected blameworthiness that allows insensitive invariantists to explain the data cited.
According to the fallibilist, it is possible for us to know things when our evidence doesn't entail that our beliefs are correct. Even if there is some chance that we're mistaken about p, we might still know that p is true. Fallibilists will tell you that an important virtue of their view is that infallibilism leads to skepticism. In this paper, we'll see that fallibilist impurism has considerable skeptical consequences of its own. We've missed this because we've focused our attention on the high-stakes cases that they discuss in trying to motivate their impurism about knowledge. We'll see this once we think about the fallibilist impurist's treatment of low-stakes cases.
“Pragmatic encroachers” about knowledge generally advocate two ideas: (1) you can rationally act on what you know; (2) knowledge is harder to achieve when more is at stake. Charity Anderson and John Hawthorne have recently argued that these two ideas may not fit together so well. I extend their argument by working out what “high stakes” would have to mean for the two ideas to line up, using decision theory.
In this paper, I aim to establish that, according to almost all democratic theories, instrumentalist considerations often dominate intrinsic proceduralist considerations in our decisions about whether to make extensive use of undemocratic procedures. The reason for this is that almost all democratic theorists, including philosophers commonly thought to be intrinsic proceduralists, accept ‘High Stakes Instrumentalism’ (HSI). According to HSI, we ought to use undemocratic procedures in order to prevent high stakes errors: very substantively bad or unjust outcomes. However, democratically produced severe substantive injustice is much more common than many proponents of HSI have realised. Proponents of HSI must accept that if undemocratic procedures are the only way to avoid these high stakes errors, then we ought to make extensive use of undemocratic procedures. Consequently, according to almost all democratic theorists, democratic theory ought, for practical purposes, to be reoriented towards difficult moral and empirical questions about the instrumental quality of procedures. Moreover, this is potentially very practically important because if there are available instrumentally superior undemocratic procedures, then wholesale institutional reform is required. This is one of the most potentially practically important findings of normative democratic theory. In spite of this, no one has yet explicitly recognised it.
Orthodoxy in the contemporary debate on knowledge ascriptions holds that the truth‐value of knowledge ascriptions is purely a matter of truth‐relevant factors. One familiar challenge to orthodoxy comes from intuitive practical factor effects. But practical factor effects turn out to be hard to confirm in experimental studies, and where they have been confirmed, they may seem easy to explain away. We suggest a novel experimental paradigm to show that practical factor effects exist. It trades on the idea that people retract knowledge attributions when practical factors shift. We also explain why the resulting data raise a serious challenge to orthodoxy.
The ethical practices of credit rating agencies (CRAs), particularly following the 2008 financial crisis, have been subject to extensive analysis by economists, ethicists, and policymakers. We raise a novel issue facing CRAs that has to do with a problem concerning the transmission of epistemic status of ratings from CRAs to the beneficiaries of the ratings, and use it to provide a new challenge for regulators. Building on recent work in philosophy, we argue that since CRAs have different stakes than the beneficiaries of the ratings in the ratings being accurate, what counts as knowledge concerning credit risk for a CRA may not count as knowledge for the beneficiary. Further, as it stands, many institutional investors are bound by law to make some of their investment decisions dependent on the ratings of officially recognized CRAs. We argue that the observation that the epistemic status of ratings does not transmit from CRAs to beneficiaries makes salient a new challenge for those who think current regulation regarding the CRAs is prudentially justified, namely, to show that the harm caused by acting on a rating that does not have epistemic status for beneficiaries is compensated by the benefit from them acting on a CRA rating that does have epistemic status for the CRA. Unlike most other commentators, therefore, we offer a defeasible reason to drop references to CRAs in prudential regulation of the financial industry.
Research in experimental epistemology has revealed a great, yet unsolved mystery: why do ordinary evaluations of knowledge ascribing sentences involving stakes and error appear to diverge so systematically from the predictions professional epistemologists make about them? Two recent solutions to this mystery by Keith DeRose (2011) and N. Ángel Pinillos (2012) argue that these differences arise due to specific problems with the designs of past experimental studies. This paper presents two new experiments to directly test these responses. Results vindicate previous findings by suggesting that (i) the solution to the mystery is not likely to be based on the empirical features these theorists identify, and (ii) that the salience of ascriber error continues to make the difference in folk ratings of third-person knowledge ascribing sentences.
Evidence of risk aversion in laboratory settings over small stakes leads to a priori implausible levels of risk aversion over large stakes under certain assumptions. One core assumption in statements of this calibration puzzle is that small-stakes risk aversion is observed over all levels of wealth, or over a “sufficiently large” range of wealth. Although this assumption is viewed as self-evident from the vast experimental literature showing risk aversion over laboratory stakes, it actually requires that lab wealth be varied for a given subject as one evaluates the risk attitudes of the subject. We consider evidence from a simple design that tests this assumption, and find that the assumption is strikingly rejected for a large sample of subjects from a population of college students. We conclude that the implausible predictions that flow from these assumptions do not apply to one specialized population widely used to study economic behavior in laboratory experiments.
According to the fallibilist, it is possible for us to know things when our evidence doesn't entail that our beliefs are correct. Even if there is some chance that we're mistaken about p, we might still know that p is true. Fallibilists will tell you that an important virtue of their view is that infallibilism leads to skepticism. In this paper, we'll see that fallibilist impurism has considerable skeptical consequences of its own. We've missed this because we've focused our attention on the high-stakes cases that they discuss in trying to motivate their impurism about knowledge. We'll see this once we think about the fallibilist impurist's treatment of low-stakes cases. […] when error would be especially disastrous, few possibilities are properly ignored.
I trace the relationship between the view that knowledge is stakes sensitive and Laurie Paul’s account of the epistemology of transformative experience. The view that knowledge is stakes sensitive comes in different flavours: one can go for subjective or objective conceptions of stakes, where subjective views take stakes to be a function of an agent’s non-factive mental states, and objective views do not. I argue that there is a tension between subjective accounts of stakes sensitivity and Paul’s epistemology of transformative experience.
This paper presents the results of a within-subject experiment testing whether an increase in the monetary stakes by a factor of 50 (which had never been done before) influences individual behavior in a simple ultimatum bargaining game. Contrary to current wisdom, we found that lowest acceptable offers stated by the responder are proportionally lower in the high-stake condition than in the low-stake condition. This result may be interpreted in terms of the type of utility functions which characterize the subjects. However, in line with prior results, we find that an important increase of the monetary stakes in the ultimatum game has no effect on the offers made by the proposer. Yet, the present research suggests that the reasons underlying these offers are quite different when the stakes are high.
People who defend “pragmatic encroachment” about knowledge generally advocate two ideas: you can rationally act according to what you know; knowledge is harder to achieve when more is at stake. In their chapter in this volume, Charity Anderson and John Hawthorne argue that these two ideas may not fit together so well. This chapter extends Anderson and Hawthorne’s argument. By applying some standard decision theory, we can calculate a precise quantity of “how much is at stake” that does fit together with knowledge and action. While this calculated quantity matches intuitions about how much is at stake in certain standard cases, in others it does not.
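The decision-theoretic move the abstract above alludes to can be illustrated with a toy calculation. This is a minimal sketch, not the authors' own formula: it assumes hypothetical utility numbers for a bank-style case and treats "what is at stake" as the utility gap between the safe fallback and the worst outcome of acting on p. All names and values below are illustrative assumptions.

```python
# Illustrative sketch (hypothetical numbers, not the chapter's formula):
# quantify "how much is at stake" as the loss an agent risks by acting
# on p when p turns out to be false.

def expected_utility(credence_p, u_if_p, u_if_not_p):
    """Expected utility of an act, given a credence in p."""
    return credence_p * u_if_p + (1 - credence_p) * u_if_not_p

# Toy bank case: "deposit later" acts on p ("the bank is open Saturday");
# "deposit now" is the safe act whose payoff does not depend on p.
credence = 0.95
act_on_p = expected_utility(credence, u_if_p=10, u_if_not_p=-100)
safe_act = expected_utility(credence, u_if_p=5, u_if_not_p=5)

# One simple stakes measure: safe payoff minus the worst-case payoff
# of acting on p (here 5 - (-100) = 105).
stakes = 5 - (-100)

print(act_on_p, safe_act, stakes)  # 4.5 5.0 105
```

On these assumed numbers, acting on p has lower expected utility than the safe act even at 0.95 credence, which is the sense in which high stakes can make a given epistemic position practically inadequate; with low-stakes utilities the ordering reverses.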
Non-governmental organizations increasingly hold firms responsible for harm caused in their supply chains. In this paper, we explore how firms and NGOs talk about cosmopolitan claims regarding supply chain responsibility (SCR). We investigate the language used by Apple and a group of Chinese NGOs as well as Adidas and the international NGO Greenpeace about the firms’ environmental responsibilities in their supply chains. We apply electronic text analytic methods to firm and NGO reports totaling over 155,000 words. We identify different conceptualizations of cosmopolitanism in this discourse: a legalistic approach to cosmopolitanism for Apple and a group of Chinese NGOs and a moralistic approach for Adidas and Greenpeace. We argue that these differences connect to the roles that the firms are expected and perhaps willing to take in SCR: legalistic discourse connects to a governmental function of rule development and enforcement; in contrast, moralistic discourse connects to a citizenship function that focuses on doing good to the global community. We discuss implications for companies’ non-market strategies and future research.
One class of central debates between normative realists appears to concern whether we should be naturalists or reductionists about the normative. However, metaethical discussion of naturalism and reduction is often inconsistent, murky, or uninformative. This can make it hard to see why commitments relative to these metaphysical categories should matter to normative realists. This paper aims to clarify the nature of these categories, and their significance in debates between normative realists. I develop and defend what I call the joint-carving taxonomy, which builds on David Lewis’ notion of elite properties. I argue that this taxonomy is clear and metaphysically interesting, and answers to distinctive taxonomic interests of normative realists. I also suggest that it has important implications for the project of adjudicating debates among normative realists.
Evaluation studies of the Bayh-Dole Act are generally concerned with the pace of innovation or the transgressions to the independence of research. While these concerns are important, I propose here to expand the range of public values considered in assessing Bayh-Dole and formulating future reforms. To this end, I first examine the changes in the terms of the Bayh-Dole debate and the drift in its design. Neoliberal ideas have had a definitive influence on U.S. innovation policy for the last thirty years, including legislation to strengthen patent protection. Moreover, the neoliberal policy agenda is articulated and justified in the interest of competitiveness. Rhetorically, this agenda equates competitiveness with economic growth and this with the public interest. Against that backdrop, I use Public Value Failure criteria to show that values such as political equality, transparency, and fairness in the distribution of the benefits of innovation, are worth considering to counter the policy drift of Bayh-Dole.
It is now 25 years since Gareth Evans introduced the distinction between conceptual and nonconceptual content in The Varieties of Reference. This is a fitting time to take stock of what has become a complex and extended debate both within philosophy and at the interface between philosophy and psychology. Unfortunately, the debate has become increasingly murky as it has become increasingly ramified. Much of the contemporary discussion does not do full justice to the powerful theoretical tool originally proposed by Evans and subsequently refined by theorists in the late 1980s and early 1990s – most effectively, I think, by Christopher Peacocke (particularly in his 1992). Even worse, significant parts of the discussion are somewhat confused. This paper makes a start on clarifying what I think ought to be the central issues in debates about nonconceptual content. I begin in §1 by pointing out how narrowly focused contemporary discussion is relative to Evans’s original discussion. We are not making as much use as we should of nonconceptual content as a tool for understanding subpersonal information processing and the complexities of its status relative to perception and thought at the personal level. In §2 I turn to what is the central focus of contemporary discussion, namely, the content of perception and identify a “master argument” for nonconceptualism based on the relation between conceptual capacities and capacities for perceptual discrimination. The aim of §3 is to clarify the relation between the claim that perception has nonconceptual content and some superficially similar claims discussed by philosophers of perception. Finally, in §4 I explain why the attention recently focused on what is sometimes called the state version of the nonconceptualist thesis seems to me to be misdirected.
This article seeks to clarify the purpose of high-stakes exams and their relationship with teaching and learning by elucidating the educational thought of the eminent neo-Confucian thinker Zhu...
It is commonly held that epistemic standards for S’s knowledge that p are affected by practical considerations, such as what is at stake in decisions that are guided by p. I defend a particular view as to why this is, referred to as “pragmatic encroachment.” I then discuss a “new argument against miracles” that uses stakes considerations in order to explore the conditions under which stakes affect the level of epistemic support that is required for knowledge. Finally, I generalize my results to include other religiously significant propositions such as “God exists” and “God does not exist.”
Drawing on their extensive research, Nichols and Berliner document and categorize the ways that high-stakes testing threatens the purposes and ideals of the American education system. For more than a decade, the debate over high-stakes testing has dominated the field of education. This passionate and provocative book provides a fresh perspective on the issue and powerful ammunition for opponents of high-stakes tests. Their analysis is grounded in the application of Campbell’s Law, which posits that the greater the social consequences associated with a quantitative indicator, the more likely it is that the indicator itself will become corrupted—and the more likely it is that the use of the indicator will corrupt the social processes it was intended to monitor. Nichols and Berliner illustrate both aspects of this “corruption,” showing how the pressures of high-stakes testing erode the validity of test scores and distort the integrity of the education system. Their analysis provides a coherent and comprehensive intellectual framework for the wide-ranging arguments against high-stakes testing, while putting a compelling human face on the data marshalled in support of those arguments.
Why do our intuitive knowledge ascriptions shift when a subject's practical interests are mentioned? Many efforts to answer this question have focused on empirical linguistic evidence for context sensitivity in knowledge claims, but the empirical psychology of belief formation and attribution also merits attention. The present paper examines a major psychological factor (called “need-for-closure”) relevant to ascriptions involving practical interests. Need-for-closure plays an important role in determining whether one has a settled belief; it also influences the accuracy of one's cognition. Given these effects, it is a mistake to assume that high- and low-stakes subjects provided with the same initial evidence are perceived to enjoy belief formation that is the same as far as truth-conducive factors are concerned. This mistaken assumption has underpinned contextualist and interest-relative invariantist treatments of cases in which contrasting knowledge ascriptions are elicited by descriptions of subjects with the same initial information and different stakes. The paper argues that intellectualist invariantism can easily accommodate such cases.
The primary appeal of stakeholder theory in business ethics derives from its promise to help solve two large and often morally difficult problems: (1) how to manage people fairly and efficiently and (2) how to determine the extent of a firm's moral responsibilities beyond its obligations to enhance its profits and economic value. This article investigates a variety of conceptual quandaries that stakeholder theory faces in addressing these two general problems. It argues that these quandaries pose intractable obstacles for stakeholder theory which prevent it from delivering on its large promises. Acknowledging that various versions of stakeholder theory have made a contribution in elucidating the complex nature of firms and business decision making, the article argues that it is time to move on. More precise explications of the nature of modern firms focusing on the application of basic moral principles to different business contexts and situations are likely to prove more accurate and useful.
Descartes's claim that the eternal truths were freely created by God is fraught with interpretive difficulties. The main arguments in the literature are classified as concerning the ontological status or the modalities of possibility and necessity of the eternal truths. The views of the principal defenders of the Creation Doctrine – Robert Desgabets, Pierre Sylvain Régis, and Antoine Le Grand – are contrasted with those of Nicolas Malebranche. In clarifying the theological, ontological, and logical terms of the debate we can see that what was at stake was the objectivity and certainty of the truths of mathematics and physics. I conclude by suggesting that this issue might fruitfully be used to clarify the disparate discussions in the contemporary literature.
Bergson's engagement with evolutionary theory was remarkably up to date with the science of his time. One century later, the scientific and social landscape is undoubtedly quite different, but some of his insights remain of critical importance for the present. This paper aims at discussing three related aspects of Bergson's philosophy of evolution and their relevance for contemporary debates: first, the stark distinction between the affirmation of the reality of change and becoming, on the one hand, and any notion of progress on the other; second, the insistence on the intimate interplay between forms of knowledge and forms of life; third, his idea that machines and organisms, technology and biology, are not separate domains but, rather, stem from and answer to the same problems and needs that living beings express. Such a Bergsonian framework may prove very helpful in reassessing the implicit assumptions of several contemporary debates on the ethical and political stakes of evolution, biosciences, and technologies, as well as the increasingly problematic boundary between “biology” and “culture.”
One challenge for moderate invariantists is to explain why we tend to deny knowledge to subjects in high stakes when the target propositions seem to be inappropriate premises for practical reasoning. According to an account suggested by Williamson, our intuitive judgments are erroneous due to an alleged failure to acknowledge the distinction between first-order and higher-order knowledge: the high-stakes subject lacks the latter but possesses the former. In this paper, I provide three objections to Williamson’s account: i) his account delivers counterintuitive verdicts about what it is appropriate for a high-stakes subject to do; ii) the high-stakes subject doesn’t need iterated knowledge in order to be regarded as appropriately relying on the relevant proposition in practical reasoning; iii) Williamson’s account doesn’t provide a good explanation of why the high-stakes subject would be blameworthy if she were relying on the relevant proposition in her practical reasoning.