While many philosophers have agreed that evidence of disagreement is a kind of higher-order evidence, this has not yet resulted in formally precise higher-order approaches to the problem of disagreement. In this paper, we outline a simple formal framework for determining the epistemic significance of a body of higher-order evidence, and use this framework to motivate a novel interpretation of the popular “equal weight view” of peer disagreement—we call it the Variably Equal Weight View (VEW). We show that VEW differs from the standard Split the Difference (SD) interpretation of the equal weight view in almost all cases of peer disagreement, and use our formal framework to explain why SD has seemed attractive but is in fact misguided. A desirable feature of VEW, we argue, is that it gives rise to plausible instances of synergy—an effect whereby the parties to a disagreement should become more (or less) confident in the disputed proposition than any of them were prior to disagreement. Lastly, we show how VEW may be generalized to cases of non-peer disagreement.
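For orientation, Split the Difference is standardly glossed as straight averaging of the peers' credences. A minimal statement (the notation is ours, not the paper's):

\[
c_{\mathrm{SD}}(p) \;=\; \frac{c_1(p) + c_2(p)}{2},
\]

where c_1(p) and c_2(p) are the two peers' pre-disagreement credences in the disputed proposition p. Since an average always lies between its inputs, SD can never yield the synergy effects described above; any view that licenses synergy, such as VEW, must therefore depart from straight averaging.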
Epistemic logics based on the possible worlds semantics suffer from the problem of logical omniscience, whereby agents are described as knowing all logical consequences of what they know, including all tautologies. This problem is doubly challenging: on the one hand, agents should be treated as logically non-omniscient, and on the other hand, as moderately logically competent. Many responses to logical omniscience fail to meet this double challenge because the concepts of knowledge and reasoning are not properly separated. In this paper, I present a dynamic logic of knowledge that models an agent’s epistemic state as it evolves over the course of reasoning. I show that the logic does not sacrifice logical competence on the altar of logical non-omniscience.
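For readers unfamiliar with the source of the problem: in the possible-worlds (Kripke) semantics, knowing φ amounts to the truth of φ in all epistemically accessible worlds, which makes knowledge closed under logical consequence automatically. Schematically (a standard textbook fact, not the paper's dynamic logic):

\[
w \models K\varphi \iff w' \models \varphi \text{ for every } w' \text{ with } wRw'.
\]

It follows that if \(\varphi_1, \ldots, \varphi_n \models \psi\), then \(K\varphi_1, \ldots, K\varphi_n \models K\psi\); in particular, \(\models \psi\) implies \(\models K\psi\), so agents are described as knowing every tautology.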
One well-known objection to the traditional Lewis-Stalnaker semantics of counterfactuals is that it delivers counterintuitive semantic verdicts for many counterpossibles (counterfactuals with necessarily false antecedents). To remedy this problem, several authors have proposed extending the set of possible worlds by impossible worlds at which necessary falsehoods may be true. Linguistic ersatz theorists often construe impossible worlds as maximal, inconsistent sets of sentences in some sufficiently expressive language. However, in a recent paper, Bjerring (2014) argues that the “extended” Lewis-Stalnaker semantics delivers the wrong truth-values for many counterpossibles if impossible worlds are required to be maximal. To make room for non-maximal or partial impossible worlds, Bjerring considers two alternative world-ontologies: either (i) we construe impossible worlds as arbitrary (maximal or partial) inconsistent sets of sentences, or (ii) we construe them as (maximal or partial) inconsistent sets of sentences that are closed and consistent with respect to some non-classical logic. Bjerring raises an objection against (i), and suggests that we opt for (ii). In this paper, I argue, first, that Bjerring’s objection against (i) conflates two different conceptions of what it means for a logic to be true at a world. Second, I argue that (ii) imposes too strong constraints on what counts as an impossible world. I conclude that linguistic ersatzists should construe impossible worlds as arbitrary (maximal or partial) inconsistent sets of sentences.
The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
In this paper, we present a new semantic framework designed to capture a distinctly cognitive or epistemic notion of meaning akin to Fregean senses. Traditional Carnapian intensions are too coarse-grained for this purpose: they fail to draw semantic distinctions between sentences that, from a Fregean perspective, differ in meaning. This has led some philosophers to introduce more fine-grained hyperintensions that allow us to draw semantic distinctions among co-intensional sentences. But the hyperintensional strategy has a flip-side: it risks drawing semantic distinctions between sentences that, from a Fregean perspective, do not differ in meaning. This is what we call the ‘new problem’ of hyperintensionality to distinguish it from the ‘old problem’ that faced the intensional theory. We show that our semantic framework offers a joint solution to both these problems by virtue of satisfying a version of Frege’s so-called ‘equipollence principle’ for sense individuation. Frege’s principle, we argue, not only captures the semantic intuitions that give rise to the old and the new problem of hyperintensionality, but also points the way to an independently motivated solution to both problems.
Don’t form beliefs on the basis of coin flips or random guesses. More generally, don’t take belief gambles: if a proposition is no more likely to be true than false given your total body of evidence, don’t go ahead and believe that proposition. Few would deny this seemingly innocuous piece of epistemic advice. But what, exactly, is wrong with taking belief gambles? Philosophers have debated versions of this question at least since the classic dispute between William Clifford and William James near the end of the nineteenth century. Here I reassess the normative standing of belief gambles from the perspective of epistemic decision theory. The main lesson of the paper is a negative one: it turns out that we need to make some surprisingly strong and hard-to-motivate assumptions to establish a general norm against belief gambles within a decision-theoretic framework. I take this to pose a dilemma for epistemic decision theory: it forces us to either make seemingly unmotivated assumptions to secure a norm against belief gambles, or concede that belief gambles can be rational after all.
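To make the decision-theoretic setting concrete, here is the generic expected-accuracy calculation for a single belief, with illustrative epistemic utilities of our own choosing: R > 0 for believing a truth, −W < 0 for believing a falsehood, and 0 for suspending judgment (a sketch of the framework, not the paper's own argument):

\[
EU(\text{believe } p) = \Pr(p)\,R - (1 - \Pr(p))\,W, \qquad EU(\text{suspend}) = 0,
\]

so believing p beats suspending iff \(\Pr(p) > W/(R + W)\). A general norm against belief gambles requires this threshold to be at least 1/2, i.e., \(W \ge R\): the disvalue of error must match or exceed the value of true belief. This illustrates the kind of substantive assumption at issue.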
Titelbaum (Oxford Studies in Epistemology, 2015) has recently argued that the Enkratic Principle is incompatible with the view that rational belief is sensitive to higher-order defeat. That is to say, if it cannot be rational to have akratic beliefs of the form “p, but I shouldn’t believe that p,” then rational beliefs cannot be defeated by higher-order evidence, which indicates that they are irrational. In this paper, I distinguish two ways of understanding Titelbaum’s argument, and argue that neither version is sound. The first version can be shown to rest on a subtle, but crucial, misconstrual of the Enkratic Principle. The second version can be resisted through careful consideration of cases of higher-order defeat. The upshot is that proponents of the Enkratic Principle are free to maintain that rational belief is sensitive to higher-order defeat.
Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
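The Bayesian form of logical omniscience falls directly out of the probability axioms. For reference (a standard fact, not specific to this paper): for any probabilistically coherent credence function Cr,

\[
\models \varphi \;\Rightarrow\; Cr(\varphi) = 1, \qquad \varphi \models \psi \;\Rightarrow\; Cr(\varphi) \le Cr(\psi),
\]

so a coherent agent is certain of every tautology and is never more confident in a premise than in its logical consequences.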
Evidentialism is the thesis, roughly, that one’s beliefs should fit one’s evidence. The enkratic principle is the thesis, roughly, that one’s beliefs should "line up" with one’s beliefs about which beliefs one ought to have. While both theses have seemed attractive to many, they jointly entail the controversial thesis that self-misleading evidence is impossible. That is to say, if evidentialism and the enkratic principle are both true, one’s evidence cannot support certain false beliefs about which beliefs one’s evidence supports. Recently, a number of epistemologists have challenged the thesis that self-misleading evidence is impossible on the grounds that misleading higher-order evidence does not have the kind of strong and systematic defeating force that would be needed to rule out the possibility of such self-misleading evidence. Here I respond to this challenge by proposing an account of higher-order defeat that does, indeed, render self-misleading evidence impossible. Central to the proposal is the idea that higher-order evidence acquires its normative force by influencing which conditional beliefs it is rational to have. What emerges, I argue, is an independently plausible view of higher-order evidence, which has the additional benefit of allowing us to reconcile evidentialism with the enkratic principle.
Epistemic instrumentalists think that epistemic normativity is just a special kind of instrumental normativity. According to them, you have epistemic reason to believe a proposition insofar as doing so is conducive to certain epistemic goals or aims—say, to believe what is true and avoid believing what is false. Perhaps the most prominent challenge for instrumentalists in recent years has been to explain, or explain away, why one’s epistemic reasons often do not seem to depend on one’s aims. This challenge can arguably be met. But a different challenge looms: instrumental reasons in the practical domain have various properties that epistemic reasons do not seem to share. In this chapter, we offer a way for epistemic instrumentalists to overcome this challenge. Our main thesis takes the form of a conditional: if we accept an independently plausible transmission principle of instrumental normativity, we can maintain that epistemic reasons in fact do share the relevant properties of practical instrumental reasons. In addition, we can explain why epistemic reasons seem to lack these properties in the first place: some properties of epistemic reasons are elusive, or easy to overlook, because we tend to think and talk about epistemic reasons in an ‘elliptical’ manner.
When one has both epistemic and practical reasons for or against some belief, how do these reasons combine into an all-things-considered reason for or against that belief? The question might seem to presuppose the existence of practical reasons for belief. But we can rid the question of this presupposition. Once we do, a highly general ‘Combinatorial Problem’ emerges. The problem has been thought to be intractable due to certain differences in the combinatorial properties of epistemic and practical reasons. Here we bring good news: if we accept an independently motivated version of epistemic instrumentalism—the view that epistemic reasons are a species of instrumental reasons—we can reduce The Combinatorial Problem to the relatively benign problem of how to weigh different instrumental reasons against each other. As an added benefit, the instrumentalist account can explain the apparent intractability of The Combinatorial Problem in terms of a common tendency to think and talk about epistemic reasons in an elliptical manner.
Should you always be certain about what you should believe? In other words, does rationality demand higher-order certainty? First answer: Yes! Higher-order uncertainty can’t be rational, since it breeds at least a mild form of epistemic akrasia. Second answer: No! Higher-order certainty can’t be rational, since it licenses a dogmatic kind of insensitivity to higher-order evidence. Which answer wins out? The first, I argue. Once we get clearer about what higher-order certainty is, a view emerges on which higher-order certainty does not, in fact, license any kind of insensitivity to higher-order evidence. The view as I will describe it has plenty of intuitive appeal. But it is not without substantive commitments: it implies a strong form of internalism about epistemic rationality, and forces us to reconsider standard ways of thinking about the nature of evidential support. Yet, the view put forth promises a simple and elegant solution to a surprisingly difficult problem in our understanding of rational belief.
The debate on the epistemology of disagreement has so far focused almost exclusively on cases of disagreement between individual persons. Yet, many social epistemologists agree that at least certain kinds of groups are equally capable of having beliefs that are open to epistemic evaluation. If so, we should expect a comprehensive epistemology of disagreement to accommodate cases of disagreement between group agents, such as juries, governments, companies, and the like. However, this raises a number of fundamental questions concerning what it means for groups to be epistemic peers and to disagree with each other. In this paper, we explore what group peer disagreement amounts to given that we think of group belief in terms of List and Pettit’s ‘belief aggregation model’. We then discuss how the so-called ‘equal weight view’ of peer disagreement is best accommodated within this framework. The account that seems most promising to us says, roughly, that the parties to a group peer disagreement should adopt the belief that results from applying the most suitable belief aggregation function for the combined group on all members of the combined group. To motivate this view, we test it against various intuitive cases, derive some of its notable implications, and discuss how it relates to the equal weight view of individual peer disagreement.
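As a concrete (if toy) illustration of the kind of belief aggregation function at issue, here is majority rule in the List-Pettit mold; the function name and data representation are our own, and the aggregation functions the paper considers may be far more sophisticated.

```python
def majority_aggregate(profile):
    """Map a profile of individual belief assignments to a group belief.

    profile: a list of dicts, one per group member, mapping proposition
    labels to True (believe) or False (disbelieve). Returns the strict-
    majority verdict per proposition (None on a tie).
    """
    propositions = profile[0].keys()
    group = {}
    for p in propositions:
        yes = sum(1 for member in profile if member[p])
        no = len(profile) - yes
        group[p] = True if yes > no else (False if no > yes else None)
    return group

# Two three-member groups that disagree over a proposition "q":
group_a = [{"q": True}, {"q": True}, {"q": False}]   # group A believes q (2-1)
group_b = [{"q": False}, {"q": False}, {"q": True}]  # group B disbelieves q (2-1)

# The proposal sketched above: apply the aggregation function to all
# members of the *combined* group, not to the two group-level verdicts.
print(majority_aggregate(group_a + group_b))  # {'q': None} -- a 3-3 tie
```

Note how the combined-group verdict (a tie) differs from anything obtainable by weighing the two group-level verdicts alone, which is one reason the membership-level proposal is not trivial.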
Many epistemologists have endorsed a version of the view that rational belief is sensitive to higher-order defeat. That is to say, even a fully rational belief state can be defeated by misleading higher-order evidence, which indicates that the belief state is irrational. In a recent paper, however, Maria Lasonen-Aarnio calls this view into doubt. Her argument proceeds in two stages. First, she argues that higher-order defeat calls for a two-tiered theory of epistemic rationality. Secondly, she argues that there seems to be no satisfactory way of avoiding epistemic dilemmas within a two-tiered framework. Hence, she concludes that the prospects look dim for making sense of higher-order defeat within a broader theoretical picture of epistemic rationality. Here I aim to resist both parts of Lasonen-Aarnio’s challenge. First, I outline a way of accommodating higher-order defeat within a single-tiered framework, by amending epistemic rules with appropriate provisos for different kinds of higher-order defeat. Secondly, I argue that those who nevertheless prefer to accommodate higher-order defeat within a two-tiered framework can do so without admitting to the possibility of epistemic dilemmas, since epistemic rules are not always accompanied by ‘oughts’ in a two-tiered framework. The considerations put forth thus indirectly vindicate the view that rational belief is sensitive to higher-order defeat.
Many theories of rational belief give a special place to logic. They say that an ideally rational agent would never be uncertain about logical facts. In short: they say that ideal rationality requires "logical omniscience." Here I argue against the view that ideal rationality requires logical omniscience on the grounds that the requirement of logical omniscience can come into conflict with the requirement to proportion one’s beliefs to the evidence. I proceed in two steps. First, I rehearse an influential line of argument from the "higher-order evidence" debate, which purports to show that it would be dogmatic, even for a cognitively infallible agent, to refuse to revise her beliefs about logical matters in response to evidence indicating that those beliefs are irrational. Second, I defend this "anti-dogmatism" argument against two responses put forth by Declan Smithies and David Christensen. Against Smithies’ response, I argue that it leads to irrational self-ascriptions of epistemic luck, and that it obscures the distinction between propositional and doxastic justification. Against Christensen’s response, I argue that it clashes with one of two attractive deontic principles, and that it is extensionally inadequate. Taken together, these criticisms will suggest that the connection between logic and rationality cannot be what it is standardly taken to be—ideal rationality does not require logical omniscience.
We often have reason to doubt our own ability to form rational beliefs, or to doubt that some particular belief of ours is rational. Perhaps we learn that a trusted friend disagrees with us about what our shared evidence supports. Or perhaps we learn that our beliefs have been afflicted by motivated reasoning or other cognitive biases. These are examples of higher-order evidence. While it may seem plausible that higher-order evidence should impact our beliefs, it is less clear how and why. Normally, when evidence impacts our beliefs, it does so by virtue of speaking for or against the truth of their contents. But higher-order evidence does not directly concern the contents of the beliefs that it impacts. In recent years, philosophers have become increasingly aware of the need to understand the nature and normative role of higher-order evidence. This is partly due to the pervasiveness of higher-order evidence in human life. But it has also become clear that higher-order evidence plays a central role in many epistemological debates, spanning from traditional discussions of internalism/externalism about epistemic justification to more recent discussions of peer disagreement and epistemic akrasia. This volume brings together, for the first time, a distinguished group of leading and up-and-coming epistemologists to explore a range of issues about higher-order evidence.
People don't always speak the truth. When they don't, we do better not to trust them. Unfortunately, that's often easier said than done. People don't usually wear a ‘Not to be trusted!’ badge on their sleeves, which lights up every time they depart from the truth. Given this, what can we do to figure out whom to trust, and whom not? My aim in this paper is to offer a partial answer to this question. I propose a heuristic—the “Humility Heuristic”—which is meant to help guide our search for trustworthy advisors. In slogan form, the heuristic says: people worth trusting admit to what they don't know. I give this heuristic a precise probabilistic interpretation, offer a simple argument for it, defend it against some potential worries, and demonstrate its practical worth by showing how it can help address some difficult challenges in the relationship between experts and laypeople.
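One natural way to render the slogan probabilistically (the formalization is ours and may differ in detail from the paper's) is as a confirmation claim:

\[
\Pr(T \mid A) > \Pr(T),
\]

where T says that the advisor is trustworthy and A says that she readily admits the limits of her knowledge: observing admissions of ignorance should raise one's probability that the advisor is worth trusting.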
Our aim in this chapter is to draw attention to what we see as a disturbing feature of conciliationist views of disagreement. Roughly put, the trouble is that conciliatory responses to in-group disagreement can lead to the frustration of a group's epistemic priorities: that is, the group's favoured trade-off between the "Jamesian goals" of truth-seeking and error-avoidance. We show how this problem can arise within a simple belief aggregation framework, and draw some general lessons about when the problem is most pronounced. We close with a tentative proposal for how to solve the problem raised without rejecting conciliationism.
People tend to think that they know others better than others know them. This phenomenon is known as the “illusion of asymmetric insight.” While the illusion has been well documented by a series of recent experiments, less has been done to explain it. In this paper, we argue that extant explanations are inadequate because they either get the explanatory direction wrong or fail to accommodate the experimental results in a sufficiently nuanced way. Instead, we propose a new explanation that does not face these problems. The explanation is based on two other well-documented psychological phenomena: the tendency to accommodate ambiguous evidence in a biased way, and the tendency to overestimate how much better we know ourselves than we know others.
If “perfectionism” in ethics refers to those normative theories that treat the fulfillment or realization of human nature as central to an account of both goodness and moral obligation, in what sense is “human flourishing” a perfectionist notion? How much of what we take “human flourishing” to signify is the result of our understanding of human nature? Is the content of this concept simply read off an examination of our nature? Is there no place for diversity and individuality? Is the belief that the content of such a normative concept can be determined by an appeal to human nature merely the result of epistemological naiveté? What is the exact character of the connection between human flourishing and human nature? These questions are the ultimate concern of this essay, but to appreciate the answers that will be offered it is necessary to understand what is meant by “human flourishing.” “Human flourishing” is a relatively recent term in ethics. It seems to have developed in the last two decades because the traditional translation of the Greek term eudaimonia as “happiness” failed to communicate clearly that eudaimonia was an objective good, not merely a subjective good.
A standard formulation of luck-egalitarianism says that ‘it is [in itself] bad – unjust and unfair – for some to be worse off than others [through no fault or choice of their own]’, where ‘fault or choice’ means substantive responsibility-generating fault or choice. This formulation is ambiguous: one ambiguity concerns the possible existence of a gap between what is true of each worse-off individual and what is true of the group of worse-off individuals, fault or choice-wise, the other concerns the notion of fault. I show that certain ways of resolving these ambiguities lead to counterintuitive results; and that the most plausible way of resolving them leads to a theory of distributive justice in which responsibility plays a role significantly different from that in standard luck-egalitarian thinking. My main conclusion here is that luck-egalitarianism is best formulated as the view that it is [in itself] bad – unjust and unfair – for an individual to be worse off than others if, and only if, her being worse off does not fit the degree to which she is at fault in a not purely prudential sense.
This paper explores whether natural selection, a putative evolutionary mechanism, and a main one at that, can be characterized on either of the two dominant conceptions of mechanism, due to Glennan and the team of Machamer, Darden, and Craver, that constitute the “new mechanistic philosophy.” The results of the analysis are that neither of the dominant conceptions of mechanism adequately captures natural selection. Nevertheless, the new mechanistic philosophy possesses the resources for an understanding of natural selection under the rubric.
According to Philip Kitcher, scientific unification is achieved via the derivation of numerous scientific statements from economies of argument schemata. I demonstrate that the unification of selection phenomena across domains in which it is claimed to occur--evolutionary biology, immunology and, speculatively, neurobiology--is unattainable on Kitcher's view. I then introduce an alternative method for rendering the desired unification based on the concept of a mechanism schema. I conclude that the gain in unification provided by the alternative account suggests that Kitcher's view is defective.
This paper considers recent heated debates led by Jerry A. Coyne and Michael J. Wade on issues stemming from the 1929–1962 R.A. Fisher–Sewall Wright controversy in population genetics. William B. Provine once remarked that the Fisher-Wright controversy is central, fundamental, and very influential. Indeed, it is also persistent. The argumentative structure of the recent (1997–2000) debates is analyzed with the aim of eliminating a logical conflict in them, viz., that the two sides in the debates have different aims and that, as such, they are talking past each other. Given a philosophical analysis of the argumentative structure of the debates, suggestions supportive of Wade's work on the debate are made that are aimed, modestly, at putting the persistent Fisher-Wright controversy on the course to resolution.
Do I cause global warming, climate change and their related harms when I go for a leisure drive with my gas-guzzling car? The current verdict seems to be that I do not; the emissions produced by my drive are much too insignificant to make a difference for the occurrence of global warming and its related harms. I argue that our verdict on this issue depends on what we mean by ‘causation’. If we for instance assume a simple counterfactual analysis of causation according to which ‘C causes E’ means ‘if C had not occurred, E would not have occurred’, we must conclude that a single drive does not cause global warming. However, this analysis of causation is well-known for giving counterintuitive results in some important cases. If we instead adopt Lewis’s analysis of causation, it turns out that it is indeterminate whether I cause global warming when I go for a single drive. Still, in contexts where we seek to control or understand global warming, there is a pressure to adopt a more fragile view of this event. When we adopt such a view, it turns out that a single drive does cause global warming. This means that we cannot, like Sinnott-Armstrong, and Kingston and Sinnott-Armstrong, reject the idea that I should refrain from going for a leisure drive simply because such a drive does not cause global warming.
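For reference, the simple counterfactual analysis quoted above, written with the standard box-arrow for the counterfactual conditional:

\[
C \text{ causes } E \;\iff\; \neg C \;\Box\!\!\rightarrow\; \neg E,
\]

that is, had C not occurred, E would not have occurred. Lewis's own analysis, invoked later in the abstract, is richer: it takes causation to be the ancestral of this relation of counterfactual dependence, which is one reason the two analyses can come apart over the leisure drive.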
In a small handful of papers in theoretical population genetics, John Gillespie (2000a, 2000b, 2001) argues that a new stochastic process he calls "genetic draft" is evolutionarily more significant than genetic drift. This case study of chance in evolution explores Gillespie's proposed stochastic evolutionary force and sketches the implications of Gillespie's argument for philosophers' explorations of genetic drift.
This long-awaited book sets out the implications of Habermas's theory of communicative action for moral theory. "Discourse ethics" attempts to reconstruct a moral point of view from which normative claims can be impartially judged. The theory of justice it develops replaces Kant's categorical imperative with a procedure of justification based on reasoned agreement among participants in practical discourse. Habermas connects communicative ethics to the theory of social action via an examination of research in the social psychology of moral and interpersonal development. He aims to show that our basic moral intuitions spring from something deeper and more universal than contingent features of our tradition, namely from normative presuppositions of social interaction that belong to the repertoire of competent agents in any society. Jürgen Habermas is Professor of Philosophy at the University of Frankfurt.
This paper explores the calibration of laboratory models in population genetics as an experimental strategy for justifying experimental results and claims based upon them following Franklin (1986, 1990) and Rudge (1996, 1998). The analysis provided undermines Coyne et al.'s (1997) critique of Wade and Goodnight's (1991) experimental study of Wright's (1931, 1932) Shifting Balance Theory. The essay concludes by further demonstrating how this analysis bears on Diamond's (1986) claims regarding the weakness of laboratory experiments as evidence, and further how the calibration strategy fits within Lloyd's (1987, 1988) account of the confirmation of ecological and evolutionary models.
Sewall Wright's adaptive landscape is the most influential heuristic in evolutionary biology. Wright's biographer, Provine, criticized Wright's adaptive landscape, claiming that its heuristic value is dubious because of deep flaws. Ruse has defended Wright against Provine. Ruse claims Provine has not shown Wright's use of the landscape is flawed, and that, even if it were, it is heuristically valuable. I argue that both Provine's and Ruse's analyses of the adaptive landscape are defective and suggest a more adequate understanding of it.
This article draws on scientific explanations of obesity to motivate the creation of a system of paternalistic public health interventions into the obesity epidemic. Libertarian paternalists argue that paternalism is warranted in light of the cognitive limits of human decision-making abilities. There are further, specific biological limits on our capacity to choose and maintain a healthy diet. These biological facts strengthen the general motivation for libertarian paternalism. As a consequence, the creation of a system of paternalistic public health interventions into the obesity epidemic is warranted.
We critique a series of recent papers in which Reidenbach and Robin developed a multidimensional ethics scale. Our critique raises four problems for the scale. First, it is not clear what the scale measures. Second, the semantic differential items used in the scale seem problematic. Third, the scale omits several important ethical rationales. Finally, no caveats accompany the scale to alert managers about its proper and improper use.
The institutionalization of a rights-based proportionality review shares a number of salient features and puzzles with the practice of contestation that the Socrates of the early Platonic dialogues became famous for. Understanding the point of Socratic contestation, and its role in a democratic polity, is also the key to understanding the point of proportionality-based rights review. To begin with, when judges decide cases within the proportionality framework they do not primarily interpret authority. They assess reasons. Not surprisingly, they, like Socrates, have been prone to the charge that they offend the values and traditions of the community. The article discusses four types of pathologies that occasionally infect democratic decision-making that rights-based proportionality review is particularly suited to identify. But more basic and equally important is a second kind of justification: proportionality-based judicial review institutionalizes a right to contest the acts of public authorities and demand a public reasons-based justification. Having a legal remedy that allows for the contestation of acts by public authorities before an impartial and independent court and demanding its justification in terms of public reason is as basic a commitment of liberal democracy as the right to vote. The real question is not whether judicial review is democratically legitimate, but how judicial institutions ought to be structured to best serve their democracy-enhancing and rights-protecting purpose. If Socrates was right to insist that the practice of contestation he engaged in deserves the highest praise in a democratic polity, it is equally true that a well-structured and appropriately embedded court engaged in rights-based proportionality review deserves to be embraced as a vital element of liberal constitutional democracy.
Background: Traditionally, palliative care has focused on patients suffering from life-threatening somatic diseases such as cancer or progressive neurological disorders. In contrast, despite the often chronic, severely disabling, and potentially life-threatening nature of psychiatric disorders, there are neither palliative care units nor clinical guidelines on palliative measures for patients in psychiatry. Main text: This paper contributes to the growing literature on a palliative approach in psychiatry and is based on the assumption that a change of perspective from a curative to a palliative approach could help promote patient-centeredness and increase quality of life for severely ill patients in psychiatry as well as in somatic medicine. To exemplify this, we offer three different clinical scenarios: severe and enduring anorexia nervosa, treatment-refractory schizophrenia, and chronic suicidality and persistent self-injury in borderline personality disorder. Conclusion: We emphasize that many typical interventions for treatment-refractory psychiatric disorders may indeed be of a palliative nature. Furthermore, introducing traditional features of palliative care, e.g. so-called goals of care conversations, could aid even further in ensuring that caregivers, patients, and families agree on which treatment goals are to be prioritized in order to optimize quality of life in spite of severe, persistent mental disorder.
Psychiatric diagnoses such as attention deficit hyperactivity disorder (ADHD) are a rapidly growing and globally increasing phenomenon, not least in educational contexts such as the family and the school. Children and youths labelled as ADHD challenge normative claims in terms of nurturing and education, and those so labelled are considered a risk for society to handle. The dominant paradigm regarding ADHD is biomedical, where different levels of attention and activity-impulsivity are perceived as neurobiological dys/functions within the brain best managed by means of an individual diagnosis and instrumental pedagogy. The majority of those labelled as having ADHD encounter a dominant educational model in the form of what is referred to in this article as neurobehaviorism, which is based on onto-epistemological violence. As opposed to this act of violence against being—and against the psychiatrized subject—a less violent educational model is proposed, based on French philosopher Alain Badiou’s ontological examination of being and his concept of love as a truth procedure. In terms of the latter, the focus is on the potential of the encounter as a ‘Two scene of love’. Here, the encounter is a place where it is possible to create new truths and subjects, instead of taking the individual diagnosis as an axiom which only leads to individuals having fixed identities codified in a hierarchical order. This argument is drawn from the ‘mathematical’ formula 1 + 1 = ♥, which originates from an online forum for people who have come into contact with ADHD in one way or another.
How can we establish a political/legal order that in principle does not require the human flourishing of any person or group to be given structured preference over that of any other? Addressing this question as the central problem of political philosophy, Norms of Liberty offers a new conceptual foundation for political liberalism that takes protecting liberty, understood in terms of individual negative rights, as the primary aim of the political/legal order. Rasmussen and Den Uyl argue for construing individual rights as metanormative principles, directly tied to politics, that are used to establish the political/legal conditions under which full moral conduct can take place. These they distinguish from normative principles, used to provide guidance for moral conduct within the ambit of normative ethics. This crucial distinction allows them to develop liberalism as a metanormative theory, not a guide for moral conduct. The moral universe need not be minimized or morality grounded in sentiment or contracts to support liberalism, they show. Rather, liberalism can be supported, and many of its internal tensions avoided, with an ethical framework of Aristotelian inspiration—one that understands human flourishing to be an objective, inclusive, individualized, agent-relative, social, and self-directed activity.
Over the last twenty years, many political philosophers have rejected the idea that justice is fundamentally about distribution. Rather, justice is about social relations, and the so-called distributive paradigm should be replaced by a new relational paradigm. Kasper Lippert-Rasmussen seeks to describe, refine, and assess these thoughts and to propose a comprehensive form of egalitarianism which includes central elements from both relational and distributive paradigms. He shows why many of the challenges that luck egalitarianism faces reappear, once we try to specify relational egalitarianism more fully. His discussion advances understanding of the nature of the relational ideal, and introduces new conceptual tools for understanding it and for exploring the important question of why it is desirable in the first place to relate as equals. Even severe critics of the distributive understanding of justice will find that this book casts important new light on the ideal to which they subscribe.
Recently, a number of philosophers of biology have endorsed views about random drift that, we will argue, rest on an implicit assumption that the meaning of concepts such as drift can be understood through an examination of the mathematical models in which drift appears. They also seem to implicitly assume that ontological questions about the causality of terms appearing in the models can be gleaned from the models alone. We will question these general assumptions by showing how the same equation — the simple (p + q)² = p² + 2pq + q² — can be given radically different interpretations, one of which is a physical, causal process and one of which is not. This shows that mathematical models on their own yield neither interpretations nor ontological conclusions. Instead, we argue that these issues can only be resolved by considering the phenomena that the models were originally designed to represent and the phenomena to which the models are currently applied. When one does take those factors into account, starting with the motivation for Sewall Wright’s and R.A. Fisher’s early drift models and ending with contemporary applications, a very different picture of the concept of drift emerges. On this view, drift is a term for a set of physical processes, namely, indiscriminate sampling processes.
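For reference, the reconstructed equation is the binomial expansion that underlies the Hardy-Weinberg principle. The standard population-genetic gloss (ours, as a reminder; not a claim about this paper's preferred reading):

\[
(p + q)^2 = p^2 + 2pq + q^2 = 1 \quad \text{given } p + q = 1,
\]

where p and q are the frequencies of two alleles, and p², 2pq, and q² give the expected frequencies of the two homozygotes and the heterozygote in the next generation, absent other evolutionary forces.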
The correspondence theory of truth is a precise and innovative account of how the truth of a proposition depends upon that proposition's connection to a piece of reality. Joshua Rasmussen refines and defends the correspondence theory of truth, proposing new accounts of facts, propositions, and the correspondence between them. With these theories in hand, he then offers original solutions to the toughest objections facing correspondence theorists. Addressing the Problem of Funny Facts, Liar Paradoxes, and traditional epistemological questions concerning how our minds can access reality, he challenges recent objections, and defends what has traditionally been the most popular theory of truth. Written with clarity, precision, and sensitivity to a range of philosophical backgrounds, his book will appeal to advanced students and scholars seeking a deeper understanding of the relationship between truth and reality.
Hull et al. make a direct connection between selection and replication. My view is that selection, at its causal crux, is not inherently connected to replication. I make plain the causal crux of selection, distinguishing it from replication. I discuss implications of my results for Hull et al.'s critique of Darden and Cain (1989).
In this first book-length comparative study of these leading eighteenth-century thinkers, Dennis Rasmussen highlights Smith's sympathy with Rousseau's concerns and analyzes in depth the ways in which Smith crafted his arguments to defend ...
Where there are cases of underdetermination in scientific controversies, such as the case of the molecular clock, scientists may direct the course and terms of dispute by playing off the multidimensional framework of theory evaluation. This is because assessment strategies themselves are underdetermined. Within the framework of assessment, there are a variety of trade-offs between different strategies as well as shifting emphases as specific strategies are given more or less weight in assessment situations. When a strategy is underdetermined, scientists can change the dynamics of a controversy by making assessments using different combinations of evaluation strategies and/or weighting whatever strategies are in play in different ways. Following an underdetermination strategy does not end or resolve a scientific dispute. Consequently, manipulating underdetermination is a feature of controversy dynamics and not controversy closure.
Larry L. Rasmussen offers a dramatic new way of thinking about human society, ethics, and the health of our planet. Rejecting the modern ethical assumption that morality applies to human society alone, Earth-honoring Faith argues that we must derive a system of ethics and morality that accounts for the wellbeing of all creation on Earth.
For whatever reasons, students seem more resistant than ever before to reading. Educators have catered to this trend, introducing learning activities other than reading. I argue that, in philosophy at least, nothing can substitute for reading and discussion. I further argue that the best readings are famous, intellectually challenging, and substantial enough to reward the student with a memorable philosophical experience. I have noticed that students appreciate meaty, classical, philosophical works that challenge them, but are bored by dumbed-down textbooks or summaries. After considering some obvious objections, I relate two successful techniques I have used to raise the level of student engagement in class.
It is commonly believed that blamees can dismiss hypocritical blame on the ground that the hypocrite has no standing to blame their target. Many believe that the feature of hypocritical blame that undermines standing to blame is that it involves an implicit denial of the moral equality of persons. After all, the hypocrite treats herself better than her blamee for no good reason. In the light of the complement to hypocrites and a comparison of hypocritical and non-hypocritical blamers subscribing to hierarchical moral norms, I show why we must reject the moral equality account of the hypocrite’s lack of standing to blame.
This book addresses three issues: What is discrimination? What makes it wrong? And what should be done about wrongful discrimination? It argues that there are different concepts of discrimination; that discrimination is not always morally wrong; and that, when it is, it is so primarily because of its harmful effects.