In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. Such a perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction among three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but in fact the conjoint need of error-based and uncertainty-based modeling emerges.
This book is an extensive survey and critical examination of the literature on the use of expert opinion in scientific inquiry and policy making. The elicitation, representation, and use of expert opinion is increasingly important for two reasons: advancing technology leads to more and more complex decision problems, and technologists are turning in greater numbers to "expert systems" and other similar artifacts of artificial intelligence. Cooke here considers how expert opinion is being used today, how an expert's uncertainty is or should be represented, how people do or should reason with uncertainty, how the quality and usefulness of expert opinion can be assessed, and how the views of several experts might be combined. He argues for the importance of developing practical models with a transparent mathematical foundation for the use of expert opinion in science, and presents three tested models, termed "classical," "Bayesian," and "psychological scaling." Detailed case studies illustrate how they can be applied to a diversity of real problems in engineering and planning.
The term action of consciousness is used to refer to an influence, such as psychokinesis or free will, that produces an effect on matter that is correlated to mental intention, but not completely determined by physical conditions. Such an action could not conserve energy. But in that case, one wonders why, when highly accurate measurements are done, occasions of non-conserved energy (generated perhaps by unconscious PK) are not detected. A possible explanation is that actions of consciousness take place within the limits of the uncertainty principle. Two models are reviewed that, using the latter assumption, propose that consciousness can originate an action potential in the brain. One (that of Eccles) uses the latter assumption only, and the other (that of Burns) additionally assumes that consciousness acts, within those limits, by ordering quantum fluctuations.
This article examines the epistemology of risk assessment in the context of financial modelling for the purposes of making loan underwriting decisions. A financing request for a company in the paper and pulp industry is considered in some detail. The paper and pulp industry was chosen because (1) it is subject to some specific risks that have been identified and studied by bankers, investors and managers of paper and pulp companies and (2) certain features of the industry enable analysts to quantify the impact of specific risk events of a given dimension on a company's future financial performance. While companies in other industries may be subject to similar risk factors, the impact of risk events may be more difficult to gauge in those industries. The ability of financial analysts to model the impact of a risk event, and hence quantify a credit risk, increases the predictive accuracy of the model. I argue that bankers and regulators should recognise the uncertainty associated with unquantifiable credit risk in financial models, and they should view this uncertainty as a credit risk factor in and of itself. Evaluating the relative degree to which credit risk is quantifiable in financial models is a potentially significant yet largely unrecognised tool for credit risk management. I consider some possible applications of this assessment tool for managing risk within the banking industry.
Against the tradition that has considered measurement capable of producing pure data on physical systems, the unavoidable role played by modeling activity in measurement is increasingly acknowledged, particularly with respect to the evaluation of measurement uncertainty. This paper characterizes measurement as a knowledge-based process and proposes a framework to understand the function of models in measurement and to systematically analyze their influence on the production of measurement results and their interpretation. To this aim, a general model of measurement is sketched, which gives the context to highlight the unavoidable, although sometimes implicit, presence of models in measurement and, finally, to propose some remarks on the relations between models and measurement uncertainty, complementarily classified as due to the idealization implied in the models and to their realization in the experimental setup.
Using epistemic logic, we provide a non-probabilistic way to formalise payoff uncertainty, that is, statements such as ‘player i has approximate knowledge about the utility functions of player j.’ We show that on the basis of this formalisation, common knowledge of payoff uncertainty and rationality (in the sense of excluding weakly dominated strategies, due to Dekel and Fudenberg (1990)) characterises a new solution concept we have called ‘mixed iterated strict weak dominance’.
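To make the family of iterated dominance procedures concrete, the sketch below implements plain iterated elimination of strictly dominated pure strategies in a two-player game. This is only an illustrative baseline: the paper's own solution concept, "mixed iterated strict weak dominance" (building on Dekel and Fudenberg 1990), additionally involves weak dominance and mixed strategies, which this toy version omits. All names and the example game are my own choices, not the paper's.

```python
# Hedged sketch: iterated elimination of strictly dominated pure strategies
# in a finite two-player game. Not the paper's solution concept.

def strictly_dominated(payoff, s, own_live, rival_live):
    """True if some other live strategy t beats s against every live rival strategy."""
    return any(
        all(payoff[(t, r)] > payoff[(s, r)] for r in rival_live)
        for t in own_live if t != s
    )

def iterated_strict_dominance(p1, p2, rows, cols):
    """p1 is indexed (row, col); p2 is indexed (col, row): each player's own strategy first."""
    rows, cols = set(rows), set(cols)
    changed = True
    while changed:
        changed = False
        for s in list(rows):
            if strictly_dominated(p1, s, rows, cols):
                rows.discard(s); changed = True
        for s in list(cols):
            if strictly_dominated(p2, s, cols, rows):
                cols.discard(s); changed = True
    return rows, cols

# Prisoner's dilemma (symmetric, so both players share one payoff table):
# defection (D) strictly dominates cooperation (C) for each player.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
live_rows, live_cols = iterated_strict_dominance(pd, pd, {"C", "D"}, {"C", "D"})
```

Only the profile (D, D) survives, which is the standard iterated-dominance prediction for this game.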
Recent work suggests that people predict how objects interact in a manner consistent with Newtonian physics, but with additional uncertainty. However, the sources of uncertainty have not been examined. In this study, we measure perceptual noise in initial conditions and stochasticity in the physical model used to make predictions. Participants predicted the trajectory of a moving object through occluded motion and bounces, and we compared their behavior to an ideal observer model. We found that human judgments cannot be captured by simple heuristics and must incorporate noisy dynamics. Moreover, these judgments are biased consistently with a prior expectation on object destinations, suggesting that people use simple expectations about outcomes to compensate for uncertainty about their physical models.
This edited collection focuses on recently emerging debates around the themes of "risk", "trust", "uncertainty", and "ambivalence." Where much of the work on these themes in the social sciences has been theory-based and theory-driven, this book combines theoretical sophistication with close-to-the-ground analysis and research in the fields of philosophy, education, social policy, government, health and social care, politics and cultural studies.
We are often uncertain how to behave morally in complex situations. In this controversial study, Ted Lockhart contends that moral philosophy has failed to address how we make such moral decisions. Adapting decision theory to the task of decision-making under moral uncertainty, he proposes that we should not always act how we feel we ought to act, and that sometimes we should act against what we feel to be morally right. Lockhart also discusses abortion extensively and proposes new ways to deal with the ethical and moral issues which surround it.
Investigation of neural and cognitive processes underlying individual variation in moral preferences is underway, with notable similarities emerging between moral- and risk-based decision-making. Here we specifically assessed moral distributive justice preferences and non-moral financial gambling preferences in the same individuals, and report an association between these seemingly disparate forms of decision-making. Moreover, we find this association between distributive justice and risky decision-making exists primarily when the latter is assessed with the Iowa Gambling Task. These findings are consistent with neuroimaging studies of brain function during moral and risky decision-making. This research also constitutes the first replication of a novel experimental measure of distributive justice decision-making, for which individual variation in performance was found. Further examination of decision-making processes across different contexts may lead to an improved understanding of the factors affecting moral behaviour.
The American Buddhist nun and author of the best-selling When Things Fall Apart counsels readers on how to live compassionately and well during times of instability, demonstrating the use of the Three Commitments practice to promote ...
Every choice we make is set against a background of massive ignorance about our past, our future, our circumstances, and ourselves. Philosophers are divided on the moral significance of such ignorance. Some say that it has a direct impact on how we ought to behave - the question of what our moral obligations are; others deny this, claiming that it only affects how we ought to be judged in light of the behaviour in which we choose to engage - the question of what responsibility we bear for our choices. Michael Zimmerman claims that our ignorance has an important bearing on both questions, and offers an account of moral obligation and moral responsibility that is sharply at odds with the prevailing wisdom. His book will be of interest to a wide range of readers in ethics.
This paper problematizes contemporary cultural understandings of autism. We make use of the developmental psychology concepts of ‘Theory of Mind’ and ‘mindblindness’ to uncover the meaning of autism as expressed in these concepts. Our concern is that autism is depicted as a puzzle and that this depiction governs not only the way Western culture treats autism but also the way in which it governs everyday interactions with autistic people. Moreover, we show how the concepts of Theory of Mind and mindblindness require autism to be a puzzle in the first place. Rather than treat autism as a puzzle that must be solved, we treat autism as a teacher and thus as having something valuable to contribute toward an understanding of the inherent partiality and uncertainty of human communication and collective life.
Heisenberg's uncertainty principle is usually taken to express a limitation of operational possibilities imposed by quantum mechanics. Here we demonstrate that the full content of this principle also includes its positive role as a condition ensuring that mutually exclusive experimental options can be reconciled if an appropriate trade-off is accepted. The uncertainty principle is shown to appear in three manifestations, in the form of uncertainty relations: for the widths of the position and momentum distributions in any quantum state; for the inaccuracies of any joint measurement of these quantities; and for the inaccuracy of a measurement of one of the quantities and the ensuing disturbance in the distribution of the other quantity. Whilst conceptually distinct, these three kinds of uncertainty relations are shown to be closely related formally. Finally, we survey models and experimental implementations of joint measurements of position and momentum and comment briefly on the status of experimental tests of the uncertainty principle.
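The first of these three relations, the preparation uncertainty relation for position and momentum widths, can be illustrated numerically. The sketch below (my own illustration, assuming ħ = 1 and an arbitrarily chosen Gaussian width; it is not taken from the paper) discretizes a Gaussian wave packet, computes the position spread σ_x directly and the momentum spread σ_p from the FFT of the wavefunction, and checks σ_x·σ_p ≥ ħ/2, with near-equality since Gaussians saturate the bound.

```python
import numpy as np

# Hedged numerical check of sigma_x * sigma_p >= hbar/2 for a Gaussian
# wave packet (hbar = 1; grid and width are arbitrary illustrative choices).
hbar = 1.0
sigma = 0.7                                  # assumed packet width
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

# Position spread from the probability density |psi|^2.
wx = np.abs(psi)**2 * dx
mean_x = np.sum(x * wx)
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * wx))

# Momentum spread from the discrete Fourier transform of psi; the overall
# phase from the grid offset cancels in |phi|^2, so only moduli are used.
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
phi2 = np.abs(np.fft.fft(psi))**2
wp = phi2 / phi2.sum()                       # normalized momentum weights
mean_p = np.sum(p * wp)
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * wp))

product = sigma_x * sigma_p                  # ~ hbar/2 for a Gaussian
```

For any other (non-Gaussian) normalizable packet the same computation yields a product strictly above ħ/2, which is what the preparation relation asserts.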
The Ethics of Uncertainty asks what it means to live, act, decide, and respond responsibly, in the aporia of freedom itself - a freedom which on one hand opens us to the open space of possible possibilities, and on the other, leaves us no stable ground or measure for pre/determined decision making. The aporia of freedom is conditioned by the indeterminate space of knowing we must make decisions, and yet, at the same time, we cannot call on an absolute authority or measure as a guide. Aporias open us to freedom, the place where, as Derrida has taught us, an ethical decision may occur. Allowing indeterminacy to exist in our becoming allows a continuous coming to be with others - a becoming always open to the "to come" (Derrida) of the future. Always drawing us toward the possibility of making a decision within the fabric of indecision, aporias give us the possibility of ethical becoming. Overall, this text points us to the possibility of living an ethical life in a world without absolute measure - an ethics, in other words, of uncertainty.
Researchers have begun to explore animals' capacities for uncertainty monitoring and metacognition. This exploration could extend the study of animal self-awareness and establish the relationship of self-awareness to other-awareness. It could sharpen descriptions of metacognition in the human literature and suggest the earliest roots of metacognition in human development. We summarize research on uncertainty monitoring by humans, monkeys, and a dolphin within perceptual and metamemory tasks. We extend phylogenetically the search for metacognitive capacities by considering studies that have tested less cognitively sophisticated species. By using the same uncertainty-monitoring paradigms across species, it should be possible to map the phylogenetic distribution of metacognition and illuminate the emergence of mind. We provide a unifying formal description of animals' performances and examine the optimality of their decisional strategies. Finally, we interpret animals' and humans' nearly identical performances psychologically. Low-level, stimulus-based accounts cannot explain the phenomena. The results suggest granting animals a higher-level decision-making process that involves criterion setting using controlled cognitive processes. This conclusion raises the difficult question of animal consciousness. The results show that animals have functional features of or parallels to human conscious cognition. Remaining questions are whether animals also have the phenomenal features that are the feeling/knowing states of human conscious cognition, and whether the present paradigms can be extended to demonstrate that they do. Thus, the comparative study of metacognition potentially grounds the systematic study of animal consciousness. Key Words: cognition; comparative cognition; consciousness; memory monitoring; metacognition; metamemory; self-awareness; uncertainty; uncertainty monitoring.
In this thesis I investigate the behaviour of uncertainty about vague matters. It is a fairly common view that vagueness involves uncertainty of some sort. However there are many fundamental questions about this kind of uncertainty that are left open. Could you be genuinely uncertain about p when there is no matter of fact whether p? Could you remain uncertain in a vague proposition even if you knew exactly which possible world obtained? Should your degrees of belief be probabilistically coherent? Should your beliefs in the vague be fixed by your beliefs in the precise? Could one in principle tell what credences a person has in the vague?
The way our decisions and actions can affect future generations is surrounded by uncertainty. This is evident in current discussions of environmental risks related to global climate change, biotechnology and the use and storage of nuclear energy. The aim of this paper is to consider more closely how uncertainty affects our moral responsibility to future generations, and to what extent moral agents can be held responsible for activities that inflict risks on future people. It is argued that our moral responsibility to posterity is limited because our ability to foresee how present decisions and activities will affect future people is limited. The reason for this is primarily that we are in a situation of ignorance regarding the pace and direction of future scientific and technological development. This ignorance reduces our responsibility in a temporal dimension because in most areas it is impossible to predict the interests and resource needs of future generations. In one area, however, we have fairly reliable knowledge about future people. It is reasonable to assume that future human beings will have the same basic physiological (physical and biological) needs as we have. On this basis, it is argued that we can be held responsible for activities causing avoidable damage to critical resources that are necessary to provide for future physiological needs. Furthermore, it is suggested that it is prima facie immoral to impose risks upon future generations in cases where the following conditions are fulfilled: (1) the risk poses a threat to the ability of future generations to meet their physiological needs, and (2) the risk assessment is supported by scientifically based harm scenarios.
In this article, I discuss an argument that purports to prove that probability theory is the only sensible means of dealing with uncertainty. I show that this argument can succeed only if some rather controversial assumptions about the nature of uncertainty are accepted. I discuss these assumptions and provide reasons for rejecting them. I also present examples of what I take to..
Some real objects show a very particular tendency: that of becoming independent with regard to the uncertainty of their surroundings. This is achieved by the exchange of three quantities: matter, energy and information. A conceptual framework, based on both Non-equilibrium Thermodynamics and the Mathematical Theory of Communication, is proposed in order to review the concept of change in living individuals. Three main situations are discussed in this context: passive independence in connection with resistant living forms (such as seeds, spores, hibernation, ...), active independence in connection with the life span of a living individual (whether an ant or an ant farm), and the new independence in connection with the general debate on biological evolution.
1. The Problem, and Two Examples. Discussions of deontological moral theories typically focus on the advantages and disadvantages of deontological constraints, rules to the effect that some actions should not be performed – at least sometimes – even when performing them will maximize the good. And, of course, the jury is still out on whether deontological constraints can be defended. But in their recent paper "Absolutist Moral Theories and Uncertainty", Frank Jackson and Michael Smith emphasize not the general and well-known challenges to deontological constraints, but a more particular difficulty relating to what deontologists should say about cases of uncertainty. In their key example, a skier is about to cause the death of ten people by causing an avalanche. Jackson and Smith assume that whether or not it is morally permissible (and presumably also – given the possibility of saving the ten – morally required) to kill the skier (this is the only way of saving the ten) depends, according to a typical deontological theory, on whether or not he intends to kill the ten: If so, then he can permissibly be killed in self- (or other-) defense. If not, then it is presumably impermissible to kill him, for presumably..
How does it come about then, that great scientists such as Einstein, Schrödinger and De Broglie are nevertheless dissatisfied with the situation? Of course, all these objections are levelled not against the correctness of the formulae, but against their interpretation. [...] The lesson to be learned from what I have told of the origin of quantum mechanics is that probable refinements of mathematical methods will not suffice to produce a satisfactory theory, but that somewhere in our doctrine is hidden a concept, unjustified by experience, which we must eliminate to open up the road. (Born [1954], pp. 8, 11) It is truly surprising how little difference all this makes. Most physicists use quantum mechanics every day in their working lives without needing to worry about the fundamental problem of its interpretation. (Weinberg [1992], p. 66) I endorse the view that it may be of no relevance to the acceptability of the Everett interpretation of quantum mechanics as a physical theory whether or not an informed observer can be uncertain about the outcome of a quantum measurement prior to its having occurred. However, I suggest that the very possibility of post-measurement, pre-observation uncertainty has an essential role to play in both confirmation theory and decision theory in a branching universe. This is supported by arguments which do not appeal to van Fraassen’s Reflection Principle.
In this paper we compare different models of vagueness viewed as a specific form of subjective uncertainty in situations of imperfect discrimination. Our focus is on the logic of the operator “clearly” and on the problem of higher-order vagueness. We first examine the consequences of the notion of intransitivity of indiscriminability for higher-order vagueness, and compare several accounts of vagueness as inexact or imprecise knowledge, namely Williamson’s margin for error semantics, Halpern’s two-dimensional semantics, and the system we call Centered semantics. We then propose a semantics of degrees of clarity, inspired by the signal detection theory model, and outline a view of higher-order vagueness in which the notions of subjective clarity and unclarity are handled asymmetrically at higher orders, namely such that the clarity of clarity is compatible with the unclarity of unclarity.
It is shown that the uncertainty principle has nothing directly to do with the non-localisability of position and momentum for an individual system on the quantum logical view. The product Δx·Δp for the localisation of the ranges of position and momentum of an individual system tends to infinity, while the quantities ΔX and ΔP in the uncertainty principle ΔX·ΔP ≥ ℏ/2 must be given a statistical interpretation on the quantum logical view.
A successful theory of the language of subjective uncertainty would meet several important constraints. First, it would explain how use of the language of subjective uncertainty affects addressees’ states of subjective uncertainty. Second, it would explain how such use affects what possibilities are treated as live for purposes of conversation. Third, it would accommodate 'quantifying in' to the scope of epistemic modals. Fourth, it would explain the norms governing the language of subjective uncertainty, and the differences between them and the norms governing the language of subjective certainty. Neither truth conditional nor traditional force modifier theories of the language of subjective uncertainty look adequate to the task of satisfying all four of these constraints.
This paper is about the question of what to do under fundamental normative uncertainty. More specifically, it is about a problem that seems to confront all of the plausible answers to that question -- that it is impossible to compare the values of actions across different normative views or theories. I present a solution to that problem in 3 stages.
The Babylonian Talmud, compiled from the 2nd to 7th centuries C.E., is the primary source for all subsequent Jewish laws. It is not written in apodeictic style, but rather as a discursive record of (real or imagined) legal (and other) arguments crossing a wide range of technical topics. Thus, it is not a simple matter to infer general methodological principles underlying the Talmudic approach to legal reasoning. Nevertheless, in this article, we propose a general principle that we believe helps to explain the variety of methods used by the Rabbis of the Talmud for resolving uncertainty in matters of Jewish Law (henceforth: Halakhah). Such uncertainty might arise either if the facts of a case are clear but the relevant law is debatable or if the facts themselves are unclear.
We provide examples of the extent and nature of environmental and human health problems and show why in the United States prevailing scientific and legal burden of proof requirements usually cannot be met because of the pervasiveness of scientific uncertainty. We also provide examples of how many assumptions, judgments, evaluations, and inferences in scientific methods are value-laden, and show that, when this is not recognized, the results of studies will appear to be more factual and value-neutral than warranted. Further, we show that there is a "tension" between the use of the 95 percent confidence rule as a normative basis to reduce speculation in scientific knowledge and other public policy and moral concerns embodied by the adoption of a precautionary principle. Finally, although there is no precise agreement regarding what a precautionary principle might entail, we make several recommendations regarding the placement of the burden of proof and the standard of proof that ought to be required in environmental and human health matters.
This article is an attempt at a systematic account of decision making under greater uncertainty than what traditional, mathematically oriented decision theory can cope with. Four components of great uncertainty are distinguished: (1) the identity of the options is not well determined (uncertainty of demarcation); (2) the consequences of at least some options are unknown (uncertainty of consequences); (3) it is not clear whether information obtained from others, such as experts, can be relied on (uncertainty of reliance); and (4) the values relevant for the decision are not determined with sufficient precision (uncertainty of values). Some possible strategy types are proposed for each of these components. Decisions related to environmental issues are used to illustrate the proposals.
Following Lewis, it is widely held that branching worlds differ in important ways from diverging worlds. There is, however, a simple and natural semantics under which ordinary sentences uttered in branching worlds have much the same truth values as they conventionally have in diverging worlds. Under this semantics, whether branching or diverging, speakers cannot say in advance which branch or world is theirs. They are uncertain as to the outcome. This same semantics ensures the truth of utterances typically made about quantum mechanical contingencies, including statements of uncertainty, if the Everett interpretation of quantum mechanics is true. The 'incoherence problem' of the Everett interpretation, that it can give no meaning to the notion of uncertainty, is thereby solved.
Two types of measures of probabilistic uncertainty are introduced and investigated. Dispersion measures report how diffused the agent’s second-order probability distribution is over the range of first-order probabilities. Robustness measures reflect the extent to which the agent’s assessment of the prior (objective) probability of an event is perturbed by information about whether or not the event actually took place. The properties of both types of measures are investigated. The most obvious type of robustness measure is shown to coincide with one of the major candidates for a dispersion measure, the mean square deviation measure.
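The dispersion idea can be illustrated with a minimal sketch. The code below (my own illustration under assumed definitions, not the paper's formalism) computes a mean-square-deviation measure over a discrete second-order distribution: a distribution tightly concentrated on one first-order probability scores low, while one spread across the unit interval scores high. The grid of first-order probabilities and the weights are arbitrary choices.

```python
# Hedged sketch of a dispersion-type measure: the mean square deviation of a
# discrete second-order probability distribution over first-order
# probabilities. Definitions are illustrative assumptions.

def mean_square_deviation(first_order_probs, weights):
    """Spread of second-order mass around the mean first-order probability."""
    mean = sum(p * w for p, w in zip(first_order_probs, weights))
    return sum(w * (p - mean) ** 2 for p, w in zip(first_order_probs, weights))

# A sharply peaked second-order distribution has low dispersion ...
peaked = mean_square_deviation([0.4, 0.5, 0.6], [0.05, 0.90, 0.05])
# ... while a spread-out one over the same mean has higher dispersion.
spread = mean_square_deviation([0.1, 0.5, 0.9], [1/3, 1/3, 1/3])
```

On this toy data the peaked case yields 0.001 and the spread case roughly 0.107, so the measure orders the two epistemic states as the dispersion intuition demands.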
Diachronic uncertainty, uncertainty about where an agent falls in time, poses interesting conceptual difficulties. Although the agent is uncertain about where she falls in time, this uncertainty can only obtain at a particular moment in time. We resolve this conceptual tension by providing a transformation from models with diachronic uncertainty relations into “equivalent” models with only synchronic uncertainty relations. The former are interpreted as capturing the causal structure of a situation, while the latter are interpreted as capturing its epistemic structure. The models are equivalent in the sense that agents pass through the same information sets in the same order. In this paper, we investigate how such a transformation may be used to define an appropriate notion of equivalence, which we call epistemic equivalence. Although our project is motivated by problems which have arisen in a variety of disciplines, especially philosophy and game theory, our formal development takes place within the general and flexible framework provided by epistemic temporal logic.
This article argues that the decision problem in the original position should be characterized as a decision problem under uncertainty even when it is assumed that the denizens of the original position know that they have an equal chance of ending up in any given individual’s place. It supports this claim by arguing that (a) the continuity axiom of decision theory does not hold between all of the outcomes the denizens of the original position face and that (b) neither we nor the denizens of the original position can know the exact point at which discontinuity sets in, because the language we employ in comparing different outcomes is ineradicably vague. It is also argued that the account underlying (b) can help proponents of superiority in value theory defend their view against arguments offered by Norcross and Griffin.
Non-collapse theories of quantum mechanics have the peculiar characteristic that, although their measurements produce definite results, their state vectors remain in a superposition of possible outcomes. David Albert has used this fact to show that the standard uncertainty relations can be violated if self-measurements are made. Bradley Monton, however, has held that Albert has not been careful enough in his treatment of self-measurement and that being more careful (considering mental state supervenience) implies no violation of the relations. In this paper, I will outline both Albert's proposal and Monton's objections. Then, I will show how the uncertainty relations can be violated after all (even after being as careful as Monton). Finally, I will discuss how finding a way around the objections allows us to learn more about what is and what is not possible in non-collapse theories of quantum mechanics.
The last 20 years or so have seen an intense search carried out within Dempster–Shafer theory, with the aim of finding a generalization of the Shannon entropy for belief functions. In that time, there has also been much progress made in credal set theory—another generalization of the traditional Bayesian epistemic representation—albeit not in this particular area. In credal set theory, sets of probability functions are utilized to represent the epistemic state of rational agents instead of the single probability function of traditional Bayesian theory. The Shannon entropy has been shown to uniquely capture certain highly intuitive properties of uncertainty, and can thus be considered a measure of that quantity. This article presents two measures developed with the purpose of generalizing the Shannon entropy for (1) unordered convex credal sets and (2) possibly non-convex credal sets ordered by second order probability, thereby providing uncertainty measures for such epistemic representations. There is also a comparison with the results of the measure AU developed within Dempster–Shafer theory in a few instances where unordered convex credal set theory and Dempster–Shafer theory overlap.
There is a controversy as to the moral status of an action in the face of uncertainty concerning a non-moral fact that is morally significant (according to an applicable moral standard): According to the objective conception, the right action is determined in light of the truth, namely the actual state of affairs (regarding the pertinent fact), whereas according to the subjective conception, the right action depends on the epistemic state of the agent, namely her (justified) belief (concerning the pertinent (...) fact). A similar debate concerns the law, with respect to uncertainty regarding a legally significant fact. In this paper, I argue that moral and legal normative concepts are ambiguous and include two aspects: The ideal aspect, which is concerned with the constitutive feature of the normative standard, and the pragmatic aspect, which determines the correct action under uncertainty. With regard to each aspect, a different conception is appropriate: The objective conception should govern the ideal aspect and the subjective conception the pragmatic aspect. And the relevant aspect (and therefore the appropriate conception) depends on the question under consideration regarding the pertinent normative standard: what is its constitutive feature or whether an action is right (according to the applicable normative standard) in the face of uncertainty. (shrink)
Three challenges to a unified understanding of delusions emerge from Radden's On Delusion (2011). Here, I propose that in order to respond to these challenges, and to work towards a unifying framework for delusions, we should see delusions as arising in inference under uncertainty. This proposal is based on the observation that delusions in key respects are surprisingly like perceptual illusions, and it is developed further by focusing particularly on individual differences in uncertainty expectations.
In (Hertwig et al., 2003) Hertwig et al. draw a distinction between decisions from experience and decisions from description. In a decision from experience an agent does not have a summary description of the possible outcomes or their likelihoods. A career choice, deciding whether to back up a computer hard drive, cross a busy street, etc., are typical examples of decisions from experience. In such decisions agents can rely only on their encounters with the corresponding prospects. By contrast, an (...) agent furnished with information sources such as drug-package inserts or mutual-fund brochures—all of which describe risky prospects—will often make decisions from description. In (Hertwig et al., 2003) it is shown (empirically) that decisions from experience and decisions from description can lead to dramatically different choice behavior. Most of these results (summarized and analyzed in (Hertwig, 2009)) are concerned with the role of risk in decision making. This article presents some preliminary results concerning the role of uncertainty in decision-making. We focus on Ellsberg’s two-color problem and consider a chance setup based on double sampling. We report empirical results which indicate that decisions from description, where subjects select between a clear urn, the chance setup based on double sampling, and Ellsberg’s vague urn, are such that subjects perceive the chance setup at least as an intermediate option between clear and vague choices (and there is evidence indicating that the double sampling chance setup is seen as operationally indistinguishable from the vague urn). We then suggest how the iterated chance setup can be used in order to study decisions from experience in the case of uncertainty. (shrink)
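A double-sampling chance setup of the kind described lends itself to a quick simulation. This is a hedged sketch under my own assumptions (an urn of 100 balls whose red count is itself drawn uniformly at random), not the authors' exact protocol; the point it illustrates is that marginally a red draw is as likely as not, which is what makes such a setup comparable to Ellsberg's clear urn.

```python
import random

def draw_from_double_sampling_urn(n=100, rng=random):
    """Two-stage draw: first sample an urn composition uniformly
    (0..n red balls out of n), then sample one ball from that urn.
    Returns True for red."""
    reds = rng.randint(0, n)           # stage 1: composition
    return rng.random() < reds / n     # stage 2: ball draw

random.seed(0)
draws = [draw_from_double_sampling_urn() for _ in range(100_000)]
freq_red = sum(draws) / len(draws)     # marginal frequency, close to 0.5
```

The interesting empirical question, as the abstract notes, is whether subjects treat this objectively 50/50 two-stage device like the clear urn or like the vague one.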
Robert Proctor has argued that ignorance or non-knowledge can be fruitfully divided into at least three categories: (1) ignorance as native state or starting point; (2) ignorance as lost realm or selective choice; and (3) ignorance as strategic ploy or active construct. This chapter explores Proctor’s second category, ignorance as selective choice. When scientists investigate poorly understood phenomena, they have to make selective choices about what questions to ask, what research strategies and metrics to employ, and what language to use (...) for describing the phenomena. This chapter focuses especially on the selective choice of language for describing and categorizing phenomena in the face of uncertainty. Using several case studies from recent pollution research, I show that linguistic choices are especially significant when we have severely limited knowledge, because those choices can emphasize and highlight some aspects of our limited knowledge rather than others. These selective emphases can in turn influence societal decision making, and they can exacerbate the selectivity of our knowledge by further steering scientific research in some directions rather than others. I conclude with some suggestions for developing scientific language in socially responsible ways, even in the face of significant ignorance and uncertainty. (shrink)
Three types or levels of uncertainty monitoring are distinguished: (1) uncertainty responding but no feelings of uncertainty, (2) conscious feelings of uncertainty, (3) conscious feelings of uncertainty plus reflective awareness of what these feelings are and mean. It is hypothesized that only the first and perhaps the second occur in animals and human infants, whereas all three occur in older humans. Two possible lines of future research are also suggested.
A coherent account of the connections and contrasts between the principles of complementarity and uncertainty is developed starting from a survey of the various formalizations of these principles. The conceptual analysis is illustrated by means of a set of experimental schemes based on Mach-Zehnder interferometry. In particular, path detection via entanglement with a probe system and (quantitative) quantum erasure are exhibited to constitute instances of joint unsharp measurements of complementary pairs of physical quantities, path and interference observables. The analysis (...) uses the representation of observables as positive-operator-valued measures (POVMs). The reconciliation of complementary experimental options in the sense of simultaneous unsharp preparations and measurements is expressed in terms of uncertainty relations of different kinds. The feature of complementarity, manifest in the present examples in the mutual exclusivity of path detection and interference observation, is recovered as a limit case from the appropriate uncertainty relation. It is noted that the complementarity and uncertainty principles are neither completely logically independent nor logical consequences of one another. Since entanglement is an instance of the uncertainty of quantum properties (of compound systems), it is moot to play out uncertainty and entanglement against each other as possible mechanisms enforcing complementarity. (shrink)
Working retrospectively in an uncertain field of knowledge, physicians are engaged in an interpretive practice that is guided by counterweighted, competing, sometimes paradoxical maxims. “When you hear hoofbeats, don't think zebras” is the chief of these, the epitome of medicine's practical wisdom, its hermeneutic rule. The accumulated and contradictory wisdom distilled in clinical maxims arises necessarily from the case-based nature of medical practice and the narrative rationality that good practice requires. That these maxims all have their opposites enforces in students (...) and physicians a practical skepticism that encourages them to question their expectations, interrupt patterns, and adjust to new developments as a case unfolds. Yet medicine resolutely ignores both the maxims and the tension between the practical reasoning they represent and the claim that medicine is a science. Indeed, resolute epistemological naivete is part of medicine's accommodation to uncertainty; counterweighted, competing, apparently paradoxical (but always situational) rules enable physicians simultaneously to express and to ignore the practical reason that characterizes their practice. (shrink)
In a certain sense, uncertainty and ignorance have been recognized in science and philosophy from the time of the Greeks. However, the mathematical sciences have been dominated by the pursuit of certainty. Therefore, experiments under simplified and idealized conditions have been regarded as the most reliable source of knowledge. Normally, uncertainty could be ignored or controlled by applying probability theory and statistics. Today, however, the situation is different. Uncertainty and ignorance have moved into focus. In particular, the global character of some environmental problems has shown that the problems cannot be disregarded. (...) Therefore, scientists and technologists have in many ways come into a new situation. The Chernobyl accident is a dramatic example; however, problems such as a possible greenhouse effect, a possible reduction of the ozone layer, and so on are all of the same type. These encompass totally different problems than scientists and technologists are traditionally trained to deal with. In these cases, the standard use of statistics has to change, the burden of proof should be reversed, one should draw on different kinds of expertise, and, in general, science should be “democratized.” (shrink)
In this work I propose an analogy between Pythagoras's theorem and the logical-formal structure of Werner Heisenberg's "relations of uncertainty." The reasons that led me to propose this analogy stem from the following observation: often, when a problem of measurement precision arises in the exact sciences, it has been resolved by resorting to squaring. It seems to me that the aporia deriving from the uncertainty principle can find one (...) solution by resorting to this same stratagem. In fact, if the first classic example of the argument is the solution of the incommensurability between the catheti and the hypotenuse of a right triangle, one of the latest cases is that represented by Heisenberg's principle of uncertainty. (shrink)
There is a broadly held view that neonatologists are ethically obligated to act to override parental nontreatment decisions for imperiled premature newborns when there is a reasonable chance of a good outcome. It is argued here that three types of uncertainty undercut any such general obligation: (1) the vagueness of the boundary at which an infant’s deficits become so intolerable that death could be reasonably preferred; (2) the uncertainty about whether aggressive treatment will result in the survival of (...) a reasonably healthy child or, alternatively, the survival of a child with intolerable deficits; and (3) the inability to determine an acceptable ratio between the likelihoods of those two outcomes. It is argued that the broadly held view accords insufficient weight to the fact that newborn intensive care increases the likelihood of harm to the child by effecting survival with intolerable deficits. Though treatment may offer a reasonable chance of a good outcome, it is argued that there are situations in which neonatologists should nonetheless defer to parental nontreatment decisions. (shrink)
Uncertainty plays an important role in The General Theory, particularly in the theory of interest rates. Keynes did not provide a theory of uncertainty, but he did make some enlightening remarks about the direction he thought such a theory should take. I argue that some modern innovations in the theory of probability allow us to build a theory which captures these Keynesian insights. If this is the right theory, however, uncertainty cannot carry its weight in Keynes’s arguments. (...) This does not mean that the conclusions of these arguments are necessarily mistaken; in their best formulation they may succeed with merely an appeal to risk. (shrink)
Science can reinforce the healthy aspects of the politics of the policy process, helping to identify and further the public interest by discrediting policy options that serve only special interests and by helping to select among “science-confident” and “hedging” options. To do so, scientists must learn how to manage and communicate the degree of uncertainty in scientific understanding and prediction, lest uncertainty be manipulated to discredit science or to justify inaction. For natural resource and environmental policy, the institutional interests of government (...) agencies, as well as private interests, pose challenges of suppression, over-simplification, or distortion of scientific information. Scientists can combat these maneuvers, but must also look inward to ensure that their own special interests do not undermine the usefulness of science. (shrink)
This article addresses the impact of systematic ignorance and epistemic uncertainty upon white Western women's participation in anti-racist and transnational feminisms. I argue that a “methodology of the privileged” is necessary for effective coalition-building across racial and geopolitical inequities. Examining both self-reflexivity and racial sedition as existing methods, I conclude that epistemic uncertainty should be considered an additional strategy rather than a dilemma for the privileged.
Scientific knowledge has not stabilized in the current, early phase of research and development of nanotechnologies, creating a challenge to ‘upstream’ public engagement. Nevertheless, the idea that the public should be involved in deliberative discussions and assessments of emerging technologies at this early stage is widely shared among governmental and nongovernmental stakeholders. Many forums for public debate, including focus groups and citizen juries, have thus been organized to explore public opinions on nanotechnologies in a variety of countries over the past (...) few years. In Switzerland the Centre for Technology Assessment (TA-Swiss) organized such a citizen panel in fall 2006. Drawing from an ethnographic study of this panel, called ‘publifocus on nanotechnologies, health, and environment’, this paper looks at the ways members of a stakeholder group deal with the epistemic uncertainty in their deliberation of nanotechnologies. By exploring the statements of the participants in the stakeholder discussion group, this paper reconstructs the narratives that constitute the epistemic foundations of the participants’ evaluations of nanotechnologies. (shrink)
General Practice and Ethics explores the ethical issues faced by general practitioners in their everyday practice, addressing two central themes: the uncertainty of outcomes and effectiveness in general practice, and the changing pattern of general practitioners' responsibilities.
In this paper I argue that two domains of uncertainty should inform our strategies for making social policy on new genetic technologies. The first is biological complexity, which includes both unknown consequences on known variables and unknown unknowns. The second is value pluralism, which includes both moral conflict and moral pluralism. This framework is used to investigate policy on genetically modified food and suggests that adaptive management is required to track changes in biological knowledge of these interventions and that (...) less simplistic, polemic representations of scientific knowledge are required to permit democratic decision making. (shrink)
The disjunction effect (Tversky & Shafir, 1992) occurs when decision makers prefer option x (versus y) when knowing that event A occurs and also when knowing that event A does not occur, but they refuse x (or prefer y) when not knowing whether or not A occurs. This form of incoherence violates Savage's (1954) sure-thing principle, one of the basic axioms of the rational theory of decision making. The phenomenon was attributed to a lack of clear reasons for accepting an (...) option (x) when subjects are under uncertainty. Through a pragmatic analysis of the task and a consequent reformulation of it, we show that the effect does not depend on the presence of uncertainty, but on the introduction of non-relevant goals into the problem text, in both the well-known Gamble problem and the Hawaii problem. (shrink)
Everettian accounts of quantum mechanics entail that people branch; every possible result of a measurement actually occurs, and I have one successor for each result. Is there room for probability in such an account? The prima facie answer is no; there are no ontic chances here, and no ignorance about what will happen. But since any adequate quantum mechanical theory must make probabilistic predictions, much recent philosophical labor has gone into trying to construct an account of probability for branching selves. (...) One popular strategy involves arguing that branching selves introduce a new kind of subjective uncertainty. I argue here that the variants of this strategy in the literature all fail, either because the uncertainty is spurious, or because it is in the wrong place to yield probabilistic predictions. I conclude that uncertainty cannot be the ground for probability in Everettian quantum mechanics. (shrink)
Uncertainty relations and complementarity of canonically conjugate position and momentum observables in quantum theory are discussed with respect to some general coupling properties of a function and its Fourier transform. The question of joint localization of a particle on bounded position and momentum value sets and the relevance of this question to the interpretation of position-momentum uncertainty relations is surveyed. In particular, it is argued that the Heisenberg interpretation of the uncertainty relations can consistently be carried through (...) in a natural extension of the usual Hilbert space frame of the quantum theory. (shrink)
Environmentalists have advocated the Precautionary Principle (PP) to help guide public and private decisions about the environment. By contrast, industry and its spokesmen have opposed this. There is not one principle, but many that have been recommended for this purpose. Despite the attractiveness of a core idea in all versions of the principle—that decision-makers should take some precautionary steps to ensure that threats of serious and irreversible damage to the environment and public health do not materialize into harm—even one of (...) the most widely endorsed principles needs considerable specification and refinement before it can be used. Moreover, the PP is an approach or guide to utilizing scientific evidence in social or legal decision-making contexts. In this it does not differ in kind from other approaches to using factual information such as in the law. The law provides some models for different strategies to guide decision-making under uncertainty when factual issues cannot be resolved with certainty. These in turn can help guide the formulation of different versions of PP and help clarify some presuppositions of the principle. Once some plausible versions of PP are articulated, I suggest some applications to existing environmental problems. (shrink)
To better illuminate aspects of stress that are relevant to the moral domain, we present a definition and theoretical model of “moral stress.” Our definition posits that moral stress is a psychological state born of an individual’s uncertainty about his or her ability to fulfill relevant moral obligations. This definition assumes a self-and-others relational basis for moral stress. Accordingly, our model draws from a theory of the self (identity theory) and a theory of others (stakeholder theory) to suggest that (...) this uncertainty arises as a manager faces competing claims for limited resources from multiple stakeholders and/or across multiple role identities. We further propose that the extent to which the manager is attentive to the moral aspects of the claims (i.e., moral attentiveness) moderates these effects. We identify several consequences of managerial moral stress and discuss theoretical, empirical, and practical implications of our approach. Most importantly, we argue that this work paves an important path for considering stress through the lens of morality. (shrink)
We argue that in the decision making process required for selecting assertible vague descriptions of an object, it is practical that communicating agents adopt an epistemic stance. This corresponds to the assumption that there exists a set of conventions governing the appropriate use of labels, and about which an agent has only partial knowledge and hence significant uncertainty. It is then proposed that this uncertainty is quantified by a measure corresponding to an agent’s subjective belief that a vague (...) concept label can be appropriately used to describe a particular object. We then apply Bayesian networks to investigate, in the case when knowledge of labelling conventions is represented by an ordering or ranking of the labels according to their appropriateness, how measure values allocated to basic labels can be used to directly infer the appropriateness measure of compound expressions. (shrink)
Considering the instability of nonlinear dynamics, the deductive inference rule Modus ponens itself is not enough to guarantee the validity of reasoning sequences in the real physical world, and similar results cannot necessarily be obtained from similar causes. Some kind of stability hypothesis should be added in order to draw meaningful conclusions. Hence, the uncertainty of deductive inference appears to be like that of inductive inference, and the asymmetry between deduction and induction becomes unrecognizable such as to undermine the (...) basis for the fundamental cleavage between analytic truth and synthetic truth, as W. V. O. Quine pointed out. Induction is not inferior to deduction from a pragmatic point of view. (shrink)
There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how we cope with some of these sources of error through sophisticated modelling techniques. I show how we maintain confidence in the face of error.
Hierarchical models are commonly used for modelling uncertainty. They arise whenever there is a `correct' or `ideal' uncertainty model but the modeller is uncertain about what it is. Hierarchical models which involve probability distributions are widely used in Bayesian inference. Alternative models which involve possibility distributions have been proposed by several authors, but these models do not have a clear operational meaning. This paper describes a new hierarchical model which is mathematically equivalent to some of the earlier, possibilistic (...) models and also has a simple behavioural interpretation, in terms of betting rates concerning whether or not a decision maker will agree to buy or sell a risky investment for a specified price. We give a representation theorem which shows that any consistent model of this kind can be interpreted as a model for uncertainty about the behaviour of a Bayesian decision maker. We describe how the model can be used to generate buying and selling prices and to make decisions. (shrink)
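The betting-rate interpretation described above can be illustrated in miniature. Assuming a toy two-state setting and a finite set of candidate probability functions (my own illustrative numbers, not the paper's model), a decision maker's supremum buying price and infimum selling price for a gamble are its lower and upper expectations over that set:

```python
def buying_price(gamble, prob_set):
    """Supremum buying price for a gamble under a set of probability
    functions: the lower expectation (lower prevision)."""
    return min(sum(p * x for p, x in zip(probs, gamble)) for probs in prob_set)

def selling_price(gamble, prob_set):
    """Infimum selling price for the same gamble: the upper expectation
    (upper prevision)."""
    return max(sum(p * x for p, x in zip(probs, gamble)) for probs in prob_set)

prob_set = [(0.4, 0.6), (0.6, 0.4)]     # two candidate probability functions
gamble = (10.0, 0.0)                    # pays 10 in state 1, 0 in state 2
low = buying_price(gamble, prob_set)    # lower expectation
high = selling_price(gamble, prob_set)  # upper expectation
```

The gap between the two prices reflects the modeller's uncertainty about which probability function is the 'correct' one; a singleton set collapses both prices to the Bayesian expectation.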
This paper considers the relationship between quantum uncertainty and the problem of God. Among the issues considered are the existence and essence of God, divine action, human freedom, and personal identity. In recent discussions concerning the relative merits of science and religion, thinkers like Ian Barbour and John Haught have suggested several such credible, albeit tentative, connections between the two on the basis of the epistemological limit imposed upon human knowledge by the Heisenberg Uncertainty Principle.
A probabilistic explication is offered of equipoise and uncertainty in clinical trials. In order to be useful in the justification of clinical trials, equipoise has to be interpreted in terms of overlapping probability distributions of possible treatment outcomes, rather than point estimates representing expectation values. Uncertainty about treatment outcomes is shown to be a necessary but insufficient condition for the ethical defensibility of clinical trials. Additional requirements are proposed for the nature of that uncertainty. The indecisiveness of (...) our criteria for cautious decision-making under uncertainty creates the leeway that makes clinical trials defensible. (shrink)
A substantial body of empirical evidence shows that individuals overweight extreme events and act in conflict with the expected utility theory. These findings were the primary motivation behind the development of a rank-dependent utility theory for choice under uncertainty. The purpose of this paper is to demonstrate that some simple empirical rules of thumb for choice under uncertainty are consistent with the rank-dependent utility theory.
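The rank-dependent utility functional the abstract refers to can be sketched as follows. The particular utility and weighting functions below are illustrative assumptions, but the decision-weight construction (differences of weighted decumulative probabilities over best-to-worst ranked outcomes) is the standard Quiggin-style form.

```python
def rank_dependent_utility(outcomes, probs, u, w):
    """Rank-dependent expected utility of a prospect.

    u: utility function; w: probability weighting function with
    w(0) = 0 and w(1) = 1.  Outcomes are ranked from best to worst,
    and each receives a decision weight equal to the difference of
    weighted decumulative probabilities."""
    ranked = sorted(zip(outcomes, probs), key=lambda xp: xp[0], reverse=True)
    total, cum = 0.0, 0.0
    for x, p in ranked:
        weight = w(cum + p) - w(cum)   # decision weight of this outcome
        total += weight * u(x)
        cum += p
    return total

# With the identity weighting function, RDU reduces to expected utility.
ev = rank_dependent_utility([100, 0], [0.5, 0.5], u=lambda x: x, w=lambda q: q)

# A convex weighting function such as w(q) = q**2 underweights the best
# outcomes, capturing the overweighting of extreme (bad) events.
rdu = rank_dependent_utility([100, 0], [0.5, 0.5], u=lambda x: x, w=lambda q: q**2)
```

Simple rules of thumb of the kind the paper discusses can then be compared against this functional by varying the weighting function.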
Advance directives are useful ways to express one's wishes about end of life care, but even now most people have not completed one of the documents. David Doukas and William Reichel strongly encourage planning for end of life care. Although Planning for Uncertainty is at times fairly abstract for the general reader, it does provide useful background and practical steps.
This paper uses chronic beryllium disease as a case study to explore some of the challenges for decision-making and some of the problems for obtaining meaningful informed consent when the interpretation of screening results is complicated by their probabilistic nature and is clouded by empirical uncertainty. Although avoidance of further beryllium exposure might seem prudent for any individual whose test results suggest heightened disease risk, we will argue that such a clinical precautionary approach is likely to be a mistake. (...) Instead, advice on the interpretation of screening results must focus not on risk per se, but on avoidable risk, and must be carefully tailored to the individual. These points are of importance for individual decision-making, for informed consent, and for occupational health. (shrink)
Common sense arguments are practically always about incomplete and uncertain information. We distinguish two aspects or kinds of uncertainty. The one is defined as a person’s uncertainty about the truth of a sentence. The other is defined as a person’s uncertainty about his assessment of the truth of a sentence. In everyday argumentation we are often faced with both kinds of uncertainty, which should be distinguished to avoid misunderstandings among discussants. The paper presents (...) a probabilistic account of both kinds of uncertainty in the framework of coherence. Furthermore, intuitions about the evaluation of the strength of arguments are explored. Both reasoning about uncertainty and the development of a theory of argument strength are central for a realistic theory of rational argumentation. (shrink)
Commercialization of genetically modified organisms (GMOs) has sparked profound controversies concerning adequate approaches to risk regulation. Scientific uncertainty and ambiguity, omitted research areas, and lack of basic knowledge crucial to risk assessment have become apparent. The objective of this article is to discuss the policy and practical implementation of the Precautionary Principle. A major conclusion is that the void in scientific understanding concerning risks posed by secondary effects and the complexity of cause-effect relations warrant further research. Initiatives to approach the acceptance (...) or rejection of a number of risk-associated hypotheses are badly needed. Further, since scientific advice plays a key role in GMO regulations, scientists have a responsibility to address and communicate uncertainty to policy makers and the public. Hence, the acceptance of uncertainty is not only a scientific issue, but is related to public policy and involves an ethical dimension. (shrink)
We discuss several features of coherent choice functions, where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal form decision problems under uncertainty, where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets (...) of probabilities. We axiomatize the theory of choice functions and show these axioms are necessary for coherence. The axioms are sufficient for coherence using a set of probability/almost-state-independent utility pairs. We give sufficient conditions under which a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility. (shrink)
Taking scientific error and uncertainty seriously. Book review by Douglas Allchin (Minnesota Center for the Philosophy of Science, University of Minnesota, Minneapolis, MN 55455, USA), Metascience, pp. 1-4, DOI 10.1007/s11016-011-9628-z.
There is a long history of using logic to model the interpretation of indirect speech acts. Classical logical inference, however, is unable to deal with the combinations of disparate, conflicting, uncertain evidence that shape such speech acts in discourse. We propose to address this by combining logical inference with probabilistic methods. We focus on responses to polar questions with the following property: they are neither yes nor no, but they convey information that can be used to infer such an answer (...) with some degree of confidence, though often not with enough confidence to count as resolving. We present a novel corpus study and associated typology that aims to situate these responses in the broader class of indirect question–answer pairs (IQAPs). We then model the different types of IQAPs using Markov logic networks, which combine first-order logic with probabilities, emphasizing the ways in which this approach allows us to model inferential uncertainty about both the context of utterance and intended meanings. (shrink)
This paper offers a modified version of the certainty equivalence (CE) theory of utility for uncertain prospects and a new set of axioms as its basis. It shows that the CE and the von Neumann-Morgenstern (NM) approaches to uncertainty are opposite in spirit: The CE approach represents a flight from the world of uncertainty to the rules of certainty while the NM approach represents a flight from the world of certainty to one of uncertainty. The two approaches (...) differ even in their treatment of compound prospects and their actuarially identical simple counterparts. (shrink)
Quantum observables can be identified with vector fields on the sphere of normalized states. Consequently, the uncertainty relations for quantum observables become geometric statements. In this Letter, the familiar uncertainty relation follows from the following stronger statement: of all parallelograms with given sides, the rectangle has the largest area.
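The geometric statement invoked here is elementary and easy to check numerically; with side lengths a and b and angle theta, the area is a·b·|sin(theta)|, which is maximal at theta = pi/2. A minimal sketch (function names are mine):

```python
import math

def parallelogram_area(a, b, theta):
    """Area of a parallelogram with side lengths a, b and angle theta
    between them: a * b * |sin(theta)|."""
    return a * b * abs(math.sin(theta))

# With the side lengths fixed, scan over angles: the area peaks at
# theta = pi/2, i.e. for the rectangle.
areas = [parallelogram_area(2.0, 3.0, k * math.pi / 10) for k in range(1, 10)]
max_area = max(areas)   # attained at k = 5, theta = pi/2
```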
The prestige of science, derived from its claims to certainty, has adversely affected the humanities. There is, in fact, a “politics of certainty”. Our ability to predict events in a limited sphere has been idealized, engendering dangerous illusions about our power to control nature and eliminate time. In addition, the perception and propagation of science as a bearer of certainty has served to legitimate harmful forms of social, sexual, and political power. Yet, as Ilya Prigogine has argued, renewed attention to (...) the irreducible reality of time has brought us to “the end of certainty”. As we enter the age of uncertainty, there is disagreement about how science should be understood and communicated. Some scientists cling to the ideal of certainty, while others emphasize the creative potential of spontaneity, novelty, and surprise. (shrink)
One of the key roles of the English National Institute for Health and Clinical Excellence (NICE) is technology appraisal. This essentially involves evaluating the cost effectiveness of pharmaceutical products and other technologies for use within the National Health Service. Based on a content analysis of key documents which shed light on the nature of appraisals, this paper draws attention to the multiple layers of uncertainty and complexity which are latent within the appraisal process, and the often socially constructed mechanisms for tackling these. Epistemic assumptions, bounded rationality and more explicitly relational forms of managing knowledge are applied to this end. These findings are discussed in the context of the literature highlighting the inherently social process of regulation. A framework is developed which posits the various forms of uncertainty, and responses to these, as potential conduits of regulatory bias—in need of further research. That NICE’s authority is itself regulated by other actors within the regulatory regime, particularly the pharmaceutical industry, exposes it to the threat of regulatory capture. Following Lehoux, it is concluded that a more transparent and reflexive format for technological appraisals is necessary. This would enable a more robust, defensible form of decision-making and moreover enable NICE to preserve its legitimacy in the midst of pressures which threaten this.
The purpose of this paper is to solve a serious problem for the projection postulate involving the time-energy uncertainty relation. The problem was recently raised by Teller, who believes that the problem is insoluble and, consequently, that the projection postulate should no longer be regarded as a serious focus for interpretive investigation.
This paper addresses the question of how policy decisions under uncertainty depend on the underlying welfare concept. We study three different welfare measures: the first is directly based on the ex ante (expected) utility of a representative consumer, whereas the second relies on an ex ante and the third on an ex post valuation of policy changes compared to the status quo. We show that decisions based on these measures coincide if and only if risk-neutral expected utility maximization is applied. Differences between the decisions are analyzed for both risk-averse expected utility maximization and the MaxiMin criterion. For risk-averse decision makers, differences between the first and the second concept arise if the absolute risk aversion of the decision maker is not constant in income. For risk aversion and the MaxiMin criterion, the effort levels to provide a public good based on an optimization of ex post utility changes exceed those based on the first or second concept. Implications for environmental policy decisions based on the concepts of abatement costs and benefits from abatement are discussed.
This article explores the question of how scientific uncertainty can be managed in medical decision making using the Advisory Committee on Immunization Practices as a case study. It concludes that where a high degree of technical consensus exists about the evidence and data, decision makers act according to a clear decision rule. If a high degree of technical consensus does not exist and uncertainty abounds, the decision will be based on a variety of criteria, including readily available resources, decision-process constraints, and the available knowledge base, among other things. Decision makers employ a variety of heuristic devices and techniques, thereby employing a pragmatic approach to uncertainty in medical decision making. The article concludes with recommendations for managing scientific uncertainty in medical decision making.
Intuitive predictions and judgements under uncertainty are often mediated by judgemental heuristics that sometimes lead to biases. Our micro-developmental study suggests that a presumption of rationality is justified for adult subjects, in so far as their systematic judgemental biases appear to be due to a specific executive-inhibition failure in working memory, and not necessarily to a lack of understanding of the fundamental principles of probability. This hypothesis was tested using an experimental procedure in which 60 adult subjects were trained to inhibit the classical conjunction bias on a frequency judgement task derived from Tversky and Kahneman's work. Pre- and post-test performance was assessed via a probability judgement task. The data indicated a training effect, suggesting that subjects traditionally labelled as "irrational" with respect to the classical rules of inductive reasoning are in fact "inefficient inhibitors". These findings are discussed in terms of a polymorphous view of rationality.
Although we endorse the primacy of uncertainty in reasoning, we argue that a probabilistic framework cannot model the fundamental skill of proof administration. Furthermore, we are skeptical about the assumption that standard probability calculus is the appropriate formalism to represent human uncertainty. There are other models up to this task, so let us not repeat the excesses of the past.
The "prevailing opinion" among decision theorists, according to John Harsanyi, is to use the Bayesian rule, even in situations of uncertainty. I want to argue that the prevailing opinion is wrong, at least in the case of societal risks under uncertainty. Admittedly Bayesian rules are better in many cases of individual risk or certainty. (Both Bayesian and maximin strategies are sometimes needed.) Although I shall not take the time to defend all these points in detail, I shall argue (1) that there are compelling reasons for rejecting Harsanyi's defense of the Bayesian strategy under uncertainty; (2) that it is more rational, in specific types of situations, to prefer the maximin strategy; and (3) that calibrating expert opinions is superior to using the equiprobability assumption or subjective probabilities.
A cornerstone of game theory is backward induction, whereby players reason backward from the end of a game in extensive form to the beginning in order to determine what choices are rational at each stage of play. Truels, or three-person duels, are used to illustrate how the outcome can depend on (1) the evenness/oddness of the number of rounds (the parity problem) and (2) uncertainty about the endpoint of the game (the uncertainty problem). Since there is no known endpoint in the latter case, an extension of the idea of backward induction is used to determine the possible outcomes. The parity problem highlights the lack of robustness of backward induction, but it poses no conflict between foundational principles. On the other hand, two conflicting views of the future underlie the uncertainty problem, depending on whether the number of rounds is bounded (the players invariably shoot from the start) or unbounded (they may all cooperate and never shoot, despite the fact that the truel will end with certainty and therefore be effectively bounded). Some real-life examples, in which destructive behavior sometimes occurred and sometimes did not, are used to illustrate these differences, and some ethical implications of the analysis are discussed.
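Backward induction itself is mechanical enough to sketch in code. The following is a minimal illustration on a hypothetical two-stage game, not the truel analyzed in the article; the node layout, payoffs, and names are assumptions made for the example:

```python
def backward_induct(node):
    """Solve an extensive-form game by backward induction.

    Leaf nodes carry a payoff list; internal nodes name the mover and
    map each action to a subtree. Returns the payoff vector of rational
    play and the action chosen at every decision node.
    """
    if "payoffs" in node:
        return node["payoffs"], {}
    best = None
    choices = {}
    for action, child in node["actions"].items():
        payoffs, sub_choices = backward_induct(child)
        choices.update(sub_choices)
        # The mover keeps whichever action maximizes her own payoff.
        if best is None or payoffs[node["player"]] > best[1][node["player"]]:
            best = (action, payoffs)
    choices[node["name"]] = best[0]
    return best[1], choices

# A hypothetical two-round game: player 0 can end the game at once
# ("take") or pass the move to player 1, who faces the same choice.
game = {
    "name": "start", "player": 0,
    "actions": {
        "take": {"payoffs": [2, 0]},
        "pass": {
            "name": "second", "player": 1,
            "actions": {
                "take": {"payoffs": [1, 3]},
                "pass": {"payoffs": [3, 2]},
            },
        },
    },
}

payoffs, choices = backward_induct(game)
# Player 1 would take at the second node, so player 0 takes immediately.
```

Adding or removing a round changes which "take" is rational at the root, which is exactly the parity sensitivity the abstract describes.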
The entropy-reduction hypothesis claims that the cognitive processing difficulty on a word in sentence context is determined by the word's effect on the uncertainty about the sentence. Here, this hypothesis is tested more thoroughly than has been done before, using a recurrent neural network for estimating entropy and self-paced reading for obtaining measures of cognitive processing load. Results show a positive relation between reading time on a word and the reduction in entropy due to processing that word, supporting the entropy-reduction hypothesis. Although this effect is independent of the effect of word surprisal, we find no evidence that these two measures correspond to cognitively distinct processes.
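As a toy illustration of the quantity being tested (not the recurrent-network estimator used in the study; the four-sentence language is an assumption made for the example), entropy reduction is the drop in sentence entropy caused by reading one more word:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A hypothetical language of four equally likely sentences.
prior = {
    "the cat sat": 0.25, "the cat ran": 0.25,
    "the dog sat": 0.25, "the dog ran": 0.25,
}

def condition(dist, prefix):
    """Renormalize the distribution over sentences consistent with a prefix."""
    kept = {s: p for s, p in dist.items() if s.startswith(prefix)}
    total = sum(kept.values())
    return {s: p / total for s, p in kept.items()}

# Reading the word "cat" rules out half the sentences:
# entropy falls from 2 bits to 1 bit, a reduction of 1 bit.
reduction = entropy(prior.values()) - entropy(condition(prior, "the cat").values())
```

The hypothesis then predicts longer reading times on words whose `reduction` is larger.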
In animals' natural lives, uncertainty is normal; and certainty, exceptional. Evaluating ambiguous information is essential for survival: Does what is seen, heard, or smelled mean danger? Does that gesture mean aggression or fear? Is he confident or uncertain? If they are conscious of anything, the content of animals' awareness probably includes crucial uncertainties, both their own and those of others.
We give an example from the theory of Markov decision processes which shows that the “optimism in the face of uncertainty” heuristic may fail to make any progress. This is due to the impossibility of falsifying a belief that a (transition) probability is larger than 0. Our example shows the utility of Popper’s demand for falsifiability of hypotheses in the area of artificial intelligence.
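A minimal sketch of this failure mode, under assumptions chosen purely for illustration (Laplace-smoothed beliefs, an undiscounted objective, and a hypothetical safe alternative worth 0.5): because finitely many failures can never drive the believed success probability to zero, the optimistic agent never abandons a hopeless action.

```python
def p_hat(successes, failures):
    """Laplace-smoothed estimate of a transition probability.

    Strictly positive after any finite number of failures, so the
    belief 'this probability is larger than 0' is never falsified.
    """
    return (successes + 1) / (successes + failures + 2)

def undiscounted_value(p):
    # With an undiscounted objective, any p > 0 makes eventual success
    # certain, so the believed value of "keep trying" never shrinks.
    return 1.0 if p > 0 else 0.0

SAFE_REWARD = 0.5  # hypothetical certain alternative action

# However often the hopeless action has failed, the agent still prefers it.
choices = []
for failures in (0, 10, 10_000):
    value_hopeless = undiscounted_value(p_hat(0, failures))
    choices.append("hopeless" if value_hopeless > SAFE_REWARD else "safe")
```

No finite run of failures changes the decision, which is the unfalsifiability the abstract points to.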
Bioethicists have articulated an ideal of shared decision making between physician and patient, but in doing so the role of clinical uncertainty has not been adequately confronted. In the face of uncertainty about the patient's prognosis and the best course of treatment, many physicians revert to a model of nondisclosure and nondiscussion, thus closing off opportunities for shared decision making. Empirical studies suggest that physicians find it more difficult to adhere to norms of disclosure in situations where there is substantial uncertainty. They may be concerned that acknowledging their own uncertainty will undermine patient trust and create additional confusion and anxiety for the patient. We argue, in contrast, that effective disclosure will protect patient trust in the long run and that patients can manage information about uncertainty. In situations where there is substantial uncertainty, extra vigilance is required to ensure that patients are given the tools and information they need to participate in cooperative decision making about their care.
In the context of uncertainty and anxiety regarding the role of leadership and management, this article explores the relationship between Mintzberg’s concept of the distinction between the engaged and disconnected manager, Heidegger’s notion of authentic and inauthentic being, and Benner and Wrubel’s distinction between two forms of professional practice attunement: an attunement to technique and an attunement to lived experience. It argues that while Mintzberg outlines the distinction between engaged and disengaged management, he does not develop an understanding of the conditions which lead a manager to be either engaged or disconnected. The role of anxiety in Heidegger’s distinction between authentic and inauthentic being and the role of stress and worry in Benner and Wrubel’s distinction between an attunement to technique and an attunement to the lived experience of professional practice provide the basis for understanding the relationship between engaged and disconnected management. After developing the theoretical perspectives of Mintzberg, Heidegger, Benner and Wrubel, two examples are presented: one of a manager who experiences anxiety as an opportunity for greater attunement to lived experience, and one of a manager who experiences anxiety as a condition for disconnection and detachment from the lived experience of his leadership practice.
Keynesian concepts of probability and uncertainty emphasize the basis of knowledge available to economic decision makers. Conditions of uncertainty, which involve missing evidence or doubtful arguments, are distinguished from probable risk. Beyond this, on the basis of the claim that the future is yet to be created, some authors argue for further distinctions among different kinds of uncertainty. The paper reviews this particular argument, distinguishing it from Keynesian uncertainty theory generally, and provides a critique of its implication that, due to innovation, objective distributions of possible events do not generally exist at the time of economic decisions.
Boyer & Lienard (B&L) elegantly elaborate the links between normal motivational systems and psychopathology and address the evolutionary and cultural context of ritualized behaviors. However, their model omits a key property of the security-motivation (hazard-precaution) system, and this property suggests that ritualized behavior may generate an alternate satiety signal by substituting, in place of uncertainty, a problem that is verifiably solvable. (Published Online February 8 2007).
The future only exists in the now. Such metaphors are the foundation of sensemaking in the following article. Molderez's work represents a unique form of European postmodern metaphorical thought surrounding complexity and sensemaking. Beneath the syllogism and tropes lies a poetic flow relating uncertainty to the potentiality for action. Two intersecting fences and a forward-moving force create a box, the only escape from which is a perpendicular vector. Pragmatic management advice this piece is not, but as a synthesis of free-flowing ideas it contains an inspiration for new insight and creative potential.
People are less willing to accept bets about an event when they do not know the true probability of that event. Such uncertainty aversion has been used to explain certain economic phenomena. This paper considers how far standard private-information explanations (with strategic decisions to accept bets) can go in explaining phenomena attributed to uncertainty aversion. It shows that if two individuals have different prior beliefs about some event, and two-sided private information, then each individual's willingness to bet will exhibit a bid-ask spread property. Each individual is prepared to bet for the event at sufficiently favorable odds, and against it at sufficiently favorable odds, but there is an intermediate range of odds where each individual is not prepared to bet either way. This is only true if signals are distributed continuously and sufficiently smoothly. It is not true, for example, in a finite signal model.
Using a recently introduced entropy-like measure of uncertainty of quantum mechanical states, the problem of hidden variables is redefined in the operator algebraic framework of quantum mechanics in the following way: if A, B are von Neumann algebras and E(A), E(B) their state spaces respectively, then (B, E(B)) is said to be an entropic hidden theory of (A, E(A)) via a positive map L from B onto A if for all states φ ∈ E(A) the composite state φ∘L ∈ E(B) can be obtained as an average over states in E(B) that have smaller entropic uncertainty than the entropic uncertainty of φ. It is shown that if L is a Jordan homomorphism then (B, E(B)) is not an entropic hidden theory of (A, E(A)) via L.