This paper identifies an overdetermination problem faced by the non-reductive dispositional property account of disposition ascriptions. Two possible responses to the problem are evaluated and both are shown to have serious drawbacks. Finally, it is noted that the traditional conditional analysis of dispositional ascriptions escapes the original difficulty.
Two recurrent arguments levelled against the view that enduring objects survive change are examined within the framework of the B-theory of time: the argument from Leibniz's Law and the argument from Instantiation of Incompatible Properties. Both arguments are shown to be question-begging and hence unsuccessful.
Theoria, the international Swedish philosophy journal, was founded in 1935. Its contributors in the first 75 years include the major Swedish philosophers from this period and in addition a long list of international philosophers, including A. J. Ayer, C. D. Broad, Ernst Cassirer, Hector Neri Castañeda, Arthur C. Danto, Donald Davidson, Nelson Goodman, R. M. Hare, Carl G. Hempel, Jaakko Hintikka, Saul Kripke, Henry E. Kyburg, Keith Lehrer, Isaac Levi, David Lewis, Gerald MacCallum, Richard Montague, Otto Neurath, Arthur N. Prior, W. V. Quine, Nicholas Rescher, Ernest Sosa, Robert C. Stalnaker, P. F. Strawson, Patrick Suppes, Johan van Benthem, Georg Henrik von Wright and many others. Hempel's confirmation paradoxes, Ross's deontic paradox, Montague's universal grammar and Lindström's theorem are among the contributions to philosophy that were first published in Theoria.
Formal representations of values and norms are employed in several academic disciplines and specialties, such as economics, jurisprudence, decision theory, and social choice theory. Sven Ove Hansson examines with formal precision such foundational issues as the values of wholes and the values of their parts, the connections between values and norms, how values can be decision-guiding, and the structure of normative codes. Models of change in both preferences and norms are offered, as well as a new method to base the logic of norms on that of preferences. Hansson has developed a unified formal representation of values and norms that reflects both their static and their dynamic properties. This formalized treatment, carried out in terms of both informal value theory and precise logical detail, will contribute to the clarification of certain issues in the basic philosophical theory of values and norms.
As part of the conference commemorating Theoria's 75th anniversary, a round table discussion on philosophy publishing was held in Bergendal, Sollentuna, Sweden, on 1 October 2010. Bengt Hansson was the chair, and the other participants were nine editors-in-chief of philosophy journals: Hans van Ditmarsch (Journal of Philosophical Logic), Pascal Engel (Dialectica), Sven Ove Hansson (Theoria), Vincent Hendricks (Synthese), Søren Holm (Journal of Medical Ethics), Pauline Jacobson (Linguistics and Philosophy), Anthonie Meijers (Philosophical Explorations), Henry S. Richardson (Ethics) and Hans Rott (Erkenntnis).
Should Probabilistic Design Replace Safety Factors? Journal article, Philosophy & Technology, volume 24, number 2, pages 151-168, DOI 10.1007/s13347-010-0003-6 (Online ISSN 2210-5441, Print ISSN 2210-5433). Authors: Neelke Doorn, Department of Technology, Policy and Management, Delft University of Technology, PO Box 5015, 2600 GA Delft, The Netherlands; Sven Ove Hansson, Department of Philosophy and the History of Technology, Royal Institute of Technology, Teknikringen 78 B, 100 44 Stockholm, Sweden.
A conceptual analysis of falsificationism is performed, in which the central falsificationist thesis is divided into several components. Furthermore, an empirical study of falsification in science is reported, based on the 70 scientific contributions that were published as articles in Nature in 2000. Only one of these articles conformed to the falsificationist recipe for successful science, namely the falsification of a hypothesis that is more accessible to falsification than to verification. It is argued that falsificationism relies on an incorrect view of the nature of scientific inquiry and that it is, therefore, not a tenable research methodology.
New technologies and practices, such as drug testing, genetic testing, and electronic surveillance, infringe upon the privacy of workers in the workplace. We argue that employees have a prima facie right to privacy, but this right can be overridden by competing moral principles that follow, explicitly or implicitly, from the contract of employment. We propose a set of criteria for when intrusions into an employee's privacy are justified. Three types of justification are specified, namely those that refer to the employer's interests, to the interests of the employee her- or himself, and to the interests of third parties such as customers and fellow workers. For each of these three types, sub-criteria are proposed that can be used to determine whether a particular infringement of an employee's privacy is morally justified or not.
The prescriptive has both an objective and a subjective interpretation. In the objective sense, what one ought to do depends on what is actually true. In the subjective sense it depends on what one believes to be true. Ordinary usage seems to vacillate between these two interpretations. An example (the indecisive terrorist) is used to show that a subjective ought statement can have a determinate truth-value in situations where the corresponding objective ought statement has no truth-value, not even an unknowable one. Therefore the subjective ought is not definable in terms of the objective ought. However, definability in the other direction is not excluded by this argument.
Mainstream moral theories deal with situations in which the outcome of each possible action is well-determined and knowable. In order to make ethics relevant for problems of risk and uncertainty, moral theories have to be extended so that they cover actions whose outcomes are not determinable beforehand. One approach to this extension problem is to develop methods for appraising probabilistic combinations of outcomes. This approach is investigated and shown not to solve the problem. An alternative approach is then developed. Its starting-point is that everyone has a prima facie moral right not to be exposed to risk. However, this right can be overridden if the risk-exposure is part of an equitable system for risk-taking that works to the advantage of the individual risk-exposed person.
The purpose of this presentation is to introduce both the concept of risk and the precautionary principle, which is a major policy principle in present-day risk management. Since risk has been the subject of many misconceptions, I will do this in large part by criticizing seven views on risk that I believe to have caused considerable confusion among both scientists and policy-makers. But before looking at the seven myths of risk, let us begin with the basic issue of defining "risk". The word "risk" often refers, rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In addition, the word has several more specialized meanings. Let me illustrate this by making a few statements about the single most important preventable health hazard in non-starving countries. First: "Lung cancer is one of the major risks that affect smokers." Here, we use "risk" in the following sense: (1) risk = an unwanted event which may or may not occur.
The paper introduces ten open problems in belief revision theory, related to the representation of the belief state, to different notions of degrees of belief, and to the nature of change operations. It is argued that these problems are all issues in philosophical logic, in the strong sense of requiring inputs from both logic and philosophy for their solution.
A general theory of coherence is proposed, in which systemic and relational coherence are shown to be interdefinable. When this theory is applied to sets of sentences, it turns out that logical closure obscures the distinctions that are needed for a meaningful analysis of coherence. It is concluded that references to "all beliefs" in coherentist phrases such as "all beliefs support each other" have to be modified so that merely derived beliefs are excluded. Therefore, in order to avoid absurd conclusions, coherentists have to accept a weak version of epistemic priority, that sorts out merely derived beliefs. Furthermore, it is shown that in belief revision theory, coherence cannot be adequately represented by logical closure, but has to be represented separately.
In non-technical contexts, the word “risk” refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In technical contexts, the word has many uses and specialized meanings. The most common ones are the following.
This text is a non-technical overview of modern decision theory. It is intended for university students with no previous acquaintance with the subject, and was primarily written for the participants of a course on risk analysis at Uppsala University in 1994.
This article argues that, contrary to the received view, prioritarianism and egalitarianism are not jointly incompatible theories in normative ethics. By introducing a distinction between weighing and aggregating, the authors show that the seemingly conflicting intuitions underlying prioritarianism and egalitarianism are consistent. The upshot is a combined position, equality-prioritarianism, which takes both prioritarian and egalitarian considerations into account in a technically precise manner. On this view, the moral value of a distribution of well-being is a product of two factors: the sum of all individuals' priority-adjusted well-being, and a measure of the equality of the distribution in question. Some implications of equality-prioritarianism are considered.
In the last half century, decision theory has had a deep influence on moral theory. Its impact has largely been beneficial. However, it has also given rise to some problems, two of which are discussed here. First, issues such as risk-taking and risk imposition have been left out of ethics since they are believed to belong to decision theory, and consequently the ethical aspects of these issues have not been treated in either discipline. Secondly, ethics has adopted the decision-theoretical idea that action-guidance has to be based on cause–effect or means–ends relationships between an individual action and its possible outcomes. This is problematic since the morally relevant connections between an action and future events are not fully covered by such relationships. In response to the first problem it is proposed that moral theory should deal directly and extensively with issues such as risk-taking and risk imposition, thereby intruding unabashedly into the traditional territory of decision theory. As a partial response to the second problem it is proposed that moral theorizing should release itself from the decision-theoretical requirement that the moral status of an action has to be derivable from the consequences (or other properties) that are assignable to that action alone. In particular, the effects that an action can have in combination with other actions by the same or other agents are valid arguments in an action-guiding moral discourse, even if its contribution to these combined consequences cannot be isolated and evaluated separately.
Receiving information about threats to one’s health can contribute to anxiety and depression. In contemporary medical ethics there is considerable consensus that patient autonomy, or the patient’s right to know, in most cases outweighs these negative effects of information. Worry about the detrimental effects of information has, however, been voiced in relation to public health more generally. In particular, information about uncertain threats to public health, from, for example, chemicals, is said to entail social costs that have not been given due consideration. This criticism implies a consequentialist argument for withholding such information from the public in their own best interest. In evaluating the argument for this kind of epistemic paternalism, the consequences of making information available must be compared to the consequences of withholding it. Consequences that should be considered include epistemic effects, psychological effects, effects on private decisions, and effects on political decisions. After giving due consideration to the possible uses of uncertain information and rebutting the claims that uncertainties imply small risks and that they are especially prone to entail misunderstandings and anxiety, it is concluded that there is a strong case against withholding of information about uncertain threats to public health.
Mainstream risk analysis deviates in at least two important respects from the rationality ideal of mainstream economics. First, expected utility maximization is not applied in a consistent way. It is applied to endodoxastic uncertainty, i.e. the uncertainty (or risk) expressed in a risk assessment, but in many cases not to metadoxastic uncertainty, i.e. uncertainty about which of several competing assessments is correct. Instead, a common approach to metadoxastic uncertainty is to only take the most plausible assessment into account. This will typically lead to risk-prone deviations from risk-neutrality. Secondly, risks and benefits for different persons are added to form a total value of risk. Such calculations are used to support the view that one should accept being exposed to a risk if it brings greater benefits for others. This is in stark contrast to modern Paretian welfare economics, which refrains from interindividual comparisons and does not require people to accept a disadvantage because it brings a larger advantage for others. (Published Online July 11 2006).
Cost–benefit analysis (CBA) is much more philosophically interesting than has in general been recognized. Since it is the only well-developed form of applied consequentialism, it is a testing-ground for consequentialism and for the counterfactual analysis that it requires. Ten classes of philosophical problems that affect the practical performance of cost–benefit analysis are investigated: topic selection, dependence on the decision perspective, dangers of super synopticism and undue centralization, prediction problems, the indeterminateness of our control over future decisions, the need to exclude certain consequences for moral reasons, bias in the delimitation of consequences, incommensurability of consequences, difficulties in defending the essential requirement of transferability across contexts, and the normatively questionable but equally essential assumption of interpersonal compensability. (Published Online July 31 2007).
This article is an attempt at a systematic account of decision making under greater uncertainty than what traditional, mathematically oriented decision theory can cope with. Four components of great uncertainty are distinguished: (1) the identity of the options is not well determined (uncertainty of demarcation); (2) the consequences of at least some option are unknown (uncertainty of consequences); (3) it is not clear whether information obtained from others, such as experts, can be relied on (uncertainty of reliance); and (4) the values relevant for the decision are not determined with sufficient precision (uncertainty of values). Some possible strategy types are proposed for each of these components. Decisions related to environmental issues are used to illustrate the proposals.
Discussions of risk contain logical and argumentative fallacies that are specific to the subject-matter. Ten such fallacies are identified that can commonly be found in public debates on risk. They are named as follows: the sheer size fallacy, the converse sheer size fallacy, the fallacy of naturalness, the ostrich's fallacy, the proof-seeking fallacy, the delay fallacy, the technocratic fallacy, the consensus fallacy, the fallacy of pricing, and the infallibility fallacy.
Clear-cut cases of decision-making under risk (known probabilities) are unusual in real life. The gambler’s decisions at the roulette table are as close as we can get to this type of decision-making. In contrast, decision-making under uncertainty (unknown probabilities) can be exemplified by a decision whether to enter a jungle that may contain unknown dangers. Life is usually more like an expedition into an unknown jungle than a visit to the casino. Nevertheless, it is common in decision-supporting disciplines to proceed as if reasonably reliable probability estimates were available for all possible outcomes, i.e. as if the prevailing epistemic conditions were analogous to those of gambling at the roulette table. This mistake can be called the tuxedo fallacy. It is argued that traditional engineering practices such as safety factors and multiple safety barriers avoid this fallacy and that they therefore manage uncertainty better than probabilistic risk analysis (PRA). PRA is a useful tool, but it must be supplemented with other methods in order not to limit the analysis to dangers that can be assigned meaningful probability estimates.
In economic analysis, it is usually assumed that each individual's well-being (mental welfare) depends on her or his own resources (material welfare). A typology is provided of the ways in which one person's well-being may depend on the material resources of other persons. When such dependencies are taken into account, standard Paretian analysis of welfare needs to be modified. Pareto efficiency on the level of material resources need not coincide with Pareto efficiency on the level of well-being. A change in economic conditions that is Pareto efficient in the standard sense, i.e., with respect to material resources, may nevertheless sacrifice one person's well-being to that of another. It is shown that under plausible assumptions, Pareto efficiency on the level of well-being may require the reduction of inequality on the level of material resources.
Two types of measures of probabilistic uncertainty are introduced and investigated. Dispersion measures report how diffused the agent’s second-order probability distribution is over the range of first-order probabilities. Robustness measures reflect the extent to which the agent’s assessment of the prior (objective) probability of an event is perturbed by information about whether or not the event actually took place. The properties of both types of measures are investigated. The most obvious type of robustness measure is shown to coincide with one of the major candidates for a dispersion measure, the mean square deviation measure.
It is almost universally acknowledged that risks have to be weighed against benefits, but there are different ways to perform the weighing. In conventional risk analysis, collectivist risk-weighing is the standard. This means that an option is accepted if the sum of all individual benefits outweighs the sum of all individual risks. In practices originating in clinical medicine, such as ethical appraisals of clinical trials, individualist risk-weighing is the standard. This implies a much stricter criterion for risk acceptance, namely that the risk to which each individual is exposed should be outweighed by benefits for that same individual. The different choices of risk-weighing methods in different policy areas seem to have emerged from traditional thought patterns and social relations, rather than from explicit deliberations on possible justifications for the alternative ways to weigh risks against benefits. It is not obvious how the prevalent differences in risk-weighing practices can be reconstructed in terms of consistent underlying principles of preventive health or social priority-setting.
Although stability is a central notion in several academic disciplines, the parallels remain unexplored since previous discussions of the concept have been almost exclusively subject-specific. In the literature we have found three basic concepts of stability, which we call constancy, robustness, and resilience. They are all found in both the natural and the social sciences. To analyze the three concepts we introduce a general formal framework in which stability relates to transitions between states. It can then be shown that robustness is a limiting case of resilience, whereas neither constancy nor resilience can be defined in terms of the other. Hence, there are two basic concepts of stability, both of which are used in both the social and the natural sciences. This congruence in the concepts of stability is of particular interest for endeavours to construct models that represent both natural and social phenomena.
Research is subject to more stringent ethical requirements than most other human activities, and a procedure that is otherwise allowed may be forbidden in research. Hence, risk-taking is more restricted in scientific research than in most non-research contexts, and privacy is better protected in scientific questionnaires than in marketing surveys. Potential arguments for this difference are scrutinized. The case in its favour appears to be weak. A stronger case can be made in favour of a difference in the opposite direction: If perilous or otherwise problematic activities have to be performed it is usually better to perform them in a research context where they are properly evaluated so that guidance is obtained for the future. However, retreating from current ethical demands on research is not a desirable direction to go. Instead, research ethics can serve to inspire the introduction of more stringent ethical principles in other social sectors.
The ideal world semantics of standard deontic logic identifies our obligations with how we would act in an ideal world. However, to act as if one lived in an ideal world is bad moral advice, associated with wishful thinking rather than well-considered moral deliberation. Ideal world semantics gives rise to implausible logical principles, and the metaphysical arguments that have been put forward in its favour turn out to be based on a too limited view of truth-functional representation. It is argued that ideal world semantics should be given up in favour of other, more plausible uses of possible worlds for modelling normative subject-matter.
Situationist deontic logic is a model of that fraction of normative discourse which refers to only one situation and one set of alternatives. As we can see from a whole series of well-known paradoxes, standard deontic logic (SDL) is seriously mistaken even at the situationist level. In this paper it is shown how a more realistic deontic logic can be based on the assumption that prescriptive predicates satisfy the property of contranegativity. A satisfactory account of situation-specific norms is a necessary prerequisite for a successful treatment of more complex normative structures.
In this paper we will argue: (1) that scholars, regardless of their normative stance for or against genetic enhancement, have a moral/professional obligation to hold on to a realistic and up-to-date conception of genetic enhancement; (2) that there is an unwarranted hype surrounding the issue of genetic enhancement in general, and gene doping in particular; and (3) that this hype is, at least partly, due to a simplistic and reductionist conception of genetics often adopted by bioethicists.
Elective surgery can be cancelled when resources are overwhelmed by emergency cases. We hypothesized that, on psychological grounds, such cancellations are also followed by inferior clinical results, and we conducted a retrospective survey of patients following joint replacement surgery. Sixty patients whose operations had been administratively cancelled during an 18-month period, and with six months of follow-up, were identified and compared with 60 matched patients who had the same type of surgery without prior cancellation. All patients received questionnaires on complications and on visual analogue scale (VAS) assessment of subjective wellbeing and quality of life (QoL) at follow-up. The study group reported 50 complications versus 33 for controls (P < 0.03). A borderline significant difference was found for myocardial infarction, 4 versus 0 (P < 0.05). There was no difference in VAS registration, and QoL measurements did not quite reach statistical significance (P = 0.06). Cancellations (postponements) of elective surgery for administrative reasons may be followed by inferior clinical results, and this merits further prospective study.
A new possible world semantics for deontic logic is proposed. Its intuitive basis is that prohibitive predicates (such as "wrong" and "prohibited") have the property of negativity, i.e. that what is worse than something wrong is itself wrong. The logic of prohibitive predicates is built on this property and on preference logic. Prescriptive predicates are defined in terms of prohibitive predicates, according to the well-known formula "ought" = "wrong that not". In this preference-based deontic logic (PDL), those theorems that give rise to the paradoxes of standard deontic logic (SDL) are not obtained. (E.g., O(p & q) → Op & Oq and Op → O(p v q) are theorems of SDL but not of PDL.) The more plausible theorems of SDL, however, can be derived in PDL.
A definition of pseudoscience is proposed, according to which a statement is pseudoscientific if and only if it (1) pertains to an issue within the domains of science, (2) is not epistemically warranted, and (3) is part of a doctrine whose major proponents try to create the impression that it is epistemically warranted. This approach has the advantage of separating the definition of pseudoscience from the justification of the claim that science represents the most epistemically warranted statements. The definition is used to explain why proponents of widely divergent criteria for the demarcation between science and pseudoscience tend to be in almost complete agreement on the particular demarcations that should presumably be based on these general criteria.
The advantages and disadvantages of formalization in philosophy are summarized. It is concluded that formalized philosophy is an endangered speciality that needs to be revitalized and to increase its interactions with non-formalized philosophy. The enigmatic style that is common in philosophical logic must give way to explicit discussions of the problematic relationship between formal models and the philosophical concepts and issues that motivated their development.
The probability that a fair coin tossed yesterday landed heads is either 0 or 1, but the probability that it would land heads was 0.5. In order to account for the latter type of probabilities, past probabilities, a temporal restriction operator is introduced and axiomatically characterized. It is used to construct a representation of conditional past probabilities. The logic of past probabilities turns out to be strictly weaker than the logic of standard probabilities.
This article offers a perspective on the role of philosophy in relation to other academic disciplines and to society in general. Among the issues treated are the delimitation of philosophy, whether it is a science, its role in the community of knowledge disciplines, its losses of subject matter to other disciplines, how it is influenced by social changes and by progress in other disciplines, and its role in interdisciplinary work. It is concluded that philosophy has an important mission in promoting clarity, precision, and open-mindedness in academic research and in society at large.
In order to avoid the paradoxes of standard deontic logic, we have to give up the semantic construction that identifies obligatory status with presence in all elements of a subset of the set of possible worlds. It is proposed that deontic logic should instead be based on a preference relation, according to the principle that whatever is better than something permitted is itself permitted. Close connections hold between the logical properties of a preference relation and those of the deontic logics that are derived from it in this way. The paradoxes of SDL can be avoided with this construction, but it is still an open question what type of preference relation is best suited to be used as a basis for deontic logic.
In pure science, the standard approach to non-epistemic values is to exclude them as far as possible from scientific deliberations. When science is applied to practical decisions, non-epistemic values cannot be excluded. Instead, they have to be combined with (value-deprived) scientific information in a way that leads to practically optimal decisions. A normative model is proposed for the processing of information in both pure and applied science. A general-purpose corpus of scientific knowledge, with high entry requirements, has a central role in this model. Due to its high entry requirements, the information that it contains is sufficiently reliable for the vast majority of practical purposes. However, for some purposes, the corpus needs to be supplemented with additional information, such as scientific indications of danger that do not satisfy the entry requirements for the corpus. The role of non-epistemic values in the evaluation of scientific information should, as far as possible, be limited to determining the level of evidence required for various types of practical decisions.
Although choice and preference are distinct categories, it may in some contexts be a useful idealization to treat choices as fully determined by preferences. In order to construct a general model of such preference-based choice, a method to derive choices from preferences is needed that yields reasonable outcomes for all preference relations, even those that are incomplete and contain cycles. A generalized choice function is introduced for this purpose. It is axiomatically characterized and is shown to compare favourably with alternative constructions.
Five examples are given of major philosophical discussions in which technology needs to be taken into account. In the philosophy of science, the notion of mechanism has a central role. It has a technological origin, and its interpretation has links to technology. In the philosophy of mind, a series of technological analogues have had a deep influence on our understanding of human cognition: automata and watches, telegraphy and telephony, and most recently computers. The discussion on free will largely concerns, in Locke’s words, whether we can “put morality and mechanism together.” Notions of computation and automata that have been abstracted from the behavior of technological devices are key concepts both in logic and in the philosophy of mathematics. Finally, bioethics is largely concerned with the ethical issues that new technologies give rise to in healthcare. As these examples show, there is no lack of technology-related subject matter in philosophy, but there is a lack of sustained attention to it.