This paper identifies an overdetermination problem faced by the non-reductive dispositional property account of disposition ascriptions. Two possible responses to the problem are evaluated, and both are shown to have serious drawbacks. Finally, it is noted that the traditional conditional analysis of disposition ascriptions escapes the original difficulty.
Two recurrent arguments levelled against the view that enduring objects survive change are examined within the framework of the B-theory of time: the argument from Leibniz's Law and the argument from Instantiation of Incompatible Properties. Both arguments are shown to be question-begging and hence unsuccessful.
As part of the conference commemorating Theoria's 75th anniversary, a round table discussion on philosophy publishing was held in Bergendal, Sollentuna, Sweden, on 1 October 2010. Bengt Hansson was the chair, and the other participants were nine editors-in-chief of philosophy journals: Hans van Ditmarsch (Journal of Philosophical Logic), Pascal Engel (Dialectica), Sven Ove Hansson (Theoria), Vincent Hendricks (Synthese), Søren Holm (Journal of Medical Ethics), Pauline Jacobson (Linguistics and Philosophy), Anthonie Meijers (Philosophical Explorations), Henry S. Richardson (Ethics) and Hans Rott (Erkenntnis).
Formal representations of values and norms are employed in several academic disciplines and specialties, such as economics, jurisprudence, decision theory, and social choice theory. Sven Ove Hansson closely examines, with formal precision, such foundational issues as the values of wholes and the values of their parts, the connections between values and norms, how values can be decision-guiding, and the structure of normative codes. Models of change in both preferences and norms are offered, as well as a new method to base the logic of norms on that of preferences. Hansson has developed a unified formal representation of values and norms that reflects both their static and their dynamic properties. This formalized treatment, carried out in terms of both informal value theory and precise logical detail, will contribute to the clarification of certain issues in the basic philosophical theory of values and norms.
Should Probabilistic Design Replace Safety Factors? Neelke Doorn (Department of Technology, Policy and Management, Delft University of Technology) and Sven Ove Hansson (Department of Philosophy and the History of Technology, Royal Institute of Technology). Philosophy & Technology 24(2): 151-168. DOI 10.1007/s13347-010-0003-6.
A conceptual analysis of falsificationism is performed, in which the central falsificationist thesis is divided into several components. Furthermore, an empirical study of falsification in science is reported, based on the 70 scientific contributions that were published as articles in Nature in 2000. Only one of these articles conformed to the falsificationist recipe for successful science, namely the falsification of a hypothesis that is more accessible to falsification than to verification. It is argued that falsificationism relies on an incorrect view of the nature of scientific inquiry and that it is, therefore, not a tenable research methodology.
New technologies and practices, such as drug testing, genetic testing, and electronic surveillance, infringe upon the privacy of workers in the workplace. We argue that employees have a prima facie right to privacy, but this right can be overridden by competing moral principles that follow, explicitly or implicitly, from the contract of employment. We propose a set of criteria for when intrusions into an employee's privacy are justified. Three types of justification are specified, namely those that refer to the employer's interests, to the interests of the employee her- or himself, and to the interests of third parties such as customers and fellow workers. For each of these three types, sub-criteria are proposed that can be used to determine whether a particular infringement into an employee's privacy is morally justified or not.
The purpose of this presentation is to introduce both the concept of risk and the precautionary principle, which is a major policy principle in present-day risk management. Since risk has been the subject of many misconceptions, I will do this in large part by criticizing seven views on risk that I believe to have caused considerable confusion both among scientists and policy-makers. But before looking at the seven myths of risk, let us begin with the basic issue of defining “risk”. The word “risk” often refers, rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In addition, the word has several more specialized meanings. Let me illustrate this by making a few statements about the single most important preventable health hazard in non-starving countries. First: “Lung cancer is one of the major risks that affect smokers.” Here, we use “risk” in the following sense: (1) risk = an unwanted event which may or may not occur.
Mainstream moral theories deal with situations in which the outcome of each possible action is well-determined and knowable. In order to make ethics relevant for problems of risk and uncertainty, moral theories have to be extended so that they cover actions whose outcomes are not determinable beforehand. One approach to this extension problem is to develop methods for appraising probabilistic combinations of outcomes. This approach is investigated and shown not to solve the problem. An alternative approach is then developed. Its starting-point is that everyone has a prima facie moral right not to be exposed to risk. However, this right can be overridden if the risk-exposure is part of an equitable system for risk-taking that works to the advantage of the individual risk-exposed person.
A general theory of coherence is proposed, in which systemic and relational coherence are shown to be interdefinable. When this theory is applied to sets of sentences, it turns out that logical closure obscures the distinctions that are needed for a meaningful analysis of coherence. It is concluded that references to “all beliefs” in coherentist phrases such as “all beliefs support each other” have to be modified so that merely derived beliefs are excluded. Therefore, in order to avoid absurd conclusions, coherentists have to accept a weak version of epistemic priority that sorts out merely derived beliefs. Furthermore, it is shown that in belief revision theory, coherence cannot be adequately represented by logical closure, but has to be represented separately.
The paper introduces ten open problems in belief revision theory, related to the representation of the belief state, to different notions of degrees of belief, and to the nature of change operations. It is argued that these problems are all issues in philosophical logic, in the strong sense of requiring inputs from both logic and philosophy for their solution.
In non-technical contexts, the word “risk” refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In technical contexts, the word has many uses and specialized meanings. The most common ones are the following.
This text is a non-technical overview of modern decision theory. It is intended for university students with no previous acquaintance with the subject, and was primarily written for the participants of a course on risk analysis at Uppsala University in 1994.
In the last half century, decision theory has had a deep influence on moral theory. Its impact has largely been beneficial. However, it has also given rise to some problems, two of which are discussed here. First, issues such as risk-taking and risk imposition have been left out of ethics since they are believed to belong to decision theory, and consequently the ethical aspects of these issues have not been treated in either discipline. Secondly, ethics has adopted the decision-theoretical idea that action-guidance has to be based on cause–effect or means–ends relationships between an individual action and its possible outcomes. This is problematic since the morally relevant connections between an action and future events are not fully covered by such relationships. In response to the first problem it is proposed that moral theory should deal directly and extensively with issues such as risk-taking and risk imposition, thereby intruding unabashedly into the traditional territory of decision theory. As a partial response to the second problem it is proposed that moral theorizing should release itself from the decision-theoretical requirement that the moral status of an action has to be derivable from the consequences (or other properties) that are assignable to that action alone. In particular, the effects that an action can have in combination with other actions by the same or other agents are valid arguments in an action-guiding moral discourse, even if its contribution to these combined consequences cannot be isolated and evaluated separately.
This article argues that, contrary to the received view, prioritarianism and egalitarianism are not jointly incompatible theories in normative ethics. By introducing a distinction between weighing and aggregating, the authors show that the seemingly conflicting intuitions underlying prioritarianism and egalitarianism are consistent. The upshot is a combined position, equality-prioritarianism, which takes both prioritarian and egalitarian considerations into account in a technically precise manner. On this view, the moral value of a distribution of well-being is a product of two factors: the sum of all individuals' priority-adjusted well-being, and a measure of the equality of the distribution in question. Some implications of equality-prioritarianism are considered.
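The two-factor structure described in this abstract can be made concrete with a small numerical sketch. The square-root priority weighting and the Gini-based equality measure used below are illustrative assumptions, not necessarily the authors' own choices of functions.

```python
from math import sqrt

def priority_adjusted_sum(wellbeing, weight=sqrt):
    """Sum of priority-adjusted well-being: a concave weighting
    (here, illustratively, the square root) gives extra moral
    weight to gains for the worse off."""
    return sum(weight(u) for u in wellbeing)

def gini(wellbeing):
    """Gini coefficient of a distribution (0 = perfect equality)."""
    n = len(wellbeing)
    mean = sum(wellbeing) / n
    diff_sum = sum(abs(a - b) for a in wellbeing for b in wellbeing)
    return diff_sum / (2 * n * n * mean)

def equality_prioritarian_value(wellbeing):
    """Moral value as a product of two factors:
    (sum of priority-adjusted well-being) x (equality measure)."""
    return priority_adjusted_sum(wellbeing) * (1 - gini(wellbeing))

# With the same total well-being, the more equal distribution
# scores higher on both factors, hence on their product:
print(equality_prioritarian_value([50, 50]) >
      equality_prioritarian_value([90, 10]))  # True
```

Because both the prioritarian factor (concavity) and the egalitarian factor (the equality measure) favour the equal distribution here, the sketch illustrates how the two considerations can be combined rather than traded off.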
Mainstream risk analysis deviates in at least two important respects from the rationality ideal of mainstream economics. First, expected utility maximization is not applied in a consistent way. It is applied to endodoxastic uncertainty, i.e. the uncertainty (or risk) expressed in a risk assessment, but in many cases not to metadoxastic uncertainty, i.e. uncertainty about which of several competing assessments is correct. Instead, a common approach to metadoxastic uncertainty is to only take the most plausible assessment into account. This will typically lead to risk-prone deviations from risk-neutrality. Secondly, risks and benefits for different persons are added to form a total value of risk. Such calculations are used to support the view that one should accept being exposed to a risk if it brings greater benefits for others. This is in stark contrast to modern Paretian welfare economics, which refrains from interindividual comparisons and does not require people to accept a disadvantage because it brings a larger advantage for others. (Published Online July 11 2006).
This article is an attempt at a systematic account of decision making under greater uncertainty than what traditional, mathematically oriented decision theory can cope with. Four components of great uncertainty are distinguished: (1) the identity of the options is not well determined (uncertainty of demarcation); (2) the consequences of at least some option are unknown (uncertainty of consequences); (3) it is not clear whether information obtained from others, such as experts, can be relied on (uncertainty of reliance); and (4) the values relevant for the decision are not determined with sufficient precision (uncertainty of values). Some possible strategy types are proposed for each of these components. Decisions related to environmental issues are used to illustrate the proposals.
Clear-cut cases of decision-making under risk (known probabilities) are unusual in real life. The gambler’s decisions at the roulette table are as close as we can get to this type of decision-making. In contrast, decision-making under uncertainty (unknown probabilities) can be exemplified by a decision whether to enter a jungle that may contain unknown dangers. Life is usually more like an expedition into an unknown jungle than a visit to the casino. Nevertheless, it is common in decision-supporting disciplines to proceed as if reasonably reliable probability estimates were available for all possible outcomes, i.e. as if the prevailing epistemic conditions were analogous to those of gambling at the roulette table. This mistake can be called the tuxedo fallacy. It is argued that traditional engineering practices such as safety factors and multiple safety barriers avoid this fallacy and that they therefore manage uncertainty better than probabilistic risk analysis (PRA). PRA is a useful tool, but it must be supplemented with other methods in order not to limit the analysis to dangers that can be assigned meaningful probability estimates.
Cost–benefit analysis (CBA) is much more philosophically interesting than has in general been recognized. Since it is the only well-developed form of applied consequentialism, it is a testing-ground for consequentialism and for the counterfactual analysis that it requires. Ten classes of philosophical problems that affect the practical performance of cost–benefit analysis are investigated: topic selection, dependence on the decision perspective, dangers of super synopticism and undue centralization, prediction problems, the indeterminateness of our control over future decisions, the need to exclude certain consequences for moral reasons, bias in the delimitation of consequences, incommensurability of consequences, difficulties in defending the essential requirement of transferability across contexts, and the normatively questionable but equally essential assumption of interpersonal compensability. (Published Online July 31 2007).
In economic analysis, it is usually assumed that each individual's well-being (mental welfare) depends on her or his own resources (material welfare). A typology is provided of the ways in which one person's well-being may depend on the material resources of other persons. When such dependencies are taken into account, standard Paretian analysis of welfare needs to be modified. Pareto efficiency on the level of material resources need not coincide with Pareto efficiency on the level of well-being. A change in economic conditions that is Pareto efficient in the standard sense, i.e., with respect to material resources, may nevertheless sacrifice one person's well-being to that of another. It is shown that under plausible assumptions, Pareto efficiency on the level of well-being may require the reduction of inequality on the level of material resources.
Discussions of risk contain logical and argumentative fallacies that are specific to the subject-matter. Ten such fallacies, which can commonly be found in public debates on risk, are identified. They are named as follows: the sheer size fallacy, the converse sheer size fallacy, the fallacy of naturalness, the ostrich's fallacy, the proof-seeking fallacy, the delay fallacy, the technocratic fallacy, the consensus fallacy, the fallacy of pricing, and the infallibility fallacy.
Two types of measures of probabilistic uncertainty are introduced and investigated. Dispersion measures report how diffused the agent’s second-order probability distribution is over the range of first-order probabilities. Robustness measures reflect the extent to which the agent’s assessment of the prior (objective) probability of an event is perturbed by information about whether or not the event actually took place. The properties of both types of measures are investigated. The most obvious type of robustness measure is shown to coincide with one of the major candidates for a dispersion measure, the mean square deviation measure.
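The mean square deviation measure mentioned in this abstract can be sketched for the simple case of a discrete second-order distribution over first-order probability values (the discrete representation is a simplification for illustration):

```python
def mean_square_deviation(values, weights):
    """Mean square deviation of a discrete second-order probability
    distribution: `values` are candidate first-order probabilities,
    `weights` their second-order probabilities (summing to 1)."""
    mean = sum(v * w for v, w in zip(values, weights))
    return sum(w * (v - mean) ** 2 for v, w in zip(values, weights))

# A second-order distribution concentrated on a single first-order
# value has zero dispersion...
print(mean_square_deviation([0.5], [1.0]))            # 0.0
# ...while spreading the same mean over the extreme values 0 and 1
# gives the maximal dispersion for that mean.
print(mean_square_deviation([0.0, 1.0], [0.5, 0.5]))  # 0.25
```

Both second-order distributions above assign the event the same first-order probability of 0.5, so a dispersion measure captures precisely what a single first-order number leaves out: how uncertain the agent is about that probability.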
It is almost universally acknowledged that risks have to be weighed against benefits, but there are different ways to perform the weighing. In conventional risk analysis, collectivist risk-weighing is the standard. This means that an option is accepted if the sum of all individual benefits outweighs the sum of all individual risks. In practices originating in clinical medicine, such as ethical appraisals of clinical trials, individualist risk-weighing is the standard. This implies a much stricter criterion for risk acceptance, namely that the risk to which each individual is exposed should be outweighed by benefits for that same individual. The different choices of risk-weighing methods in different policy areas seem to have emerged from traditional thought patterns and social relations, rather than from explicit deliberations on possible justifications for the alternative ways to weigh risks against benefits. It is not obvious how the prevalent differences in risk-weighing practices can be reconstructed in terms of consistent underlying principles of preventive health or social priority-setting.
Research is subject to more stringent ethical requirements than most other human activities, and a procedure that is otherwise allowed may be forbidden in research. Hence, risk-taking is more restricted in scientific research than in most non-research contexts, and privacy is better protected in scientific questionnaires than in marketing surveys. Potential arguments for this difference are scrutinized. The case in its favour appears to be weak. A stronger case can be made in favour of a difference in the opposite direction: If perilous or otherwise problematic activities have to be performed it is usually better to perform them in a research context where they are properly evaluated so that guidance is obtained for the future. However, retreating from current ethical demands on research is not a desirable direction to go. Instead, research ethics can serve to inspire the introduction of more stringent ethical principles in other social sectors.
The ideal world semantics of standard deontic logic identifies our obligations with how we would act in an ideal world. However, to act as if one lived in an ideal world is bad moral advice, associated with wishful thinking rather than well-considered moral deliberation. Ideal world semantics gives rise to implausible logical principles, and the metaphysical arguments that have been put forward in its favour turn out to be based on a too limited view of truth-functional representation. It is argued that ideal world semantics should be given up in favour of other, more plausible uses of possible worlds for modelling normative subject-matter.
Although stability is a central notion in several academic disciplines, the parallels remain unexplored since previous discussions of the concept have been almost exclusively subject-specific. In the literature we have found three basic concepts of stability, that we call constancy, robustness, and resilience. They are all found in both the natural and the social sciences. To analyze the three concepts we introduce a general formal framework in which stability relates to transitions between states. It can then be shown that robustness is a limiting case of resilience, whereas neither constancy nor resilience can be defined in terms of the other. Hence, there are two basic concepts of stability, both of which are used in both the social and the natural sciences. This congruence in the concepts of stability is of particular interest for endeavours to construct models that represent both natural and social phenomena.
In this paper we will argue: (1) that scholars, regardless of their normative stand for or against genetic enhancement, indeed have a moral/professional obligation to hold on to a realistic and up-to-date conception of genetic enhancement; (2) that there is an unwarranted hype surrounding the issue of genetic enhancement in general, and gene doping in particular; and (3) that this hype is, at least partly, created by a simplistic and reductionist conception of genetics often adopted by bioethicists.
Receiving information about threats to one’s health can contribute to anxiety and depression. In contemporary medical ethics there is considerable consensus that patient autonomy, or the patient’s right to know, in most cases outweighs these negative effects of information. Worry about the detrimental effects of information has, however, been voiced in relation to public health more generally. In particular, information about uncertain threats to public health, for example from chemicals, is said to entail social costs that have not been given due consideration. This criticism implies a consequentialist argument for withholding such information from the public in their own best interest. In evaluating the argument for this kind of epistemic paternalism, the consequences of making information available must be compared to the consequences of withholding it. Consequences that should be considered include epistemic effects, psychological effects, effects on private decisions, and effects on political decisions. After giving due consideration to the possible uses of uncertain information and rebutting the claims that uncertainties imply small risks and that they are especially prone to entail misunderstandings and anxiety, it is concluded that there is a strong case against withholding of information about uncertain threats to public health.
Situationist deontic logic is a model of that fraction of normative discourse which refers to only one situation and one set of alternatives. As we can see from a whole series of well-known paradoxes, standard deontic logic (SDL) is seriously mistaken even at the situationist level. In this paper it is shown how a more realistic deontic logic can be based on the assumption that prescriptive predicates satisfy the property of contranegativity. A satisfactory account of situation-specific norms is a necessary prerequisite for a successful treatment of more complex normative structures.
A new possible world semantics for deontic logic is proposed. Its intuitive basis is that prohibitive predicates (such as wrong and prohibited) have the property of negativity, i.e. that what is worse than something wrong is itself wrong. The logic of prohibitive predicates is built on this property and on preference logic. Prescriptive predicates are defined in terms of prohibitive predicates, according to the well-known formula ought = wrong that not. In this preference-based deontic logic (PDL), those theorems that give rise to the paradoxes of standard deontic logic (SDL) are not obtained. E.g., O(p & q) → Op & Oq and Op → O(p v q) are theorems of SDL but not of PDL. The more plausible theorems of SDL, however, can be derived in PDL.
The probability that a fair coin tossed yesterday landed heads is either 0 or 1, but the probability that it would land heads was 0.5. In order to account for the latter type of probabilities, past probabilities, a temporal restriction operator is introduced and axiomatically characterized. It is used to construct a representation of conditional past probabilities. The logic of past probabilities turns out to be strictly weaker than the logic of standard probabilities.
The advantages and disadvantages of formalization in philosophy are summarized. It is concluded that formalized philosophy is an endangered speciality that needs to be revitalized and to increase its interactions with non-formalized philosophy. The enigmatic style that is common in philosophical logic must give way to explicit discussions of the problematic relationship between formal models and the philosophical concepts and issues that motivated their development.
This article offers a perspective on the role of philosophy in relation to other academic disciplines and to society in general. Among the issues treated are the delimitation of philosophy, whether it is a science, its role in the community of knowledge disciplines, its losses of subject matter to other disciplines, how it is influenced by social changes and by progress in other disciplines, and its role in interdisciplinary work. It is concluded that philosophy has an important mission in promoting clarity, precision, and open-mindedness in academic research and in society at large.
In order to avoid the paradoxes of standard deontic logic, we have to give up the semantic construction that identifies obligatory status with presence in all elements of a subset of the set of possible worlds. It is proposed that deontic logic should instead be based on a preference relation, according to the principle that whatever is better than something permitted is itself permitted. Close connections hold between the logical properties of a preference relation and those of the deontic logics that are derived from it in this way. The paradoxes of SDL can be avoided with this construction, but it is still an open question what type of preference relation is best suited to be used as a basis for deontic logic.
A definition of pseudoscience is proposed, according to which a statement is pseudoscientific if and only if it (1) pertains to an issue within the domains of science, (2) is not epistemically warranted, and (3) is part of a doctrine whose major proponents try to create the impression that it is epistemically warranted. This approach has the advantage of separating the definition of pseudoscience from the justification of the claim that science represents the most epistemically warranted statements. The definition is used to explain why proponents of widely divergent criteria for the demarcation between science and pseudoscience tend to be in almost complete agreement on the particular demarcations that should presumably be based on these general criteria.
Although choice and preference are distinct categories, it may in some contexts be a useful idealization to treat choices as fully determined by preferences. In order to construct a general model of such preference-based choice, a method to derive choices from preferences is needed that yields reasonable outcomes for all preference relations, even those that are incomplete and contain cycles. A generalized choice function is introduced for this purpose. It is axiomatically characterized and is shown to compare favourably with alternative constructions.
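One familiar way to obtain reasonable choices even from incomplete and cyclic preferences is to treat mutually reachable alternatives as ties and choose the alternatives that nothing strictly dominates. The sketch below implements that idea; it is an illustrative construction only, not the axiomatized choice function introduced in the paper.

```python
from itertools import product

def generalized_choice(alternatives, prefers):
    """Choose from `alternatives` given a strict-preference predicate
    `prefers(x, y)` that may be incomplete or cyclic.  An alternative
    is excluded only if some alternative reaches it by a chain of
    strict preferences without being reachable back; elements of a
    cycle thus block each other's exclusion rather than making every
    alternative unchoosable."""
    alts = list(alternatives)
    reach = {(x, y): prefers(x, y) for x, y in product(alts, repeat=2)}
    # Transitive closure of the preference relation (Floyd-Warshall):
    for k, x, y in product(alts, repeat=3):
        if reach[(x, k)] and reach[(k, y)]:
            reach[(x, y)] = True
    return [x for x in alts
            if not any(reach[(y, x)] and not reach[(x, y)] for y in alts)]

# A preference cycle a > b > c > a, plus an outsider d beaten by a:
edges = {("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")}
print(generalized_choice("abcd", lambda x, y: (x, y) in edges))
# ['a', 'b', 'c']
```

On a complete, acyclic preference relation this construction reduces to picking the maximal elements, which is a minimal adequacy condition any derivation of choices from preferences should satisfy.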
Pragmatic pluralism denotes a particular approach to problems of international human rights and protections that departs from conventional cosmopolitan approaches. Pragmatic pluralism argues for situated and localized forms of cooperation between state and non-state actors, particularly religious groups and organizations, that may not share the secular, juridical understandings of rights, persons, and obligations common to contemporary cosmopolitan theory. A resource for the development of such a model of pragmatic pluralism can be found in the work of Hannah Arendt. Arendt's early dissertation "Love and Saint Augustine" affords a model of religious community and obligation that can be read productively alongside her later political writings. The possibilities inherent in a cooperative reading of these two parts of her work can be illustrated in relation to an issue of particular concern to cosmopolitan theorists: the international refugee crisis.
In order to explore public views on nanobiotechnology (NBT), convergence seminars were held in four places in Europe; namely in Visby (Sweden), Sheffield (UK), Lublin (Poland), and Porto (Portugal). A convergence seminar is a new form of public participatory activity that can be used to deal systematically with the uncertainty associated for instance with the development of an emerging technology like nanobiotechnology. In its first phase, the participants are divided into three “scenario groups” that discuss different future scenarios. In the second phase, the participants are regrouped into three “convergence groups”, each of which contains representatives from each of the three groups from the first phase. In the final third phase, all participants meet for a summary discussion. This pilot project had two aims: (1) to develop and assess the new methodology and (2) to gather advice and recommendations from the public that may be useful for future decisions on nanobiotechnology (NBT). Participants emphasized that they wanted the technology to focus on solutions to environmental and medical problems and to meet the needs of developing countries. The need for further public participation and deliberation on NBT issues seemed to be acknowledged by all participants. Many of them also raised equality concerns. Views on the means by which NBT should be steered into socially useful directions were more divided. In particular, different views were expressed on how much regulation of company activities is needed to curb unwanted developments. The participants’ responses in a questionnaire indicate that the methodology of the convergence seminars was successful for decision-making under uncertainty. In particular, the participants stated that their advice was influenced both by access to different possible future developments and by the points of view of their co-participants, which is what the method is specifically intended to achieve.
In pure science, the standard approach to non-epistemic values is to exclude them as far as possible from scientific deliberations. When science is applied to practical decisions, non-epistemic values cannot be excluded. Instead, they have to be combined with (value-deprived) scientific information in a way that leads to practically optimal decisions. A normative model is proposed for the processing of information in both pure and applied science. A general-purpose corpus of scientific knowledge, with high entry requirements, has a central role in this model. Due to its high entry requirements, the information that it contains is sufficiently reliable for the vast majority of practical purposes. However, for some purposes, the corpus needs to be supplemented with additional information, such as scientific indications of danger that do not satisfy the entry requirements for the corpus. The role of non-epistemic values in the evaluation of scientific information should, as far as possible, be limited to determining the level of evidence required for various types of practical decisions.
Theoria, the international Swedish philosophy journal, was founded in 1935. Its contributors in the first 75 years include the major Swedish philosophers from this period and in addition a long list of international philosophers, including A. J. Ayer, C. D. Broad, Ernst Cassirer, Hector Neri Castañeda, Arthur C. Danto, Donald Davidson, Nelson Goodman, R. M. Hare, Carl G. Hempel, Jaakko Hintikka, Saul Kripke, Henry E. Kyburg, Keith Lehrer, Isaac Levi, David Lewis, Gerald MacCallum, Richard Montague, Otto Neurath, Arthur N. Prior, W. V. Quine, Nicholas Rescher, Ernest Sosa, Robert C. Stalnaker, P. F. Strawson, Patrick Suppes, Johan van Benthem, Georg Henrik von Wright and many others. Hempel's confirmation paradoxes, Ross's deontic paradox, Montague's universal grammar and Lindström's theorem are among the contributions to philosophy that were first published in Theoria.
Although it has often been claimed that all the information contained in second-order probabilities can be contained in first-order probabilities, no practical recipe for the elimination of second-order probabilities without loss of information seems to have been presented. Here, such an elimination method is introduced for repeatable events. However, its application comes at the price of losses in cognitive realism. In spite of their technical eliminability, second-order probabilities are useful because they can provide models of important features of the world that are cognitively more plausible than those that can be obtained with single-level probabilities.
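For repeatable events, the key point can be illustrated with a worked example (the particular numbers are illustrative; the paper's general elimination method is not reproduced here). Two agents who assign the same first-order probability to a single coin toss, but hold different second-order distributions over the coin's bias, assign different first-order probabilities to sequences of tosses, so the second-order information survives at the first-order level:

```python
def sequence_prob(second_order, outcome):
    """First-order probability of an outcome sequence (True = heads),
    given a discrete second-order distribution over the coin's bias,
    represented as a dict {bias: second-order probability}."""
    total = 0.0
    for bias, weight in second_order.items():
        p = 1.0
        for heads in outcome:
            p *= bias if heads else 1 - bias
        total += weight * p
    return total

certain = {0.5: 1.0}              # no uncertainty about the bias
uncertain = {0.3: 0.5, 0.7: 0.5}  # uncertain about the bias

# Both agents give a single toss the same probability of heads:
print(sequence_prob(certain, (True,)),
      sequence_prob(uncertain, (True,)))        # 0.5 0.5
# But for two heads in a row their probabilities diverge
# (0.25 versus ~0.29), revealing the second-order difference:
print(sequence_prob(certain, (True, True)))
print(sequence_prob(uncertain, (True, True)))
```

The divergence arises because, for the uncertain agent, the tosses are correlated through the unknown bias; this is also why the elimination only works for repeatable events, as the abstract notes.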
We prove that four theses commonly associated with coherentism are incompatible with the representation of a belief state as a logically closed set of sentences. The result is applied to the conventional coherence interpretation of the AGM theory of belief revision, which appears not to be tenable. Our argument also counts against the coherentistic acceptability of a certain form of propositional holism. We argue that the problems arise as an effect of ignoring the distinction between derived and non-derived beliefs, and we suggest that the kind of coherence relevant to epistemic justification is the coherence of non-derived beliefs.
Five examples are given of major philosophical discussions in which technology needs to be taken into account. In the philosophy of science, the notion of mechanism has a central role. It has a technological origin, and its interpretation has links to technology. In the philosophy of mind, a series of technological analogues have had a deep influence on our understanding of human cognition: automata and watches, telegraphy and telephony, and most recently computers. The discussion on free will largely concerns, in Locke’s words, whether we can “put morality and mechanism together.” Notions of computation and automata that have been abstracted from the behavior of technological devices are key concepts both in logic and in the philosophy of mathematics. Finally, bioethics is largely concerned with the ethical issues that new technologies give rise to in healthcare. As these examples show, there is no lack of technology-related subject matter in philosophy, but there is a lack of sustained attention to it.
The AGM (Alchourrón-Gärdenfors-Makinson) model of belief change is extended to cover changes on sets of beliefs that are not closed under logical consequence (belief bases). Three major types of change operations, namely contraction, internal revision, and external revision are axiomatically characterized, and their interrelations are studied. In external revision, the Levi identity is reversed in the sense that one first adds the new belief to the belief base, and afterwards contracts its negation. It is argued that external revision represents an intuitively plausible way of revising one's beliefs. Since it typically involves the temporary acceptance of an inconsistent set of beliefs, it can only be used in belief representations that distinguish between different inconsistent sets of belief.
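The contrast between the two revision orders can be illustrated with a toy sketch on a belief base of propositional literals. The helper names and the naive single-literal contraction below are illustrative assumptions, not the paper's formal construction.

```python
def negate(lit):
    """Negate a propositional literal, e.g. 'p' <-> '~p'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def expand(base, lit):
    """Expansion: add the new belief without any consistency check."""
    return base | {lit}

def contract(base, lit):
    """Naive contraction on a base of literals: drop the literal itself."""
    return base - {lit}

def internal_revise(base, lit):
    """Levi identity: first contract the negation, then add the new belief."""
    return expand(contract(base, negate(lit)), lit)

def external_revise(base, lit):
    """Reversed Levi identity: first add the new belief, then contract its
    negation. The intermediate base is temporarily inconsistent."""
    return contract(expand(base, lit), negate(lit))

base = {'p', '~q'}
print(sorted(internal_revise(base, 'q')))  # ['p', 'q']
print(sorted(external_revise(base, 'q')))  # ['p', 'q']
```

Both orders give the same final base in this simple case, but only external revision passes through the intermediate base {p, ~q, q}, which contains both a literal and its negation. This is why, as the abstract notes, external revision requires a representation that distinguishes between different inconsistent sets.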
A possible world semantics for preference is developed. The remainder operator () is used to give precision to the notion that two states of the world are as similar as possible, given a specified difference between them. A general structure is introduced for preference relations between states of affairs, and three types of such preference relations are defined. It is argued that one of them, actual preference, corresponds closely to the concept of preference in informal discourse. Its logical properties are studied and shown to be plausible.
Science is praxis relevant to the extent that it guides goal-directed action by telling us how to act in order to achieve the goals. Investigations aiming at high praxis relevance are performed in various disciplines under names such as clinical trials, evaluation research, intervention research and social experiments. In this contribution, the notion of (direct) praxis relevance is delineated, and it is distinguished from related properties of science such as those of being applied and being practically useful in a wider sense. Recommendations for the achievement of praxis relevance are offered in the form of five principles: the prerogative of direct experiments, minimized theory-induced uncertainty, multiple approximations, causal chain decomposition and successive improvements.
By replacement is meant an operation that replaces one sentence by another in a belief set. Replacement can be used as a kind of Sheffer stroke for belief change, since contraction, revision, and expansion can all be defined in terms of it. Replacement can also be defined either in terms of contraction or in terms of revision. Close connections are shown to hold between axioms for replacement and axioms for contraction and revision. Partial meet replacement is axiomatically characterized. It is shown that this operation can have outcomes that are not obtainable through either partial meet contraction or partial meet revision.
Value statements can be divided into three major groups according to how their criteria of evaluation are specified. The first of these groups consists of those value statements that are unspecified with respect to the criteria of evaluation. Here is one example: Her decision was very good. The second group consists of the viewpoint-specified value statements. In these value statements, an explicit point of view is given, from which the evaluation is made. We often use adverbs such as “morally”, “aesthetically” etc. to express this type of specification.
The maximin rule can be used as a formal version of the precautionary principle. This paper evaluates the feasibility and the intuitive plausibility of this decision rule. The major conclusions are: (1) Precaution has to be applied symmetrically. (2) Precaution is only possible when outcomes are comparable in terms of value, so that it can be determined which outcome is worst. (3) Precaution is sensitive to standards of possibility. Far-away scenarios have to be excluded, and it is difficult to find a principled way to draw the line. (4) Precaution is sensitive to the framing of decision problems. Local cautiousness may add up to global incautiousness. (shrink)
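The maximin rule itself is easy to state formally: choose the alternative whose worst possible outcome is best. The sketch below, with illustrative option names and outcome values, also shows conclusion (3): which option maximin selects depends on which scenarios are admitted as possible.

```python
def maximin(alternatives):
    """Maximin: pick the alternative with the best worst-case outcome.
    alternatives: dict mapping option name -> list of possible outcome values."""
    return max(alternatives, key=lambda a: min(alternatives[a]))

options = {
    'ban':   [0, -1],      # modest losses in every scenario
    'allow': [5, -100],    # high gain, but a catastrophic worst case
}
print(maximin(options))  # 'ban': its worst outcome (-1) beats -100

# Sensitivity to standards of possibility: if the catastrophic scenario is
# dismissed as too far-fetched (here, any outcome below -50), the verdict flips.
filtered = {a: [v for v in outs if v > -50] for a, outs in options.items()}
print(maximin(filtered))  # 'allow'
```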
A probabilistic explication is offered of equipoise and uncertainty in clinical trials. In order to be useful in the justification of clinical trials, equipoise has to be interpreted in terms of overlapping probability distributions of possible treatment outcomes, rather than point estimates representing expectation values. Uncertainty about treatment outcomes is shown to be a necessary but insufficient condition for the ethical defensibility of clinical trials. Additional requirements are proposed for the nature of that uncertainty. The indecisiveness of our criteria for cautious decision-making under uncertainty creates the leeway that makes clinical trials defensible.
Blockage contraction is an operation of belief contraction that acts directly on the outcome set, i.e. the set of logically closed subsets of the original belief set K that are potential contraction outcomes. Blocking is represented by a binary relation on the outcome set. If a potential outcome X blocks another potential outcome Y, and X does not imply the sentence p to be contracted, then Y ≠ K ÷ p. The contraction outcome K ÷ p is equal to the (unique) inclusion-maximal unblocked element of the outcome set that does not imply p. Conditions on the blocking relation are specified that ensure the existence of such a unique inclusion-maximal set for all sentences p. Blockage contraction is axiomatically characterized and its relations to AGM-style operations are investigated. In a finite-based framework, every transitively relational partial meet contraction is also a blockage contraction.
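The selection mechanism can be sketched in a toy finite setting where belief sets are frozensets of atoms and "X implies p" is reduced to membership. The outcome set and the blocking relation below are illustrative assumptions, not the paper's formal apparatus.

```python
def blockage_contract(outcomes, blocks, p):
    """Return the unique inclusion-maximal unblocked outcome not containing p.
    outcomes: iterable of frozensets (potential contraction outcomes);
    blocks: set of (X, Y) pairs meaning 'X blocks Y';
    p: the atom to be contracted.
    A blocker X only disqualifies Y when X does not itself contain p."""
    candidates = [Y for Y in outcomes
                  if p not in Y
                  and not any((X, Y) in blocks and p not in X for X in outcomes)]
    # Keep only inclusion-maximal candidates; '<' on frozensets is proper subset.
    maximal = [Y for Y in candidates
               if not any(Y < Z for Z in candidates)]
    assert len(maximal) == 1, "blocking relation must single out one outcome"
    return maximal[0]

K = frozenset({'p', 'q', 'r'})
outcomes = [K, frozenset({'q', 'r'}), frozenset({'q'}),
            frozenset({'r'}), frozenset()]
blocks = {(frozenset({'q', 'r'}), frozenset({'r'}))}  # {q,r} blocks {r}
print(sorted(blockage_contract(outcomes, blocks, 'p')))  # ['q', 'r']
```

Here {r} is eliminated by its blocker, and {q, r} is the unique inclusion-maximal survivor among the remaining p-free outcomes.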
An agent can usually hold a very large number of beliefs. However, only a small part of these beliefs is used at a time. Efficient operations for belief change should affect the beliefs of the agent locally, that is, the changes should be performed only in the relevant part of the belief state. In this paper we define a local consequence operator that only considers the relevant part of a belief base. This operator is used to define local versions of the operations for belief change. Representation theorems are given for the local operators.
A transformative decision rule alters the representation of a decision problem, either by changing the set of alternative acts or the set of states of the world taken into consideration, or by modifying the probability or value assignments. A set of transformative decision rules is order-independent in case the order in which the rules are applied is irrelevant. The main result of this paper is an axiomatic characterization of order-independent transformative decision rules, based on a single axiom. It is shown that the proposed axiomatization resolves a problem observed by Teddy Seidenfeld in a previous axiomatization by Peterson.
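Order-independence can be made concrete with a small sketch in which a decision representation is reduced to its set of alternatives and rules are functions on that set. The rule names below are invented for illustration only.

```python
# Two transformative rules on a decision representation (here: a set of
# alternative acts). These two happen to commute, so the pair is
# order-independent.
drop_dominated = lambda alts: {a for a in alts if a != 'dominated'}
merge_duplicates = lambda alts: set(alts)  # idempotent normalisation

def apply_rules(rep, rules):
    """Apply transformative rules in the given order."""
    for rule in rules:
        rep = rule(rep)
    return rep

rep = {'a', 'b', 'dominated'}
r1 = apply_rules(rep, [drop_dominated, merge_duplicates])
r2 = apply_rules(rep, [merge_duplicates, drop_dominated])
print(r1 == r2)  # True: this pair of rules is order-independent

# A rule that is NOT order-independent with drop_dominated: it keeps only
# the alphabetically first alternative.
keep_first = lambda alts: {min(alts)}
rep2 = {'z', 'dominated'}
r3 = apply_rules(rep2, [drop_dominated, keep_first])  # {'z'}
r4 = apply_rules(rep2, [keep_first, drop_dominated])  # set()
print(r3 == r4)  # False: the order of application matters
```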
The AGM theory of belief contraction is extended to multiple contraction, i.e. to contraction by a set of sentences rather than by a single sentence. There are two major variants: In package contraction all the sentences must be removed from the belief set, whereas in choice contraction it is sufficient that at least one of them is removed. Constructions of both types of multiple contraction are offered and axiomatically characterized. Neither package nor choice contraction can in general be reduced to contractions by single sentences; in the finite case choice contraction allows for reduction.
The postulate of recovery is commonly regarded to be the intuitively least compelling of the six basic Gärdenfors postulates for belief contraction. We replace recovery by the seemingly much weaker postulate of core-retainment, which ensures that if x is excluded from K when p is contracted, then x plays some role for the fact that K implies p. Surprisingly enough, core-retainment together with four of the other Gärdenfors postulates implies recovery for logically closed belief sets. Reasonable contraction operators without recovery do not seem to be possible for such sets. Instead, however, they can be obtained for non-closed belief bases. Some results on partial meet contractions on belief bases are given, including an axiomatic characterization and a non-vacuous extension of the AGM closure condition.
A number of seminal papers on the logic of belief change by Alchourrón, Gärdenfors, and Makinson have given rise to what is now known as the AGM paradigm. The present discussion note is a response to Neil Tennant's paper, which aims at a critical appraisal of the AGM approach and the introduction of an alternative approach. We show that important parts of Tennant's critical remarks are based on misunderstandings or on lack of information. In the course of doing this, we attend to some central philosophical issues in the theory of belief change, such as the choice of a representation for belief states and the meaning of an idealized rational agent.
Moral theory has mostly focused on idealized situations in which the morally relevant properties of human actions can be known beforehand. Here, a framework is proposed that is intended to sharpen moral intuitions and improve moral argumentation in problems involving risk and uncertainty. Guidelines are proposed for a systematic search of suitable future viewpoints for hypothetical retrospection. In hypothetical retrospection, a decision is evaluated under the assumption that one of the branches of possible future developments has materialized. This evaluation is based on the deliberator’s present values, and each decision is judged in relation to the information available when it was taken. The basic decision rule is to choose an alternative that comes out as morally acceptable (permissible) from all hypothetical retrospections.
Analytical tools that give precision to the concept of "independence of syntax" are developed in the form of a series of substitutivity principles. These principles are applied in a study of the rôle of language in belief revision theory. It is shown that sets of sentences can be used in models of belief revision to convey more information than what is conveyed by the combined propositional contents of the respective sets. It is argued that it would be unwise to programmatically restrain the use of sets of sentences to that of representing propositional contents. Instead, the expressive power of language should be used as fully as possible. Therefore, syntax-independence should not be seen as a criterion of adequacy for language-based models of information-processing, but rather as a property that emerges from some but not all the idealization processes through which such models are constructed.
We introduce a constructive model of selective belief revision in which it is possible to accept only a part of the input information. A selective revision operator ο is defined by the equality K ο α = K * f(α), where * is an AGM revision operator and f a function, typically with the property ⊢ α → f(α). Axiomatic characterizations are provided for three variants of selective revision.
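The defining equality K ο α = K * f(α) can be sketched on a belief base of literals, using a naive stand-in for the AGM revision operator * and a transformation f that accepts only part of the input. All concrete choices below (the literal representation, the conjunction syntax, the particular f) are illustrative assumptions.

```python
def negate(lit):
    """Negate a propositional literal, e.g. 'p' <-> '~p'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def revise(base, lit):
    """Naive stand-in for AGM revision *: drop the contradicting literal,
    then add the new one."""
    return (base - {negate(lit)}) | {lit}

def selective_revise(base, alpha, f):
    """Selective revision: K o alpha = K * f(alpha)."""
    return revise(base, f(alpha))

# The input 'q&r' is only partly accepted: f keeps the first conjunct,
# so f(alpha) is a logical consequence of alpha (the property alpha -> f(alpha)).
f = lambda alpha: alpha.split('&')[0]
print(sorted(selective_revise({'p', '~q'}, 'q&r', f)))  # ['p', 'q']
```

Only the accepted part q enters the revised base; the rejected conjunct r is simply never incorporated.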