In the middle of the 1980s, logical tools were discovered that make it possible to model changes in belief and knowledge in entirely new ways. These logical tools turned out to be applicable both to human beliefs and to the contents of databases. This is the first textbook in this new area. It contains both discursive chapters with a minimum of formalism and formal chapters in which proofs and proof methods are presented. By using different selections from the formal section, the book can be used at all levels of university education.
This book explains how the logic of theory change employs formal models in the investigation of changes in belief states and databases. The topics covered include equivalent characterizations of AGM operations, extended representations of the belief states, change operators not included in the original framework, iterated change, applications of the model, its connections with other formal frameworks, and criticism of the model.
When is it morally acceptable to expose others to risk? Most moral philosophers have had very little to say in answer to that question, but here is a moral philosopher who puts it at the centre of his investigations.
Mainstream moral theories deal with situations in which the outcome of each possible action is well-determined and knowable. In order to make ethics relevant for problems of risk and uncertainty, moral theories have to be extended so that they cover actions whose outcomes are not determinable beforehand. One approach to this extension problem is to develop methods for appraising probabilistic combinations of outcomes. This approach is investigated and shown not to solve the problem. An alternative approach is then developed. Its starting-point is that everyone has a prima facie moral right not to be exposed to risk. However, this right can be overridden if the risk-exposure is part of an equitable system for risk-taking that works to the advantage of the individual risk-exposed person.
The introduction of self-driving vehicles gives rise to a large number of ethical issues that go beyond the common, extremely narrow, focus on improbable dilemma-like scenarios. This article provides a broad overview of realistic ethical issues related to self-driving vehicles. Some of the major topics covered are as follows: Strong opinions for and against driverless cars may give rise to severe social and political conflicts. A low tolerance for accidents caused by driverless vehicles may delay the introduction of driverless systems that would substantially reduce the risks. Trade-offs will arise between safety and other requirements on the road traffic system. Over-reliance on the swift collision-avoiding reactions of self-driving vehicles can induce people to take dangerous actions, such as stepping out in front of a car, relying on its fast braking. Children travelling alone can violate safety instructions, such as the use of seatbelts. Digital information about routes and destinations can be used to convey commercial and political messages to car users. If fast passage can be bought, then socio-economic segregation of road traffic may result. Terrorists and other criminals can hack into a vehicle and make it crash. They can also use self-driving vehicles, for instance to carry bombs to their designated places of detonation or to wreak havoc on a country’s road system.
Cost–benefit analysis (CBA) is much more philosophically interesting than has in general been recognized. Since it is the only well-developed form of applied consequentialism, it is a testing-ground for consequentialism and for the counterfactual analysis that it requires. Ten classes of philosophical problems that affect the practical performance of cost–benefit analysis are investigated: topic selection, dependence on the decision perspective, dangers of super synopticism and undue centralization, prediction problems, the indeterminateness of our control over future decisions, the need to exclude certain consequences for moral reasons, bias in the delimitation of consequences, incommensurability of consequences, difficulties in defending the essential requirement of transferability across contexts, and the normatively questionable but equally essential assumption of interpersonal compensability. (Published Online July 31 2007).
The AGM theory of belief contraction is extended to multiple contraction, i.e. to contraction by a set of sentences rather than by a single sentence. There are two major variants: In package contraction all the sentences must be removed from the belief set, whereas in choice contraction it is sufficient that at least one of them is removed. Constructions of both types of multiple contraction are offered and axiomatically characterized. Neither package nor choice contraction can in general be reduced to contractions by single sentences; in the finite case, choice contraction allows for such a reduction.
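The package/choice distinction can be made concrete in a small sketch. The following Python toy (not the paper's formal construction) uses a brute-force propositional entailment check over two assumed atoms and a full meet strategy: package contraction keeps only sentences in every maximal subset entailing *none* of the targets, while choice contraction only requires each remainder to miss *at least one* target.

```python
from itertools import product, combinations

ATOMS = ("p", "q")  # assumed toy vocabulary

def entails(premises, conclusion):
    """Brute-force propositional entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(eval(s, {}, env) for s in premises) and not eval(conclusion, {}, env):
            return False
    return True

def remainders(base, sentences, package):
    """Maximal subsets entailing no target (package) or missing at least one (choice)."""
    def admissible(sub):
        missed = sum(1 for x in sentences if not entails(sub, x))
        return missed == len(sentences) if package else missed >= 1
    subs = [frozenset(c) for n in range(len(base) + 1)
            for c in combinations(sorted(base), n) if admissible(c)]
    return [s for s in subs if not any(s < t for t in subs)]

def multiple_contract(base, sentences, package):
    """Full meet multiple contraction: intersect all admissible remainders."""
    rs = remainders(base, sentences, package)
    if not rs:
        return set(base)
    out = set(rs[0])
    for r in rs[1:]:
        out &= r
    return out
```

On the base {"p", "p and q"}, package contraction by {p, q} must give up everything, whereas choice contraction can keep "p" (removing q suffices), which illustrates why the two variants come apart.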
Formal representations of values and norms are employed in several academic disciplines and specialties, such as economics, jurisprudence, decision theory and social choice theory. Sven Ove Hansson closely examines, with formal precision, such foundational issues as the values of wholes and the values of their parts, the connections between values and norms, how values can be decision-guiding, and the structure of normative codes. Models of change in both preferences and norms are offered, as well as a method to base the logic of norms on that of preferences. Hansson has developed a unified formal representation of values and norms that reflects both their static and their dynamic properties. This formalized treatment, carried out in terms of both informal value theory and precise logical detail, will contribute to the clarification of certain issues in the basic philosophical theory of values and norms.
In non-technical contexts, the word “risk” refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In technical contexts, the word has many uses and specialized meanings. The most common ones are the following.
The postulate of recovery is commonly regarded as the intuitively least compelling of the six basic Gärdenfors postulates for belief contraction. We replace recovery by the seemingly much weaker postulate of core-retainment, which ensures that if x is excluded from K when p is contracted, then x plays some role in the fact that K implies p. Surprisingly enough, core-retainment together with four of the other Gärdenfors postulates implies recovery for logically closed belief sets. Reasonable contraction operators without recovery do not seem to be possible for such sets. Instead, however, they can be obtained for non-closed belief bases. Some results on partial meet contractions on belief bases are given, including an axiomatic characterization and a non-vacuous extension of the AGM closure condition.
The paper introduces ten open problems in belief revision theory, related to the representation of the belief state, to different notions of degrees of belief, and to the nature of change operations. It is argued that these problems are all issues in philosophical logic, in the strong sense of requiring inputs from both logic and philosophy for their solution.
The AGM (Alchourrón-Gärdenfors-Makinson) model of belief change is extended to cover changes on sets of beliefs that are not closed under logical consequence (belief bases). Three major types of change operations, namely contraction, internal revision, and external revision are axiomatically characterized, and their interrelations are studied. In external revision, the Levi identity is reversed in the sense that one first adds the new belief to the belief base, and afterwards contracts its negation. It is argued that external revision represents an intuitively plausible way of revising one's beliefs. Since it typically involves the temporary acceptance of an inconsistent set of beliefs, it can only be used in belief representations that distinguish between different inconsistent sets of belief.
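The two orderings of the Levi identity can be sketched in a few lines of Python. This is a toy illustration, not the paper's axiomatic treatment: it assumes two atoms, a brute-force entailment check, and full meet base contraction as the underlying contraction operator. Internal revision contracts the negation first and then adds the input; external revision adds first, passing through a possibly inconsistent base, and contracts afterwards.

```python
from itertools import product, combinations

ATOMS = ("p", "q")  # assumed toy vocabulary

def entails(premises, conclusion):
    """Brute-force propositional entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(eval(s, {}, env) for s in premises) and not eval(conclusion, {}, env):
            return False
    return True

def full_meet_contract(base, x):
    """Intersect all maximal subsets of `base` that do not entail `x`."""
    subs = [frozenset(c) for n in range(len(base) + 1)
            for c in combinations(sorted(base), n) if not entails(c, x)]
    maximal = [s for s in subs if not any(s < t for t in subs)]
    if not maximal:            # x is a tautology: it cannot be given up
        return set(base)
    out = set(maximal[0])
    for r in maximal[1:]:
        out &= r
    return out

def internal_revise(base, x):
    """Levi identity: first contract by the negation, then add x."""
    return full_meet_contract(base, f"not ({x})") | {x}

def external_revise(base, x):
    """Reversed Levi identity: first add x (possibly creating an
    inconsistent intermediate base), then contract the negation."""
    return full_meet_contract(set(base) | {x}, f"not ({x})")
```

On the base {"p", "p or q"} with input "not p", both orders happen to agree on the outcome, but the external route passes through the inconsistent intermediate base {"p", "p or q", "not p"}, which is why it needs a representation that can distinguish different inconsistent sets.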
The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, and David Makinson, “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions” was the starting-point of a large and rapidly growing literature that employs formal models in the investigation of changes in belief states and databases. In this review, the first twenty-five years of this development are summarized. The topics covered include equivalent characterizations of AGM operations, extended representations of the belief states, change operators not included in the original framework, iterated change, applications of the model, its connections with other formal frameworks, computability of AGM operations, and criticism of the model.
Kernel contraction is a natural nonrelational generalization of safe contraction. All partial meet contractions are kernel contractions, but the converse relationship does not hold. Kernel contraction is axiomatically characterized. It is shown to be better suited than partial meet contraction for formal treatments of iterated belief change.
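A minimal sketch of kernel contraction, assuming a toy two-atom propositional language: the kernels of a base with respect to x are the minimal subsets that entail x, and an incision function selects from each kernel the sentences to give up. The `full_incision` shown here is just one hypothetical (maximally cautious) choice of incision function, not a canonical one.

```python
from itertools import product, combinations

ATOMS = ("p", "q")  # assumed toy vocabulary

def entails(premises, conclusion):
    """Brute-force propositional entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(eval(s, {}, env) for s in premises) and not eval(conclusion, {}, env):
            return False
    return True

def kernels(base, x):
    """Minimal subsets of `base` that entail `x` (the kernel set)."""
    subs = [frozenset(c) for n in range(len(base) + 1)
            for c in combinations(sorted(base), n) if entails(c, x)]
    return [s for s in subs if not any(t < s for t in subs)]

def kernel_contract(base, x, incision):
    """Remove the sentences picked by the incision function from every kernel."""
    cut = incision(kernels(base, x))
    return set(base) - cut

def full_incision(ks):
    """A maximally cautious incision: cut every sentence of every kernel."""
    out = set()
    for k in ks:
        out |= k
    return out
```

Contracting "q" from the base {"p", "q", "p and q"} yields the two kernels {"q"} and {"p and q"}; cutting both leaves {"p"}, which indeed no longer entails q.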
One way to construct a contraction operator for a theory (belief set) is to assign to it a base (belief base) and an operator of partial meet contraction for that base. Axiomatic characterizations are given of the theory contractions that are generated in this way by (various types of) partial meet base contractions.
A definition of pseudoscience is proposed, according to which a statement is pseudoscientific if and only if it (1) pertains to an issue within the domains of science, (2) is not epistemically warranted, and (3) is part of a doctrine whose major proponents try to create the impression that it is epistemically warranted. This approach has the advantage of separating the definition of pseudoscience from the justification of the claim that science represents the most epistemically warranted statements. The definition is used to explain why proponents of widely divergent criteria for the demarcation between science and pseudoscience tend to be in almost complete agreement on the particular demarcations that should presumably be based on these general criteria.
In order to explore the quantifiability and formalizability of uncertainty a wide range of uncertainties are investigated. They are summarized under eight main categories: factual, possibilistic, metadoxastic, agential, interactive, value, structural, and linguistic uncertainty. This includes both classical uncertainty and the uncertainties commonly called great, deep, or radical. For five of the eight types of uncertainty, both quantitative and non-quantitative formalizations are meaningful and available. For one of them (interactive uncertainty), only non-quantitative formalizations seem to be meaningful, and for two (agential and structural uncertainty) neither quantitative nor non-quantitative formalization seems to be a useful approach.
A conceptual analysis of falsificationism is performed, in which the central falsificationist thesis is divided into several components. Furthermore, an empirical study of falsification in science is reported, based on the 70 scientific contributions that were published as articles in Nature in 2000. Only one of these articles conformed to the falsificationist recipe for successful science, namely the falsification of a hypothesis that is more accessible to falsification than to verification. It is argued that falsificationism relies on an incorrect view of the nature of scientific inquiry and that it is, therefore, not a tenable research methodology.
The purpose of this presentation is to introduce both the concept of risk and the precautionary principle, which is a major policy principle in present-day risk management. Since risk has been the subject of many misconceptions, I will do this in large part by criticizing seven views on risk that I believe to have caused considerable confusion among both scientists and policy-makers. But before looking at the seven myths of risk, let us begin with the basic issue of defining “risk”. The word “risk” often refers, rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In addition, the word has several more specialized meanings. Let me illustrate this by making a few statements about the single most important preventable health hazard in non-starving countries. First: “Lung cancer is one of the major risks that affect smokers.” Here, we use “risk” in the following sense: (1) risk = an unwanted event which may or may not occur.
It has been much debated whether epistemic relativism in academia, for instance in the form of social constructivism, the strong programme, deconstructionism, and postmodernism, has paved the way for the recent upsurge in science denial, in particular climate science denial. In order to provide an empirical basis for this discussion, an extensive search of the social science literature was performed. It showed that in the 1990s, climate science was a popular target among academic epistemic relativists. In particular, many STS scholars used it as an allegedly clear example of claims by natural scientists that should be treated as mere social constructions, rather than as reports on the actual state of the natural world. A few connections between social constructivists and corporate science denialism were also uncovered, but the extent of such connections could not be determined. With few exceptions, the stream of criticism of climate science from academic relativists has dwindled since the 1990s. One reason for this seems to be that the contrarian position lost its attraction when it became associated with corporate and right-wing propagandists.
This book provides a critical examination of how the choice of what to believe is represented in the standard model of belief change. In particular the use of possible worlds and infinite remainders as objects of choice is critically examined. Descriptors are introduced as a versatile tool for expressing the success conditions of belief change, addressing both local and global descriptor revision. The book presents dynamic descriptors such as Ramsey descriptors that convey how an agent’s beliefs tend to be changed in response to different inputs. It also explores sentential revision and demonstrates how local and global operations of revision by a sentence can be derived as a special case of descriptor revision. Lastly, the book examines revocation, a generalization of contraction in which a specified sentence is removed in a process that may possibly also involve the addition of some new information to the belief set.
The advantages and disadvantages of formalization in philosophy are summarized. It is concluded that formalized philosophy is an endangered speciality that needs to be revitalized and to increase its interactions with non-formalized philosophy. The enigmatic style that is common in philosophical logic must give way to explicit discussions of the problematic relationship between formal models and the philosophical concepts and issues that motivated their development.
A new formal model of belief dynamics is proposed, in which the epistemic agent has both probabilistic beliefs and full beliefs. The agent has full belief in a proposition if and only if she considers the probability that it is false to be so close to zero that she chooses to disregard that probability. She treats such a proposition as having the probability 1, but, importantly, she is still willing and able to revise that probability assignment if she receives information that gives her sufficient reasons to do so. Such a proposition is undoubted, but not undoubtable. In the formal model it is assigned a probability 1 − δ, where δ is an infinitesimal number. The proposed model employs probabilistic belief states that contain several underlying probability functions representing alternative probabilistic states of the world. Furthermore, a distinction is made between update and revision, in the same way as in the literature on belief change. The formal properties of the model are investigated, including properties relevant for learning from experience. The set of propositions whose probabilities are infinitesimally close to 1 forms a belief set. Operations that change the probabilistic belief state give rise to changes in this belief set, which have much in common with traditional operations of belief change.
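The idea of probabilities of the form 1 − δ can be sketched with a toy representation of hyperreals truncated after the first-order infinitesimal term, a + b·δ. This is only an illustration of how full belief can be read off from the standard part; it is not the paper's model, and the example propositions are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyper:
    """Toy hyperreal a + b*delta, truncated after the first-order term."""
    a: float        # standard part
    b: float = 0.0  # coefficient of the infinitesimal delta

    def __add__(self, other):
        return Hyper(self.a + other.a, self.b + other.b)

def fully_believed(prob):
    """Full belief: the probability is 1 or infinitesimally close to 1,
    i.e. its standard part equals 1."""
    return prob.a == 1.0

# hypothetical example propositions with toy probability assignments
probabilities = {
    "the bridge will hold": Hyper(1.0, -2.0),  # undoubted, but not undoubtable
    "the coin lands heads": Hyper(0.5),
}

belief_set = {s for s, pr in probabilities.items() if fully_believed(pr)}
```

The proposition with probability 1 − 2δ enters the belief set even though its probability is strictly below 1, which captures the "undoubted but not undoubtable" status: the infinitesimal coefficient keeps track of the doubt that could later be revived.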
This article investigates the properties of multistate top revision, a dichotomous model of belief revision that is based on an underlying model of probability revision. A proposition is included in the belief set if and only if its probability is either 1 or infinitesimally close to 1. Infinitesimal probabilities are used to keep track of propositions that are currently considered to have negligible probability, so that they are available if future information makes them more plausible. Multistate top revision satisfies a slightly modified version of the set of basic and supplementary AGM postulates, except the inclusion and success postulates. This result shows that hyperreal probabilities can provide us with efficient tools for overcoming the well-known difficulties in combining dichotomous and probabilistic models of belief change.
Ethicists have investigated ethical problems in other disciplines, but there has not been much discussion of the ethics of their own activities. Research in ethics has many ethical problems in common with other areas of research, and it also has problems of its own. The researcher’s integrity is more precarious than in most other disciplines, and therefore even stronger procedural checks are needed to protect it. The promotion of some standpoints in ethical issues may be socially harmful, and even our decisions as to which issues we label as “ethical” may have unintended and potentially harmful social consequences. It can be argued that ethicists have an obligation to make positive contributions to society, but the practical implications of such an obligation are not easily identified. This article provides an overview of ethical issues that arise in research into ethics and in the application of such research. It ends with a list of ten practical proposals for how these issues should be dealt with.
Specified meet contraction is the operation ÷ defined by the identity K ÷ p = K ∼ f(p), where ∼ is full meet contraction and f is a sentential selector, a function from sentences to sentences. With suitable conditions on the sentential selector, specified meet contraction coincides with the partial meet contractions that yield a finite-based contraction outcome if the original belief set is finite-based. In terms of cognitive realism, specified meet contraction has an advantage over partial meet contraction in that the selection mechanism operates on sentences rather than on temporary infinite structures (remainders) that are cognitively inaccessible. Specified meet contraction provides a versatile framework in which other types of contraction, such as severe withdrawal and base-generated contraction, can be expressed with suitably chosen properties of the sentential selector.
Clear-cut cases of decision-making under risk (known probabilities) are unusual in real life. The gambler’s decisions at the roulette table are as close as we can get to this type of decision-making. In contrast, decision-making under uncertainty (unknown probabilities) can be exemplified by a decision whether to enter a jungle that may contain unknown dangers. Life is usually more like an expedition into an unknown jungle than a visit to the casino. Nevertheless, it is common in decision-supporting disciplines to proceed as if reasonably reliable probability estimates were available for all possible outcomes, i.e. as if the prevailing epistemic conditions were analogous to those of gambling at the roulette table. This mistake can be called the tuxedo fallacy. It is argued that traditional engineering practices such as safety factors and multiple safety barriers avoid this fallacy and that they therefore manage uncertainty better than probabilistic risk analysis (PRA). PRA is a useful tool, but it must be supplemented with other methods in order not to limit the analysis to dangers that can be assigned meaningful probability estimates.
The precautionary principle has often been described as an extreme principle that neglects science and stifles innovation. However, such an interpretation has no support in the official definitions of the principle that have been adopted by the European Union and by the signatories of international treaties on environmental protection. In these documents, the precautionary principle is a guideline specifying how to deal with certain types of scientific uncertainty. In this contribution, this approach to the precautionary principle is explicated with the help of concepts from the philosophy of science and comparisons with general notions of practical rationality. Three major problems in its application are discussed, and it is concluded that to serve its purpose, the precautionary principle has to (1) be combined with other decision principles in cases with competing top priorities, (2) be based on the current state of science, which requires procedures for scientific updates, and (3) exclude potential dangers whose plausibility is too low to trigger meaningful precautionary action.
In the most common approaches to belief dynamics, states of belief are represented by sets that are closed under logical consequence. In an alternative approach, they are represented by non-closed belief bases. This representation has attractive properties not shared by closed representations. Most importantly, it can account for repeated belief changes that have not yet been satisfactorily accounted for in the closed approach.
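The extra expressive power of belief bases can be shown with a toy example in Python (assuming a two-atom language and full meet base contraction; this is an illustration, not the general argument). The bases {"p", "q"} and {"p and q"} have exactly the same logical closure, yet contracting q from them gives different outcomes, so the base carries information that the closed set cannot.

```python
from itertools import product, combinations

ATOMS = ("p", "q")  # assumed toy vocabulary

def entails(premises, conclusion):
    """Brute-force propositional entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(eval(s, {}, env) for s in premises) and not eval(conclusion, {}, env):
            return False
    return True

def closure_equal(base1, base2, candidates):
    """Do the two bases have the same closure, tested over candidate sentences?"""
    return all(entails(base1, s) == entails(base2, s) for s in candidates)

def full_meet_contract(base, x):
    """Intersect all maximal subsets of `base` that do not entail `x`."""
    subs = [frozenset(c) for n in range(len(base) + 1)
            for c in combinations(sorted(base), n) if not entails(c, x)]
    maximal = [s for s in subs if not any(s < t for t in subs)]
    if not maximal:
        return set(base)
    out = set(maximal[0])
    for r in maximal[1:]:
        out &= r
    return out
```

Contracting q from {"p", "q"} retains "p", but contracting q from {"p and q"} loses everything: the single conjunctive sentence offers no way to give up q while keeping p, even though both bases believe exactly the same things before the change.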
A descriptor is a set of sentences that are truth-functional combinations of expressions of the form 𝔅p, where 𝔅 is a metalinguistic belief predicate and p a sentence in the object language in which beliefs are expressed. Descriptor revision (∘) is an operation of belief change that takes us from a belief set K to a new belief set K ∘ Ψ, where the descriptor Ψ represents the success condition. Previously studied operations of belief change are special cases of descriptor revision: sentential revision can be represented as K ∘ 𝔅p, contraction as K ∘ ¬𝔅p, multiple contraction as K ∘ {¬𝔅p₁, …, ¬𝔅pₙ}, replacement as K ∘ {¬𝔅p, 𝔅q}, etc. General models of descriptor revision are constructed and axiomatically characterized. The common selection mechanisms of AGM-style belief change cannot be used, but they can be replaced by choice functions operating directly on the set of potential outcomes. The restrictions of this construction to sentential revision (K ∘ 𝔅p) and sentential contraction (K ∘ ¬𝔅p) give rise to operations with plausible properties that are also studied in some detail.
We introduce a constructive model of selective belief revision in which it is possible to accept only a part of the input information. A selective revision operator ο is defined by the equality K ο α = K * f(α), where * is an AGM revision operator and f a function, typically with the property ⊢ α → f(α). Axiomatic characterizations are provided for three variants of selective revision.
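The defining equality K ο α = K * f(α) can be sketched concretely. The Python toy below assumes a two-atom language, uses full meet base contraction plus expansion as a stand-in for the AGM revision operator *, and supplies one hypothetical transformation function f that keeps only the conjuncts of the input consistent with the current base (inputs are assumed to be flat conjunctions, so the crude string split suffices). Since f only drops conjuncts, the property ⊢ α → f(α) holds.

```python
from itertools import product, combinations

ATOMS = ("p", "q")  # assumed toy vocabulary

def entails(premises, conclusion):
    """Brute-force propositional entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, vals))
        if all(eval(s, {}, env) for s in premises) and not eval(conclusion, {}, env):
            return False
    return True

def full_meet_contract(base, x):
    """Intersect all maximal subsets of `base` that do not entail `x`."""
    subs = [frozenset(c) for n in range(len(base) + 1)
            for c in combinations(sorted(base), n) if not entails(c, x)]
    maximal = [s for s in subs if not any(s < t for t in subs)]
    if not maximal:
        return set(base)
    out = set(maximal[0])
    for r in maximal[1:]:
        out &= r
    return out

def revise(base, x):
    """A simple stand-in for AGM revision: contract the negation, then add x."""
    return full_meet_contract(base, f"not ({x})") | {x}

def consistent_conjuncts(base):
    """Hypothetical transformation f: keep only the conjuncts of the input
    that are consistent with the base (toy parser for flat conjunctions)."""
    def f(alpha):
        parts = [c.strip() for c in alpha.split(" and ")]
        kept = [c for c in parts if not entails(base, f"not ({c})")]
        return " and ".join(kept) if kept else "p or not p"  # tautology fallback
    return f

def selective_revise(base, alpha, f):
    """Selective revision: K o alpha = K * f(alpha)."""
    return revise(base, f(alpha))
```

Given the base {"p"} and the input "not p and q", this f discards the conflicting conjunct "not p" and the agent accepts only "q", ending up with {"p", "q"} rather than giving up p.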
An agent can usually hold a very large number of beliefs. However, only a small part of these beliefs is used at a time. Efficient operations for belief change should affect the beliefs of the agent locally, that is, the changes should be performed only in the relevant part of the belief state. In this paper we define a local consequence operator that only considers the relevant part of a belief base. This operator is used to define local versions of the operations for belief change. Representation theorems are given for the local operators.
In the first part of this paper, I clear the ground from frequent misconceptions of the relationship between fact and value by examining some uses of the adjective “natural” in ethical controversies. Such uses attest to our “natural” tendency to regard nature as the source of ethical norms. I then try to account for the origins of this tendency by offering three related explanations, the most important of which is evolutionistic: if any behaviour that favours our equilibrium with the environment is potentially adaptive, nothing can be more effective for this goal than developing an attitude toward the natural world that considers it as a dispenser of sacred norms that must be invariably respected. By referring to the Aristotelian notion of human flourishing illustrated in the first part of the paper, in the second I discuss as a case study some ethical problems raised by mini-chips implantable in our bodies. I conclude by defending the potential beneficial effects of such new technological instruments.
In the last half century, decision theory has had a deep influence on moral theory. Its impact has largely been beneficial. However, it has also given rise to some problems, two of which are discussed here. First, issues such as risk-taking and risk imposition have been left out of ethics since they are believed to belong to decision theory, and consequently the ethical aspects of these issues have not been treated in either discipline. Secondly, ethics has adopted the decision-theoretical idea that action-guidance has to be based on cause–effect or means–ends relationships between an individual action and its possible outcomes. This is problematic since the morally relevant connections between an action and future events are not fully covered by such relationships. In response to the first problem it is proposed that moral theory should deal directly and extensively with issues such as risk-taking and risk imposition, thereby intruding unabashedly into the traditional territory of decision theory. As a partial response to the second problem it is proposed that moral theorizing should release itself from the decision-theoretical requirement that the moral status of an action has to be derivable from the consequences (or other properties) that are assignable to that action alone. In particular, the effects that an action can have in combination with other actions by the same or other agents are valid arguments in an action-guiding moral discourse, even if its contribution to these combined consequences cannot be isolated and evaluated separately.