Abstract: In this article, I offer a proposal to clarify what I believe is the proper relation between value maximization and stakeholder theory, which I call enlightened value maximization. Enlightened value maximization utilizes much of the structure of stakeholder theory but accepts maximization of the long-run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders, and specifies long-term value maximization or value seeking as the firm’s objective. This proposal therefore solves the problems that arise from the multiple objectives that accompany traditional stakeholder theory. I also discuss the Balanced Scorecard, the managerial equivalent of stakeholder theory, explaining how this theory is flawed because it presents managers with a scorecard that gives no score—that is, no single-valued measure of how they have performed. Thus managers evaluated with such a system (which can easily have two dozen measures and provides no information on the tradeoffs between them) have no way to make principled or purposeful decisions. The solution is to define a true (single-dimensional) score for measuring the performance of the organization or division, one that is consistent with the organization’s strategy. As long as the score is defined properly (and for lower levels in the organization it will generally not be firm value), this will enhance managers’ contribution to the firm.
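The “scorecard with no score” point can be put formally (a gloss in my notation, not Jensen’s): maximizing a single-valued objective is well-posed, while “maximizing” a vector of measures is not.

\[
\max_{d \in D} V(d) \quad \text{vs.} \quad \text{``}\max_{d \in D}\bigl(s_1(d), \dots, s_{24}(d)\bigr)\text{''}
\]

The left-hand problem has a determinate answer; the right-hand one does not, unless the measures s_1, ..., s_24 are aggregated into a complete ordering, i.e. unless the tradeoffs among them are specified.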
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource providing details on the people, policies, and issues being addressed in association with OBI.
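As a minimal sketch of working with OBI programmatically (not part of the original abstract; it assumes network access, the standard OBO Foundry PURL, and that the example term ID OBI_0000011, "planned process", is still current):

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDF, RDFS, OWL

    # Load OBI from its OBO Foundry PURL. The release retrieved, and
    # hence the class counts, will vary over time.
    g = Graph()
    g.parse("http://purl.obolibrary.org/obo/obi.owl", format="xml")

    # Count named classes. This includes classes imported from GO,
    # ChEBI, PATO, etc., so the number need not match the 2366 quoted
    # in the abstract.
    named_classes = [c for c in g.subjects(RDF.type, OWL.Class)
                     if isinstance(c, URIRef)]
    print(len(named_classes), "named classes")

    # Look up the human-readable label of a single term.
    term = URIRef("http://purl.obolibrary.org/obo/OBI_0000011")
    print(g.value(term, RDFS.label))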
A number of researchers have begun to demonstrate that the widely discussed ‘Knobe effect’ (wherein participants are more likely to think that actions with bad side-effects are brought about intentionally than actions with good or neutral side-effects) can be found in theory of mind judgments that do not involve the concept of intentional action. In this article we report experimental results that show that attributions of knowledge can be influenced by the kinds of (non-epistemic) concerns that drive the Knobe effect. Our findings suggest there is good reason to think that the epistemic version of the Knobe effect is a theoretically significant and robust effect, and that the goodness or badness of side-effects can often have greater influence on participant knowledge attributions than explicit information about objective probabilities. Thus, our work sheds light on important ways in which participant assessments of actions can affect the epistemic assessments participants make of agents’ beliefs.
I argue that Merleau-Ponty’s use of the case of Schneider in his arguments for the existence of non-conceptual and non-representational motor intentionality contains a problematic methodological ambiguity. Motor intentionality is supposed to be revealed both by its perspicuous preservation and by its contrastive impairment in one and the same case. To resolve the resulting contradiction I suggest we emphasize the second of Merleau-Ponty’s two lines of argument. I argue that this interpretation is the one in best accordance both with Merleau-Ponty’s general methodology and with the empirical case of Schneider as it was described by Gelb and Goldstein.
Self-organized criticality (SOC) is based upon the idea that complex behavior can develop spontaneously in certain multi-body systems whose dynamics vary abruptly. This book is a clear and concise introduction to the field of self-organized criticality, and contains an overview of the main research results. The author begins with an examination of what is meant by SOC, and the systems in which it can occur. He then presents and analyzes computer models to describe a number of systems, and he explains the different mathematical formalisms developed to understand SOC. The final chapter assesses the impact of this field of study, and highlights some key areas of new research. The author assumes no previous knowledge of the field, and the book contains several exercises. It will be ideal as a textbook for graduate students taking physics, engineering, or mathematical biology courses in nonlinear science or complexity.
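The blurb mentions computer models without naming them; the canonical SOC model is the Bak-Tang-Wiesenfeld sandpile (an assumption about which models the book covers). A minimal sketch: grains are dropped on a grid, any site holding four or more grains topples one grain to each neighbour, and the resulting avalanche sizes follow a power law, the signature of SOC.

    import random

    N = 50                        # grid side length
    grid = [[0] * N for _ in range(N)]

    def topple():
        """Relax the grid; return the number of topplings (avalanche size)."""
        size = 0
        unstable = [(i, j) for i in range(N) for j in range(N)
                    if grid[i][j] >= 4]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue          # already relaxed by an earlier topple
            grid[i][j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:  # grains fall off the edge
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        return size

    avalanches = []
    for _ in range(100_000):
        i, j = random.randrange(N), random.randrange(N)
        grid[i][j] += 1                 # slow drive: add one grain
        avalanches.append(topple())     # fast relaxation

The separation of time scales (one grain added, then full relaxation) is what drives the system to its critical state without any parameter tuning.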
The understanding of decision-making systems has come together in recent years to form a unified theory of decision-making in the mammalian brain as arising from multiple, interacting systems (a planning system, a habit system, and a situation-recognition system). This unified decision-making system has multiple potential access points through which it can be driven to make maladaptive choices, particularly choices that entail seeking of certain drugs or behaviors. We identify 10 key vulnerabilities in the system: (1) moving away from homeostasis, (2) changing allostatic set points, (3) euphorigenic signals, (4) overvaluation in the planning system, (5) incorrect search of situation-action-outcome relationships, (6) misclassification of situations, (7) overvaluation in the habit system, (8) a mismatch in the balance of the two decision systems, (9) over-fast discounting processes, and (10) changed learning rates. These vulnerabilities provide a taxonomy of potential problems with decision-making systems. Although each vulnerability can drive an agent to return to the addictive choice, each vulnerability also implies a characteristic symptomatology. Different drugs, different behaviors, and different individuals are likely to access different vulnerabilities. This has implications for an individual's susceptibility to addiction and the transition to addiction, for the potential for relapse, and for the potential for treatment.
In this paper, we sketch the development of two important themes of modern set theory, both of which can be regarded as growing out of work of Kurt Gödel. We begin with a review of some basic concepts and conventions of set theory. §0. The ordinal numbers were Georg Cantor's deepest contribution to mathematics. After the natural numbers 0, 1, …, n, … comes the first infinite ordinal number ω, followed by ω + 1, ω + 2, …, ω + ω, … and so forth. ω is the first limit ordinal as it is neither 0 nor a successor ordinal. We follow the von Neumann convention, according to which each ordinal number α is identified with the set {ν ∣ ν < α} of its predecessors. The ∈ relation on ordinals thus coincides with <. We have 0 = ∅ and α + 1 = α ∪ {α}. According to the usual set-theoretic conventions, ω is identified with the first infinite cardinal ℵ0, similarly for the first uncountable ordinal number ω1 and the first uncountable cardinal number ℵ1, etc. The von Neumann hierarchy divides the class V of all sets into a hierarchy of sets Vα indexed by the ordinal numbers. The recursive definition reads: V0 = ∅; Vα+1 = P(Vα), the power set of Vα; and Vλ = ⋃ν<λ Vν for limit ordinals λ. (The two pictures accompanying this passage are not reproduced here.)
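For concreteness, the first few levels of the hierarchy, computed directly from the recursion (a worked instance added here, not part of the original text):

\[
V_0 = \emptyset,\quad V_1 = \{\emptyset\},\quad V_2 = \{\emptyset,\{\emptyset\}\},\quad
|V_{n+1}| = 2^{|V_n|},\quad |V_\omega| = \aleph_0 .
\]

Thus the finite levels grow as iterated exponentials, and every hereditarily finite set appears by stage ω.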
Research indicates that introducing Philosophy with Children (PwC) in schools can yield a number of desirable benefits, including improved academic skills in students. However, as PwC differs f...
INTRODUCTION: HUTCHESON'S LIFE AND WORKS. The history of philosophy includes the names of many persons, famous in their time, whose contributions to human ...
This paper analyzes a number of procedural and substantive tension points with which a conscientious whistleblower struggles. Included in the former are such questions as: (1) Am I properly depicting the seriousness of the problem? (2) Have I secured the information properly, analyzed it appropriately, and presented it fairly? (3) Are my motives appropriate? (4) Have I tried fully enough to have the problem corrected within the organization? (5) Should I blow the whistle while still a member of the organization or after having left it? (6) Should I remain anonymous? (7) How ethical is it to assume the role of a judge? (8) How ethical is it to set in motion an act which will likely be very costly to many people? Substantive tension points include such questions as: (1) How fully am I living up to my moral obligations to my organization and my colleagues? (2) Am I appropriately upholding the ethical standards of my profession? (3) How adversely will my action affect my family and other primary groups? (4) Am I being true to myself? (5) How will my action affect the health of such basic values as freedom of expression, independent judgment, courage, fairness, cooperativeness, and loyalty?
That death is not a welfare issue appears to be a widespread view among animal welfare researchers. This paper demonstrates that this view is based on a mistaken assumption about harm, which is coupled to ‘welfare’ being conceived as ‘welfare at a time’. Assessments of welfare at a time ignore issues of longevity. In order to assess the welfare issue of death, it is necessary to structure welfare assessment as comparisons of possible lives of the animals. The paper also demonstrates that excluding the welfare issues of being deprived of life from the ethical assessment of killing distorts the ethical considerations.
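On one standard way of making this precise (the deprivation account; the formalization is mine, not the paper's), lifetime welfare aggregates welfare-at-a-time, and the harm of death is the lifetime welfare forgone:

\[
W(\text{life}) = \sum_{t} w(t), \qquad
\text{harm of death at } t_0 = W(\text{possible longer life}) - W(\text{life ended at } t_0).
\]

A welfare-at-a-time assessment sees only the individual terms w(t) and so cannot register the right-hand difference, which is why it treats death as no welfare issue.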
Thomas Nagel recognizes that it is commonly believed that people can neither be held morally responsible nor morally assessed for what is beyond their control. Yet he is convinced that although such a belief may be intuitively plausible, upon reflection we find that we do make moral assessments of persons in a large number of cases in which such assessments depend on factors not under their control. Of such factors he says (p. 26): [quotation not reproduced here].
We show in ZFC that if there is no proper class inner model with a Woodin cardinal, then there is an absolutely definable core model that is close to V in various ways.
Mental and behavioral disorders represent a significant portion of the public health burden in all countries. The human cost of these disorders is immense, yet treatment options for sufferers are currently limited, with many patients failing to respond sufficiently to available interventions and drugs. High quality ontologies facilitate data aggregation and comparison across different disciplines, and may therefore speed up the translation of primary research into novel therapeutics. Realism-based ontologies describe entities in reality and the relationships between them in such a way that – once formulated in a suitable formal language – the ontologies can be used for sophisticated automated reasoning applications. Reference ontologies can be applied across different contexts in which different, and often mutually incompatible, domain-specific vocabularies have traditionally been used. In this contribution we describe the Mental Functioning Ontology (MF) and Mental Disease Ontology (MD), two realism-based ontologies currently under development for the description of human mental functioning and disease. We describe the structure and upper levels of the ontologies and preliminary application scenarios, and identify some open questions.
We show that either of the following hypotheses implies that there is an inner model with a proper class of strong cardinals and a proper class of Woodin cardinals. (1) There is a countably closed cardinal κ ≥ ℵ3 such that □κ and □(κ) fail. (2) There is a cardinal κ such that κ is weakly compact in the generic extension by Col(κ, κ⁺). Of special interest is (1) with κ = ℵ3, since it follows from PFA by theorems of Todorcevic and Velickovic. Our main new technical result, which is due to the first author, is a weak covering theorem for the model obtained by stacking mice over K^c‖κ.
Plausibility models are Kripke models that agents use to reason about knowledge and belief, both of themselves and of each other. Such models are used to interpret the notions of conditional belief, degrees of belief, and safe belief. The logic of conditional belief contains that modality and also the knowledge modality, and similarly for the logic of degrees of belief and the logic of safe belief. With respect to these logics, plausibility models may contain too much information. A proper notion of bisimulation is required that characterises them. We define that notion of bisimulation and prove the required characterisations: on the class of image-finite and preimage-finite models, two pointed Kripke models are modally equivalent in either of the three logics, if and only if they are bisimilar. As a result, the information content of such a model can be similarly expressed in the logic of conditional belief, or the logic of degrees of belief, or that of safe belief. This, we found, is a surprising result. Still, that does not mean that the logics are equally expressive: the logics of conditional and degrees of belief are incomparable, the logics of degrees of belief and safe belief are incomparable, while the logic of safe belief is more expressive than the logic of conditional belief. In view of the result on bisimulation characterisation, this is an equally surprising result. We hope our insights may contribute to the growing work in formal epistemology on the relation between qualitative and quantitative modelling.
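As a plain illustration of the underlying idea (standard partition refinement for modal bisimulation on a finite Kripke model; this is not the paper's plausibility-model bisimulation, only its classical ancestor):

    def bisimulation_classes(states, trans, val):
        """Coarsest bisimulation on a finite Kripke model.
        states: iterable of states; trans: dict state -> set of successors;
        val: dict state -> frozenset of atoms true at that state.
        Returns a partition whose blocks are the bisimilarity classes."""
        # Start by separating states that disagree on atomic propositions.
        blocks = {}
        for s in states:
            blocks.setdefault(val[s], set()).add(s)
        partition = list(blocks.values())

        changed = True
        while changed:
            changed = False
            # Signature of a state: the set of blocks its successors hit.
            index = {s: i for i, block in enumerate(partition) for s in block}
            new_partition = []
            for block in partition:
                groups = {}
                for s in block:
                    sig = frozenset(index[t] for t in trans[s])
                    groups.setdefault(sig, set()).add(s)
                new_partition.extend(groups.values())
                if len(groups) > 1:
                    changed = True
            partition = new_partition
        return partition

    # Example: s1 and s2 agree on atoms and both step to the q-state s3,
    # so they end up in one block; s3, with no successors, is separated.
    states = ["s1", "s2", "s3"]
    trans = {"s1": {"s3"}, "s2": {"s3"}, "s3": set()}
    val = {"s1": frozenset({"p"}), "s2": frozenset({"p"}),
           "s3": frozenset({"q"})}
    print(bisimulation_classes(states, trans, val))

On image-finite models this partition coincides with modal equivalence, which is the shape of the characterisation results the paper proves for its three richer logics.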
We are developing the Neurological Disease Ontology (ND) to provide a framework to enable representation of aspects of neurological diseases that are relevant to their treatment and study. ND is a representational tool that addresses the need for unambiguous annotation, storage, and retrieval of data associated with the treatment and study of neurological diseases. ND is being developed in compliance with the Open Biomedical Ontology Foundry principles and builds upon the paradigm established by the Ontology for General Medical Science (OGMS) for the representation of entities in the domain of disease and medical practice. Initial applications of ND will include the annotation and analysis of large data sets and patient records for Alzheimer’s disease, multiple sclerosis, and stroke.
The general public in Europe seems to have lost its confidence in food safety. The remedy for this, as proposed by the Commission of the EU, is a scientific rearmament. The question, however, is whether more science will be able to overturn the public distrust. Present experience seems to suggest the contrary, because there is widespread distrust in the science-based governmental control systems. The answer to this problem is the creation of an independent scientific Food Authority. However, we argue that independent scientific advice alone is unlikely to re-establish public confidence. It is much more important to make the scientific advice transparent, i.e., to state explicitly the factual and normative premises on which it is based. Risk assessments are based on a rather narrow, but well-defined notion of risk. However, the public is concerned with a broader value context that comprises both benefits and risks. Transparency and understanding of the public's perception of food risks is a necessary first step in establishing the urgently required public dialogue about the complex value questions involved in food production.
Recent events and technical advances raise the possibility of cloning endangered and extinct species. The ethics of these types of cloning involves special considerations, uniquely different from the types of cloning commonly practiced. Cloning of cheetahs may be ethically appropriate, given certain constraints. However, the ethics of cloning extinct species varies; for example, cloning mammoths and Neanderthals is more ethically problematic than conservation cloning, and requires more attention. Cloning Neanderthals in particular is likely unethical and such a project should not be undertaken. It is important to discuss and plan for the constraints necessary to mitigate the harms of conservation cloning and the cloning of extinct species, and it is imperative that scientific and public discourse enlighten and guide actions in the sphere of cloning.
The Commission's recent interpretation of the Precautionary Principle is used as a starting point for an analysis of the moral foundation of this principle. The Precautionary Principle is shown to have the ethical status of an amendment to a liberal principle to the effect that a state may only restrict a person's actions in order to prevent unacceptable harm to others. The amendment allows for restrictions being justified even in cases where there is no conclusive scientific evidence for the risk of harmful effects. However, the liberal tradition has serious problems in determining when a risk of harm is unacceptable. Nevertheless, reasonable liberal arguments in favor of precaution can be based on considerations of irreversible harm and general fear of harm. But it is unclear when these considerations can be overridden. Within the liberal framework, the Commission advocates a so-called proportional version of the Precautionary Principle. This should be clearly distinguished from a welfare-based approach to precaution based on risk-averse weighing up of expected costs and benefits. However, in the last resort, the Commission does seem to make a covert appeal to such considerations.
It is common to define egalitarianism in terms of an inequality ordering, which is supposed to have some weight in overall evaluations of outcomes. Egalitarianism, thus defined, implies that levelling down makes the outcome better in respect of reducing inequality; however, the levelling down objection claims there can be nothing good about levelling down. The priority view, on the other hand, does not have this implication. This paper challenges the common view. The standard definition of egalitarianism implicitly assumes a context. Once this context is made clear, it is easily seen that egalitarianism could be defined alternatively in terms of valuing a benefit to a person inversely to how well off he is relative to others. The levelling down objection does not follow from this definition. Moreover, the common definition does not separate egalitarian orderings from prioritarian ones. It is useful to separate them by requiring that, on egalitarianism, additively separable orderings be excluded. But this requirement is stated as a condition on the alternative definition of egalitarianism, from which the levelling down objection does not follow.
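The separability contrast can be put formally (notation mine, not the paper's). A prioritarian ordering is additively separable:

\[
W_{\mathrm{prio}}(w_1,\dots,w_n) = \sum_{i=1}^{n} f(w_i), \qquad f \text{ increasing and strictly concave,}
\]

so the value of a benefit to person i depends only on i's own level w_i. On the alternative definition of egalitarianism, the marginal value of a benefit to i depends on how w_i compares with the levels of the others, so W_egal cannot be written as a sum of terms each depending on one person's level in isolation; excluding such separable orderings is what distinguishes the two views.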
This paper asks whether the genuine representation of future generations brings any added value that could not be achieved by institutions or procedures installed to supplement and support ordinary representative democracy. Against this background, it reviews some arguments for genuine representation of future generations. The analysis reveals that they tend to overlook the democratic costs of such representation, while they seem to ignore the alternative of giving consideration to the interests of future generations within current democracy. It is concluded that what really matters in terms of the democratic ideal is to ensure an impartial deliberation which takes the interests of all affected parties sufficiently into account.
This article seeks to defend and develop a stakeholder pragmatism advanced in some of the work by Edward Freeman and colleagues. By positioning stakeholder pragmatism more in line with the democratic and ethical base in American pragmatism (as developed by William James, John Dewey and Richard Rorty), the article sets forth a fallibilistic stakeholder pragmatism that seeks to be more useful to companies by expanding the ways in which value is and can be created in a contingent world. A dialogue between a defence company and a peace and arbitration society is used to illustrate the main plot of this article.
This article updates the author’s 1982 argument that lutetium and lawrencium, rather than lanthanum and actinium, should be assigned to the d-block as the heavier analogs of scandium and yttrium, whereas lanthanum and actinium should be considered as the first members of the f-block with irregular configurations. This update is embedded within a detailed analysis of Lavelle’s abortive 2008 attempt to discredit this suggestion.
James Griffin has considered a form of superiority in value that is weaker than lexical priority as a possible remedy to the Repugnant Conclusion. In this article, I demonstrate that, in a context where value is additive, this weaker form collapses into the stronger form of superiority. And in a context where value is non-additive, weak superiority does not amount to a radical value difference at all. These results are applied to one of Larry Temkin's cases against transitivity. I demonstrate that Temkin appeals to two conflicting notions of aggregation. I then spell out the consequences of these results for different interpretations of Griffin's suggestion regarding population ethics. None of them comes out very successful, but perhaps they nevertheless retain some interest.
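A sketch of why the collapse occurs under additivity (my notation; the article's own argument may differ in detail). Weak superiority of A over B says that some finite number of A-units outweighs any number of B-units:

\[
\exists n\ \forall m:\; n\,v(A) > m\,v(B).
\]

If v(A) and v(B) are positive reals and value adds up, the right-hand side grows without bound, so the condition can hold only if v(B) is zero or infinitesimal relative to v(A). But in that case a single A-unit already outweighs any number of B-units, which is just the strong, lexical form of superiority.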