Contrary to stockholder theories that place the interests of profit-seeking owners above all else, stakeholder theorists argue that corporate executives have moral and ethical obligations to consider equally the interests of a wide range of stakeholders affected by the actions of a corporation. This paper argues that the stakeholder approach is particularly appropriate for the governance of news media companies and outlines an ethical framework to guide news company executives.
Ramsey famously pronounced that discounting “future enjoyments” would be ethically indefensible. Suppes enunciated an equity criterion implying that all individuals’ welfare should be treated equally. By contrast, Arrow accepted, perhaps rather reluctantly, the logical force of Koopmans’ argument that no satisfactory preference ordering on a sufficiently unrestricted domain of infinite utility streams satisfies equal treatment. In this paper, we first derive an equitable utilitarian objective based on a version of the Vickrey–Harsanyi original position, extended to allow a variable and uncertain population with no finite bound. Following the work of Chichilnisky and others on sustainability, slightly weakening the conditions of Koopmans and co-authors allows intergenerational equity to be satisfied. In fact, assuming that the expected total number of individuals who ever live is finite, and that each individual’s utility is bounded both above and below, there is a coherent equitable objective based on expected total utility. Moreover, it implies the “extinction discounting rule” advocated by, inter alia, the Stern Review on climate change.
Ramsey famously condemned discounting “future enjoyments” as “ethically indefensible”. Suppes enunciated an equity criterion which, when social choice is utilitarian, implies giving equal weight to all individuals’ utilities. By contrast, Arrow accepted, perhaps reluctantly, what he called Koopmans’ (Econometrica 28:287–309, 1960) “strong argument” implying that no equitable preference ordering exists for a sufficiently unrestricted domain of infinite utility streams. Here we derive an equitable utilitarian objective for a finite population based on a version of the Vickrey–Harsanyi original position, where there is an equal probability of becoming each person. For a potentially infinite population facing an exogenous stochastic process of extinction, an equitable extinction-biased original position requires equal conditional probabilities, given that the individual’s generation survives the extinction process. Such a position is well-defined if and only if survival probabilities decline fast enough for the expected total number of individuals who can ever live to be finite. Then, provided that each individual’s utility is bounded both above and below, maximizing expected “extinction discounted” total utility—as advocated, inter alia, by the Stern Review on climate change—provides a coherent and dynamically consistent equitable objective, even when the population size of each generation can be chosen.
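For concreteness, the “extinction discounted” objective described in the two abstracts above can be sketched as follows; the notation (survival probabilities, generation sizes, individual utilities) is illustrative and not taken from the papers themselves:

\[
  W \;=\; \sum_{t=0}^{\infty} \pi_t \sum_{i=1}^{n_t} u_{t,i},
\]

where \(\pi_t\) is the probability that generation \(t\) survives the exogenous extinction process, \(n_t\) is the size of generation \(t\), and \(u_{t,i}\) is the utility of individual \(i\) in that generation. The sum is well defined provided the expected total number of individuals who ever live, \(\sum_t \pi_t n_t\), is finite and each \(u_{t,i}\) is bounded above and below, matching the conditions stated in the abstracts.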
A global target of stabilizing greenhouse-gas concentrations at between 450 and 550 parts per million carbon-dioxide equivalent has proven robust to recent developments in the science and economics of climate change. Retrospective analysis of the Stern Review suggests that the risks were underestimated, indicating a stabilization target closer to 450 ppm CO2e. Climate policy at the international level is now moving rapidly towards agreeing an emissions pathway, and distributing responsibilities between countries. A feasible framework can be constructed in which each country takes on its own responsibilities and targets, based on a shared understanding of the risks and the need for action and collaboration on climate change. The global deal should contain six key features: a pathway to achieve the world target of 50 per cent reductions by 2050, where rich countries contribute at least 75 per cent of the reductions; global emissions trading to reduce costs; reform of the clean development mechanism to scale up emission reductions on a sectoral or benchmark level; scaling up of R&D funding for low-carbon energy; an agreement on deforestation; and adaptation finance.
Kim’s causal exclusion argument purports to demonstrate that the non-reductive physicalist must treat mental properties (and macro-level properties in general) as causally inert. A number of authors have attempted to resist Kim’s conclusion by utilizing the conceptual resources of Woodward’s (2005) interventionist conception of causation. The viability of these responses has been challenged by Gebharter (2017a), who argues that the causal exclusion argument is vindicated by the theory of causal Bayesian networks (CBNs). Since the interventionist conception of causation relies crucially on CBNs for its foundations, Gebharter’s argument appears to cast significant doubt on interventionism’s antireductionist credentials. In the present article, we both (1) demonstrate that Gebharter’s CBN-theoretic formulation of the exclusion argument relies on some unmotivated and philosophically significant assumptions (especially regarding the relationship between CBNs and the metaphysics of causal relevance), and (2) use Bayesian networks to develop a general theory of causal inference for multi-level systems that can serve as the foundation for an antireductionist interventionist account of causation.
Recent approaches to causal modelling rely upon the causal Markov condition, which specifies which probability distributions are compatible with a directed acyclic graph. Further principles are required in order to choose among the large number of DAGs compatible with a given probability distribution. Here we present a principle that we call frugality. This principle tells one to choose the DAG with the fewest causal arrows. We argue that frugality has several desirable properties compared to the other principles that have been suggested, including the well-known causal faithfulness condition.
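The selection rule described in this abstract can be illustrated with a toy search over three-variable DAGs. The candidate graphs, the variable names, and the hard-coded independence facts below are illustrative assumptions, not taken from the paper; the d-separation-implied independences for each candidate are filled in by hand rather than computed.

```python
# Toy illustration of the frugality principle: among DAGs whose
# d-separation-implied independences all hold in the observed distribution
# (the causal Markov condition), prefer the DAG with the fewest arrows.

# Observed independence facts: X and Y are marginally independent;
# no other (conditional) independences hold.
observed_independences = {("X", "Y", frozenset())}  # (A, B, conditioning set)

# Candidate DAGs, each given as a set of directed edges, together with the
# independences that d-separation implies for that graph (entered by hand).
candidates = {
    "collider X->Z<-Y": {
        "edges": {("X", "Z"), ("Y", "Z")},
        "implied": {("X", "Y", frozenset())},
    },
    "chain X->Z->Y": {
        "edges": {("X", "Z"), ("Z", "Y")},
        "implied": {("X", "Y", frozenset({"Z"}))},
    },
    "complete X->Z, Y->Z, X->Y": {
        "edges": {("X", "Z"), ("Y", "Z"), ("X", "Y")},
        "implied": set(),  # a complete DAG implies no independences
    },
}

def markov_compatible(dag):
    # Every independence the DAG implies must actually hold in the data.
    return dag["implied"] <= observed_independences

compatible = {name: d for name, d in candidates.items() if markov_compatible(d)}
fewest = min(len(d["edges"]) for d in compatible.values())
frugal = [name for name, d in compatible.items() if len(d["edges"]) == fewest]
print(frugal)  # the collider is both Markov-compatible and frugal here
```

In this toy case both the collider and the complete graph satisfy the Markov condition with respect to the observed independences, and frugality breaks the tie in favour of the collider, which has fewer arrows.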
Bayesians standardly claim that there is rational pressure for agents’ credences to cohere across time because they face bad (epistemic or practical) consequences if they fail to diachronically cohere. But as David Christensen has pointed out, groups of individual agents also face bad consequences if they fail to interpersonally cohere, and there is no general rational pressure for one agent's credences to cohere with another’s. So it seems that standard Bayesian arguments may prove too much. Here, we agree with Christensen that there is no general rational pressure to diachronically cohere, but we argue that there are particular cases in which there is rational pressure to diachronically cohere, as well as particular cases in which interpersonal probabilistic coherence is rationally required. More generally, we suggest that Bayesian arguments for coherence apply whenever a collection (of agents or time slices) has a shared dimension of value and an ability to coordinate their actions in a range of cases relevant to that value. Typically, this shared value and ability to coordinate is very strong across the time slices of one human being, and very weak across different human beings, but there are special cases where these can switch—i.e., some groups of humans will have as much reason for their beliefs to cohere across a particular range of cases as the time slices of one human usually do, but some time slices of a human will have as much freedom to differ in their beliefs from the others as the members of a group usually do.
Jim Joyce has argued that David Lewis’s formulation of causal decision theory is inadequate because it fails to apply to the “small world” decisions that people face in real life. Meanwhile, several authors have argued that causal decision theory should be developed such that it integrates the interventionist approach to causal modeling because of the expressive power afforded by the language of causal models, but, as of now, there has been little work towards this end. In this paper, I propose a variant of Lewis’s causal decision theory that is intended to meet both of these demands. Specifically, I argue that Lewis’s causal decision theory can be rendered applicable to small world decisions if one analyzes his dependency hypotheses as causal hypotheses that depend on the interventionist causal modeling framework for their semantics. I then argue that this interventionist variant of Lewis’s causal decision theory is preferable to interventionist causal decision theories that purportedly generalize Lewis’s through the use of conditional probabilities. This is because Lewisian interventionist decision theory captures the causal decision theorist’s conviction that any correlation between what the agent does and what she cannot cause should be irrelevant to the agent’s choice, while purported generalizations do not.
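The conviction referred to in the last sentence is often glossed by contrasting evidential and interventionist expected utility; the notation below is a standard textbook formulation, not the paper's own:

\[
  EU_{\mathrm{ev}}(a) \;=\; \sum_{o} P(o \mid a)\, V(o), \qquad
  EU_{\mathrm{caus}}(a) \;=\; \sum_{o} P(o \mid \mathrm{do}(a))\, V(o).
\]

Because \(P(o \mid \mathrm{do}(a))\) ignores merely evidential correlations between the act and factors the act cannot cause, the two quantities can come apart in Newcomb-style cases.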
Meek and Glymour use the graphical approach to causal modeling to argue that one and the same norm of rational choice can be used to deliver both causal-decision-theoretic verdicts and evidential-decision-theoretic verdicts. Specifically, they argue that if an agent maximizes conditional expected utility, then the agent will follow the causal decision theorist’s advice when she represents herself as intervening, and will follow the evidential decision theorist’s advice when she represents herself as not intervening. Since Meek and Glymour take no stand on whether agents should represent themselves as intervening, they provide more general advice than standard causal decision theorists and evidential decision theorists. But I argue here that even Meek and Glymour’s advice is not sufficiently general. This is because their advice is not sensitive to the distinct ways in which agents can fail to intervene, and there are decision-making contexts in which agents can reasonably have non-extreme confidence that they are intervening. I then show that the most natural extension of Meek and Glymour’s framework fails, but offer a generalization of my “Interventionist Decision Theory” that does not suffer from the same problems.
Schupbach and Sprenger introduce a novel probabilistic approach to measuring the explanatory power that a given explanans exerts over a corresponding explanandum. Though we are sympathetic to their general approach, we argue that it does not adequately capture the way in which the causal explanatory power that c exerts on e varies with background knowledge. We then amend their approach so that it does capture this variance. Though our account of explanatory power is less ambitious than Schupbach and Sprenger’s in the sense that it is limited to causal explanatory power, it is also more ambitious because we do not limit its domain to cases where c genuinely explains e. Instead, we claim that c causally explains e if and only if our account says that c explains e with some positive amount of causal explanatory power.
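For reference, the measure being amended here, as we recall it from Schupbach and Sprenger's work, writes the explanatory power of explanans c over explanandum e as

\[
  \mathcal{E}(e, c) \;=\; \frac{P(c \mid e) - P(c \mid \lnot e)}{P(c \mid e) + P(c \mid \lnot e)},
\]

which takes values in \([-1, 1]\) and is positive exactly when c and e are positively statistically relevant to one another.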
In this paper, I use interventionist causal models to identify some novel Newcomb problems, and subsequently use these problems to refine existing interventionist treatments of causal decision theory. The new Newcomb problems that make trouble for existing interventionist treatments involve so-called ‘exotic choice’—that is, decision-making contexts where the agent has evidence about the outcome of her choice. I argue that when choice is exotic, the interventionist can adequately capture causal decision-theoretic reasoning by introducing a new interventionist approach to updating on exotic evidence. But I also argue that this new updating procedure is principled only if the interventionist trades in the typical interventionist conception of choice for an alternative Ramseyan conception. I end by arguing that the guide to exotic choice developed here may, despite its name, be useful in some everyday contexts.
There are cases of ineffable learning — i.e., cases where an agent learns something but becomes certain of nothing that she can express — where it is rational to update by Jeffrey conditionalization. But there are likewise cases of ineffable learning where updating by Jeffrey conditionalization is irrational. In this paper, we first characterize a novel class of cases where it is irrational to update by Jeffrey conditionalization. Then we use the d-separation criterion to develop a causal understanding of when and when not to Jeffrey conditionalize that bars updating by Jeffrey conditionalization in these cases. Finally, we reflect on how the possibility of so-called “unfaithful” causal systems bears on the normative force of the causal updating norm that we advocate.
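For readers who want the rule at issue on the table: Jeffrey conditionalization handles learning episodes in which no proposition becomes certain by redistributing credence over a partition \(\{E_i\}\):

\[
  P_{\mathrm{new}}(A) \;=\; \sum_i P_{\mathrm{old}}(A \mid E_i)\, P_{\mathrm{new}}(E_i).
\]

Standard conditionalization is the special case in which some cell \(E_k\) receives \(P_{\mathrm{new}}(E_k) = 1\).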
McGee argues that it is sometimes reasonable to accept both x and x → (y → z) without accepting y → z, and that modus ponens is therefore invalid for natural language indicative conditionals. Here, we examine McGee's counterexamples from a Bayesian perspective. We argue that the counterexamples are genuine insofar as the joint acceptance of x and x → (y → z) at time t does not generally imply constraints on the acceptability of y → z at t, but we use the distance-based approach to Bayesian learning to show that applications of modus ponens are nevertheless guaranteed to be successful in an important sense. Roughly, if an agent becomes convinced of the premises of a modus ponens argument, then she should likewise become convinced of the argument's conclusion. Thus we take McGee's counterexamples to disentangle and reveal two distinct ways in which arguments can convince. Any general theory of argumentation must take stock of both.
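The "success" claim in the penultimate sentence can be illustrated with a simple probabilistic gloss; treating conviction as credence 1 is an illustrative assumption here, not the distance-based machinery the paper itself uses. If the agent becomes certain of both premises, so that \(P(x) = 1\) and \(P(y \to z \mid x) = 1\), then since \(P(\lnot x) = 0\),

\[
  P(y \to z) \;=\; P\big((y \to z) \wedge x\big) + P\big((y \to z) \wedge \lnot x\big) \;=\; P(y \to z \mid x)\,P(x) \;=\; 1,
\]

so certainty in the premises carries over to certainty in the conclusion.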
Though common sense says that causes must temporally precede their effects, the hugely influential interventionist account of causation makes no reference to temporal precedence. Does common sense lead us astray? In this paper, I evaluate the power of the commonsense assumption from within the interventionist approach to causal modeling. I first argue that if causes temporally precede their effects, then one need not consider the outcomes of interventions in order to infer causal relevance, and that one can instead use temporal and probabilistic information to infer exactly when X is causally relevant to Y in each of the senses captured by Woodward’s interventionist treatment. Then, I consider the upshot of these findings for causal decision theory, and argue that the commonsense assumption is especially powerful when an agent seeks to determine whether so-called “dominance reasoning” is applicable.
It is a consequence of the theory of imprecise credences that there exist situations in which rational agents inevitably become less opinionated toward some propositions as they gather more evidence. The fact that an agent's imprecise credal state can dilate in this way is often treated as a strike against the imprecise approach to inductive inference. Here, we show that dilation is not a mere artifact of this approach by demonstrating that opinion loss is countenanced as rational by a substantially broader class of normative theories than has been previously recognised. Specifically, we show that dilation-like phenomena arise even when one abandons the basic assumption that agents have (precise or imprecise) credences of any kind, and follow directly from bedrock norms for rational comparative confidence judgements of the form ‘I am at least as confident in p as I am in q’. We then use the comparative confidence framework to develop a novel understanding of what exactly gives rise to dilation-like phenomena. By considering opinion loss in this more general setting, we are able to provide a novel assessment of the prospects for an account of inductive inference that is not saddled with the inevitability of rational opinion loss.
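A standard toy case, not necessarily the one used in the paper, makes the dilation phenomenon concrete; the variable names and the grid of candidate priors below are illustrative choices.

```python
# Toy dilation example: X is a fair coin, Z is a coin whose bias is unknown,
# and Y = X XOR Z.  Each member of the credal set fixes a bias p for Z.
# Before learning X, every member assigns P(Y = 1) = 0.5; after learning
# X = 1, the members' values for P(Y = 1 | X = 1) spread out, i.e. dilate.

biases = [i / 10 for i in range(1, 10)]  # candidate biases p = P(Z = 1)

def prior_y(p):
    # Y = 1 iff exactly one of X, Z is 1; X is fair and independent of Z.
    return 0.5 * (1 - p) + 0.5 * p  # equals 0.5 for every p

def posterior_y_given_x1(p):
    # Given X = 1, Y = 1 iff Z = 0.
    return 1 - p

prior_interval = (min(prior_y(p) for p in biases),
                  max(prior_y(p) for p in biases))
posterior_interval = (min(posterior_y_given_x1(p) for p in biases),
                      max(posterior_y_given_x1(p) for p in biases))

print(prior_interval)      # ~(0.5, 0.5): the credal set is opinionated about Y
print(posterior_interval)  # (0.1, 0.9): after learning X, opinion is lost
```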
Reviewed Works: Reuben Hersh, Proving is Convincing and Explaining; Philip J. Davis, Visual Theorems; Gila Hanna and H. Niels Jahnke, Proof and Application; Daniel Chazan, High School Geometry Students' Justification for Their Views of Empirical Evidence and Mathematical Proof.
A number of authors have recently used causal models to develop a promising semantics for non-backtracking counterfactuals. Briggs shows that when this semantics is naturally extended to accommodate right-nested counterfactuals, it invalidates modus ponens, and therefore violates weak centering given the standard Lewis/Stalnaker interpretation of the counterfactual in terms of nearness or similarity of worlds. In this paper, I explore the possibility of abandoning the Lewis/Stalnaker interpretation for some alternative that is better suited to accommodate the causal modeling semantics. I argue that a revision of McGee’s semantics can accommodate CM semantics without sacrificing weak centering, and that CM semantics can therefore be situated within a general semantics for counterfactuals that is based on the nearness or similarity of worlds.
This thesis explores the merits and limits of John Hawthorne’s contextualist analysis of free will. First, I argue that contextualism does better at capturing the ordinary understanding of ‘free will’ than competing views because it best accounts for the way in which our willingness to attribute free will ordinarily varies with context. Then I consider whether this is enough to conclude that the contextualist has won the free will debate. I argue that this would be hasty, because the contextualist, unlike her competitors, cannot tell us whether any particular agent is definitively free, and therefore cannot inform any practices that are premised on whether a particular agent is morally responsible. As such, I argue that whether the contextualist “wins the free will debate” depends on whether it is more important to capture the ordinary understanding of ‘free will’ or more important to inform our practices of ascribing moral responsibility.
Does y obtain under the counterfactual supposition that x? The answer to this question is famously thought to depend on whether y obtains in the most similar world in which x obtains. What this notion of ‘similarity’ consists in is controversial, but in recent years, graphical causal models have proved incredibly useful in getting a handle on considerations of similarity between worlds. One limitation of the resulting conception of similarity is that it says nothing about what would obtain were the causal structure to be different from what it actually is, or from what we believe it to be. In this paper, we explore the possibility of using graphical causal models to resolve counterfactual queries about causal structure by introducing a notion of similarity between causal graphs. Since there are multiple principled senses in which a graph G* can be more similar to a graph G than a graph G**, we introduce multiple similarity metrics, as well as multiple ways to prioritize the various metrics when settling counterfactual queries about causal structure.
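The paper's own metrics are not reproduced here; as one illustration of what a similarity metric between causal graphs can look like, the sketch below computes a simple edge-difference distance between directed graphs, with hypothetical variable names.

```python
# A simple edge-difference distance between two directed graphs, offered
# only as an illustration of one possible similarity metric between causal
# structures.  Graphs are represented as sets of directed edges.

def edge_distance(g1, g2):
    # Count edges present in one graph but not the other (a reversed edge
    # counts as two differences: one missing edge and one extra edge).
    return len(g1 ^ g2)  # symmetric difference

g = {("X", "Y"), ("Y", "Z")}           # X -> Y -> Z
g_star = {("X", "Y"), ("Z", "Y")}      # X -> Y <- Z: one edge reversed
g_star_star = {("X", "Z")}             # X -> Z: quite different from g

print(edge_distance(g, g_star))        # 2: g* is closer to g
print(edge_distance(g, g_star_star))   # 3: g** is further from g
```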
There are simple mechanical systems that elude causal representation. We describe one that cannot be represented in a single directed acyclic graph. Our case suggests limitations on the use of causal graphs for causal inference and makes salient the point that causal relations among variables depend upon details of causal setups, including values of variables.
Erratum to: Synthese 191:1925–1930, DOI: 10.1007/s11229-013-0380-3. The authors were unaware that points in their article appeared in “Caveats for Causal Reasoning with Equilibrium Models,” by Denver Dash and Marek Druzdzel, published in S. Benferhat and P. Besnard (eds.): European Conferences on Symbolic and Quantitative Approaches to Reasoning with Uncertainty 2001, Lecture Notes in Artificial Intelligence 2143, pp. 192–203. The authors apologize to Dash and Druzdzel for failing to cite their excellent work.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call “ZFEL,” the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.