It has become increasingly common for philosophers to make use of the concept of artistic value, and, further, to distinguish artistic value from aesthetic value. In a recent paper, ‘The Myth of (Non-Aesthetic) Artistic Value’, Dominic Lopes takes issue with this, presenting a kind of corrective to current philosophical practice regarding the use of the concept of artistic value. Here I am concerned to defend current practice against Lopes's attack. I argue that there is some unclarity as to what aspect of this practice Lopes is objecting to, and I distinguish three kinds of objection that he could be read as making. I argue that none of these is adequately supported by Lopes's arguments, and that the corresponding three aspects of current philosophical practice are on firmer footing than Lopes's paper suggests. A new, plausible characterisation of artistic value will emerge from this discussion.
Retrospective rule-making has few supporters and many opponents. Those who defend retrospective laws generally do so on the basis that they are a necessary evil in specific or limited circumstances, for example to close tax loopholes, to deal with terrorists or to prosecute fallen tyrants. Yet the reality of retrospective rule-making is far more widespread than this, and ranges from ‘corrective’ legislation to ‘interpretive regulations’ to judicial decision making. The search for a rational justification for retrospective rule-making necessitates a reconsideration of the very nature of the rule of law and the kind of law that can rule, and will provide new insights into the nature of law and the parameters of societal order. This book examines the various ways in which laws may be seen as retrospective and analyses the problems in defining retrospectivity. In his analysis Dr Charles Sampford asserts that the definitive argument against retrospective rule-making is the expectation of individuals that, if their actions today are considered by a future court, the applicable law was discoverable at the time the action was performed. The book goes on to suggest that although the strength of this ‘rule of law’ argument should prevail in general, exceptions are sometimes necessary, and that there may even be occasions when analysis of the rule of law may provide the foundation for the application of retrospective laws.
"Follow the money" has been the operational rule for historians and investigative journalists since at least the Watergate era, if not earlier. Futurists do not have a money trail to follow, but instead must predict the trajectory of economic relations based on assumptions of what technological and social developments the future may hold. Many futurists assume that nanotechnology in combination with Artificial Intelligence (AI) will yield a world of material abundance with little or no need for human labor. The nano/AI cornucopia will rain down wealth upon one and all, giving slackers and solid workaholics equal access to almost anything they could ever need or want. But is this really the most likely scenario?
Does general validity or real world validity better represent the intuitive notion of logical truth for sentential modal languages with an actuality connective? In (Philosophical Studies 130:436–459, 2006) I argued in favor of general validity, and I criticized the arguments of Zalta (Journal of Philosophy 85:57–74, 1988) for real world validity. But in Nelson and Zalta (Philosophical Studies 157:153–162, 2012) Michael Nelson and Edward Zalta criticize my arguments and claim to have established the superiority of real world validity. Section 1 of the present paper introduces the problem and sets out the basic issues. In Sect. 2 I consider three of Nelson and Zalta’s arguments and find all of them deficient. In Sect. 3 I note that Nelson and Zalta direct much of their criticism at a phrase (‘true at a world from the point of view of some distinct world as actual’) I used only inessentially in Hanson (Philosophical Studies 130:436–459, 2006), and that their account of the philosophical foundations of modal semantics leaves them ill equipped to account for the plausibility of modal logics weaker than S5. Along the way I make several general suggestions for ways in which philosophical discussions of logical matters–especially, but not limited to, discussions of truth and logical truth for languages containing modal and indexical terms–might be facilitated and made more productive.
Arthur Diamond comments that "it is not clear how a donor distributes money through Hanson's market". Let me try again to be clear. Imagine David Levy were to seek funding for the regression he suggests in his comments, on the relative impact of sports versus science spending on aggregate productivity. Consider what might happen under three different funding institutions.
What if we someday learn how to model small brain units, and so can "upload" ourselves into new computer brains? What if this happens before we learn how to make human-level artificial intelligences? The result could be a sharp transition to an upload-dominated world, with many dramatic consequences. In particular, fast and cheap replication may once again make Darwinian evolution of human values a powerful force in human history. With evolved values, most uploads would value life even when life is hard or short, uploads would reproduce quickly, and wages would fall. But total wealth should rise, so we could all do better by accepting uploads, or at worst taxing them, rather than trying to delay or segregate them.
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. If light speed limits travel speeds, however, then a selection effect may eventually determine frontier behavior. Making weak assumptions about colonization technology, we use this selection effect to predict colonists’ behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how behavior changes with increasing congestion. This colonization model explains several astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources.
In Everett's many worlds interpretation, quantum measurements are considered to be decoherence events. If so, then inexact decoherence may allow large worlds to mangle the memory of observers in small worlds, creating a cutoff in observable world size. Smaller worlds are mangled and so not observed. If this cutoff is much closer to the median measure size than to the median world size, the distribution of outcomes seen in unmangled worlds follows the Born rule. Thus deviations from exact decoherence can allow the Born rule to be derived via world counting, with a finite number of worlds and no new fundamental physics.
Does the real difference between non-consequentialist and consequentialist theories lie in their approach to value? Non-consequentialist theories are thought either to allow a different kind of value (namely, agent-relative value) or to advocate a different response to value ('honouring' rather than 'promoting'). One objection to this idea implies that all normative theories are describable as consequentialist. But then the distinction between honouring and promoting collapses into the distinction between relative and neutral value. A proper description of non-consequentialist theories can only be achieved by including a distinction between temporal relativity and neutrality in addition to the distinction between agent-relativity and agent-neutrality.
In Everett’s many-worlds interpretation, where quantum measurements are seen as decoherence events, inexact decoherence may let large worlds mangle the memories of observers in small worlds, creating a cutoff in observable world measure. I solve a growth–drift–diffusion–absorption model of such a mangled worlds scenario, and show that it reproduces the Born probability rule closely, though not exactly. Thus, inexact decoherence may allow the Born rule to be derived in a many-worlds approach via world counting, using a finite number of worlds and no new fundamental physics.
You are in a grocery store, and thinking of buying some meat. You think you know what buying and eating this meat would mean for your taste buds, your nutrition, and your pocketbook, and let's assume that on those grounds it looks like a good deal. But now you want to think about the..
This paper considers the question of whether predictions of wrongdoing are relevant to our moral obligations. After giving an analysis of ‘won’t’ claims (i.e., claims that an agent won’t Φ), the question is separated into two different issues: firstly, whether predictions of wrongdoing affect our objective moral obligations, and secondly, whether self-prediction of wrongdoing can be legitimately used in moral deliberation. I argue for an affirmative answer to both questions, although there are conditions that must be met for self-prediction to be appropriate in deliberation. The discussion illuminates an interesting and significant tension between agency and prediction.
The traditional view that all logical truths are metaphysically necessary has come under attack in recent years. The contrary claim is prominent in David Kaplan’s work on demonstratives, and Edward Zalta has argued that logical truths that are not necessary appear in modal languages supplemented only with some device for making reference to the actual world (and thus independently of whether demonstratives like ‘I’, ‘here’, and ‘now’ are present). If this latter claim can be sustained, it strikes close to the heart of the traditional view. I begin this paper by discussing and refuting Zalta’s argument in the context of a language for propositional modal logic with an actuality connective (section 1). This involves showing that his argument in favor of real world validity, his preferred explication of logical truth, is fallacious. Next (section 2) I argue for an alternative explication of logical truth called general validity. Since the rule of necessitation preserves general validity, the argument of section 2 provides a reason for affirming the traditional view. Finally (section 3) I show that the intuitive idea behind the discredited notion of real world validity finds legitimate expression in an object language connective for deep necessity.
People love to pretend, and to watch others pretending. From story-telling to plays to movies to virtual reality, we keep getting better at making people feel like they are watching imagined places and events. We also keep getting better at role-playing, i.e., creating environments where several people can see what happens when they all pretend they are different people in another time and place. Eventually such role-playing simulations may get so good that people will often forget that it is just a simulation.
Humans clearly have trouble thinking about death. This trouble is often used to explain behavior like delay in writing wills or buying life insurance, or interest in odd medical and religious beliefs. But the problem is far worse than most people imagine. Fear of death makes us spend fifteen percent of our income on medicine, from which we get little or no health benefit, while we neglect things like exercise, which offer large health benefits.
In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of base event pairs, it costs no more to elicit estimates on all combinations of these base events.
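The core mechanics of the logarithmic market scoring rule described in this abstract can be sketched briefly. In the sketch below (a minimal illustration, not the paper's own presentation; the function names and the liquidity parameter `b` are assumptions), a market maker tracks a vector of outstanding shares q, quotes probability estimates as a softmax of q, and charges a trader the change in the cost function C(q) = b·log Σᵢ exp(qᵢ/b):

```python
import math

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)  # subtract the max before exponentiating to avoid overflow
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: a softmax over shares, so prices
    behave as a coherent probability distribution over the outcomes."""
    m = max(q)
    exps = [math.exp((qi - m) / b) for qi in q]
    return exps[i] / sum(exps)

def trade_cost(q_old, q_new, b=100.0):
    """What a trader pays to move the outstanding shares from q_old to q_new."""
    return lmsr_cost(q_new, b) - lmsr_cost(q_old, b)

# Example: a fresh two-outcome market quotes 50/50; buying 10 shares of
# outcome 0 pushes its price up and costs the trader a positive amount.
q = [0.0, 0.0]
p0_before = lmsr_price(q, 0)
paid = trade_cost(q, [10.0, 0.0])
p0_after = lmsr_price([10.0, 0.0], 0)
```

Because prices are a softmax of the share vector, they always sum to one, and the market maker's worst-case loss is bounded by b·log(n) for n outcomes, which is what makes subsidizing such a market affordable.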
Economic growth is determined by the supply and demand of investment capital; technology determines the demand for capital, while human nature determines the supply. The supply curve has two distinct parts, giving the world economy two distinct modes. In the familiar slow growth mode, rates of return are limited by human discount rates. In the fast growth mode, investment is limited by the world's wealth. Historical trends suggest that we may transition to the fast mode in roughly another century and a half.
In "Logical consequence: A defense of Tarski" (Journal of Philosophical Logic, vol. 25, 1996, pp. 617-677), Greg Ray defends Tarski's account of logical consequence against the criticisms of John Etchemendy. While Ray's defense of Tarski is largely successful, his attempt to give a general proof that Tarskian consequence preserves truth fails. Analysis of this failure shows that de facto truth preservation is a very weak criterion of adequacy for a theory of logical consequence and should be replaced by a stronger absence-of-counterexamples criterion. It is argued that the latter criterion reflects the modal character of our intuitive concept of logical consequence, and it is shown that Tarskian consequence can be proved to satisfy this criterion for certain choices of logical constants. Finally, an apparent inconsistency in Ray's interpretation of Tarski's position on the modal status of the consequence relation is noted.
The purpose of this paper is to establish a proper context for reading Jacques Derrida's The Gift of Death, which, I contend, can only be understood fully against the backdrop of "Violence and Metaphysics." The later work cannot be fully understood unless the reader appreciates the fact that Derrida returns to "a certain Abraham" not only in the name of Kierkegaard but also in the name of Levinas himself. The hypothesis of the reading that follows therefore would be that Derrida writes The Gift of Death not as an attempt to re-present Kierkegaard's Abraham either rightly or wrongly but as an effort to do with Kierkegaard's Abraham what is possible with his thought in a broadly Levinasian/Derridean framework. That the reading he provides of the Abraham story would not be recognizable to Kierkegaard is not the principal point of Derrida's effort; his aim is to demonstrate that Levinas should not have been so hasty to dismiss Kierkegaard but could have recovered his interpretation of Abraham for purposes that Derrida and Levinas both share.
Some widely accepted arguments in the philosophy of mathematics are fallacious because they rest on results that are provable only by using assumptions that the conclusions of these arguments seek to undercut. These results take the form of biconditionals linking statements of logic with statements of mathematics. George Boolos has given an argument of this kind in support of the claim that certain facts about second-order logic support logicism, the view that mathematics—or at least part of it—reduces to logic. Hilary Putnam has offered a similar argument for the view that it is indifferent whether we take mathematics to be about objects or about what follows from certain postulates. In this paper I present and rebut these arguments.
The ‘Wrong Kind of Reason’ problem for buck-passing theories (theories which hold that the normative is explanatorily or conceptually prior to the evaluative) is to explain why the existence of pragmatic or strategic reasons for some response to an object does not suffice to ground evaluative claims about that object. The only workable reply seems to be to deny that there are reasons of the ‘wrong kind’ for responses, and to argue that these are really reasons for wanting, trying, or intending to have that response. In support of this, it is pointed out that awareness of pragmatic or strategic considerations, unlike awareness of reasons of the ‘right kind’, is never sufficient by itself to produce the responses for which they are reasons. I argue that this phenomenon cannot be used as a criterion for distinguishing reasons-for-a-response from reasons-for-wanting-to-have-a-response. I subsequently investigate the possibility of basing this distinction on a claim that the responses in question (e.g. admiration or desire) are themselves inherently normative; I conclude that this approach is also unsuccessful. Hence, the ‘direct response’ phenomenon cannot be used to rule out the possibility of pragmatic or strategic reasons for responses; and the rejection of such reasons therefore cannot be used to circumvent the Wrong Kind of Reason Problem.
In this paper I look at attempts to develop forms of consequentialism which do not have a feature considered problematic in Direct Consequentialist theories (that is, those consequentialist theories that apply the criterion of rightness directly in the evaluation of any set of options). The problematic feature in question (which I refer to as ‘evaluative conflict’) is the possibility that, for example, a right motive might lead an agent to perform a wrong act. Theories aiming to avoid this phenomenon must argue that the causal relationship between motives and acts (for example) entails their having the same moral status. I argue that attempts to ensure such ‘evaluative consistency’ are themselves deeply problematic, and that we must therefore accept evaluative conflict.
1. The philosophical version of the primary-secondary distinction concerns (a) the 'real' properties of matter, (b) the epistemology of sensation, and (c) a contrast challenged by Berkeley as illusory. The scientific version of the primary-secondary distinction concerns (a') the physical properties of matter, (b') a contrast essential within the history of atomism, and (c') a contrast challenged by 20th century microphysics as de facto untenable. 2. The primary-secondary distinction within physics can be interpreted in two ways: a. it can refer to content; e.g. 'Matter has the properties of mass, shape, density... etc. -- it only appears to have the properties of warmth, fragrance, etc.' Or, b. it can refer to form; e.g. 'Whatever properties our best theories accord to primary matter, e.g., electrons, these are by definition primary. All other properties of, e.g., macromatter, are derivative.' Concerning 2.a., this interpretation is simply false when 17th, 18th, or 19th century values for the property-variables are introduced. Concerning 2.b., this is either uninformative or misleading. It is uninformative when it constitutes no more than a decision to use the word 'primary' as an umbrella-word for all the properties contemporary micro-physics accords to fundamental material particles, whatever these may be. It is misleading when it turns on an implicit contrast between certain properties particles may be said to have when 'harnessed' to a detector, and certain other properties these particles have when free and unharnessed to any detector. This contrast does not exist. Quantum-theoretic information is always about particles-and-their-detectors-in-combination. Dissolve this combination and you destroy any possible knowledge of the particle. Hence the notion of 'completely objectifiable properties of particles' is in principle unsound.
Psychologism in logic is the doctrine that the semantic content of logical terms is in some way a feature of human psychology. We consider the historically influential version of the doctrine, Psychological Individualism, and the many counter-arguments to it. We then propose and assess various modifications to the doctrine that might allow it to avoid the classical objections. We call these Psychological Descriptivism, Teleological Cognitive Architecture, and Ideal Cognizers. These characterizations give some order to the wide range of modern views that are seen as psychologistic because of one or another feature. Although these modifications can avoid some of the classic objections to psychologism, others still hold.
There is, to all appearances, a philosophic hostility to fashionable dress. Studying this contempt, this paper examines likely sources in philosophy's suspicion of change; anxiety about surfaces and the inessential; failures in the face of death; and the philosophic disdain for, denial of, the human body and human passivity. If there are feminist concerns about fashion, they should be radically different from those of traditional philosophy. Whatever our ineluctable worries about desire and death, whatever our appropriate anger and impatience with the merely superficial, whatever our genuine need to mark off the serious from the trivial, feminism may be a corrective therapy for philosophy's bad humor and self-deception, as these manifest themselves when the subject turns to beautiful clothes.
The problem of extreme demands is one of the most intractable in contemporary moral theory. On the one hand, it seems that a failure to prevent great suffering at little cost to ourselves is morally wrong; given the amount of suffering in the world and the comparatively trivial nature of the requisite sacrifices, this intuition demands that we give up quite a lot. On the other hand, it doesn’t seem to us that we act wrongly in living lives characterised by only moderate sacrifice, in which our time and resources are disproportionately used to benefit ourselves and those close to us. These two intuitions are extremely difficult to reconcile within any moral theory that recognises a duty to promote the general good. In this paper, however, I will suggest one possible way of doing so. My suggestion requires taking a closer look at the way in which the demand to promote the good is derived: specifically, at the way our option set is characterised and the information that we take into account in weighing these options. I will suggest that there are certain assumptions it is plausible to make regarding the relevance of information about our own and other agents’ actions, and that once these assumptions are made, we can see how permissions may be derived within the framework of good-promotion.
The Matrix is a story of AIs who keep humans as slaves, by keeping them in a dream world, and of rebels who fight to teach people this truth and destroy this dream world. But we humans are today slaves to alien hyper-rational entities who care little about us, and who distract us with a dream world. We do not want to know this truth, and if anything fight to preserve our dream world. Go figure.
The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the idea that actions are produced only by human individuals. This essay compares two views of responsibility: moral individualism (the ethical twin of methodological individualism), and joint responsibility (associated with extended agency theory). It develops a view of what joint responsibility might look like, and considers the advantages it might bring relative to moral individualism as well as the objections that are sure to be raised against it.