Retrospective rule-making has few supporters and many opponents. Those who defend retrospective laws generally do so on the basis that they are a necessary evil in specific or limited circumstances, for example to close tax loopholes, to deal with terrorists or to prosecute fallen tyrants. Yet the reality of retrospective rule-making is far more widespread than this, and ranges from 'corrective' legislation to 'interpretive regulations' to judicial decision making. The search for a rational justification for retrospective rule-making necessitates a reconsideration of the very nature of the rule of law and the kind of law that can rule, and will provide new insights into the nature of law and the parameters of societal order. This book examines the various ways in which laws may be seen as retrospective and analyses the problems in defining retrospectivity. In his analysis Dr Charles Sampford asserts that the definitive argument against retrospective rule-making is the expectation of individuals that, if their actions today are considered by a future court, the applicable law was discoverable at the time the action was performed. The book goes on to suggest that although the strength of this 'rule of law' argument should prevail in general, exceptions are sometimes necessary, and that there may even be occasions when analysis of the rule of law may provide the foundation for the application of retrospective laws.
"Follow the money" has been the operational rule for historians and investigative journalists since at least the Watergate era, if not earlier. Futurists do not have a money trail to follow, but instead must predict the trajectory of economic relations based on assumptions of what technological and social developments the future may hold. Many futurists assume that nanotechnology in combination with Artificial Intelligence (AI) will yield a world of material abundance with little or no need for human labor. The nano/AI cornucopia will rain down wealth upon one and all, giving slackers and solid workaholics equal access to almost anything they could ever need or want. But is this really the most likely scenario?
Arthur Diamond comments that "it is not clear how a donor distributes money through Hanson's market". Let me try again to be clear. Imagine David Levy were to seek funding for the regression he suggests in his comments, on the relative impact of sports versus science spending on aggregate productivity. Consider what might happen under three different funding institutions.
What if we someday learn how to model small brain units, and so can "upload" ourselves into new computer brains? What if this happens before we learn how to make human-level artificial intelligences? The result could be a sharp transition to an upload-dominated world, with many dramatic consequences. In particular, fast and cheap replication may once again make Darwinian evolution of human values a powerful force in human history. With evolved values, most uploads would value life even when life is hard or short, uploads would reproduce quickly, and wages would fall. But total wealth should rise, so we could all do better by accepting uploads, or at worst taxing them, rather than trying to delay or segregate them.
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. If light speed limits travel speeds, however, then a selection effect may eventually determine frontier behavior. Making weak assumptions about colonization technology, we use this selection effect to predict colonists’ behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how behavior changes with increasing congestion. This colonization model explains several astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources.
Does the real difference between non-consequentialist and consequentialist theories lie in their approach to value? Non-consequentialist theories are thought either to allow a different kind of value (namely, agent-relative value) or to advocate a different response to value ('honouring' rather than 'promoting'). One objection to this idea implies that all normative theories are describable as consequentialist. But then the distinction between honouring and promoting collapses into the distinction between relative and neutral value. A proper description of non-consequentialist theories can only be achieved by including a distinction between temporal relativity and neutrality in addition to the distinction between agent-relativity and agent-neutrality.
The traditional view that all logical truths are metaphysically necessary has come under attack in recent years. The contrary claim is prominent in David Kaplan’s work on demonstratives, and Edward Zalta has argued that logical truths that are not necessary appear in modal languages supplemented only with some device for making reference to the actual world (and thus independently of whether demonstratives like ‘I’, ‘here’, and ‘now’ are present). If this latter claim can be sustained, it strikes close to the heart of the traditional view. I begin this paper by discussing and refuting Zalta’s argument in the context of a language for propositional modal logic with an actuality connective (section 1). This involves showing that his argument in favor of real world validity, his preferred explication of logical truth, is fallacious. Next (section 2) I argue for an alternative explication of logical truth called general validity. Since the rule of necessitation preserves general validity, the argument of section 2 provides a reason for affirming the traditional view. Finally (section 3) I show that the intuitive idea behind the discredited notion of real world validity finds legitimate expression in an object language connective for deep necessity.
This paper considers the question of whether predictions of wrongdoing are relevant to our moral obligations. After giving an analysis of ‘won’t’ claims (i.e., claims that an agent won’t Φ), the question is separated into two different issues: firstly, whether predictions of wrongdoing affect our objective moral obligations, and secondly, whether self-prediction of wrongdoing can be legitimately used in moral deliberation. I argue for an affirmative answer to both questions, although there are conditions that must be met for self-prediction to be appropriate in deliberation. The discussion illuminates an interesting and significant tension between agency and prediction.
You are in a grocery store, and thinking of buying some meat. You think you know what buying and eating this meat would mean for your taste buds, your nutrition, and your pocketbook, and let's assume that on those grounds it looks like a good deal. But now you want to think about the..
People love to pretend, and to watch others pretending. From story-telling to plays to movies to virtual reality, we keep getting better at making people feel like they are watching imagined places and events. We also keep getting better at role-playing, i.e., creating environments where several people can see what happens when they all pretend they are different people in another time and place. Eventually such role-playing simulations may get so good that people will often forget that it is just a simulation.
Humans clearly have trouble thinking about death. This trouble is often used to explain behavior like delay in writing wills or buying life insurance, or interest in odd medical and religious beliefs. But the problem is far worse than most people imagine. Fear of death makes us spend fifteen percent of our income on medicine, from which we get little or no health benefit, while we neglect things like exercise, which offer large health benefits.
In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of base event pairs, it costs no more to elicit estimates on all combinations of these base events.
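The conditional-bet property claimed here for logarithmic rules can be illustrated with a small sketch. The four-state joint distribution, the trader's update, and the liquidity parameter `b` below are illustrative assumptions, not values from the abstract:

```python
import math

b = 1.0  # liquidity parameter of the log market scoring rule (assumed)

# Joint states over two binary events (A, B), with an assumed uniform prior.
p_old = {(1, 1): 0.25, (0, 1): 0.25, (1, 0): 0.25, (0, 0): 0.25}
# A trader raises P(A|B) from 0.5 to 0.8 while leaving P(B) = 0.5
# and the probabilities given not-B untouched.
p_new = {(1, 1): 0.40, (0, 1): 0.10, (1, 0): 0.25, (0, 0): 0.25}

# Under the log rule, the trader's payoff when state w occurs is
# b * (ln p_new(w) - ln p_old(w)).
payoff = {w: b * (math.log(p_new[w]) - math.log(p_old[w]))
          for w in p_old}

# States where B is false pay exactly zero: the bet on A given B
# carries no stake when the given event B fails to occur.
print(payoff[(1, 0)], payoff[(0, 0)])  # 0.0 0.0
```

Because the payoff depends only on the log of the probability assigned to the realized state, any update that leaves the not-B states untouched is a pure conditional bet, which is the locality property that singles out the logarithmic rule.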
Economic growth is determined by the supply and demand of investment capital; technology determines the demand for capital, while human nature determines the supply. The supply curve has two distinct parts, giving the world economy two distinct modes. In the familiar slow growth mode, rates of return are limited by human discount rates. In the fast growth mode, investment is limited by the world's wealth. Historical trends suggest that we may transition to the fast mode in roughly another century and a half.
In Logical consequence: A defense of Tarski (Journal of Philosophical Logic, vol. 25, 1996, pp. 617–677), Greg Ray defends Tarski's account of logical consequence against the criticisms of John Etchemendy. While Ray's defense of Tarski is largely successful, his attempt to give a general proof that Tarskian consequence preserves truth fails. Analysis of this failure shows that de facto truth preservation is a very weak criterion of adequacy for a theory of logical consequence and should be replaced by a stronger absence-of-counterexamples criterion. It is argued that the latter criterion reflects the modal character of our intuitive concept of logical consequence, and it is shown that Tarskian consequence can be proved to satisfy this criterion for certain choices of logical constants. Finally, an apparent inconsistency in Ray's interpretation of Tarski's position on the modal status of the consequence relation is noted.
The purpose of this paper is to establish a proper context for reading Jacques Derrida’s The Gift of Death, which, I contend, can only be understood fully against the backdrop of “Violence and Metaphysics.” The later work cannot be fully understood unless the reader appreciates the fact that Derrida returns to “a certain Abraham” not only in the name of Kierkegaard but also in the name of Levinas himself. The hypothesis of the reading that follows therefore would be that Derrida writes The Gift of Death not as an attempt to re-present Kierkegaard’s Abraham either rightly or wrongly but as an effort to do with Kierkegaard’s Abraham what is possible with his thought in a broadly Levinasian/Derridean framework. That the reading he provides of the Abraham story would not be recognizable to Kierkegaard is not the principal point of Derrida’s effort; his aim is to demonstrate that Levinas should not have been so hasty to dismiss Kierkegaard but could have recovered his interpretation of Abraham for purposes that Derrida and Levinas both share.
1. The philosophical version of the primary-secondary distinction concerns (a) the 'real' properties of matter, (b) the epistemology of sensation, and (c) a contrast challenged by Berkeley as illusory. The scientific version of the primary-secondary distinction concerns (a') the physical properties of matter, (b') a contrast essential within the history of atomism, and (c') a contrast challenged by 20th century microphysics as de facto untenable. 2. The primary-secondary distinction within physics can be interpreted in two ways: a. it can refer to content; e.g. 'Matter has the properties of mass, shape, density... etc. -- it only appears to have the properties of warmth, fragrance, etc.' Or, b. it can refer to form; e.g. 'Whatever properties our best theories accord to primary matter, e.g., electrons, these are by definition primary. All other properties of, e.g., macromatter, are derivative.' Concerning 2.a., this interpretation is simply false when 17th, 18th, or 19th century values for the property-variables are introduced. Concerning 2.b., this is either uninformative or misleading. It is uninformative when it constitutes no more than a decision to use the word 'primary' as an umbrella-word for all the properties contemporary micro-physics accords to fundamental material particles, whatever these may be. It is misleading when it turns on an implicit contrast between certain properties particles may be said to have when 'harnessed' to a detector, and certain other properties these particles have when free and unharnessed to any detector. This contrast does not exist. Quantum-theoretic information is always about particles-and-their-detectors-in-combination. Dissolve this combination and you destroy any possible knowledge of the particle. Hence the notion of 'completely objectifiable properties of particles' is in principle unsound.
In this paper I look at attempts to develop forms of consequentialism which do not have a feature considered problematic in Direct Consequentialist theories (that is, those consequentialist theories that apply the criterion of rightness directly in the evaluation of any set of options). The problematic feature in question (which I refer to as ‘evaluative conflict’) is the possibility that, for example, a right motive might lead an agent to perform a wrong act. Theories aiming to avoid this phenomenon must argue that a causal relationship entails that motives and acts (for example) have the same moral status. I argue that attempts to ensure such ‘evaluative consistency’ are themselves deeply problematic, and that we must therefore accept evaluative conflict.
The ‘Wrong Kind of Reason’ problem for buck-passing theories (theories which hold that the normative is explanatorily or conceptually prior to the evaluative) is to explain why the existence of pragmatic or strategic reasons for some response to an object does not suffice to ground evaluative claims about that object. The only workable reply seems to be to deny that there are reasons of the ‘wrong kind’ for responses, and to argue that these are really reasons for wanting, trying, or intending to have that response. In support of this, it is pointed out that awareness of pragmatic or strategic considerations, unlike awareness of reasons of the ‘right kind’, is never sufficient by itself to produce the responses for which such considerations are reasons. I argue that this phenomenon cannot be used as a criterion for distinguishing reasons-for-a-response from reasons-for-wanting-to-have-a-response. I subsequently investigate the possibility of basing this distinction on a claim that the responses in question (e.g. admiration or desire) are themselves inherently normative; I conclude that this approach is also unsuccessful. Hence, the ‘direct response’ phenomenon cannot be used to rule out the possibility of pragmatic or strategic reasons for responses; and the rejection of such reasons therefore cannot be used to circumvent the Wrong Kind of Reason Problem.
There is, to all appearances, a philosophic hostility to fashionable dress. Studying this contempt, this paper examines likely sources in philosophy's suspicion of change; anxiety about surfaces and the inessential; failures in the face of death; and the philosophic disdain for, denial of, the human body and human passivity. If there are feminist concerns about fashion, they should be radically different from those of traditional philosophy. Whatever our ineluctable worries about desire and death, whatever our appropriate anger and impatience with the merely superficial, whatever our genuine need to mark off the serious from the trivial, feminism may be a corrective therapy for philosophy's bad humor and self-deception, as these manifest themselves when the subject turns to beautiful clothes.
Psychologism in logic is the doctrine that the semantic content of logical terms is in some way a feature of human psychology. We consider the historically influential version of the doctrine, Psychological Individualism, and the many counter-arguments to it. We then propose and assess various modifications to the doctrine that might allow it to avoid the classical objections. We call these Psychological Descriptivism, Teleological Cognitive Architecture, and Ideal Cognizers. These characterizations give some order to the wide range of modern views that are seen as psychologistic because of one or another feature. Although these modifications can avoid some of the classic objections to psychologism, others still hold.
The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the idea that actions are produced only by human individuals. This essay compares two views of responsibility: moral individualism (the ethical twin of methodological individualism), and joint responsibility (associated with extended agency theory). It develops a view of what joint responsibility might look like, and considers the advantages it might bring relative to moral individualism as well as the objections that are sure to be raised against it.
The Matrix is a story of AIs who keep humans as slaves, by keeping them in a dream world, and of rebels who fight to teach people this truth and destroy this dream world. But we humans are today slaves to alien hyper-rational entities who care little about us, and who distract us with a dream world. We do not want to know this truth, and if anything fight to preserve our dream world. Go figure.
The problem of extreme demands is one of the most intractable in contemporary moral theory. On the one hand, it seems that a failure to prevent great suffering at little cost to ourselves is morally wrong; given the amount of suffering in the world and the comparatively trivial nature of the requisite sacrifices, this intuition demands that we give up quite a lot. On the other hand, it doesn’t seem to us that we act wrongly in living lives characterised by only moderate sacrifice, in which our time and resources are disproportionately used to benefit ourselves and those close to us. These two intuitions are extremely difficult to reconcile within any moral theory that recognises a duty to promote the general good. In this paper, however, I will suggest one possible way of doing so. My suggestion requires taking a closer look at the way in which the demand to promote the good is derived: specifically, at the way our option set is characterised and the information that we take into account in weighing these options. I will suggest that there are certain assumptions it is plausible to make regarding the relevance of information about our own and other agents’ actions, and that once these assumptions are made, we can see how permissions may be derived within the framework of good-promotion.
Why do we regulate the substances we can ingest, the advisors we can hear, and the products we can buy far more than similarly-important non-health choices? I review many possible arguments for such paternalistic policies, as well as many possible holes in such arguments. I argue we should either be clearer about what justifies our paternalism, or we should back off and be less paternalistic.
The conceptual excitement of science often seems geared only to work in contemporary physics. Thus, philosophers regularly discuss current cosmology, relativity, or the foundations of microphysics. In these areas one's philosophy is stretched and strained far beyond what our ancestors might have anticipated. Historians of science have also focused attention on past events by remarking on their analogies and similarities with perplexities in physics today. But there are statements, hypotheses and theories of the past which are rewarding in themselves, without having to be referred to the agonies which now confound quantum theory and cosmology. Specifically, the First Law of Motion--the "Law of Inertia"--has everything a logician of science could look for. Understanding the complexities and perplexities of this fundamental mechanical statement is in itself to gain insight into what theoretical physics in general really is. With this in view a study of the law is undertaken.
A simple exogenous growth model gives conservative estimates of the economic implications of machine intelligence. Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software does only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do. An intelligence population explosion makes per-intelligence consumption fall this fast, while economic growth rates rise by an order of magnitude or more. These results are robust to automating incrementally, and to distinguishing hardware, software, and human capital from other forms of capital.
In Everett’s many-worlds interpretation, where quantum measurements are seen as decoherence events, inexact decoherence may let large worlds mangle the memories of observers in small worlds, creating a cutoff in observable world measure. I solve a growth–drift–diffusion–absorption model of such a mangled worlds scenario, and show that it reproduces the Born probability rule closely, though not exactly. Thus, inexact decoherence may allow the Born rule to be derived in a many-worlds approach via world counting, using a finite number of worlds and no new fundamental physics.
There is a widespread feeling that health is special; the rules that are usually used in other policy areas are not applied in health policy. Health economists, for example, tend to be reluctant to offer economists’ usual prescription of competition and consumer choice, even though they have largely failed to justify this reluctance by showing that health economics involves special features such as public goods, externalities, adverse selection, poor consumer information, or unusually severe consequences. Similarly, while some philosophers argue for bioethical conclusions based on very general ethical intuitions, many others rely on moral intuitions that are specific to health and medicine to draw conclusions that are meant to apply mainly in health and medicine. For example, many authors appear to start from the strong moral intuition that it typically seems wrong to deny poor people access to health care, and then seek moral principles that can both account for such intuitions and justify the claim that people have some sort of right to health care. In metaethics, opinions on moral intuitions range from an extreme intuitionism, which accepts all case-specific moral intuitions at face value as reliable moral guides, to an extreme foundationalism, which rejects such intuitions as evidence regarding correct general moral principles. Between these extremes, opinions vary on how severe the errors in our moral intuitions are. The practice of bioethics seems to favor the extreme intuitionist end of this spectrum, and thus implicitly expects mild errors. In contrast, this essay will suggest that common practice in bioethics has seriously underestimated the errors in our moral intuitions. In this essay, I consider the evolutionary origin of our moral intuitions, but avoid the extreme positions of moral skepticism and “whatever evolved must be good,” both of which are commonly associated with evolution.
Although the use of possible worlds in semantics has been very fruitful and is now widely accepted, there is a puzzle about the standard definition of validity in possible-worlds semantics that has received little notice and virtually no comment. A sentence of an intensional language is typically said to be valid just in case it is true at every world under every model on every model structure of the language. Each model structure contains a set of possible worlds, and models (...) are defined relative to model structures, assigning truth-values to sentences at each world countenanced by the model structure. The puzzle is why more than one model structure is used in the definition of validity. There is presumably just one class of all possible worlds and just one model structure defined on this class that does correctly the things that model structures are supposed to do. (These include, but need not be limited to, specifying the set of individuals in each world as well as various accessibility relations between worlds.) Why not define validity simply as truth at every world under every model on this one model structure? What is the point of bringing in more model structures than just this one?
We investigate these questions in some detail and conclude that for many intensional languages the puzzle points to a genuine difficulty: the standard definition of validity is insufficiently motivated. We begin (Section 1) by showing that a plausible and natural account of validity for intensional languages can be based on a single model structure, and that validity so defined is analogous in important respects to the standard account of validity for extensional languages. We call this notion of validity "validity1", and in Section 2 we contrast it with the standard notion, which we call "validity2". Several attempts are made to discover a rationale for the almost universal acceptance of validity2, but in most of these attempts we come up empty-handed. So in Section 3 we investigate validity1 for some intensional languages. Our investigation includes providing axiomatizations for several propositional and predicate logics, most of which are provably complete. The completeness proofs are given in the Appendix, which also contains a sketch of a compactness proof for one of the predicate logics.
The usual approach in Buddhist-Western writings uses Buddhist perspectives to help answer Western philosophical-psychological questions. This paper reverses the process and uses the Western philosophical perspective of Nietzsche to answer questions of Buddhist-conceived nirvana. Nietzsche's philosophy of will, expounded primarily through the Zarathustra essays, provides an active and affirmative alternative for understanding and attaining nirvana. His ideas of free will and will to power have commonalities with Buddhist practice and thought, including nonattachment, nihilism, no-self, and meditation. Nietzschean will revises the Buddhist notion of right effort to answer questions about coping with inner suffering and outer-world corruption. It shows nirvana to be less a state of passive being and more a state of active becoming. Why approach such important matters as transcendence, power, and God from the standpoint of the 'I'? First, I-centered analysis can clarify egological concepts such as the subject-I, object-self, and conceptualizing-ego and what these concepts contribute to an experience-based metaphysics, for even the most objective factual or mathematical expression must be stated and understood by an active subject-I. Second, I-centered analysis can advance the phenomenological study of the role of the I in the subjective realms of mind. Third, it can help resolve issues in both Western and Buddhist philosophy such as activism-passivism, subjectivity-objectivity, will and freedom, I and other, and secular/sacred presence in consciousness.
Being read is not the same as being believed. Most reviewers have praised the book as original, well-written, thought-provoking, etc., and then gone on to take issue with one or more of Penrose's main theses. Penrose seems unfamiliar with the existing literature in cognitive science, philosophy of mind, and AI. The handful of reviewers who agree with Penrose don't seem to have paid much attention to his specific arguments - they always thought AI was bogus. See, for example, the 37 reviews in Behavioral and Brain Sciences (BBS), Dec. 1990, V13, pp.643-705.
In Everett’s many worlds interpretation, quantum measurements are considered to be decoherence events. If so, then inexact decoherence may allow large worlds to mangle the memory of observers in small worlds, creating a cutoff in observable world size. Smaller worlds are mangled and so not observed. If this cutoff is much closer to the median measure size than to the median world size, the distribution of outcomes seen in unmangled worlds follows the Born rule. Thus deviations from exact decoherence can allow the Born rule to be derived via world counting, with a finite number of worlds and no new fundamental physics.
Humans lie and deceive themselves, and often choose beliefs for reasons other than how closely those beliefs approximate truth. This is mainly why we disagree. Three future trends may reduce these epistemic vices. First, increased documentation and surveillance should make it harder to lie and self-deceive about the patterns of our lives. Second, speculative markets can create a relatively unbiased consensus on most debated topics in science, business, and policy. Third, brain modifications may allow our minds to be more transparent, so that lies and self-deception become harder to hide. In evaluating these trends, we should be wary of moral arrogance.
Within the past decade there has grown an acute and highly articulate group of critics of the orthodox interpretation of quantum theory--the so-called "Copenhagen Interpretation." The writings of people like Bopp, Janossy, and particularly Bohm and Feyerabend must be taken very seriously indeed. The future of some important discussions in the philosophy and the logic of science rests with these individuals. But they have, in their own writings, occasionally matched the inelegancies of Bohr and Heisenberg with as many inelegancies of their own. The present paper is meant to present a quintet of considerations which may possibly lead to a reassessment of the issues between Bohr, Heisenberg, and their critics, especially Bohm and Feyerabend.
It is commonly assumed that persons who hold abortions to be generally impermissible must, for the same reasons, be opposed to embryonic stem cell research [ESR]. Yet a settled position against abortion does not necessarily direct one to reject that research. The difference in potentiality between the embryos used in ESR and embryos discussed in the abortion debate can make ESR acceptable even if one holds that abortion is impermissible. With regard to their potentiality, in vitro embryos are here argued to be more morally similar to clonable somatic cells than they are to in vivo embryos. This creates an important moral distinction between embryos in vivo and in vitro. Attempts to refute this moral distinction, raised in the recent debate in this journal between Alfonso Gómez-Lobo and Mary Mahowald, are also addressed.
Human behavior regarding medicine seems strange; assumptions and models that seem workable in other areas seem less so in medicine. Perhaps we need to rethink the basics. Toward this end, I have collected many puzzling stylized facts about behavior regarding medicine, and have sought a small number of simple assumptions which might together account for as many puzzles as possible.
While a simple information market lets one trade on the probability of each value of a single variable, a full combinatorial information market lets one trade on any combination of values of a set of variables, including any conditional or joint probability. In laboratory experiments, we compare the accuracy of simple markets, two kinds of combinatorial markets, a call market and a market maker, isolated individuals who report to a scoring rule, and two ways to combine those individual reports into a group prediction. We consider two environments with asymmetric information on sparsely correlated binary variables, one with three subjects and three variables, and the other with six subjects and eight variables (and so 256 states).
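The scoring-rule and report-combination mechanisms mentioned above can be sketched in a few lines. The abstract does not name the two combination methods, so the two below, the linear and logarithmic opinion pools, are illustrative assumptions, standard choices for pooling probability reports on a binary variable:

```python
import math

def log_score(p_outcome):
    """Logarithmic proper scoring rule: reward the log of the
    probability a forecaster assigned to the realized outcome."""
    return math.log(p_outcome)

def linear_pool(ps):
    """Linear opinion pool: average the reported probabilities."""
    return sum(ps) / len(ps)

def log_pool(ps):
    """Logarithmic opinion pool for a binary event: normalized
    geometric mean of the reported probabilities."""
    g_yes = math.prod(ps) ** (1 / len(ps))
    g_no = math.prod(1 - p for p in ps) ** (1 / len(ps))
    return g_yes / (g_yes + g_no)
```

The log scoring rule is proper (a forecaster maximizes expected score by reporting honest beliefs), which is why it is a natural elicitation device for the isolated individuals in such experiments.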
Intuition is surely a theme of singular importance to phenomenology, and Henry writes sometimes as if intuition should receive extensive attention from phenomenologists. However, he devotes relatively little attention to the problem of intuition himself. Instead he offers a complex critique of intuition and the central place it enjoys in phenomenological speculation. This article reconstructs Henry's critique and raises some questions for his counterintuitive theory of intuition. While Henry cannot make a place for the traditional sort of intuition given his commitment to the primacy of life as the natural and spontaneous habitation of consciousness, an abode entirely outside the world, with some modification to Henry's thinking there could nevertheless be a role for intuition to play in discerning the traces of life in the world.
Since market scoring rules have become popular as a form of market maker, it seems worth reviewing just what such mechanisms are intended to do. The main function performed by most market makers is to serve as an intermediary between people who prefer to trade at different times. Traders who have the same favorite times to trade can show up together to an ordinary continuous double auction, and then make and accept offers to trade. But when traders have different favorite times, a market maker can help them by first making offers that some of them will accept, and then later making opposite offers which others will accept. By adjusting prices in his favor, a market maker can even profit from providing this service. By making offers, however, a market maker opens himself up to the risk of losing to informed traders who know more than he about asset values. It is a complex and difficult task to choose the price and duration of offers in order to profit the most from intermediary trades while suffering the least from informed trades. This task requires subtle judgments about the relative fractions of informed and intermediary trades at different times, prices, quantities, and trading histories. No simple algorithm could reasonably claim to do this task optimally. Very active markets have little need for market makers, as anyone can trade at any time. In markets with large but sporadic trades, a human market maker will likely find it profitable to apply considerable intelligence to this complex task. The question is what to do for smaller, less-active markets, which cannot afford such human attention. Trading may simply not happen there if no intermediary can be found to make such markets. A computer program with less than human intelligence that attempts to make markets runs the risk of being out-smarted by human traders.
Humans might even figure out how to turn that program into a money pump, giving up cash each time it is run through some cycle of trades.
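One widely used market scoring rule is the logarithmic one (LMSR), in which an automated market maker quotes prices from a cost function over the outstanding share quantities. A minimal sketch (the liquidity parameter `b` is illustrative, not from the abstract):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    where q[i] is the number of outstanding shares of outcome i."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).
    Prices are positive and sum to one, so they read as probabilities."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i at state q:
    the change in the cost function."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

A relevant property for the thin markets discussed above: the LMSR market maker's worst-case loss is bounded (by b·log n for n outcomes), so the program can be money-pumped only up to that fixed subsidy, no matter what cycle of trades humans run it through.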
The time-honored view that logic is a non-empirical enterprise is still widely accepted, but it is not always recognized that there are (at least) two distinct ways in which this view can be made precise. One way focuses on the knowledge we can have of logical matters, the other on the nature of the logical consequence relation itself. More specifically, the first way embodies the claim that knowledge of whether the logical consequence relation holds in a particular case is knowledge that can be had a priori (if at all). The second way presupposes a distinction between structural and non-structural properties and relations, and it holds that logical consequence is to be defined exclusively in terms of the former. It is shown that the two ways are not coextensive by giving an example of a logic that is non-empirical in the second way but not in the first.
Recent social theory has departed from methodological individualism's explanation of action according to the motives and dispositions of human individuals in favor of explanation in terms of broader agencies consisting of both human and nonhuman elements described as cyborgs, actor-networks, extended agencies, or distributed cognition. This paper proposes that moral responsibility for action also be vested in extended agencies. It advances a consequentialist view of responsibility that takes moral responsibility to be a species of causal responsibility, and it answers objections that might be raised on the basis of intentions and deserts.
Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
People have long noticed that speculative markets, though created for other purposes, also do a great job of aggregating relevant information. In fact, it is hard to find information not embodied by such market prices. This is, in part, because anyone who finds such neglected information can profit by trading on it, thereby reducing the neglect.1 So far, speculative markets have done well in every known head-to-head field comparison with other forecasting institutions. Orange juice futures improved on National Weather Service forecasts,2 horse race markets beat horse race experts,3 Oscar markets beat columnist forecasts,4 gas-demand markets beat gas-demand experts,5 stock markets beat the official NASA panel at fingering the guilty company in the Challenger accident,6 election markets beat national opinion polls,7 and corporate sales markets beat official corporate forecasts.8 Recently, some have considered creating new markets specifically to take…
This paper questions the nature and existence of the ego and the I from both a Western and an Eastern viewpoint, a question posed for 2,500 years, since the Buddha rejected the Brahman idea of ātman. The answer for an ego depends partly on the state of consciousness: the existence of the Western objectifying ego is undeniable in ordinary consciousness, but not in extraordinary consciousness with no objectifying. The subtle question remains about the existence of an I that is distinct from the ego and that is best represented by most meditative or contemplative states. Here a subjectified, witnessing, consciousness-maintaining I still seems to exist. This may be called the "High-I," which appears to provide for all states of consciousness a constancy and awareness not provided by the ego. This finding has implications for psychology and religion as well as philosophy.
Technologists think about specific future technologies, which they may foresee in some detail. Unfortunately, such technologists then mostly use amateur intuitions about the social world to predict the broader social implications of these technologies. This makes it hard for technologists to identify the technologies which will have the largest social impact.
Patients with a life-threatening illness can be confronted with various types of loneliness, one of which is existential loneliness (EL). Since the experience of EL is extremely disruptive, the issue of EL is relevant for the practice of end-of-life care. Still, the literature on EL has generated little discussion and empirical substantiation and has never been systematically reviewed. In order to systematically review the literature, we (1) identified the existential loneliness literature; (2) established an organising framework for the review; (3) conducted a conceptual analysis of existential loneliness; and (4) discussed its relevance for end-of-life care. We found that the EL concept is profoundly unclear. Distinguishing between three dimensions of EL—as a condition, as an experience, and as a process of inner growth—leads to some conceptual clarification. Analysis of these dimensions on the basis of their respective key notions—everpresent, feeling, defence; death, awareness, difficult communication; and inner growth, giving meaning, authenticity—further clarifies the concept. Although none of the key notions are unambiguous, they may function as a starting point for the development of care strategies on EL at the end of life.
Humans have slowly built more productive societies by slowly acquiring various kinds of capital, and by carefully matching them to each other. Because disruptions can disturb this careful matching, and discourage social coordination, large disruptions can cause a "social collapse," i.e., a reduction in productivity out of proportion to the disruption. For many types of disasters, severity seems to follow a power law distribution. For some types, such as wars and earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most people on Earth. So if we are willing to worry about any war or earthquake, we should worry especially about extreme versions. If individuals varied little in their resistance to such disruptions, events a little stronger than extreme ones would eliminate humanity, and our only hope would be to prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase the variance in such resistance, such as by creating special sanctuaries from which the few remaining humans could rebuild society.
A world product time series covering two million years is well fit by either a sum of four exponentials, or a constant elasticity of substitution (CES) combination of three exponential growth modes: "hunting," "farming," and "industry." The CES parameters suggest that farming substituted for hunting, while industry complemented farming, making the industrial revolution a smoother transition. Each mode grew world product by a factor of a few hundred, and grew a hundred times faster than its predecessor. This weakly suggests that within the next century a new mode might appear with a doubling time measured in days, not years.
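The CES combination of exponential modes can be sketched directly; the parameter values below are illustrative only, not the paper's fitted estimates. The sign of the CES exponent rho captures the substitutes-versus-complements distinction the abstract draws:

```python
import math

def mode(t, a, growth):
    """One exponential growth mode: level a * exp(growth * t)."""
    return a * math.exp(growth * t)

def ces(xs, rho):
    """CES aggregate (sum_i x_i**rho)**(1/rho) over positive components.
    With rho > 0 the components act like substitutes (the aggregate is
    dominated by the largest term, as when farming displaced hunting);
    with rho < 0 they act like complements (the aggregate is held back
    by the smallest term, smoothing the industry-farming transition)."""
    return sum(x ** rho for x in xs) ** (1.0 / rho)
```

For example, `ces([mode(t, 1.0, 0.01), mode(t, 1e-6, 1.0)], 2.0)` tracks the slow mode early and is taken over by the hundred-times-faster mode once it catches up, producing the sequence-of-modes shape described above.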
A loose analogy relates the work of Laplace and Hilbert. These thinkers had roughly similar objectives. At a time when so much of our analytic effort goes to distinguishing mathematics and logic from physical theory, such an analogy can still be instructive, even though differences will always divide endeavors such as those of Laplace and Hilbert.
Consider two agents who want to be Bayesians with a common prior, but who cannot due to computational limitations. If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then they can agree to disagree about any random variable only if they also agree to disagree, to a similar degree and in a stronger sense, about an average error. Yet average error is a state-independent random variable, and one agent's estimate of it is also agreed to be state-independent. This suggests that disagreements are not fundamentally due to differing information about the state of the world.
To classify is to organize the particulars in a body of information according to some meaningful scheme. Difficulty recognizing metaphor, synonyms and homonyms, and levels of generalization leaves the applications of artificial intelligence currently in widespread use at a loss to deal effectively with classification. Indexing conveys nothing about relationships; it pinpoints information on particular topics without reference to anything else. Keyword searching is a form of indexing, and here artificial intelligence excels. Growing reliance on automated means of accessing information brings an increase in indexing and a corresponding decrease in classification. This brings about a shift from the modernist view of the world as permanently and hierarchically structured to the indeterminacy and contingency associated with postmodernism.