With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur. This goal has such high utility that standard utilitarians ought to focus all their efforts on it. Utilitarians of a ‘person-affecting’ stripe should accept a modified version of this conclusion. Some mixed ethical views, which combine utilitarian considerations with other criteria, will also be committed to a similar bottom line.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
To what extent should we use technological advances to try to make better human beings? Leading philosophers debate the possibility of enhancing human cognition, mood, personality, and physical performance, and controlling aging. Would this take us beyond the bounds of human nature? These are questions that need to be answered now.
Cognitive enhancement takes many and diverse forms. Various methods of cognitive enhancement have implications for the near future. At the same time, these technologies raise a range of ethical issues. For example, they interact with notions of authenticity, the good life, and the role of medicine in our lives. Present and anticipated methods for cognitive enhancement also create challenges for public policy and regulation.
Numerous approaches to a quantum theory of gravity posit fundamental ontologies that exclude spacetime, either partially or wholly. This situation raises deep questions about how such theories could relate to the empirical realm, since arguably only entities localized in spacetime can ever be observed. Are such entities even possible in a theory without fundamental spacetime? How might they be derived, formally speaking? Moreover, since by assumption the fundamental entities cannot be smaller than the derived and so cannot ‘compose’ them in any ordinary sense, would a formal derivation actually show the physical reality of localized entities? We address these questions via a survey of a range of theories of quantum gravity, and generally sketch how they may be answered positively.
_Anthropic Bias_ explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology; evolutionary theory; the problem of time's arrow; quantum physics; game-theory problems with imperfect recall; even traffic analysis. _Anthropic Bias_ argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
This book explores both the embodied nature of social life and the social nature of human bodily life. It provides an accessible review of the contemporary social science debates on the body, and develops a coherent new perspective. Nick Crossley critically reviews the literature on mind and body, and also on the body and society. He draws on theoretical insights from the work of Gilbert Ryle, Maurice Merleau-Ponty, George Herbert Mead and Pierre Bourdieu, and shows how the work of these writers overlaps in interesting and important ways which, when combined, provide the basis for a persuasive and robust account of human embodiment. The Social Body provides a timely review of the theoretical approaches to the sociology of the body. It offers new insights, and a coherent new perspective on the body. It will be valuable reading for students and academics in sociology, philosophy, anthropology, psychology, and cultural studies.
I argue that at least one of the following propositions is true: the human species is very likely to become extinct before reaching a 'posthuman' stage; any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history; we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we shall one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. I discuss some consequences of this result.
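The trilemma in the abstract above rests on a simple probabilistic step, which can be sketched in the paper's own notation: writing $f_p$ for the fraction of human-level civilizations that reach a posthuman stage and $\bar{N}$ for the average number of ancestor-simulations run by such civilizations (with each simulation containing roughly as many observers, $H$, as a real pre-posthuman history), the fraction $f_{\mathrm{sim}}$ of all human-type observers who are simulated is:

```latex
% f_sim: fraction of observers with human-type experiences who live in simulations.
% Simplifying assumption (from the paper): each ancestor-simulation contains
% about the same number of observers, H, as one actual pre-posthuman history.
f_{\mathrm{sim}} = \frac{f_p \,\bar{N}\, H}{f_p \,\bar{N}\, H + H}
                 = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

If $f_p \bar{N}$ is very large then $f_{\mathrm{sim}} \approx 1$, so denying the first two propositions (i.e., holding both $f_p$ and $\bar{N}$ to be non-negligible) forces the third.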
The terms "imagination" and "imaginative" can be readily applied to a profusion of attitudes, experiences, activities, and further phenomena. The heterogeneity of the things to which they're applied prompts the thoughts that the terms are polysemous, and that there is no single, coherent, fruitful conception of imagination to be had. Nonetheless, much recent work on imagination subscribes implicitly to a univocal way of thinking about imaginative phenomena: the imitation theory, according to which imaginative experiences imitate other experiences. This approach is infelicitous. It issues in unhelpful descriptions of imaginative activities, experiences, and attitudes, and frustrates theorizing about imagination's applications and intensional characteristics. A better way of thinking about imagination is the lens theory, according to which the imagination is a set of ways to focus, refine, clarify or concentrate the matter of other experiences. This approach offers better characterizations of imaginative phenomena, and promises brighter theoretical illumination of them.
A pervasive and influential argument appeals to trivial truths to demonstrate that the aim of inquiry is not the acquisition of truth. But the argument fails, for it neglects to distinguish between the complexity of the sentence used to express a truth and the complexity of the truth expressed by a sentence.
This article responds to recent debates in critical algorithm studies about the significance of the term “algorithm.” Where some have suggested that critical scholars should align their use of the term with its common definition in professional computer science, I argue that we should instead approach algorithms as “multiples”—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of “outsider” researchers. This approach builds on the work of Laura Devendorf, Elizabeth Goodman, and Annemarie Mol. Different ways of enacting algorithms foreground certain issues while occluding others: computer scientists enact algorithms as conceptual objects indifferent to implementation details, while calls for accountability enact algorithms as closed boxes to be opened. I propose that critical researchers might seek to enact algorithms ethnographically, seeing them as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. To do so, I suggest thinking of algorithms not “in” culture, as the event occasioning this essay was titled, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. I offer a set of practical tactics for the ethnographic enactment of algorithmic systems, which do not depend on pinning down a singular “algorithm” or achieving “access,” but which rather work from the partial and mobile position of an outsider.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
Apologies can be profoundly meaningful, yet many gestures of contrition - especially those in legal contexts - appear hollow and even deceptive. Discussing numerous examples from ancient and recent history, I Was Wrong argues that we suffer from considerable confusion about the moral meanings and social functions of these complex interactions. Rather than asking whether a speech act 'is or is not' an apology, Smith offers a highly nuanced theory of apologetic meaning. Smith leads us through a series of rich philosophical and interdisciplinary questions, explaining how apologies have evolved from a confluence of diverse cultural and religious practices that do not translate easily into secular discourse or gender stereotypes. After classifying several varieties of apologies between individuals, Smith turns to apologies from collectives. Although apologies from corporations, governments, and other groups can be quite meaningful in certain respects, we should be suspicious of those that supplant apologies from individual wrongdoers.
What is the purpose of art? What drives us to make it? Why do we value it? Nick Zangwill argues that the function of art is to have certain aesthetic properties in virtue of its non-aesthetic properties, and this function arises because of the artist's insight into the nature of these dependence relations and her intention to bring them about.
This paper investigates the significance of T-duality in string theory: the indistinguishability with respect to all observables, of models attributing radically different radii to space – larger than the observable universe, or far smaller than the Planck length, say. Two interpretational branch points are identified and discussed. First, whether duals are physically equivalent or not: by considering a duality of the familiar simple harmonic oscillator, I argue that they are. Unlike the oscillator, there are no measurements ‘outside’ string theory that could distinguish the duals. Second, whether duals agree or disagree on the radius of ‘target space’, the space in which strings evolve according to string theory. I argue for the latter position, because the alternative leaves it unknown what the radius is. Since duals are physically equivalent yet disagree on the radius of target space, it follows that the radius is indeterminate between them. Using an analysis of Brandenberger and Vafa (1989), I explain why – even so – space is observed to have a determinate, large radius. The conclusion is that observed, ‘phenomenal’ space is not target space, since a space cannot have both a determinate and indeterminate radius: instead phenomenal space must be a higher-level phenomenon, not fundamental.
I describe and defend the view in a philosophy of mind that I call 'Normative Essentialism', according to which propositional attitudes have normative essences. Those normative essences are 'horizontal' rational requirements, by which I mean the requirement to have certain propositional attitudes given other propositional attitudes. Different propositional attitudes impose different horizontal rational requirements. I distinguish a stronger and a weaker version of this doctrine and argue for the weaker version. I explore the consequences for knowledge of mind, and I then consider objections to the view from mental causation, from empirical psychology, and from animals and small children.
I consider the metaphysical consequences of the view that propositional attitudes have essential normative properties. I argue that realism should take a weak rather than a strong form. I argue that expressivism cannot get off the ground. And I argue that eliminativism is self-refuting.
This paper argues that at least one of the following propositions is true: the human species is very likely to go extinct before reaching a "posthuman" stage; any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history; we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
Analytic moral philosophers have generally failed to engage in any substantial way with the cultural history of morality. This is a shame, because a genealogy of morals can help us accomplish two important tasks. First, a genealogy can form the basis of an epistemological project, one that seeks to establish the epistemic status of our beliefs or values. Second, a genealogy can provide us with functional understanding, since a history of our beliefs, values or institutions can reveal some inherent dynamic or pattern which may be problematically obscured from our view. In this paper, I try to make good on these claims by offering a sketchy genealogy of emancipatory values, or values which call for the liberation of persons from systems of dominance and oppression. The real history of these values, I argue, is both epistemologically vindicatory and functionally enlightening.
Articulate and perceptive, Intersubjectivity is a text that explains the notions of intersubjectivity as a central concern of philosophy, sociology, psychology, and politics. Going beyond this broad-ranging introduction and explication, author Nick Crossley provides a critical discussion of intersubjectivity as an interdisciplinary concept to shed light on our understanding of selfhood, communication, citizenship, power, and community. The volume traces the contributions of key thinkers engaged within the intersubjectivist tradition, including Husserl, Buber, Kojève, Merleau-Ponty, Mead, Wittgenstein, Schutz, and Habermas. A clear, concise introduction to a range of difficult concepts and thinkers, Intersubjectivity demystifies this very interdisciplinary subject for advanced and graduate-level students of philosophy, sociology, social psychology, and social and political theory.
The human desire to acquire new capacities is as ancient as our species itself. We have always sought to expand the boundaries of our existence, be it socially, geographically, or mentally. There is a tendency in at least some individuals always to search for a way around every obstacle and limitation to human life and happiness.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
Suppose that we develop a medically safe and affordable means of enhancing human intelligence. For concreteness, we shall assume that the technology is genetic engineering (either somatic or germ line), although the argument we will present does not depend on the technological implementation. For simplicity, we shall speak of enhancing “intelligence” or “cognitive capacity,” but we do not presuppose that intelligence is best conceived of as a unitary attribute. Our considerations could be applied to specific cognitive abilities such as verbal fluency, memory, abstract reasoning, social intelligence, spatial cognition, numerical ability, or musical talent. It will emerge that the form of argument that we use can be applied much more generally to help assess other kinds of enhancement technologies as well as other kinds of reform. However, to give a detailed illustration of how the argument form works, we will focus on the prospect of cognitive enhancement.
I argue against motivational internalism. First I recharacterise the issue over moral motivation. Second I describe the indifference argument against motivational internalism. Third I consider appeals to irrationality that are often made in the face of this argument, and I show that they are ineffective. Lastly, I draw the motivational externalist conclusion and reflect on the nature of the issue.
Darwinian matters : life, force and change -- Biological difference -- The evolution of sex and race -- Nietzsche's Darwin -- History and the untimely -- The eternal return and the overman -- Bergsonian differences -- The philosophy of life -- Intuition and the virtual -- The future.
According to Arthur Danto, post-modern or post-historical art began when artists like Andy Warhol collapsed the Modern distinction between art and everyday life by bringing “the everyday” into the artworld. I begin by pointing out that there is another way to collapse this distinction: bring art out of the artworld and into everyday life. An especially effective way of doing this is to make street art, which, I argue, is art whose meaning depends on its use of the street. I defend this definition and show how it handles graffiti and public art.
Contemporary philosophical attitudes toward beauty are hard to reconcile with its importance in the history of philosophy. Philosophers used to allow it a starring role in their theories of autonomy, morality, or the good life. But today, if beauty is discussed at all, it is often explicitly denied any such importance. This is due, in part, to the thought that beauty is the object of “disinterested pleasure”. In this paper I clarify the notion of disinterest and develop two general strategies for resisting the emphasis on it, in the hopes of getting a clearer view of beauty’s significance. I present and discuss several literary depictions of the encounter with beauty that motivate both strategies. These depictions illustrate the ways in which aesthetic experience can be personally transformative. I argue that they present difficulties for disinterest theories and suggest we abandon the concept of disinterest to focus instead on the special kind of interest beauty fuels. I propose a closer look at the Platonic thought that beauty is the object of love.
The notion of more truth, or of more truth and less falsehood, is central to epistemology. Yet, I argue, we have no idea what this consists in, as the most natural or obvious thing to say—that more truth is a matter of a greater number of truths, and less falsehood is a matter of a lesser number of falsehoods—is ultimately implausible. The issue is important not merely because the notion of more truth and less falsehood is central to epistemology, but because an implicit, false picture of what this consists in underpins and gives shape to much contemporary epistemology.
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
According to the additive view of sensory imagination, mental imagery often involves two elements. There is an image-like element, which gives the experiences qualitative phenomenal character akin to that of perception. There is also a non-image element, consisting of something like suppositions about the image's object. This accounts for extra-sensory features of imagined objects and situations: for example, it determines whether an image of a grey horse is an image of Desert Orchid, or of some other grey horse. The view promises to give a simple and intuitive explanation of some puzzling features of imagination, and, further, to illuminate imagination's relation to modal knowledge. I contend that the additive view does not fulfil these two promises. The explanation of how images come to be determinate is redundant: the content constituting the indeterminate mental images on which the view relies is sufficient to deliver determinate images too, so the extra resources offered by the view are not required.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
Transhumanism is a loosely defined movement that has developed gradually over the past two decades. It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian Wüthrich and under contract with Oxford University Press. (More information at www.beyondspacetime.net.) This chapter investigates the meaning and significance of string theoretic dualities, arguing they reveal a surprising physical indeterminateness to spacetime.
People are unable to report how they decide whether to move backwards or forwards to catch a ball. When asked to imagine how their angle of elevation of gaze would change when they caught a ball, most people are unable to describe what happens although their interception strategy is based on controlling changes in this angle. Just after catching a ball, many people are unable to recognise a description of how their angle of gaze changed during the catch. Some people confidently choose incorrect descriptions that would guarantee failure of interception, demonstrating unconscious knowledge co-existing with systematically different conscious beliefs. Where simple solutions to important evolutionary problems exist, unconscious perception needs to be impervious to conscious beliefs.
I argue that an evaluational conception of love collides with the way we value love. That way allows that love has causes, but not reasons, and it recognizes and celebrates a love that refuses to justify itself. Love has unjustified selectivity, due to its arbitrary causes. That imposes a non-tradability norm. A love for reasons, rational love or evaluational love would be propositional, and it therefore allows that the people we love are tradable commodities. A moralized conception of love is no less committed to treating those we love as tradable commodities; it is just that they are tradable moral commodities. An evaluative criterion of adequacy, I suggest, encourages the opposite view: a non-rational and non-evaluational concept of love. Such a love can set up partial obligations, which may even demand that one sacrifice one's life. Only a love that has causes but not reasons can have the kind of value that we think love has, and thus it would only be rational to pursue and foster such a love.
“Motivational externalism” is the view that moral judgements have no motivational efficacy in themselves, and that when they motivate us, the source of motivation lies outside the moral judgement in a separate desire. Motivational externalism contrasts with “motivational internalism,” which is either the view that our moral judgements are partly constituted by motivation, or else that they would be if we were rational. The major problem for motivational internalism, in either guise, is that it flies in the face of common observation and first-personal experience of the fact that we can, without irrationality, be indifferent to morality. Philippa Foot pioneered this argument (Foot 1972). The phenomenon of indifference encourages motivational externalism. This paper will not revisit this difficulty for internalism, but will travel in the opposite dialectical direction. The aim is to expound and defend externalism, not to argue against internalism. Moral philosophers have not spent much effort spelling out the details of an externalist model of moral motivation. Those who have endorsed externalism include Philippa Foot, Michael Stocker, David Brink, Al Mele, Sigrún Svavarsdóttir, and myself (Foot 1972; Stocker 1979; Brink 1989, 1997; Mele 1996; Svavarsdóttir 1999; Zangwill 1999). But even these philosophers concentrated mostly on arguing against internalism or defending moral realism rather than articulating the externalist alternative. As a consequence, it is not clear how the externalism that can be gleaned from these writings can be defended against various objections to externalism. This paper will address, and try to soothe away, the reluctance of many philosophers to embrace motivational externalism until they see more of what such a theory would be like. The mere possibility of such a theory is not sufficiently reassuring, even given strong arguments against the opposite position. For there may also be objections to externalism. This paper is concerned to fashion an attractive version of externalism, and show how it evades objections. In section 1, a particular version of moti...