This book offers the first comparative account of the changes and stabilities of public perceptions of science within the US, France, China, Japan, and across Europe over the past few decades. The contributors address the influence of cultural factors; the question of science and religion and its influence on particular developments (e.g. stem cell research); and the demarcation of science from non-science, as well as issues including ‘incommensurability’ versus ‘cognitive polyphasia’ and the cognitive (in)tolerance of different systems of knowledge.
With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur. This goal has such high utility that standard utilitarians ought to focus all their efforts on it. Utilitarians of a ‘person-affecting’ stripe should accept a modified version of this conclusion. Some mixed ethical views, which combine utilitarian considerations with other criteria, will also be committed to a similar bottom line.
To what extent should we use technological advances to try to make better human beings? Leading philosophers debate the possibility of enhancing human cognition, mood, personality, and physical performance, and controlling aging. Would this take us beyond the bounds of human nature? These are questions that need to be answered now.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
In chapters ranging from "The Beautiful, the Dainty, and the Dumpy" to "Skin-deep or In the Eye of the Beholder?" Nick Zangwill investigates the nature of beauty as we conceive it, and as it is in itself. The notion of beauty is currently attracting increased interest, particularly in philosophical aesthetics and in discussions of our experiences and judgments about art. In The Metaphysics of Beauty, Zangwill argues that it is essential to beauty that it depends on the ordinary features of things. He uses this principle to defend the notion of the aesthetic, to call for a version of aesthetic formalism, and to reconsider the reality of beauty. The Metaphysics of Beauty brings beauty to the center of intellectual consciousness in a manner informed by contemporary metaphysics and engages with beauty as an enduring object of human thought and experience.
Darwinian matters: life, force and change -- Biological difference -- The evolution of sex and race -- Nietzsche's Darwin -- History and the untimely -- The eternal return and the overman -- Bergsonian differences -- The philosophy of life -- Intuition and the virtual -- The future.
This article argues that there can be epistemic dilemmas: situations in which one faces conflicting epistemic requirements with the result that whatever one does, one is doomed to do wrong from the epistemic point of view. Accepting this view, I argue, may enable us to solve several epistemological puzzles.
This article responds to recent debates in critical algorithm studies about the significance of the term “algorithm.” Where some have suggested that critical scholars should align their use of the term with its common definition in professional computer science, I argue that we should instead approach algorithms as “multiples”—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of “outsider” researchers. This approach builds on the work of Laura Devendorf, Elizabeth Goodman, and Annemarie Mol. Different ways of enacting algorithms foreground certain issues while occluding others: computer scientists enact algorithms as conceptual objects indifferent to implementation details, while calls for accountability enact algorithms as closed boxes to be opened. I propose that critical researchers might seek to enact algorithms ethnographically, seeing them as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. To do so, I suggest thinking of algorithms not “in” culture, as the event occasioning this essay was titled, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. I offer a set of practical tactics for the ethnographic enactment of algorithmic systems, which do not depend on pinning down a singular “algorithm” or achieving “access,” but which rather work from the partial and mobile position of an outsider.
This book explores both the embodied nature of social life and the social nature of human bodily life. It provides an accessible review of the contemporary social science debates on the body, and develops a coherent new perspective. Nick Crossley critically reviews the literature on mind and body, and also on the body and society. He draws on theoretical insights from the work of Gilbert Ryle, Maurice Merleau-Ponty, George Herbert Mead and Pierre Bourdieu, and shows how the work of these writers overlaps in interesting and important ways which, when combined, provide the basis for a persuasive and robust account of human embodiment. The Social Body provides a timely review of the theoretical approaches to the sociology of the body. It offers new insights, and a coherent new perspective on the body. It will be valuable reading for students and academics in sociology, philosophy, anthropology, psychology, and cultural studies.
Subjects of ectogenesis—human beings that are developing in artificial wombs (AWs)—share the same moral status as newborns. To demonstrate this, I defend two claims. First, subjects of partial ectogenesis—those that develop in utero for a time before being transferred to AWs—are newborns (in the full sense of the word). Second, subjects of complete ectogenesis—those who develop in AWs entirely—share the same moral status as newborns. To defend the first claim, I rely on Elizabeth Chloe Romanis’s distinctions between fetuses, newborns and subjects of ectogenesis. For Romanis, the subject of partial ectogenesis ‘is neither a fetus nor a baby’ but is, instead, a ‘new product of human reproduction’. In this essay, I begin by expanding upon Romanis’s argument that subjects of partial ectogenesis are not fetuses while arguing that those subjects are newborns. Next, I show that the distinction that Romanis draws between subjects of partial ectogenesis and newborns needs to be revised. The former is a kind of the latter. This leads us to an argument that shows why different moral statuses cannot be justifiably assigned to subjects of partial ectogenesis and subjects of complete ectogenesis, respectively. As subjects of partial ectogenesis share the same moral status as newborns, it follows that subjects of complete ectogenesis share the same moral status as newborns as well. I conclude by considering implications that this essay may have for the research and development of AW technology and conceptual links between a subject’s moral status and birth.
Numerous approaches to a quantum theory of gravity posit fundamental ontologies that exclude spacetime, either partially or wholly. This situation raises deep questions about how such theories could relate to the empirical realm, since arguably only entities localized in spacetime can ever be observed. Are such entities even possible in a theory without fundamental spacetime? How might they be derived, formally speaking? Moreover, since by assumption the fundamental entities cannot be smaller than the derived and so cannot ‘compose’ them in any ordinary sense, would a formal derivation actually show the physical reality of localized entities? We address these questions via a survey of a range of theories of quantum gravity, and generally sketch how they may be answered positively.
A dizzying trip through the mind(s) of the provocative and influential thinker Nick Land. During the 1990s British philosopher Nick Land's unique work, variously described as “rabid nihilism,” “mad black deleuzianism,” and “cybergothic,” developed perhaps the only rigorous and culturally-engaged escape route out of the malaise of “continental philosophy”—a route that was implacably blocked by the academy. However, Land's work has continued to exert an influence, both through the British “speculative realist” philosophers who studied with him, and through the many cultural producers—writers, artists, musicians, filmmakers—who have been invigorated by his uncompromising and abrasive philosophical vision. Beginning with Land's early radical rereadings of Heidegger, Nietzsche, Kant and Bataille, the volume collects together the papers, talks and articles of the mid-90s—long the subject of rumour and vague legend (including some work which has never previously appeared in print)—in which Land developed his futuristic theory-fiction of cybercapitalism gone amok; and ends with his enigmatic later writings in which Ballardian fictions, poetics, cryptography, anthropology, grammatology and the occult are smeared into unrecognisable hybrids. Fanged Noumena gives a dizzying perspective on the entire trajectory of this provocative and influential thinker's work, and has introduced his unique voice to a new generation of readers.
Contemporary philosophical attitudes toward beauty are hard to reconcile with its importance in the history of philosophy. Philosophers used to allow it a starring role in their theories of autonomy, morality, or the good life. But today, if beauty is discussed at all, it is often explicitly denied any such importance. This is due, in part, to the thought that beauty is the object of “disinterested pleasure”. In this paper I clarify the notion of disinterest and develop two general strategies for resisting the emphasis on it, in the hopes of getting a clearer view of beauty’s significance. I present and discuss several literary depictions of the encounter with beauty that motivate both strategies. These depictions illustrate the ways in which aesthetic experience can be personally transformative. I argue that they present difficulties for disinterest theories and suggest we abandon the concept of disinterest to focus instead on the special kind of interest beauty fuels. I propose a closer look at the Platonic thought that beauty is the object of love.
Articulate and perceptive, Intersubjectivity is a text that explains the notions of intersubjectivity as a central concern of philosophy, sociology, psychology, and politics. Going beyond this broad-ranging introduction and explication, author Nick Crossley provides a critical discussion of intersubjectivity as an interdisciplinary concept to shed light on our understanding of selfhood, communication, citizenship, power, and community. The volume traces the contributions of key thinkers engaged within the intersubjectivist tradition, including Husserl, Buber, Kojève, Merleau-Ponty, Mead, Wittgenstein, Schutz, and Habermas. A clear, concise introduction to a range of difficult concepts and thinkers, Intersubjectivity demystifies this very interdisciplinary subject for advanced and graduate-level students of philosophy, sociology, social psychology, and social and political theory.
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian Wüthrich and under contract with Oxford University Press. (More information at www.beyondspacetime.net.) This chapter investigates the meaning and significance of string theoretic dualities, arguing they reveal a surprising physical indeterminateness to spacetime.
Apologies can be profoundly meaningful, yet many gestures of contrition - especially those in legal contexts - appear hollow and even deceptive. Discussing numerous examples from ancient and recent history, I Was Wrong argues that we suffer from considerable confusion about the moral meanings and social functions of these complex interactions. Rather than asking whether a speech act 'is or is not' an apology, Smith offers a highly nuanced theory of apologetic meaning. Smith leads us through a series of rich philosophical and interdisciplinary questions, explaining how apologies have evolved from a confluence of diverse cultural and religious practices that do not translate easily into secular discourse or gender stereotypes. After classifying several varieties of apologies between individuals, Smith turns to apologies from collectives. Although apologies from corporations, governments, and other groups can be quite meaningful in certain respects, we should be suspicious of those that supplant apologies from individual wrongdoers.
This paper compares Margaret Archer’s morphogenetic critical realism and Michel Foucault’s implicit discursive realism. It argues that there is a surprisingly high degree of correspondence between the two social ontologies. Specifically, both ontologies suggest that there are three largely autonomous domains in operation: cultural, structural, and agentive. Yet while each of these domains has a level of independence, they are also partially constituted by the content and form of the others. This paper discusses the potential to integrate the two approaches in such a way as to overcome their respective shortcomings, namely the underdevelopment of culture in Archer’s ontology, and the underdevelopment of social agency in Foucault’s ontology.
This paper investigates the significance of T-duality in string theory: the indistinguishability, with respect to all observables, of models attributing radically different radii to space – larger than the observable universe, or far smaller than the Planck length, say. Two interpretational branch points are identified and discussed. First, whether duals are physically equivalent or not: by considering a duality of the familiar simple harmonic oscillator, I argue that they are. Unlike the oscillator, there are no measurements ‘outside’ string theory that could distinguish the duals. Second, whether duals agree or disagree on the radius of ‘target space’, the space in which strings evolve according to string theory. I argue for the latter position, because the alternative leaves it unknown what the radius is. Since duals are physically equivalent yet disagree on the radius of target space, it follows that the radius is indeterminate between them. Using an analysis of Brandenberger and Vafa (1989), I explain why – even so – space is observed to have a determinate, large radius. The conclusion is that observed, ‘phenomenal’ space is not target space, since a space cannot have both a determinate and an indeterminate radius: instead phenomenal space must be a higher-level phenomenon, not fundamental.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
Cognitive enhancement takes many and diverse forms. Various methods of cognitive enhancement have implications for the near future. At the same time, these technologies raise a range of ethical issues. For example, they interact with notions of authenticity, the good life, and the role of medicine in our lives. Present and anticipated methods for cognitive enhancement also create challenges for public policy and regulation.
Views on addiction are often polarised - either addiction is a matter of choice, or addicts simply can't help themselves. But perhaps addiction falls between the two? This book contains views from philosophy, neuroscience, psychiatry, psychology, and the law exploring this middle ground between free choice and no choice.
_Anthropic Bias_ explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology; evolutionary theory; the problem of time's arrow; quantum physics; game-theory problems with imperfect recall; even traffic analysis. _Anthropic Bias_ argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
Epistemologists often appeal to the idea that a normative theory must provide useful, usable, guidance to argue for one normative epistemology over another. I argue that this is a mistake. Guidance considerations have no role to play in theory choice in epistemology. I show how this has implications for debates about the possibility and scope of epistemic dilemmas, the legitimacy of idealisation in Bayesian Epistemology, Uniqueness vs. Permissivism, sharp vs. mushy credences, and internalism vs. externalism.
I argue that at least one of the following propositions is true: the human species is very likely to become extinct before reaching a ‘posthuman’ stage; any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history; we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we shall one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. I discuss some consequences of this result.
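The arithmetic behind this trilemma can be made concrete with a small back-of-envelope sketch (my own illustration, not drawn from the abstract; the parameter names f_p and n_bar are assumptions for exposition): if f_p is the fraction of human-level civilizations that reach a posthuman stage and run ancestor-simulations, and n_bar is the average number of such simulations each simulating civilization runs per real history, then the fraction of human-type observers who are simulated is f_p·n_bar / (f_p·n_bar + 1).

```python
# Toy model of the simulation-argument arithmetic (illustrative only;
# the parameter names f_p and n_bar are my own, not the paper's).
#   f_p   : fraction of human-level civilizations that reach a
#           posthuman stage and choose to run ancestor-simulations
#   n_bar : average number of ancestor-simulations such a
#           civilization runs, per real (unsimulated) history

def fraction_simulated(f_p: float, n_bar: float) -> float:
    """Fraction of human-type observers who live inside a simulation."""
    return (f_p * n_bar) / (f_p * n_bar + 1)

# Unless f_p * n_bar is near zero (the first two horns of the
# trilemma), almost all observers are simulated (the third horn):
print(fraction_simulated(1e-6, 1e9))  # even rare simulators give ~0.999
print(fraction_simulated(0.0, 1e9))   # no posthuman simulators gives 0.0
```

The point of the sketch is that the middle ground is unstable: any appreciable value of f_p·n_bar pushes the simulated fraction close to 1, which is why denying the third proposition forces one of the first two.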
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual process theorists’ claims about the link between reflection and dilemma judgments.
The received view of implicit bias holds that it is associative and unreflective. Recently, the received view has been challenged. Some argue that implicit bias is not predicated on “any” associative process, but it is unreflective. These arguments rely, in part, on debiasing experiments. They proceed as follows. If implicit bias is associative and unreflective, then certain experimental manipulations cannot change implicitly biased behavior. However, these manipulations can change such behavior. So, implicit bias is not associative and unreflective. This paper finds philosophical and empirical problems with that argument. When the problems are solved, the conclusion is not quite right: implicit bias is not necessarily unreflective, but it seems to be associative. Further, the paper shows that even if legitimate non-associative interventions on implicit bias exist, then both the received view and its recent contender would be false. In their stead would be interactionism or minimalism about implicit bias.
This paper is about epistemic dilemmas, i.e., cases in which one is doomed to have a doxastic attitude that is rationally impermissible no matter what. My aim is to develop and defend a position according to which there can be genuine rational indeterminacy; that is, it can be indeterminate which principles of rationality one should satisfy and thus indeterminate which doxastic attitudes one is permitted or required to have. I am going to argue that this view can resolve epistemic dilemmas in a systematic way while also enjoying some important advantages over its rivals.
Social networking sites have raised ethical issues about users’ information security and privacy. SNS users are concerned about their privacy and need to control the information they share and its use. This paper examines the security of SNS by looking at the influence of users’ perceived control of information over their information-sharing behaviors. Employing an empirical study, this paper demonstrates the importance of perceived control in SNS users’ information-sharing behaviors. Specifically, perceived control has been found to be negatively related to perceived privacy risk and attitude toward information sharing, which in turn has an impact on their information-sharing behaviors. In addition, gender has been shown to be an important factor that moderates the influences of both perceived control and perceived privacy risk on SNS users’ attitudes toward information sharing. Theoretical and practical implications are discussed.
Analytic moral philosophers have generally failed to engage in any substantial way with the cultural history of morality. This is a shame, because a genealogy of morals can help us accomplish two important tasks. First, a genealogy can form the basis of an epistemological project, one that seeks to establish the epistemic status of our beliefs or values. Second, a genealogy can provide us with functional understanding, since a history of our beliefs, values or institutions can reveal some inherent dynamic or pattern which may be problematically obscured from our view. In this paper, I try to make good on these claims by offering a sketchy genealogy of emancipatory values, or values which call for the liberation of persons from systems of dominance and oppression. The real history of these values, I argue, is both epistemologically vindicatory and functionally enlightening.
With the increasing popularity of social media, a new ethics debate has arisen over marketing and technology in the current digital era. People use online communities but have concerns about the credibility of word-of-mouth information on these platforms. Social media is becoming increasingly influential in shaping individuals’ decision-making as more and better quality information about products is made available. In this research, we propose a social word-of-mouth model and use a survey to test it in a popular travel community. The model highlights the role of social media and social support in social networking sites, identifying increased credibility and information usefulness as resulting in an ethical environment for adopting word of mouth. The theoretical and practical implications of the study are both detailed.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
How should we pursue aesthetic value, or incorporate it into our lives, if we want to? Is there an ideal of aesthetic life? Philosophers have proposed numerous answers to the analogous question in moral philosophy, but the aesthetic question has received relatively little attention. There is, in essence, a single view, which is that one should develop a sensibility that would give one sweeping access to aesthetic value. I challenge this view on two grounds. First, it threatens to undermine our "aesthetic love", or the meaningful attachments we form with aesthetic items, e.g., poems, paintings, songs, or items of design and dress. Second, it fails to accommodate the motivational character of our encounter with beauty, which can diminish our desire to pursue the wider world of aesthetic value. I conclude that whatever the aesthetic ideal is, it must reconcile our desire to broaden our access to aesthetic value with our desire to maintain and cultivate our meaningful aesthetic attachments. I motivate the alternative thought that having style is the aesthetic ideal.
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
A pervasive and influential argument appeals to trivial truths to demonstrate that the aim of inquiry is not the acquisition of truth. But the argument fails, for it neglects to distinguish between the complexity of the sentence used to express a truth and the complexity of the truth expressed by a sentence.
The human desire to acquire new capacities is as ancient as our species itself. We have always sought to expand the boundaries of our existence, be it socially, geographically, or mentally. There is a tendency in at least some individuals always to search for a way around every obstacle and limitation to human life and happiness.
Suppose that we develop a medically safe and affordable means of enhancing human intelligence. For concreteness, we shall assume that the technology is genetic engineering (either somatic or germ line), although the argument we will present does not depend on the technological implementation. For simplicity, we shall speak of enhancing “intelligence” or “cognitive capacity,” but we do not presuppose that intelligence is best conceived of as a unitary attribute. Our considerations could be applied to specific cognitive abilities such as verbal fluency, memory, abstract reasoning, social intelligence, spatial cognition, numerical ability, or musical talent. It will emerge that the form of argument that we use can be applied much more generally to help assess other kinds of enhancement technologies as well as other kinds of reform. However, to give a detailed illustration of how the argument form works, we will focus on the prospect of cognitive enhancement.
Weyl symmetry of the classical bosonic string Lagrangian is broken by quantization, with profound consequences described here. Reimposing symmetry requires that the background space-time satisfy the equations of general relativity: general relativity, hence classical space-time as we know it, arises from string theory. We investigate the logical role of Weyl symmetry in this explanation of general relativity: it is not an independent physical postulate but required in quantum string theory, so from a certain point of view it plays only a formal role in the explanation.
Non-representational approaches to cognition have struggled to provide accounts of long-term planning that forgo the use of representations. An explanation comes easier for cognitivist accounts, which hold that we concoct and use contentful mental representations as guides to coordinate a series of actions towards an end state. One non-representational approach, ecological-enactivism, has recently seen several proposals that account for “high-level” or “representation-hungry” capacities, including long-term planning and action coordination. In this paper, we demonstrate the explanatory gap in these accounts that stems from avoiding the incorporation of long-term intentions, as they play an important role both in action coordination and perception on the ecological account. Using recent enactive accounts of language, we argue for a non-representational conception of intentions, their formation, and their role in coordinating pre-reflective action. We provide an account for the coordination of our present actions towards a distant goal, a skill we call distal engagement. Rather than positing intentions as an actual cognitive entity in need of explanation, we argue that we take them up in this way as a practice due to linguistically scaffolded attitudes towards language use.
According to the Interpersonal View of Testimony, testimonial justification is non-evidential in nature. I begin by arguing that the IVT has the following problem: If the IVT is true, then young children and people with autism cannot participate in testimonial exchanges; but young children and people with autism can participate in testimonial exchanges; thus, the IVT should be rejected on the grounds that it has over-cognized what it takes to give and receive testimony. Afterwards, I consider what I take to be the two best motivations for the IVT and argue that they both fail. The overarching lesson, then, is that the IVT is unmotivated and false; we should think of testimonial justification as being evidential in nature.
According to the Transmission View of Testimony (TVT): If a speaker testifies to a hearer that p, and if the hearer is justified in believing that p on the basis of that speaker's testimony, then the hearer's belief is justified by whatever justification the speaker has for believing that p. The aim of this paper is to develop and defend a novel objection to the TVT.