This interview with N. Katherine Hayles, one of the foremost theorists of the posthuman, explores the concerns that led to her seminal book How We Became Posthuman, the key arguments expounded in that book, and the changes in technology and culture in the ten years since its publication. The discussion ranges across the relationships between literature and science; the trans-disciplinary project of developing a methodology appropriate to their intersection; the history of cybernetics in its cultural and political context; the changed role for psychoanalysis in the technoscientific age; and the altering forms of mediated ‘embodiment’ in the posthuman context.
In 1981 Eleonore Stump and Norman Kretzmann published a landmark article aimed at exploring the classical concept of divine eternity. Taking Boethius as the primary spokesman for the traditional view, they analyse God's eternity as timeless yet as possessing duration. More recently Brian Leftow has seconded Stump and Kretzmann's interpretation of the medieval position and attempted to defend the notion of a durational eternity as a useful way of expressing the sort of life God leads. However, there are good reasons to reject the idea that divine timelessness should be thought of as having duration. The medievals probably did not accept it, as it contradicts a principle of classical metaphysics even more fundamental than the atemporality of the divine. In any case, it is not possible to express the notion of durational eternity in even a minimally coherent way, and the attempt to salvage the concept by appealing to the Thomistic doctrine of analogy is unsuccessful. The best analogy for God's eternity is still the one proposed by Augustine at the end of the fourth century. God lives in a timeless ‘present’, unextended like our temporal present, but immutable and encompassing all time.
The poetry and journalistic essays of Katherine Tillman often appeared in publications sponsored by the American Methodist church. Collected together for the first time, her works speak to the struggles and triumphs of African-American women.
The world is remarkably stable -- amidst the flux, physical objects continue to persist. But how do things persist? Are they spread out through time as they are spread out through space? Or is persistence very different from spatial extension? These ancient metaphysical questions are at the forefront of contemporary debate once more. Katherine Hawley provides a wide-ranging yet accessible study of this key issue. She also makes a major contribution to current debates about change, vagueness, and language.
It has been widely assumed by philosophers that our first-person preferences regarding pleasurable and painful experiences exhibit a bias toward the future (positive and negative hedonic future-bias), and that our preferences regarding non-hedonic events (both positive and negative) exhibit no such bias (non-hedonic time-neutrality). Further, it has been assumed that our third-person preferences are always time-neutral. Some have attempted to use these (presumed) differential patterns of future-bias—different across kinds of events and perspectives—to argue for the irrationality of hedonic future-bias. This paper experimentally tests these descriptive hypotheses. While, as predicted, we found first-person hedonic future-bias, we did not find that participants were time-neutral in all other conditions. Hence, the presumed asymmetry of hedonic/non-hedonic and first/third-person preferences cannot be used to argue for the irrationality of future-bias, since no such asymmetries exist. Instead, we develop a more fine-grained approach, according to which three factors—positive/negative valence, first/third-person, and hedonic/non-hedonic—each independently influence, but do not determine, whether an event is treated in a future-biased or time-neutral way. We discuss the upshots of these results for the debate over the rationality of future-bias.
Katherine Hawley explores and compares three theories of persistence -- endurance, perdurance, and stage theories -- investigating the ways in which they attempt to account for the world around us. Having provided valuable clarification of its two main rivals, she concludes by advocating stage theory.
Most of us display a bias toward the near: we prefer pleasurable experiences to be in our near future and painful experiences to be in our distant future. We also display a bias toward the future: we prefer pleasurable experiences to be in our future and painful experiences to be in our past. While philosophers have tended to think that near bias is a rational defect, almost no one finds future bias objectionable. In this essay, we argue that this hybrid position is untenable. We conclude that those who reject near bias should instead endorse complete temporal neutrality.
There are moments when things suddenly seem strange - objects in the world lose their meaning, we feel like strangers to ourselves, or human existence itself strikes us as bizarre and unintelligible. Through a detailed philosophical investigation of Heidegger's concept of uncanniness (Unheimlichkeit), Katherine Withy explores what such experiences reveal about us. She argues that while others (such as Freud, in his seminal psychoanalytic essay, 'The Uncanny') take uncanniness to be an affective quality of strangeness or eeriness, Heidegger uses the concept to go beyond feeling uncanny to reach the ground of this feeling in our being uncanny. "Heidegger on Being Uncanny" answers those who wonder whether human existence is fundamentally strange to itself by showing that we can be what we are only if we do not fully understand what it is to be us. This fundamental finitude in our self-understanding is our uncanniness. In this first dedicated interpretation of Heidegger's uncanniness, Withy tracks this concept from his early analyses of angst through his later interpretations of the choral ode from Sophocles's Antigone. Her interpretation uncovers a novel and robust continuity in Heidegger's thought and in his vision of the human being as uncanny, and it points the way toward what it is to live well as an uncanny human being.
Philosophers working on time-biases assume that people are hedonically biased toward the future. A hedonically future-biased agent prefers pleasurable experiences to be future instead of past, and painful experiences to be past instead of future. Philosophers further predict that this bias is strong enough to apply to unequal payoffs: people often prefer less pleasurable future experiences to more pleasurable past ones, and more painful past experiences to less painful future ones. In addition, philosophers have predicted that future-bias is restricted to first-person preferences, and that people’s third-person preferences are time-neutral. Philosophers disagree vigorously about the normative status of these preferences—i.e., they disagree about whether first-person future-bias is rationally permissible. Time-neutralists, for example, have appealed to the predicted asymmetry between first- and third-person preferences to argue for the rational impermissibility of future-bias. We empirically tested these predictions, and found that while people do prefer more past pain to less future pain, they do not prefer less future pleasure to more past pleasure. This was so in both first and third-person conditions. This suggests that future-bias is typically non-absolute, and is more easily outweighed in the case of positive events. We connect this result to the normative debate over future-bias.
Although the principle of fair subject selection is a widely recognized requirement of ethical clinical research, it often yields conflicting imperatives, thus raising major ethical dilemmas regarding participant selection. In this paper, we diagnose the source of this problem, arguing that the principle of fair subject selection is best understood as a bundle of four distinct sub-principles, each with normative force and each yielding distinct imperatives: fair inclusion; fair burden sharing; fair opportunity; and fair distribution of third-party risks. We first map out these distinct sub-principles, and then identify the ways in which they yield conflicting imperatives for the design of inclusion and exclusion criteria, and the recruitment of participants. We then offer guidance for how decision makers should navigate these conflicting imperatives to ensure that participants are selected fairly.
The cognitive experience view of thought holds that the content of thought is determined by its cognitive-phenomenal character. Adam Pautz argues that the cognitive experience view is extensionally inadequate: it entails the possibility of mix-and-match cases, where the cognitive-phenomenal properties that determine thought content are combined with different sensory-phenomenal and functional properties. Because mix-and-match cases are metaphysically impossible, Pautz argues, the cognitive experience view should be rejected. This paper defends the cognitive experience view from Pautz’s argument. I build on resources in the philosophy of mind literature to show that cognitive-phenomenal properties are modally independent from sensory-phenomenal and functional properties. The result is that mix-and-match cases, though modally remote, are metaphysically possible. The possibility of mix-and-match cases allows us to move from a defensive posture to a critical one: it poses problems for any theory of content that imposes rationality constraints, including Pautz’s positive view, phenomenal functionalism.
Katherine Hawley investigates what trustworthiness means in our lives. We become untrustworthy when we break promises, miss deadlines, or give unreliable information. But we cannot always be sure what we can commit to. Hawley examines the social obstacles to trustworthiness, and explores how we can steer between overcommitment and undercommitment.
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: “if you're so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows the agents who employ it to end up rich in worlds containing Newcomb problems and continues to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.
In recent years, a disagreement has erupted between two camps of philosophers about the rationality of bias toward the near and bias toward the future. According to the traditional hybrid view, near bias is rationally impermissible, while future bias is either rationally permissible or obligatory. Time neutralists, meanwhile, argue that the hybrid view is untenable. They claim that those who reject near bias should reject both biases and embrace time neutrality. To date, experimental work has focused on future-directed near bias. The primary aim of this paper is to shed light on the debate by investigating past-directed near bias. If people treat the past and future differently with respect to near bias, by being future-directed but not past-directed near biased, then this supports a particular version of the hybrid view: temporal metaphysic hybridism. If people treat the past and future the same with respect to near bias, then this supports a simple version of time neutralism, which explains both future bias and near bias in terms of the functioning of a single mechanism: the anticipatory/retrospectory mechanism. Our results undermine the claim that people are future-directed, but not past-directed, near biased, and hence do not support temporal metaphysic hybridism. They also fail to support simple time-neutralism; instead, they suggest that there are multiple mechanisms that differently shape future- and past-directed preferences.
In this paper I develop Paul Redding’s suggestion that Peircean abduction and Hegel’s discussion of the syllogism can be seen as a working out of Kant’s treatment of the reflecting power of judgment, particularly concerning its role in conceptual change. After some historical background I regiment a use of singular terms, kind terms, and predicates across Hegel’s three syllogistic figures and reconstruct an account of comprehension and extension for this system suggested by Peirce. In doing so I show that reasoning according to the ampliative syllogistic figures affects the content of these three classes of terms in precise ways. I close with a treatment of inference by analogy (associated by Hegel with the third syllogistic figure) as an exercise of reflection, and I discuss two cases in the history of science, one in astronomy and the other in biology, where a reflective exercise associated with analogical inference revised our understanding of the domain in question.
Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation. This essay evaluates the termination risk by using extrapolations from current computing practices and simulation technology. The conclusions, while provisional, have great implications for debates concerning the fundamental nature of reality and the safety of contemporary physics.
Despite growing appreciation in recent decades of the importance of shared intentional mental states as a foundation for everything from divergences in primate evolution, to the institution of communal norms, to trends in the development of modernity as a socio-political phenomenon, we lack an adequate understanding of the relationship between individual and shared intentionality. At the same time, it is widely appreciated that deontic reasoning concerning what ought, may, and ought not be done is, like reasoning about our intentions, an exercise of practical rationality. Taking advantage of this fact, I use a plan-theoretic semantics for the deontic modalities as a basis for understanding individual and shared intentions. This results in a view that accords well with what we currently have reason to believe about the phylogenetic and ontogenetic development of norm psychology and shared intentionality in human beings, and where original intentionality can be understood in terms of the shared intentionality of a community.
Future-biased individuals systematically prefer pleasures to be in the future (positive future-bias) and pains to be in the past (negative future-bias). Recent empirical research shows that negative future-bias exists and that it is strong: people prefer more past pain to less future pain. In fact, people prefer ten units of past pain to one unit of future pain. By contrast, this research shows that people do not prefer ten units of past pleasure to one unit of future pleasure. Thus the question remains: is positive future-bias strong or weak? The answer is important for the philosophical literature: some arguments against the rationality of future-bias require positive future-bias to be strong, while others require it to be weak. We empirically investigate this question, and show that positive future-bias is indeed strong. Hence some arguments against future-bias are supported by empirical results while others are undermined.
Inquiry into the metaphysics of essence tends to be pursued in a realist and model-theoretic spirit, in the sense that metaphysical vocabulary is used in a metalanguage to model truth conditions for the object-language use of essentialist vocabulary. This essay adapts recent developments in proof-theoretic semantics to provide a nominalist analysis for a variety of essentialist vocabularies. A metalanguage employing explanatory inferences is used to individuate introduction and elimination rules for atomic sentences. The object-language assertions of sentences concerning essences are then interpreted as devices for marking off structural features of the explanatory inferences that, under a given interpretation, constitute the contents of the atoms of the language. On this proposal, object-language essentialist vocabulary is mentioned in a proof-theoretic metalanguage that uses a vocabulary of explanation. The result is a nominalist interpretation of essence as a modality, understood in the grammatical sense as a modification of the copula, and a view of metaphysical inquiry that is closely connected to the explanatory commitments present in first-order inquiry into things like sets, chemicals, and organisms. This result illustrates that some of the presuppositions that have animated analytic metaphysics over the last few decades can be profitably substituted with more practice-oriented conceptions of the forms of reasoning at work in different domains of human knowledge.
I defend the thesis that at least some moral properties can be part of the contents of experience. I argue for this claim using a contrast argument, a type of argument commonly found in the literature on the philosophy of perception. I first appeal to psychological research on what I call emotionally empathetic dysfunctional individuals (EEDIs) to establish a phenomenal contrast between EEDIs and normal individuals in some moral situations. I then argue that the best explanation for this contrast, assuming non-skeptical moral realism, is that badness is represented in the normal individual’s experience but not in the EEDI’s experience. I consider and reject four alternative explanations of the contrast.
Bayne and McClelland (2016) raise the matching content challenge for proponents of cognitive phenomenology: if the phenomenal character of thought is determined by its intentional content, why is it that my conscious thought that there is a blue wall before me and my visual perception of a blue wall before me don’t share any phenomenology, despite their matching content? In this paper, I first show that the matching content challenge is not limited to proponents of cognitive phenomenology but extends to cases of cross-modal perception, threatening representationalism about consciousness in general. I then give two responses to the challenge, both of which appeal to intentional modes. The difference in intentional mode between a thought and a visual perception can either explain why we should not expect any phenomenal overlap between the two experiences, or it can make it clear why the phenomenal overlap is easy to overlook. I show that these responses are available to the representationalist about perceptual consciousness, as well as the proponent of cognitive phenomenology. The upshot is that, when it comes to the matching content challenge, both perceptual representationalism and cognitive representationalism stand on equal dialectical footing.
As Heidegger acknowledges, our understanding is essentially situated and so limited by the context and tradition into which it is thrown. But this ‘situatedness’ does not exhaust Heidegger's concept of ‘thrownness’. By examining this concept and its grammar, I develop a more complete interpretation. I identify several different kinds of finitude or limitation in our understanding, and touch on ways in which we confront and carry different dimensions of our past.
Discursive cognition of the sort that accompanies the grasp of a natural language involves an ability to self-govern by framing and following rules concerning what reason prescribes. In this essay I argue that the formal features of a planning semantics for the deontic and intentional modalities suggest a picture on which shared intentional mental states are a more primitive kind of cognition than that which accompanies the ability to frame and follow a rule, so that deontic cognition—and the autonomous rationality attending the ability to speak a natural language—might be understood as an evolutionary development out of the capacity to share intentions. In the course of defending this picture, I argue that it is supported by work in social psychology, evolutionary anthropology, and primatology concerning the phylogenetic and ontogenetic development of norm psychology and shared intentionality in human beings.
Scientific researchers welcome disagreement as a way of furthering epistemic aims. Religious communities, by contrast, tend to regard it as a potential threat to their beliefs. But I argue that religious disagreement can help achieve religious epistemic aims. I do not argue this by comparing science and religion, however. For scientific hypotheses are ideally held with a scholarly neutrality, and my aim is to persuade those who are committed to religious beliefs that religious disagreement can be epistemically beneficial for them too.
Many philosophers have assumed that our preferences regarding hedonic events exhibit a bias toward the future: we prefer positive experiences to be in our future and negative experiences to be in our past. Recent experimental work by Greene et al. (ms) confirmed this assumption. However, they noted a potential for some participants to respond in a deviant manner, and hence for their methodology to underestimate the percentage of people who are time neutral, and overestimate the percentage who are future biased. We aimed to replicate their study using an alternative methodology that ensures there are no such deviant responses, and hence more accurately tracks future bias and time neutrality. Instead of finding more time neutrality than Greene et al., however, we found vastly more past bias. Our explanation for this surprising finding helps to reveal the rationale behind both future and past biased preferences, and undermines the generalisability of one of the most influential motivations for the rationality of hedonic future bias: Parfit’s My Past or Future Operations.
This paper articulates an Aristotelian theory of professional virtue and provides an application of that theory to the subject of engineering ethics. The leading idea is that Aristotle’s analysis of the definitive function of human beings, and of the virtues humans require to fulfill that function, can serve as a model for an analysis of the definitive function or social role of a profession and thus of the virtues professionals must exhibit to fulfill that role. Special attention is given to a virtue of professional self-awareness, an analogue to Aristotle’s phronesis or practical wisdom. In the course of laying out my account I argue that the virtuous professional is the successful professional, just as the virtuous life is the happy life for Aristotle. I close by suggesting that a virtue ethics approach toward professional ethics can enrich the pedagogy of professional ethics courses and help foster a sense of pride and responsibility in young professionals.
It has been argued that humans can face an ethical/epistemic dilemma over the automatic stereotyping involved in implicit bias: ethical demands require that we consistently treat people equally, as equally likely to possess certain traits, but if our aim is knowledge or understanding, our responses should reflect social inequalities, meaning that members of certain social groups are statistically more likely than others to possess particular features. I use psychological research to argue that often the best choice from the epistemic perspective is the same as the best choice from the ethical perspective: to avoid automatic stereotyping even when this involves failing to reflect social realities in our judgements. This argument has an important implication: it shows that it is not possible to successfully defend an act of automatic stereotyping simply on the basis that the stereotype reflects an aspect of social reality. An act of automatic stereotyping can be poor from an epistemic perspective even if the stereotype that is activated reflects reality.
Du Châtelet’s 1740 text Foundations of Physics tackles three of the major foundational issues facing natural philosophy in the early eighteenth century: the problem of bodies, the problem of force, and the question of appropriate methodology. This paper offers an introduction to Du Châtelet’s philosophy of science, as expressed in her Foundations of Physics, primarily through the lens of the problem of bodies.
This paper focuses on smallholder agriculture and livelihoods in north-central Tanzania. It traces changes in agricultural production and asset ownership in one community over a 28-year period. Over this period, national development policies and agriculture programs have moved from socialism to neo-liberal approaches. Using a combination of qualitative and quantitative methods, we explore how farmers have responded to these shifts in the wider political-economic context and how these responses have shaped their livelihoods and ideas about farming and wealth. This case study clearly debunks the idea that rural farmers are slow to respond to “modern” farming methods or that smallholder farming is stagnant and cannot reduce poverty. While changes overall are very positive in this rural community, challenges remain as land sizes are small and markets often unreliable. This research cautions against a shift in emphasis to large-scale farming as a strategy for national development. It suggests instead that increased investment in supporting smallholder farming is critical for addressing poverty and rural well-being.
Symmetry considerations dominate modern fundamental physics, both in quantum theory and in relativity. Philosophers are now beginning to devote increasing attention to such issues as the significance of gauge symmetry, quantum particle identity in the light of permutation symmetry, how to make sense of parity violation, the role of symmetry breaking, the empirical status of symmetry principles, and so forth. These issues relate directly to traditional problems in the philosophy of science, including the status of the laws of nature, the relationships between mathematics, physical theory, and the world, and the extent to which mathematics suggests new physics. This entry begins with a brief description of the historical roots and emergence of the concept of symmetry that is at work in modern science. It then turns to the application of this concept to physics, distinguishing between two different uses of symmetry: symmetry principles versus symmetry arguments. It mentions the different varieties of physical symmetries, outlining the ways in which they were introduced into physics. Then, stepping back from the details of the various symmetries, it makes some remarks of a general nature concerning the status and significance of symmetries in physics.
Highlighting main issues and controversies, this book brings together current philosophical discussions of symmetry in physics to provide an introduction to the subject for physicists and philosophers. The contributors cover all the fundamental symmetries of modern physics, such as CPT and permutation symmetry, as well as discussing symmetry-breaking and general interpretational issues. Classic texts are followed by new review articles and shorter commentaries for each topic. Suitable for courses on the foundations of physics, philosophy of physics and philosophy of science, the volume is a valuable reference for students and researchers.
Where there is trust, there is also vulnerability, and vulnerability can be exploited. Epistemic trust is no exception. This chapter maps the phenomenon of the exploitation of epistemic trust. I start with a discussion of how trust in general can be exploited; a key observation is that trust incurs vulnerabilities not just for the party doing the trusting, but also for the trustee (after all, trust can be burdensome), so either party can exploit the other. I apply these considerations to epistemic trust, specifically in testimonial relationships. There, we standardly think of a hearer trusting a speaker. But we miss an important aspect of this relationship unless we consider too that the speaker standardly trusts the hearer. Given this mutual trust, and given that both trustees and trusters can exploit each other, we have four possibilities for exploitation in epistemic-trust relationships: a speaker exploiting a hearer (a) by accepting his trust or (b) by imposing her trust on him, and a hearer exploiting a speaker (c) by accepting her trust or (d) by imposing his trust on her. One result is that you do not need to betray someone to exploit him – you can exploit him just as easily by doing what he trusts you for.
Katherine Hawley explores the key ideas about trust in this Very Short Introduction. Drawing on a wide range of disciplines including philosophy, psychology, and evolutionary biology, she emphasizes the nature and importance of trusting and being trusted, from our intimate bonds with significant others to our relationship with the state.
When someone is prepunished, they are punished for a predicted crime they will or would commit. I argue that cases of prepunishment universally assumed to be merely hypothetical—including those in Philip K. Dick’s “The Minority Report”—are equivalent to some instances of the real-life punishment of attempt offenses. This conclusion puts pressure in two directions. If prepunishment is morally impermissible, as philosophers argue, then this calls for amendments to criminal justice theory and practice. At the same time, if prepunishment is not imaginary, then the philosophers who reject it cannot claim that their view is supported by common sense.
Katherin A. Rogers presents a new theory of free will, based on the thought of Anselm of Canterbury. We did not originally produce ourselves. Yet, according to Anselm, we can engage in self-creation, freely and responsibly forming our characters by choosing 'from ourselves' between open options. Anselm introduces a new, agent-causal libertarianism which is parsimonious in that, unlike other agent-causal theories, it does not appeal to any unique and mysterious powers to explain how the free agent chooses. After setting out Anselm's original theory, Rogers defends and develops it by addressing a series of standard problems levelled against libertarianism. Finally, as a theory about self-creation, Anselmian Libertarianism must defend the tracing thesis, the claim that an agent can be responsible for character-determined choices, if he, himself, formed his character through earlier a se choices. Throughout, Rogers defends and exemplifies a new methodological suggestion: someone debating free will ought to make his background world view explicit. In the on-going debate over the possibility of human freedom and responsibility, Anselmian Libertarianism constitutes a new and plausible approach.
An adequate semantics for generic sentences must stake out positions across a range of contested territory in philosophy and linguistics. For this reason the study of generic sentences is a venue for investigating different frameworks for understanding human rationality as manifested in linguistic phenomena such as quantification, classification of individuals under kinds, defeasible reasoning, and intensionality. Despite the wide variety of semantic theories developed for generic sentences, to date these theories have been almost universally model-theoretic and representational. This essay outlines a range of proof-theoretic analyses for characterizing generics. Particular attention is given to an expressivist proof-theory that can be traced to 1) work on logical syntax that Carnap undertook prior to his turn toward truth-conditional model theory in the late 1930s, and 2) research on sequent calculi and natural deduction systems that originate in work from Gentzen and Prawitz.