High temporal resolution event-related brain potential and electroencephalographic coherence studies of the neural substrate of short-term storage in working memory indicate that the sustained coactivation of both prefrontal cortex and the posterior cortical systems that participate in the initial perception and comprehension of the retained information is involved in its storage. These studies further show that short-term storage mechanisms involve an increase in neural synchrony between prefrontal cortex and posterior cortex and the enhanced activation of long-term memory representations of material held in short-term memory. This activation begins during the encoding/comprehension phase and is evidently prolonged into the retention phase by attentional drive from prefrontal cortex control systems. A parsimonious interpretation of these findings is that the long-term memory systems associated with the posterior cortical processors provide the necessary representational basis for working memory, with the property of short-term memory decay being primarily due to the posterior system. In this view, there is no reason to posit specialized neural systems whose functions are limited to those of short-term storage buffers. Prefrontal cortex provides the attentional pointer system for maintaining activation in the appropriate posterior processing systems. Short-term memory capacity and phenomena such as displacement of information in short-term memory are determined by limitations on the number of pointers that can be sustained by the prefrontal control systems. Key Words: coherence; event-related potentials; imaging; long-term memory; memory; short-term memory; working memory.
The goal of our target article is to establish that electrophysiological data constrain models of short-term memory retention operations to schemes in which activated long-term memory is its representational basis. The temporary stores correspond to neural circuits involved in the perception and subsequent processing of the relevant information, and do not involve specialized neural circuits dedicated to the temporary holding of information outside of those embedded in long-term memory. The commentaries ranged from general agreement with the view that short-term memory stores correspond to activated long-term memory (e.g., Abry, Sato, Schwartz, Loevenbruck & Cathiard [Abry et al.], Cowan, Fuster, Grote, Hickok & Buchsbaum, Keenan, Hyönä & Kaakinen [Keenan et al.], Martin, Morra), to taking definite exception to this view (e.g., Baddeley, Düzel, Logie & Della Sala, Kroger, Majerus, Van der Linden, Colette & Salmon [Majerus et al.], Vallar).
Oxford Studies in Metaphysics is dedicated to the timely publication of new work in metaphysics, broadly construed. These volumes provide a forum for the best new work in this flourishing field. They offer a broad view of the subject, featuring not only the traditionally central topics such as existence, identity, modality, time, and causation, but also the rich clusters of metaphysical questions in neighbouring fields, such as philosophy of mind and philosophy of science. This book is the eighth volume in the series. It contains essays by Cian Dorr and John Hawthorne, Maya Eddon, Shamik Dasgupta, Bill Dunaway, Cody Gilmore, Ted Sider, Aaron Cotnoir, Katherine Hawley, Fabrice Correia and Sven Rosenkranz, David Braddon-Mitchell, and Ross Cameron.
This interview with N. Katherine Hayles, one of the foremost theorists of the posthuman, explores the concerns that led to her seminal book How We Became Posthuman, the key arguments expounded in that book, and the changes in technology and culture in the ten years since its publication. The discussion ranges across the relationships between literature and science; the trans-disciplinary project of developing a methodology appropriate to their intersection; the history of cybernetics in its cultural and political context; the changed role for psychoanalysis in the technoscientific age; and the altering forms of mediated ‘embodiment’ in the posthuman context.
In 1981 Eleonore Stump and Norman Kretzmann published a landmark article aimed at exploring the classical concept of divine eternity. Taking Boethius as the primary spokesman for the traditional view, they analyse God's eternity as timeless yet as possessing duration. More recently Brian Leftow has seconded Stump and Kretzmann's interpretation of the medieval position and attempted to defend the notion of a durational eternity as a useful way of expressing the sort of life God leads. However, there are good reasons to reject the idea that divine timelessness should be thought of as having duration. The medievals probably did not accept it, as it contradicts a principle of classical metaphysics even more fundamental than the atemporality of the divine. In any case, it is not possible to express the notion of durational eternity in even a minimally coherent way, and the attempt to salvage the concept by appealing to the Thomistic doctrine of analogy is unsuccessful. The best analogy for God's eternity is still the one proposed by Augustine at the end of the fourth century. God lives in a timeless ‘present’, unextended like our temporal present, but immutable and encompassing all time.
The poetry and journalistic essays of Katherine Tillman often appeared in publications sponsored by the American Methodist church. Collected together for the first time, her works speak to the struggles and triumphs of African-American women.
Paul Sheehy has argued that the modal realist cannot satisfactorily allow for the necessity of God's existence. In this short paper I show that she can, and that Sheehy only sees a problem because he has failed to appreciate all the resources available to the modal realist. God may be an abstract existent outside spacetime or He may not be: but either way, there is no problem for the modal realist to admit that He exists at every concrete possible world.
The world is remarkably stable -- amidst the flux, physical objects continue to persist. But how do things persist? Are they spread out through time as they are spread out through space? Or is persistence very different from spatial extension? These ancient metaphysical questions are at the forefront of contemporary debate once more. Katherine Hawley provides a wide-ranging yet accessible study of this key issue. She also makes a major contribution to current debates about change, vagueness, and language.
There are moments when things suddenly seem strange - objects in the world lose their meaning, we feel like strangers to ourselves, or human existence itself strikes us as bizarre and unintelligible. Through a detailed philosophical investigation of Heidegger's concept of uncanniness (Unheimlichkeit), Katherine Withy explores what such experiences reveal about us. She argues that while others (such as Freud, in his seminal psychoanalytic essay, 'The Uncanny') take uncanniness to be an affective quality of strangeness or eeriness, Heidegger uses the concept to go beyond feeling uncanny to reach the ground of this feeling in our being uncanny. "Heidegger on Being Uncanny" answers those who wonder whether human existence is fundamentally strange to itself by showing that we can be what we are only if we do not fully understand what it is to be us. This fundamental finitude in our self-understanding is our uncanniness. In this first dedicated interpretation of Heidegger's uncanniness, Withy tracks this concept from his early analyses of angst through his later interpretations of the choral ode from Sophocles's Antigone. Her interpretation uncovers a novel and robust continuity in Heidegger's thought and in his vision of the human being as uncanny, and it points the way toward what it is to live well as an uncanny human being.
Katherine Hawley explores and compares three theories of persistence -- endurance, perdurance, and stage theories - investigating the ways in which they attempt to account for the world around us. Having provided valuable clarification of its two main rivals, she concludes by advocating stage theory.
Social groups—like teams, committees, gender groups, and racial groups—play a central role in our lives and in philosophical inquiry. Here I develop and motivate a structuralist ontology of social groups centered on social structures (i.e., networks of relations that are constitutively dependent on social factors). The view delivers a picture that encompasses a diverse range of social groups, while maintaining important metaphysical and normative distinctions between groups of different kinds. It also meets the constraint that not every arbitrary collection of people is a social group. In addition, the framework provides resources for developing a broader structuralist view in social ontology.
Is there a distinctively epistemic kind of blame? It has become commonplace for epistemologists to talk about epistemic blame, and to rely on this notion for theoretical purposes. But not everyone is convinced. Some of the most compelling reasons for skepticism about epistemic blame focus on disanalogies, or asymmetries, between the moral and epistemic domains. In this paper, I defend the idea that there is a distinctively epistemic kind of blame. I do so primarily by developing an account of the nature of epistemic blame. My account draws on a prominent line of theorizing in moral philosophy that ties blame to our relationships with one another. I argue that with my account of epistemic blame on hand, the most compelling worries about epistemic blame can be deflated. There is a distinctively epistemic kind of blame.
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
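To make the architectural point concrete, the following is a minimal sketch (my illustration, not code from the paper) of a tiny DCNN in which linear operations (convolutions) alternate with non-linear ones (rectification and max-pooling), so that each stage re-encodes a category exemplar in a format somewhat more tolerant of nuisance variation such as small translations. The layer sizes, the 28x28 grayscale input, and the 10 output categories are arbitrary assumptions for illustration.

```python
# Minimal sketch, assuming PyTorch is available: a small deep convolutional
# network whose layers alternate linear operations (convolutions) with
# non-linear ones (ReLU, max-pooling).
import torch
import torch.nn as nn

tiny_dcnn = nn.Sequential(
    # Stage 1: linear filtering, then non-linear rectification and pooling.
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # linear
    nn.ReLU(),                                                           # non-linear
    nn.MaxPool2d(kernel_size=2),   # pooling discards exact position (nuisance variation)
    # Stage 2: the same motif applied to the more abstract stage-1 features.
    nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    # Read-out: map the abstracted representation to category scores.
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),     # assumes 28x28 inputs and 10 categories
)

# A single exemplar (e.g., a 28x28 image) passes through the hierarchy:
exemplar = torch.randn(1, 1, 28, 28)
scores = tiny_dcnn(exemplar)
print(scores.shape)  # torch.Size([1, 10])
```

The two stages illustrate the iterative character of the abstraction: the second convolution operates on features that have already been made partially position-tolerant by the first pooling step.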
Although the principle of fair subject selection is a widely recognized requirement of ethical clinical research, it often yields conflicting imperatives, thus raising major ethical dilemmas regarding participant selection. In this paper, we diagnose the source of this problem, arguing that the principle of fair subject selection is best understood as a bundle of four distinct sub-principles, each with normative force and each yielding distinct imperatives: fair inclusion; fair burden sharing; fair opportunity; and fair distribution of third-party risks. We first map out these distinct sub-principles, and then identify the ways in which they yield conflicting imperatives for the design of inclusion and exclusion criteria, and the recruitment of participants. We then offer guidance for how decision makers should navigate these conflicting imperatives to ensure that participants are selected fairly.
Deliberative democratic theory, commonly used to explore questions of “political” corporate social responsibility (PCSR), has become prominent in the literature. This theory has been challenged previously for being overly sanguine about firm profit imperatives, but left unexamined is whether corporate contexts are appropriate contexts for deliberative theory in the first place. We explore this question using the case of Starbucks’ “Race Together” campaign to show that significant challenges exist to corporate deliberation, even in cases featuring genuinely committed firms. We return to the underlying social theory to show that this is not an isolated case: for-profit firms are predictably hostile contexts for deliberation, and significant normative and strategic problems can be expected should deliberative theory be imported uncritically to corporate contexts. We close with recent advances in deliberative democratic theory that might help update the PCSR project, and accommodate the application of deliberation to the corporate context, albeit with significant alterations.
According to Ross Cameron's version of the moving spotlight theory of time, (1) past and future entities exist; (2) the properties and relations they have are those they have now; but nevertheless (3) there are no fundamental past- or future-tensed facts; instead, tensed facts are made true by fundamental facts about the possession of temporal distributional properties and facts about how old things are. I argue that the account isn't sufficiently distinct from the B-theory to fit the usual A-theorist's tastes and arguments, since (i) like the traditional spotlight it consists of a B-theoretic metaphysics with one small A-theoretic element tacked on, and since (ii) in a sense it does not admit fundamental change. I also argue that the proposed grounding of tensed facts in tenseless facts does not work in certain cases.
Katherine Hawley investigates what trustworthiness means in our lives. We become untrustworthy when we break promises, miss deadlines, or give unreliable information. But it is not always clear what we can safely commit to. Hawley examines the social obstacles to trustworthiness, and explores how we can steer between overcommitment and undercommitment.
Decades of research conducted in Western, Educated, Industrialized, Rich, & Democratic (WEIRD) societies have led many scholars to conclude that the use of mental states in moral judgment is a human cognitive universal, perhaps an adaptive strategy for selecting optimal social partners from a large pool of candidates. However, recent work from a more diverse array of societies suggests there may be important variation in how much people rely on mental states, with people in some societies judging accidental harms just as harshly as intentional ones. To explain this variation, we develop and test a novel cultural evolutionary theory proposing that the intensity of kin-based institutions will favor less attention to mental states when judging moral violations. First, to better illuminate the historical distribution of the use of intentions in moral judgment, we code and analyze anthropological observations from the Human Relations Area Files. This analysis shows that notions of strict liability—wherein the role for mental states is reduced—were common across diverse societies around the globe. Then, by expanding an existing vignette-based experimental dataset containing observations from 321 people in a diverse sample of 10 societies, we show that the intensity of a society's kin-based institutions can explain a substantial portion of the population-level variation in people's reliance on intentions in three different kinds of moral judgments. Together, these lines of evidence suggest that people's use of mental states has coevolved culturally to fit their local kin-based institutions. We suggest that although reliance on mental states has likely been a feature of moral judgment in human communities over historical and evolutionary time, the relational fluidity and weak kin ties of today's WEIRD societies position these populations' psychology at the extreme end of the global and historical spectrum.
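As a hedged illustration of the shape of the population-level analysis described above (a sketch with made-up numbers and hypothetical variable names, not the authors' data or model), one could compute a society-level "reliance on intentions" score, such as the gap between severity ratings for intentional and accidental harms, and regress it on an index of kin-based institution intensity, checking for the predicted negative slope.

```python
# Illustrative sketch only: hypothetical society-level data and variable names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "society":           ["A", "B", "C", "D", "E"],
    "kinship_intensity": [0.1, 0.3, 0.5, 0.7, 0.9],   # assumed 0-1 index
    "intention_effect":  [2.1, 1.8, 1.2, 0.9, 0.4],   # intentional minus accidental severity
})

# The theory predicts a negative slope: the more intensive a society's
# kin-based institutions, the less weight placed on mental states.
model = smf.ols("intention_effect ~ kinship_intensity", data=df).fit()
print(model.params)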
Social groups, including racial and gender groups and teams and committees, seem to play an important role in our world. This article examines key metaphysical questions regarding groups. I examine answers to the question ‘Do groups exist?’ I argue that worries about puzzles of composition, motivations to accept methodological individualism, and a rejection of Racialism support a negative answer to the question. An affirmative answer is supported by arguments that groups are efficacious, indispensable to our best theories, and accepted given common sense. Then, I turn to an examination of the features of social groups. I argue that social groups can be divided into two sorts. Groups of Type 1 are organized social groups like courts and clubs. Groups of Type 2 are groups like Blacks, women, and lesbians. While groups of both sorts have some features in common, they also have marked differences in features. Finally, I turn to views of the nature of social groups. I argue that the difference in features provides evidence that social groups do not have a uniform nature. Teams and committees are structured wholes, while race and gender groups are social kinds.
This paper provides a critical overview of recent work on epistemic blame. The paper identifies key features of the concept of epistemic blame and discusses two ways of motivating the importance of this concept. Four different approaches to the nature of epistemic blame are examined. Central issues surrounding the ethics and value of epistemic blame are identified and briefly explored. In addition to providing an overview of the state of the art of this growing but controversial field, the paper highlights areas where future work is needed.
In this paper I argue for a view of groups, things like teams, committees, clubs and courts. I begin by examining features all groups seem to share. I formulate a list of six features of groups that serve as criteria any adequate theory of groups must capture. Next, I examine four of the most prominent views of groups currently on offer—that groups are non-singular pluralities, fusions, aggregates and sets. I argue that each fails to capture one or more of the criteria. Last, I develop a view of groups as realizations of structures. The view has two components. First, groups are entities with structure. Second, since groups are concreta, they exist only when a group structure is realized. A structure is realized when each of its functionally defined nodes or places is occupied. I show how such a view captures the six criteria for groups, which no other view of groups adequately does, while offering a substantive answer to the question, “What are groups?”.
Where there is trust, there is also vulnerability, and vulnerability can be exploited. Epistemic trust is no exception. This chapter maps the phenomenon of the exploitation of epistemic trust. I start with a discussion of how trust in general can be exploited; a key observation is that trust incurs vulnerabilities not just for the party doing the trusting, but also for the trustee (after all, trust can be burdensome), so either party can exploit the other. I apply these considerations to epistemic trust, specifically in testimonial relationships. There, we standardly think of a hearer trusting a speaker. But we miss an important aspect of this relationship unless we consider too that the speaker standardly trusts the hearer. Given this mutual trust, and given that both trustees and trusters can exploit each other, we have four possibilities for exploitation in epistemic-trust relationships: a speaker exploiting a hearer (a) by accepting his trust or (b) by imposing her trust on him, and a hearer exploiting a speaker (c) by accepting her trust or (d) by imposing his trust on her. One result is that you do not need to betray someone to exploit him – you can exploit him just as easily by doing what he trusts you for.
One challenge in developing an account of the nature of epistemic blame is to explain what differentiates epistemic blame from mere negative epistemic evaluation. The challenge is to explain the difference, without invoking practices or behaviors that seem out of place in the epistemic domain. In this paper, I examine whether the most sophisticated recent account of the nature of epistemic blame—due to Jessica Brown—is up to the challenge. I argue that the account ultimately falls short, but does so in an instructive way. Drawing on the lessons learned, I put forward an alternative approach to the nature of epistemic blame. My account understands epistemic blame in terms of modifications to the intentions and expectations that comprise our “epistemic relationships” with one another. This approach has a number of attractions shared by Brown’s account, but it can also explain the significance of epistemic blame.
It has been argued that humans can face an ethical/epistemic dilemma over the automatic stereotyping involved in implicit bias: ethical demands require that we consistently treat people equally, as equally likely to possess certain traits, but if our aim is knowledge or understanding, our responses should reflect social inequalities, meaning that members of certain social groups are statistically more likely than others to possess particular features. I use psychological research to argue that often the best choice from the epistemic perspective is the same as the best choice from the ethical perspective: to avoid automatic stereotyping even when this involves failing to reflect social realities in our judgements. This argument has an important implication: it shows that it is not possible to successfully defend an act of automatic stereotyping simply on the basis that the stereotype reflects an aspect of social reality. An act of automatic stereotyping can be poor from an epistemic perspective even if the stereotype that is activated reflects reality.
How should we determine the distribution of psychological traits—such as Theory of Mind, episodic memory, and metacognition—throughout the Animal kingdom? Researchers have long worried about the distorting effects of anthropomorphic bias on this comparative project. A purported corrective against this bias was offered as a cornerstone of comparative psychology by C. Lloyd Morgan in his famous “Canon”. Also dangerous, however, is a distinct bias that loads the deck against animal mentality: our tendency to tie the competence criteria for cognitive capacities to an exaggerated sense of typical human performance. I dub this error “anthropofabulation”, since it combines anthropocentrism with confabulation about our own prowess. Anthropofabulation has long distorted the debate about animal minds, but it is a bias that has been little discussed and against which the Canon provides no protection. Luckily, there is a venerable corrective against anthropofabulation: a principle offered long ago by David Hume, which I call “Hume’s Dictum”. In this paper, I argue that Hume’s Dictum deserves a privileged place next to Morgan’s Canon in the methodology of comparative psychology, illustrating my point through a discussion of the debate over Theory of Mind in nonhuman animals.
Our prominent definitions of cognition are too vague and lack empirical grounding. They have not kept up with recent developments, and cannot bear the weight placed on them across many different debates. I here articulate and defend a more adequate theory. On this theory, behaviors under the control of cognition tend to display a cluster of characteristic properties, a cluster which tends to be absent from behaviors produced by non-cognitive processes. This cluster is reverse-engineered from the empirical tests that comparative psychologists use to determine whether a behavior was generated by a cognitive or a non-cognitive process. Cognition should be understood as the natural kind of psychological process that non-accidentally exhibits the properties assessed by these tests (as well as others we have not yet discovered). Finally, I review two plausible neural accounts of cognition's underlying mechanisms—one based in localization of function to particular brain regions and another based in the more recent distributed networks approach to neuroscience—which would explain why these properties non-accidentally cluster. While this notion of cognition may be useful for a number of debates, I here focus on its application to a recent crisis over the distinction between cognition and association in comparative psychology.
As Heidegger acknowledges, our understanding is essentially situated and so limited by the context and tradition into which it is thrown. But this ‘situatedness’ does not exhaust Heidegger's concept of ‘thrownness’. By examining this concept and its grammar, I develop a more complete interpretation. I identify several different kinds of finitude or limitation in our understanding, and touch on ways in which we confront and carry different dimensions of our past.
The paper critically examines recent work on justifications and excuses in epistemology. I start with a discussion of Gerken’s claim that the “excuse maneuver” is ad hoc. Recent work from Timothy Williamson and Clayton Littlejohn provides resources to advance the debate. Focusing in particular on a key insight in Williamson’s view, I then consider an additional worry for the so-called excuse maneuver. I call it the “excuses are not enough” objection. Dealing with this objection generates pressure in two directions: one is to show that excuses are a positive enough normative standing to help certain externalists with important cases; the other is to do so in a way that does not lead back to Gerken’s objection. I show how a Williamson-inspired framework is flexible enough to deal with both sources of pressure. Perhaps surprisingly, I draw on recent virtue epistemology.
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
Scientific researchers welcome disagreement as a way of furthering epistemic aims. Religious communities, by contrast, tend to regard it as a potential threat to their beliefs. But I argue that religious disagreement can help achieve religious epistemic aims. I do not argue this by comparing science and religion, however. For scientific hypotheses are ideally held with a scholarly neutrality, and my aim is to persuade those who are committed to religious beliefs that religious disagreement can be epistemically beneficial for them too.
Some language encourages essentialist thinking. While philosophers have largely focused on generics and essentialism, I argue that nouns as a category are poised to refer to kinds and to promote representational essentializing. Our psychological propensity to essentialize when nouns are used reveals a limitation for anti-essentialist ameliorative projects. Even ameliorated nouns can continue to underpin essentialist thinking. I conclude by arguing that representational essentialism does not doom anti-essentialist ameliorative projects. Rather, it reveals that would-be ameliorators ought to attend to the propensities for our representational devices to essentialize and to the complex relationship between essentialism and prejudice.
Background: Responsive neurostimulation has been utilized as a treatment for intractable epilepsy. The RNS System delivers stimulation via leads covering the seizure foci in response to detections of predefined epileptiform activity, with the goal of decreasing seizure frequency and severity. While thalamic leads are often implanted in combination with cortical strip leads, implantation and stimulation with bilateral thalamic leads alone is less common, and the ability to detect electrographic seizures using RNS System thalamic leads is uncertain. Objective: The present study retrospectively evaluated fourteen patients with RNS System depth leads implanted in the thalamus, with or without concomitant implantation of cortical strip leads, to determine the ability to detect electrographic seizures in the thalamus. Detailed patient presentations and lead trajectories were reviewed alongside electroencephalographic analyses. Results: Anterior nucleus thalamic leads, whether bilateral or unilateral and combined with a cortical strip lead, successfully detected and terminated epileptiform activity, as demonstrated by Cases 2 and 3. Similarly, bilateral centromedian thalamic leads or a combination of one centromedian thalamic lead alongside a cortical strip lead also demonstrated the ability to detect electrographic seizures, as seen in Cases 6 and 9. Bilateral pulvinar leads likewise produced reliable seizure detection in Patient 14. Detections of electrographic seizures in thalamic nuclei did not appear to be affected by whether the patient was pediatric or adult at the time of RNS System implantation. Sole thalamic leads paralleled the combination of thalamic and cortical strip leads in terms of preventing the propagation of electrographic seizures. Conclusion: Thalamic nuclei present a promising target for detection and stimulation via the RNS System for seizures with multifocal or generalized onsets. These areas provide a modifiable, reversible therapeutic option for patients who are not candidates for surgical resection or ablation.
Recently several philosophers have argued that racial, gender, and other social generic generalizations should be avoided given their propensity to promote essentialist thinking, obscure the social nature of categories, and contribute to oppression. Here I argue that a general prohibition against social generics goes too far. Given that the truth of many generics requires regularities or systematic rather than merely accidental correlations, they are our best means for describing structural forms of violence and discrimination. Moreover, their accuracy, their persistence in the face of counterexamples, and features of the contemporary socio-political context make generics useful linguistic tools in social justice projects.
Resolving religious disagreements is difficult, for beliefs about religion tend to come with strong biases against other views and the people who hold them. Evidence can help, but there is no agreed-upon policy for weighting it, and moreover bias affects the content of our evidence itself. Another complicating factor is that some biases are reliable and others unreliable. What we need is an evidence-weighting policy geared toward negotiating the effects of bias. I consider three evidence-weighting policies in the philosophy of religion and advocate one of them as the best for promoting the resolution of religious disagreements.
Du Châtelet’s 1740 text Foundations of Physics tackles three of the major foundational issues facing natural philosophy in the early eighteenth century: the problem of bodies, the problem of force, and the question of appropriate methodology. This paper offers an introduction to Du Châtelet’s philosophy of science, as expressed in her Foundations of Physics, primarily through the lens of the problem of bodies.
Symmetry considerations dominate modern fundamental physics, both in quantum theory and in relativity. Philosophers are now beginning to devote increasing attention to such issues as the significance of gauge symmetry, quantum particle identity in the light of permutation symmetry, how to make sense of parity violation, the role of symmetry breaking, the empirical status of symmetry principles, and so forth. These issues relate directly to traditional problems in the philosophy of science, including the status of the laws of nature, the relationships between mathematics, physical theory, and the world, and the extent to which mathematics suggests new physics. This entry begins with a brief description of the historical roots and emergence of the concept of symmetry that is at work in modern science. It then turns to the application of this concept to physics, distinguishing between two different uses of symmetry: symmetry principles versus symmetry arguments. It mentions the different varieties of physical symmetries, outlining the ways in which they were introduced into physics. Then, stepping back from the details of the various symmetries, it makes some remarks of a general nature concerning the status and significance of symmetries in physics.
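A simple worked example of a symmetry principle doing physical work (my illustration, not drawn from the entry itself): by Noether's theorem, if a system's Lagrangian L, taken here to depend on a single generalized coordinate q and its velocity, has no explicit time dependence (time-translation symmetry), then the associated energy function is conserved along solutions of the equations of motion.

```latex
% Illustration only, assuming one generalized coordinate q and Lagrangian L(q, \dot{q}):
% time-translation symmetry implies conservation of the energy function.
\[
  \frac{\partial L}{\partial t} = 0
  \quad\Longrightarrow\quad
  \frac{d}{dt}\!\left( \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L \right) = 0 .
\]
```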
Slurs are expressions that can be used to demean and dehumanize targets based on their membership in racial, ethnic, religious, gender, or sexual orientation groups. Almost all treatments of slurs posit that they have derogatory content of some sort. Such views—which I call content-based—must explain why in cases of appropriation slurs fail to express their standard derogatory contents. A popular strategy is to take appropriated slurs to be ambiguous; they have both a derogatory content and a positive appropriated content. However, if appropriated slurs are ambiguous, why can only members in the target group use them to express a non-offensive/positive meaning? Here, I develop and motivate an answer that could be adopted by any content-based theorist. I argue that appropriated contents of slurs include a plural first-person pronoun. I show how the semantics of pronouns like ‘we’ can be put to use to explain why only some can use a slur to express its appropriated content. Moreover, I argue that the picture I develop is motivated by the process of appropriation and helps to explain how it achieves its aims of promoting group solidarity and positive group identity.
Highlighting main issues and controversies, this book brings together current philosophical discussions of symmetry in physics to provide an introduction to the subject for physicists and philosophers. The contributors cover all the fundamental symmetries of modern physics, such as CPT and permutation symmetry, as well as discussing symmetry-breaking and general interpretational issues. Classic texts are followed by new review articles and shorter commentaries for each topic. Suitable for courses on the foundations of physics, philosophy of physics and philosophy of science, the volume is a valuable reference for students and researchers.
Distinguishing between excuses and exemptions advances our understanding of a standard range of problem cases in debates about epistemic norms. But it leaves open a problem of accounting for blameless norm violation in ‘prospective agents’. By shifting focus in our theory of excuses from rational excellence to norms governing the dispositions of agents, we can account for a fuller range of normative phenomena at play in debates about epistemic norms.
Katherin A. Rogers presents a new theory of free will, based on the thought of Anselm of Canterbury. We did not originally produce ourselves. Yet, according to Anselm, we can engage in self-creation, freely and responsibly forming our characters by choosing 'from ourselves' between open options. Anselm introduces a new, agent-causal libertarianism which is parsimonious in that, unlike other agent-causal theories, it does not appeal to any unique and mysterious powers to explain how the free agent chooses. After setting out Anselm's original theory, Rogers defends and develops it by addressing a series of standard problems levelled against libertarianism. Finally, as a theory about self-creation, Anselmian Libertarianism must defend the tracing thesis, the claim that an agent can be responsible for character-determined choices, if he, himself, formed his character through earlier a se choices. Throughout, Rogers defends and exemplifies a new methodological suggestion: someone debating free will ought to make his background world view explicit. In the on-going debate over the possibility of human freedom and responsibility, Anselmian Libertarianism constitutes a new and plausible approach.
It is clear throughout Cognitive Gadgets that Heyes believes the development of cognitive capacities results from the interaction of genes and experience. However, she contrasts the view of cognitive instinct theorists with her own view that uniquely human capacities are cognitive gadgets. Instinct theorists believe that cognitive capacities are substantially produced by selection, with the environment playing a triggering role. Heyes’s position is that humans have similar general learning capacities to those present across taxa, and that sophisticated human cognition is substantially created by our socioculturally transmitted environment. A core strategy of Heyes’s is to take evidence that learning alters a cognitive capacity as grounds for concluding that the capacity is a cognitive gadget and not an instinct. We draw on recent work on the evolution of learning preparedness to examine the adequacy of this strategy. In particular, we analyse experimental evolution work showing how selection affects cognition within the laboratory. First, this work reveals that change due to learning can still be retained under genetic assimilation. This suggests that domain-specific adaptation can coexist with learning (a moderate nativism), an option missed by the instinct versus gadget distinction. Second, we describe the conditions that select for increased preparedness in learning: certainty, reliability, and particular costs. We consider how these conditions can be used when conducting evolutionary reasoning about cognition, applying them to the important capacity for imitation. We find that the conditions lend theoretical support to moderate nativism about the capacity to imitate, which is supported by psychological evidence.