Fusion theory challenges efforts to see theory as inhibiting by presenting an approach that is innovative, eclectic, and subtle in order to draw out competing and constellating ideas and opinions. This collected volume of essays examines fusion theory and demonstrates how the theory can be applied to the reading of various works of Indian English novelists.
The precipitous cliffs, rolling headlands, and rocky inlets of the Big Sur coast of California prompted Robinson Jeffers to extol their wild beauty throughout his long career as a poet. This extraordinary volume brings together Jeffers’s haunting poetry with magnificent photographs of Big Sur by his friend and neighbor, famed photographer Morley Baer.
Mill's most famous departure from Bentham is his distinction between higher and lower pleasures. This article argues that quality and quantity are independent and irreducible properties of pleasures that may be traded off against each other – as in the case of quality and quantity of wine. I argue that Mill is not committed to thinking that there are two distinct kinds of pleasure, or that ‘higher pleasures’ lexically dominate lower ones, and that the distinction is compatible with hedonism. I show how this interpretation not only makes sense of Mill but allows him to respond to famous problems, such as Crisp's Haydn and the oyster and Nozick's experience machine.
Consequentialists typically think that the moral quality of one's conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one's conduct can still be affected by whether one is participating in an endeavour that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call ‘participation’ – in the chapter of Reasons and Persons that he devotes to what he calls ‘moral mathematics’. In my paper, I expose an inconsistency in Parfit's discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit's error sheds some light on consequentialist thought generally, and on the debate between act- and rule-consequentialists specifically.
Well-Being and Death addresses philosophical questions about death and the good life: what makes a life go well? Is death bad for the one who dies? How is this possible if we go out of existence when we die? Is it worse to die as an infant or as a young adult? Is it bad for animals and fetuses to die? Can the dead be harmed? Is there any way to make death less bad for us? Ben Bradley defends the following views: pleasure, rather than achievement or the satisfaction of desire, is what makes life go well; death is generally bad for its victim, in virtue of depriving the victim of more of a good life; death is bad for its victim at times after death, in particular at all those times at which the victim would have been living well; death is worse the earlier it occurs, and hence it is worse to die as an infant than as an adult; death is usually bad for animals and fetuses, in just the same way it is bad for adult humans; things that happen after someone has died cannot harm that person; the only sensible way to make death less bad is to live so long that no more good life is possible.
A puzzling feature of paradigmatic cases of dehumanization is that the perpetrators often attribute uniquely human traits to their victims. This has become known as the “paradox of dehumanization.” We address the paradox by arguing that the perpetrators think of their victims as human in one sense, while denying that they are human in another sense. We do so by providing evidence that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is descriptive and one of which is normative. Across four experiments, we found evidence that people deploy a descriptive criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the descriptive sense, while denying that they are human in the normative sense, and vice versa. In addition to providing a solution to the paradox of dehumanization, these findings suggest that perceptions of moral character have a central role to play in driving dehumanization.
This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational thinking is closed under entailment; that thinking does not supervene on credence; and that in many cases what one thinks on certain matters is, in a very literal sense, a choice. Finally, since there are strong reasons to believe that thinking just is believing, there are strong reasons to think that all this goes for belief as well.
To what extent do we know our own minds when making decisions? Variants of this question have preoccupied researchers in a wide range of domains, from mainstream experimental psychology to cognitive neuroscience and behavioral economics. A pervasive view places a heavy explanatory burden on an intelligent cognitive unconscious, with many theories assigning causally effective roles to unconscious influences. This article presents a novel framework for evaluating these claims and reviews evidence from three major bodies of research in which unconscious factors have been studied: multiple-cue judgment, deliberation without attention, and decisions under uncertainty. Studies of priming and the role of awareness in movement and perception are also given brief consideration. The review highlights that inadequate procedures for assessing awareness, failures to consider artifactual explanations of “landmark” results, and a tendency to uncritically accept conclusions that fit with our intuitions have all contributed to unconscious influences being ascribed inflated and erroneous explanatory power in theories of decision making. The review concludes by recommending that future research should focus on tasks in which participants' attention is diverted away from the experimenter's hypothesis, rather than the highly reflective tasks that are currently often employed.
I show that intuitive and logical considerations do not justify introducing Leibniz’s Law of the Indiscernibility of Identicals in more than a limited form, as applying to atomic formulas. Once this is accepted, it follows that Leibniz’s Law generalises to all formulas of the first-order Predicate Calculus but not to modal formulas. Among other things, identity turns out to be logically contingent.
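A schematic rendering of the restricted principle described in this abstract may help; the notation below is mine and is offered only as an illustration of the claim, not as the author's own formalism. The law is introduced as a substitution principle for atomic formulas,
\[
x = y \rightarrow \bigl(A(x) \rightarrow A(y)\bigr), \qquad A \text{ atomic},
\]
and then shown to generalise, by induction on formula structure, to every formula of the first-order Predicate Calculus, while the corresponding modal instance, for example
\[
x = y \rightarrow \bigl(\Box\, x = x \rightarrow \Box\, x = y\bigr),
\]
is not licensed; this is what leaves room for identity to be logically contingent.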
The distinction between perception and cognition has always had a firm footing in both cognitive science and folk psychology. However, there is little agreement as to how the distinction should be drawn. In fact, a number of theorists have recently argued that, given the ubiquity of top-down influences, we should jettison the distinction altogether. I reject this approach, and defend a pluralist account of the distinction. At the heart of my account is the claim that each legitimate way of marking a border between perception and cognition deploys a notion I call ‘stimulus-control.’ Thus, rather than being a grab bag of unrelated kinds, the various categories of the perceptual are unified into a superordinate natural kind.
This paper aims to uncover the explanatory profile of an idealized version of Karl Ernst von Baer’s notion of individuation, wherein the special develops from the general. First, because such sequences can only be exemplified by a multiplicity of causally related events, they should be seen as the topics of historical why-questions, rather than initial condition why-questions. Second, because historical why-questions concern the diachronic unity or genidentity of the events under consideration, I argue that the von Baerian pattern elicits a distinctive response to such questions, wherein we are inclined to simultaneously affirm and reject the temporal unity of these events. I buttress this claim by considering non-biological expressions of the von Baerian principle, drawn from institutional history and literature. In the second half of the paper, I consider the implications of my findings for ontogenetic and phylogenetic sequences. I argue that the explanatory profile of von Baer’s principle neatly describes the distinctive speciation events that characterize deep metazoan phylogeny, as described by Stuart Newman. I also argue that parallel considerations should move us to accept a sense in which ontogenetic stages are not diachronically unified.
In this article, I attempt to resuscitate the perennially unfashionable distinctive feeling theory of pleasure (and pain), according to which for an experience to be pleasant (or unpleasant) is just for it to involve or contain a distinctive kind of feeling. I do this in two ways. First, by offering powerful new arguments against its two chief rivals: attitude theories, on the one hand, and the phenomenological theories of Roger Crisp, Shelly Kagan, and Aaron Smuts, on the other. Second, by showing how it can answer two important objections that have been made to it. First, the famous worry that there is no felt similarity to all pleasant (or unpleasant) experiences (sometimes called ‘the heterogeneity objection’). Second, what I call ‘Findlay’s objection’, the claim that it cannot explain the nature of our attraction to pleasure and aversion to pain.
According to hedonism about well-being, lives can go well or poorly for us just in virtue of our ability to feel pleasure and pain. Hedonism has had many advocates historically, but has relatively few nowadays. This is mainly due to three highly influential objections to it: The Philosophy of Swine, The Experience Machine, and The Resonance Constraint. In this paper, I attempt to revive hedonism. I begin by giving a precise new definition of it. I then argue that the right motivation for it is the ‘experience requirement’ (i.e., that something can benefit or harm a being only if it affects the phenomenology of her experiences in some way). Next, I argue that hedonists should accept a felt-quality theory of pleasure, rather than an attitude-based theory. Finally, I offer new responses to the three objections. Central to my responses are (i) a distinction between experiencing a pleasure (i.e., having some pleasurable phenomenology) and being aware of that pleasure, and (ii) an emphasis on diversity in one’s pleasures.
Ben Fine traces the origins of social capital through the work of Becker, Bourdieu and Coleman and comprehensively reviews the literature across the social sciences. The text is uniquely critical of social capital, explaining how it avoids a proper confrontation with political economy and has become chaotic. This highly topical text addresses some major themes, including the shifting relationship between economics and other social sciences, the 'publish or perish' concept currently burdening scholarly integrity, and how a social science interdisciplinarity requires a place for political economy together with cultural and social theory.
It’s a platitude – which only a philosopher would dream of denying – that whereas words are connected to what they represent merely by arbitrary conventions, pictures are connected to what they represent by resemblance. The most important difference between my portrait and my name, for example, is that whereas my portrait and I are connected by my portrait’s resemblance to me, my name and I are connected merely by an arbitrary convention. The first aim of this book is to defend this platitude from the apparently compelling objections raised against it, by analysing depiction in a way which reveals how it is mediated by resemblance.

It’s natural to contrast the platitude that depiction is mediated by resemblance, which emphasises the differences between depictive and descriptive representation, with an extremely close analogy between depiction and description, which emphasises the similarities between depictive and descriptive representation. Whereas the platitude emphasises that the connection between my portrait and me is natural in a way the connection between my name and me is not, the analogy emphasises the contingency of the connection between my portrait and me. Nevertheless, the second aim of this book is to defend an extremely close analogy between depiction and description.

The strategy of the book is to argue that the apparently compelling objections raised against the platitude that depiction is mediated by resemblance are manifestations of more general problems, which are familiar from the philosophy of language. These problems, it argues, can be resolved by answers analogous to their counterparts in the philosophy of language, without rejecting the platitude. So the combination of the platitude that depiction is mediated by resemblance with a close analogy between depiction and description turns out to be a compelling theory of depiction, which combines the virtues of common sense with the insights of its detractors.
Recent debate in metaethics over evolutionary debunking arguments against morality has shown a tendency to abstract away from relevant empirical detail. Here, I engage the debate about Darwinian debunking of morality with relevant empirical issues. I present four conditions that must be met in order for it to be reasonable to expect an evolved cognitive faculty to be reliable: the environment, information, error, and tracking conditions. I then argue that these conditions are not met in the case of our evolved faculty for moral judgement.
The issue of whether emotions are rational is at the centre of philosophical and psychological discussions. I believe that emotions are rational, but that they follow different principles to those of intellectual reasoning. The purpose of this paper is to reveal the unique logic of emotions. I begin by suggesting that we should conceive of emotions as a general mode of the mental system; other modes are the perceptual and intellectual modes. One feature distinguishing one mode from another is the set of logical principles underlying its information processing mechanism. Before describing these principles, I clarify the notion of ‘rationality,’ arguing that in an important sense emotions can be rational.
This paper considers some puzzling knowledge ascriptions and argues that they present prima facie counterexamples to credence, belief, and justification conditions on knowledge, as well as to many of the standard meta-semantic assumptions about the context-sensitivity of ‘know’. It argues that these ascriptions provide new evidence in favor of contextualist theories of knowledge—in particular those that take the interpretation of ‘know’ to be sensitive to the mechanisms of constraint.
Motivated by weaknesses with traditional accounts of logical epistemology, considerable attention has been paid recently to the view, known as anti-exceptionalism about logic (AEL), that the subject matter and epistemology of logic may not be so different from those of the recognised sciences. One of the most prevalent claims made by advocates of AEL is that theory choice within logic is significantly similar to that within the sciences. This connection with scientific methodology highlights a considerable challenge for the anti-exceptionalist, as two uncontentious claims about scientific theories are that they attempt to explain a target phenomenon and prove their worth through successful predictions. Thus, if this methodological AEL is to be viable, the anti-exceptionalist will need a reasonable account of what phenomena logics are attempting to explain, how they can explain, and in what sense they can be said to issue predictions. This paper makes sense of the anti-exceptionalist proposal with a new account of logical theory choice, logical predictivism, according to which logics are engaged in both a process of prediction and explanation.
This essay challenges a "meta-theory" in just war analysis that purports to bridge the divide between just war and pacifism. According to the meta-theory, just war and pacifism share a common presumption against killing that can be overridden only under conditions stipulated by the just war criteria. Proponents of this meta-theory claim that their interpretation leads to ecumenical consensus between "just warriors" and pacifists, and makes the just war theory more effective in reducing recourse to war. Engagement with the new meta-theory reveals, however, that these purported advantages are illusory, made possible only by ignoring fundamental questions about the nature and function of political authority that are crucial to all moral reflection on the problem of war.
Given the plethora of competing logical theories of validity available, it’s understandable that there has been a marked increase in interest in logical epistemology within the literature. If we are to choose between these logical theories, we require a good understanding of the suitable criteria we ought to judge according to. However, so far there’s been a lack of appreciation of how logical practice could support an epistemology of logic. This paper aims to correct that error, by arguing for a practice-based approach to logical epistemology. By looking at the types of evidence logicians actually appeal to in attempting to support their theories, we can provide a more detailed and realistic picture of logical epistemology. To demonstrate the fruitfulness of a practice-based approach, we look to a particular case of logical argumentation—the dialetheist’s arguments based upon the self-referential paradoxes—and show that the evidence appealed to supports a particular theory of logical epistemology, logical abductivism.
I examine the origins of ordinary racial thinking. In doing so, I argue against the thesis that it is the byproduct of a unique module. Instead, I defend a pluralistic thesis according to which different forms of racial thinking are driven by distinct mechanisms, each with their own etiology. I begin with the belief that visible features are diagnostic of race. I argue that the mechanisms responsible for face recognition have an important, albeit delimited, role to play in sustaining this belief. I then argue that essentialist beliefs about race are driven by some of the mechanisms responsible for “entitativity perception”: the tendency to perceive some aggregates of people as more genuine groups than others. Finally, I argue that coalitional thinking about race is driven by a distinctive form of entitativity perception. However, I suggest that more data is needed to determine the prevalence of this form of racial thinking.
What is it for a life to be meaningful? In this article, I defend what I call Consequentialism about Meaning in Life, the view that one's life is meaningful at time t just in case one's surviving at t would be good in some way, and one's life was meaningful considered as a whole just in case the world was made better in some way for one's having existed.
The roots and evolution of two concepts usually thought to be Western in origin, musica mundana (the music of the spheres) and musica humana (music's relation to the human soul), are explored. Beginning with a study of the early creeds of the Near East, Professor Meyer-Baer then traces their development in the works of Plato and the Gnostics, and in the art and literature of the Middle Ages and the Renaissance. Previous studies of symbolism in music have tended to focus on a single aspect of the problem. In this book the concepts of musica humana and musica mundana are related to philosophy, aesthetics, and the history of religion and are given a rightful place in the history of civilization. Originally published in 1970. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback format. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
Anyone wearing rose-tinted glasses might be forgiven if s/he comes to the conclusion that the world out there is rosier than it actually is. With his Fish Story, Sir Arthur Eddington warned us how analogous illusions might have happened in our models of the physical world. His allegory describes how observer characteristics can be inadvertently assigned to the systems being observed. If Eddington's conjecture is applicable, the most fundamental properties of nature will turn out to be the construction rules of the observer who measures nature. Since no one exactly knows how the brain works and because it is the final measuring instrument that collapses the wave function at the end of von Neumann's measurement chain, it is likely that observer characteristics have been falsely attributed to physical reality and our theories of it. These errors may prevent us from understanding consciousness because they mask the actual operations of the psyche. Starting with Velmans' model of consciousness, I analyse the role of cognitive models in the development of science. I then model how both the set-up of experiments and the interpretation of resulting data could be influenced to arrive at erroneous theories. Using examples, I show how potential errors, due to our incomplete understanding of the conscious process, have crept into physics. These need to be corrected if we are to evolve a concept of physical reality that includes conscious experiences.
A plausible principle about the felicitous use of indicative conditionals says that there is something strange about asserting an indicative conditional when you know whether its antecedent is true. But in most contexts there is nothing strange at all about asserting indicative conditionals like ‘If Oswald didn’t shoot Kennedy, then someone else did’. This paper argues that the only compelling explanation of these facts requires the resources of contextualism about knowledge.
While anti-exceptionalism about logic (AEL) is now a popular topic within the philosophy of logic, there’s still a lack of clarity over what the proposal amounts to. Currently, it is most common to conceive of AEL as the proposal that logic is continuous with the sciences. Yet, as we show here, this conception of AEL is unhelpful due to both its lack of precision, and its distortion of the current debates. Rather, AEL is better understood as the rejection of certain traditional properties of logic. The picture that results is not of one singular position, but rather a cluster of often connected positions with distinct motivations, understood in terms of their rejection of clusters of the various traditional properties. In order to show the fruitfulness of this new conception of AEL, we distinguish between two prominent versions of the position, metaphysical and epistemological AEL, and show how the two positions need not stand or fall together.
Three plausible views—Presentism, Truthmaking, and Independence—form an inconsistent triad. By Presentism, all being is present being. By Truthmaking, all truth supervenes on, and is explained in terms of, being. By Independence, some past truths do not supervene on, or are not explained in terms of, present being. We survey and assess some responses to this.
This paper defends the simple view that in asserting that p, one lies iff one knows that p is false. Along the way it draws some morals about deception, knowledge, Gettier cases, belief, assertion, and the relationship between first- and higher-order norms.
_Foucault’s Law_ is the first book in almost fifteen years to address the question of Foucault’s position on law. Many readings of Foucault’s conception of law start from the proposition that he failed to consider the role of law in modernity, or indeed that he deliberately marginalized it. In canvassing a wealth of primary and secondary sources, Ben Golder and Peter Fitzpatrick rebut this argument. They argue that rather than marginalize law, Foucault develops a much more radical, nuanced and coherent theory of law than his critics have acknowledged. For Golder and Fitzpatrick, Foucault’s law is not the contained creature of conventional accounts, but is uncontainable and illimitable. In their radical re-reading of Foucault, they show how Foucault outlines a concept of law which is not tied to any given form or subordinated to a particular source of power, but is critically oriented towards alterity, new possibilities and different ways of being. _Foucault’s Law_ is an important and original contribution to the ongoing debate on Foucault and law, engaging not only with Foucault’s diverse writings on law and legal theory, but also with the extensive interpretive literature on the topic. It will thus be of interest to students and scholars working in the fields of law and social theory, legal theory and law and philosophy, as well as to students of Foucault’s work generally.
The move to satisficing has been thought to help consequentialists avoid the problem of demandingness. But this is a mistake. In this article I formulate several versions of satisficing consequentialism. I show that every version is unacceptable, because every version permits agents to bring about a submaximal outcome in order to prevent a better outcome from obtaining. Some satisficers try to avoid this problem by incorporating a notion of personal sacrifice into the view. I show that these attempts are unsuccessful. I conclude that, if satisficing consequentialism is to remain a position worth considering, satisficers must show (i) that the move to satisficing is necessary to solve some problem, whether it be the demandingness problem or some other problem, and (ii) that there is a version of the view that does not permit the gratuitous prevention of goodness.
The devastating impact of the COVID‐19 (coronavirus disease 2019) pandemic is prompting renewed scrutiny of practices that heighten the risk of infectious disease. One such practice is refusing available vaccines known to be effective at preventing dangerous communicable diseases. For reasons of preventing individual harm, avoiding complicity in collective harm, and fairness, there is a growing consensus among ethicists that individuals have a duty to get vaccinated. I argue that these same grounds establish an analogous duty to avoid buying and eating most meat sold today, based solely on a concern for human welfare. Meat consumption is a leading driver of infectious disease. Wildlife sales at wet markets, bushmeat hunting, and concentrated animal feeding operations (CAFOs) are all exceptionally risky activities that facilitate disease spread and impose immense harms on human populations. If there is a moral duty to vaccinate, we also should recognize a moral duty to avoid most meat. The paper concludes by considering the implications of this duty for policy.
Purposeful infection of healthy volunteers with a microbial pathogen seems at odds with acceptable ethical standards, but is an important contemporary research avenue used to study infectious diseases and their treatments. Generally termed ‘controlled human infection studies’, this research is particularly useful for fast-tracking the development of candidate vaccines and may provide unique insight into disease pathogenesis otherwise unavailable. However, little bioethical literature is currently available to assist researchers and research ethics committees in negotiating the distinct issues raised by research involving purposefully infecting healthy volunteers. In this article, we present two separate challenge studies and highlight the ethical issues of human challenge studies as seen through a well-constructed framework. Beyond the same stringent ethical standards seen in other areas of medical research, we conclude that human challenge studies should also include: independent expert reviews, including systematic reviews; a publicly available rationale for the research; implementation of measures to protect the public from spread of infection beyond the research setting; and a new system for compensation for harm. We hope these additions may encourage safer and more ethical research practice and help to safeguard public confidence in this vital research alternative in years to come.
Thanks to David Kaplan (1989a, 1989b), we all know how to handle indexicals like ‘I’. ‘I’ doesn’t refer to an object simpliciter; rather, it refers to an object only relative to a context. In particular, relative to a context C, ‘I’ refers to the agent of C. Since different contexts can have different agents, ‘I’ can refer to different objects relative to different contexts. For example, relative to a context c whose agent is Gottlob Frege, ‘I’ refers to Frege; relative to a context c* whose agent is Alexius...
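A compact way to display the Kaplanian clause summarised here; the reference-function notation is mine and is used only for illustration, not quoted from the paper:
\[
\mathit{ref}(\text{‘I’}, c) = \mathrm{agent}(c),
\]
so if $\mathrm{agent}(c)$ is Frege and $\mathrm{agent}(c^{*})$ is someone else, the single meaning rule for ‘I’ delivers different referents relative to $c$ and $c^{*}$.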
Strategies to increase influenza vaccination rates have typically targeted healthcare professionals (HCPs) and individuals in various high-risk groups such as the elderly. We argue that they should focus on increasing vaccination rates in children. Because children suffer higher influenza incidence rates than any other demographic group, and are major drivers of seasonal influenza epidemics, we argue that influenza vaccination strategies that serve to increase uptake rates in children are likely to be more effective in reducing influenza-related morbidity and mortality than those targeting HCPs or the elderly. This is true even though influenza-related morbidity and mortality amongst children are low, except in the very young. Further, we argue that there are no decisive reasons to suppose that children-focused strategies are less ethically acceptable than elderly- or HCP-focused strategies.
If musical works are abstract objects, which cannot enter into causal relations, then how can we refer to musical works or know anything about them? Worse, how can any of our musical experiences be experiences of musical works? It would be nice to be able to sidestep these questions altogether. One way to do that would be to take musical works to be concrete objects. In this paper, we defend a theory according to which musical works are concrete objects. In particular, the theory that we defend takes musical works to be fusions of performances. We defend this view from a series of objections, the first two of which are raised by Julian Dodd in a recent paper and the last of which is suggested by some comments of his in an earlier paper.
In non-ideal theory, the political philosopher seeks to identify an injustice, synthesize social scientific work to diagnose its underlying causes, and propose morally permissible and potentially efficacious remedies. This paper explores the role in non-ideal theory of the identification of a plausible agent of change who might bring about the proposed remedies. I argue that the question of the agent of change is connected with the other core tasks of diagnosing injustice and proposing practical remedies. In this connection, I criticize two linked postures that non-ideal theorists sometimes adopt: a technocratic mode of neutral policy recommendation, whereby philosophers say what “we” must do to address some problem, without attending to the way agency enters the problem and its possible resolution; and the tendency to treat non-ideal theory as primarily consisting in the enumeration of duties we are failing to fulfill, and the specification of who is under what additional duties in light of this shortfall. My argument is that these tendencies fail to register in a coherent way the practical character of political philosophy.
The evil God challenge is an argumentative strategy that has been pursued by a number of philosophers in recent years. It is apt to be understood as a parody argument: a wholly evil, omnipotent and omniscient God is absurd, as both theists and atheists will agree. But according to the challenge, belief in an evil God is about as reasonable as belief in a wholly good, omnipotent and omniscient God; the two hypotheses are roughly epistemically symmetrical. Given this symmetry thesis, belief in an evil God and belief in a good God are taken to be similarly preposterous. In this paper, we argue that the challenge can be met, suggesting why the three symmetries that need to hold between evil God and good God – intrinsic, natural theology and theodicy symmetries – can all be broken. As such, we take it that the evil God challenge can be met.
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: the latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.