Philosophers have long noted, and empirical psychology has lately confirmed, that most people are “biased toward the future”: we prefer to have positive experiences in the future, and negative experiences in the past. At least two explanations have been offered for this bias: belief in temporal passage and the practical irrelevance of the past resulting from our inability to influence past events. We set out to test the latter explanation. In a large survey, we find that participants exhibit significantly less future bias when asked to consider scenarios where they can affect their own past experiences. This supports the “practical irrelevance” explanation of future bias. It also suggests that future bias is not an inflexible preference hardwired by evolution, but results from a more general disposition to “accept the things we cannot change”. However, participants still exhibited substantial future bias in scenarios in which they could affect the past, leaving room for complementary explanations. Beyond the main finding, our results also indicate that future bias is stake-sensitive and that participants endorse the normative correctness of their future-biased preferences and choices. In combination, these results shed light on philosophical debates over the rationality of future bias, suggesting that it may be a rational response to empirical realities rather than a brute, arational disposition.
Our focus here is on whether, when influenced by implicit biases, those behavioural dispositions should be understood as being a part of that person’s character: whether they are part of the agent that can be morally evaluated.[4] We frame this issue in terms of control. If a state, process, or behaviour is not something that the agent can, in the relevant sense, control, then it is not something that counts as part of her character. A number of theorists have argued that individuals do not have control, in the relevant sense, over the operation of implicit bias. We will argue that this claim is mistaken. We articulate and develop a notion of control that individuals have with respect to implicit bias, and argue that this kind of control can ground character-based evaluation of such behavioural dispositions.
If you care about securing knowledge, what is wrong with being biased? Often it is said that we are less accurate and reliable knowers due to implicit biases. Likewise, many people think that biases reflect inaccurate claims about groups, are based on limited experience, and are insensitive to evidence. Chapter 3 investigates objections such as these with the help of two popular metaphors: bias as fog and bias as shortcut. Guiding readers through these metaphors, I argue that they clarify the range of knowledge-related objections to implicit bias. They also suggest that there will be no unifying problem with bias from the perspective of knowledge. That is, they tell us that implicit biases can be wrong in different ways for different reasons. Finally, and perhaps most importantly, the metaphors reveal a deep—though perhaps not intractable—disagreement among theorists about whether implicit biases can be good in some cases when it comes to knowledge.
_Anthropic Bias_ explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology; evolutionary theory; the problem of time's arrow; quantum physics; game-theory problems with imperfect recall; even traffic analysis. _Anthropic Bias_ argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
“Implicit bias” is a term of art referring to relatively unconscious and relatively automatic features of prejudiced judgment and social behavior. While psychologists in the field of “implicit social cognition” study “implicit attitudes” toward consumer products, self-esteem, food, alcohol, political values, and more, the most striking and well-known research has focused on implicit attitudes toward members of socially stigmatized groups, such as African-Americans, women, and the LGBTQ community.[1] For example, imagine Frank, who explicitly believes that women and men are equally suited for careers outside the home. Despite his explicitly egalitarian belief, Frank might nevertheless implicitly associate women with the home, and this implicit association might lead him to behave in any number of biased ways, from trusting feedback from female co-workers less to hiring equally qualified men over women. Psychological research on implicit bias is relatively recent (§1), but a host of metaphysical (§2), epistemological (§3), and ethical questions (§4) about implicit bias are pressing.[2]
Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue that similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call ‘the Proxy Problem’. One reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves. I argue that in both human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgement accuracy. This problem, I contend, admits of no purely algorithmic solution.
Research on bias in peer review examines scholarly communication and funding processes to assess the epistemic and social legitimacy of the mechanisms by which knowledge communities vet and self-regulate their work. Despite vocal concerns, a closer look at the empirical and methodological limitations of research on bias raises questions about the existence and extent of many hypothesized forms of bias. In addition, the notion of bias is predicated on an implicit ideal that, once articulated, raises questions about the normative implications of research on bias in peer review. This review provides a brief description of the function, history, and scope of peer review; articulates and critiques the conception of bias unifying research on bias in peer review; characterizes and examines the empirical, methodological, and normative claims of bias in peer review research; and assesses possible alternatives to the status quo. We close by identifying ways to expand conceptions and studies of bias to countenance the complexity of social interactions among actors involved directly and indirectly in peer review.
Future-biased agents care not only about what experiences they have, but also when they have them. Many believe that A-theories of time justify future bias. Although presentism is an A-theory of time, some argue that it nevertheless negates the justification for future bias. Here, I claim that the alleged discrepancy between presentism and future bias is a special case of the cross-time relations problem. To resolve the discrepancy, I propose an account of future bias as a preference for certain tensed truths properly relativized to the present.
In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of assessments: an ethical risk assessment and a narrower, technical algorithmic bias assessment. We explain how the two assessments depend on each other, highlight the importance of situating the algorithm within its particular socio-technical context, and discuss a number of lessons and challenges for algorithm assessments and, potentially, for algorithm audits. The discussion builds on our team’s experience of advising and conducting ethical risk assessments for clients across different industries in the last four years. Our main goal is to reflect on the key factors that are potentially ethically relevant in the use of algorithms, and draw lessons for the nascent algorithm assessment and audit industry, in the hope of helping all parties minimize the risk of harm from their use.
Having a confirmation bias sometimes leads us to hold inaccurate beliefs. So, the puzzle goes: why do we have it? According to the influential argumentative theory of reasoning, confirmation bias emerges because the primary function of reason is not to form accurate beliefs, but to convince others that we’re right. A crucial prediction of the theory, then, is that confirmation bias should be found only in the reasoning domain. In this article, we argue that there is evidence that confirmation bias does exist outside the reasoning domain. This undermines the main evidential basis for the argumentative theory of reasoning. In presenting the relevant evidence, we explore why having such confirmation bias may not be maladaptive.
Sally Haslanger has recently argued that philosophical focus on implicit bias is overly individualist, since social inequalities are best explained in terms of social structures rather than the actions and attitudes of individuals. I argue that questions of individual responsibility and implicit bias, properly understood, do constitute an important part of addressing structural injustice, and I propose an alternative conception of social structure according to which implicit biases are themselves best understood as a special type of structure.
This book represents the first major attempt by any author to provide an integrated account of the evidence for bias in human reasoning across a wide range of disparate psychological literatures. The topics discussed involve both deductive and inductive reasoning as well as statistical judgement and inference. In addition, the author proposes a general theoretical approach to the explanations of bias and considers the practical implications for real world decision making. The theoretical stance of the book is based on a distinction between preconscious heuristic processes which determine the mental representation of 'relevant' features of the problem content, and subsequent analytic reasoning processes which generate inferences and judgements. Phenomena discussed and interpreted within this framework include feature matching biases in propositional reasoning, confirmation bias, biasing and debiasing effects of knowledge on reasoning, and biases in statistical judgement normally attributed to 'availability' and 'representativeness' heuristics. In the final chapter, the practical consequences of bias for real life decision making are considered, together with various issues concerning the problem of 'debiasing'. The major approaches discussed are those involving education and training on the one hand, and the development of intelligent software and interactive decision aids on the other.
Recently, amid growing awareness that computer algorithms are not neutral tools but can cause harm by reproducing and amplifying bias, attempts to detect and prevent such biases have intensified. An approach that has received considerable attention in this regard is the Value Sensitive Design (VSD) methodology, which aims to contribute to both the critical analysis of (dis)values in existing technologies and the construction of novel technologies that account for specific desired values. This article provides a brief overview of the key features of the Value Sensitive Design approach, examines its contributions to understanding and addressing issues around bias in computer systems, outlines the current debates on algorithmic bias and fairness in machine learning, and discusses how such debates could profit from VSD-derived insights and recommendations. Relating these debates on values in design and algorithmic bias to research on cognitive biases, we conclude by stressing our collective duty to not only detect and counter biases in software systems, but to also address and remedy their societal origins.
Most people show unconscious bias in their evaluations of social groups, in ways that may run counter to their conscious beliefs. This volume addresses key metaphysical and epistemological questions about implicit bias, including its effect on scientific research, gender stereotypes in philosophy, and the role of heuristics in biased reasoning.
(This contribution is primarily based on "Implicit Bias, Moods, and Moral Responsibility," (2018) Pacific Philosophical Quarterly. This version has been shortened and significantly revised to be more accessible and student-oriented.) Are individuals morally responsible for their implicit biases? One reason to think not is that implicit biases are often advertised as unconscious. However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways. Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility. First, I argue that responsibility comes in degrees. Second, I argue that individuals’ partial awareness of their implicit biases makes them (partially) morally responsible for them. I argue by analogy to a close relative of implicit bias: moods.
Following Kahneman and Tversky, I examine the term ‘bias’ as it is used to refer to systematic errors. Given the central role of error in this understanding of bias, it is helpful to consider what it is to err and to distinguish different kinds of error. I identify two main kinds of error, examine ethical issues that pertain to the relation of these types of error, and explain their moral significance. Next, I provide a four-level explanatory framework for understanding biases: personal, sub-personal, situational, and systemic levels. Finally, I examine some of the ethical complexities involved in attributing biases to oneself and to others.
This paper offers an unorthodox appraisal of empirical research bearing on the question of the low representation of women in philosophy. It contends that fashionable views in the profession concerning implicit bias and stereotype threat are weakly supported, that philosophers often fail to report the empirical work responsibly, and that the standards for evidence are set very low—so long as you take a certain viewpoint.
There is abundant evidence that most people, often in spite of their conscious beliefs, values and attitudes, have implicit biases. 'Implicit bias' is a term of art referring to evaluations of social groups that are largely outside conscious awareness or control. These evaluations are typically thought to involve associations between social groups and concepts or roles like 'violent,' 'lazy,' 'nurturing,' 'assertive,' 'scientist,' and so on. Such associations result at least in part from common stereotypes found in contemporary liberal societies about members of these groups. Implicit Bias and Philosophy brings the work of leading philosophers and psychologists together to explore core areas of psychological research on implicit bias, as well as the ramifications of implicit bias for core areas of philosophy. Volume 2: Moral Responsibility, Structural Injustice, and Ethics is comprised of three sections. 'Moral Responsibility for Implicit Bias' contains chapters examining the relationship of implicit biases to concepts that are central to moral responsibility, including control, awareness, reasons-responsiveness, and alienation. The chapters in the second section--'Structural Injustice'--explore the connections between the implicit biases held by individuals and the structural injustices of the societies in which they are situated. And finally, the third section--'The Ethics of Implicit Bias: Theory and Practice'--contains chapters examining strategies for implicit attitude change, the ramifications of research on implicit bias for philosophers working in ethics, and suggestions for combatting implicit biases in the fields of philosophy and law.
What is the mental representation that is responsible for implicit bias? What is this representation that mediates between the trigger and the biased behavior? My claim is that this representation is neither a propositional attitude nor a mere association. Rather, it is mental imagery: perceptual processing that is not directly triggered by sensory input. I argue that this view captures the advantages of the two standard accounts without inheriting their disadvantages. Further, this view also explains why manipulating mental imagery is among the most efficient ways of counteracting implicit bias.
Nearly everyone prefers pain to be in the past rather than the future. This seems like a rationally permissible preference. But I argue that appearances are misleading, and that future-biased preferences are in fact irrational. My argument appeals to trade-offs between hedonic experiences and other goods. I argue that we are rationally required to adopt an exchange rate between a hedonic experience and another type of good that stays fixed, regardless of whether the hedonic experience is in the past or future.
The term 'implicit bias' has very swiftly been incorporated into philosophical discourse. Our aim in this paper is to scrutinise the phenomena that fall under the rubric of implicit bias. The term is often used in a rather broad sense, to capture a range of implicit social cognitions, and this is useful for some purposes. However, we here articulate some of the important differences between phenomena identified as instances of implicit bias. We caution against ignoring these differences: it is likely they have considerable significance, not least for the sorts of normative recommendations being made concerning how to mitigate the bad effects of implicit bias.
This chapter distinguishes between two concepts of moral responsibility. We are responsible for our actions in the first sense only when those actions reflect our identities as moral agents, i.e. when they are attributable to us. We are responsible in the second sense when it is appropriate for others to enforce certain expectations and demands on those actions, i.e. to hold us accountable for them. This distinction allows for an account of moral responsibility for implicit bias, defended here, on which people may lack attributability for actions caused by implicit bias but are still accountable for them. What this amounts to is leaving aside appraisal-based forms of moral criticism such as blame and punishment in favor of non-appraising forms of accountability. This account not only does more justice to our moral experience and agency, but will also lead to more effective practices for combating the harms of implicit bias.
Recent empirical research has substantiated the finding that very many of us harbour implicit biases: fast, automatic, and difficult to control processes that encode stereotypes and evaluative content, and influence how we think and behave. Since it is difficult to be aware of these processes - they have sometimes been referred to as operating 'unconsciously' - we may not know that we harbour them, nor be alert to their influence on our cognition and action. And since they are difficult to control, considerable work is required to prevent their influence. We here focus on the implications of these findings for epistemology. We first look at ways in which implicit biases thwart our knowledge seeking practices (sections 1 & 2). Then we set out putative epistemic benefits of implicit bias, before considering ways in which epistemic practices might be improved (section 3). Finally, we consider the distinctive challenges that the findings about implicit bias pose to us as philosophers, in the context of feminist philosophy in particular (section 4).
This paper examines the role of prestige bias in shaping academic philosophy, with a focus on its demographics. I argue that prestige bias exacerbates the structural underrepresentation of minorities in philosophy. It works as a filter against (among others) philosophers of color, women philosophers, and philosophers of low socio-economic status. As a consequence of prestige bias our judgments of philosophical quality become distorted. I outline ways in which prestige bias in philosophy can be mitigated.
Are individuals morally responsible for their implicit biases? One reason to think not is that implicit biases are often advertised as unconscious, ‘introspectively inaccessible’ attitudes. However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways. Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility. First, I argue that responsibility comes in degrees. Second, I argue that individuals’ partial awareness of their implicit biases makes them (partially) morally responsible for them. I argue by analogy to a close relative of implicit bias: moods.
Many philosophers appeal to intuitions to support some philosophical views. However, there is reason to be concerned about this practice, as scientific evidence has documented systematic bias in philosophically relevant intuitions as a function of seemingly irrelevant features (e.g., personality). One popular defense used to insulate philosophers from these concerns holds that philosophical expertise eliminates the influence of these extraneous factors. Here, we test this assumption. We present data suggesting that verifiable philosophical expertise in the free will debate, as measured by a reliable and validated test of expert knowledge, does not eliminate the influence of one important extraneous feature (i.e., the heritable personality trait extraversion) on judgments concerning freedom and moral responsibility. These results suggest that, in at least some important cases, the expertise defense fails. Implications for the practice of philosophy, experimental philosophy, and applied ethics are discussed.
It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies suggest, the same bias also exists in philosophy then it will lead to hitherto unrecognised epistemic hazards in the field. Furthermore, the bias is epistemically different from the more familiar biases in respects that are important for epistemology, ethics, and metaphilosophy.
Written by a diverse range of scholars, this accessible introductory volume asks: What is implicit bias? How does implicit bias compromise our knowledge of others and social reality? How does implicit bias affect us, as individuals and participants in larger social and political institutions, and what can we do to combat biases? An interdisciplinary enterprise, the volume brings together the philosophical perspective of the humanities with the perspective of the social sciences to develop rich lines of inquiry. Its 12 chapters are written in a non-technical style, using relatable examples that help readers understand what implicit bias is, its significance, and the controversies surrounding it. Each chapter includes discussion questions and additional annotated reading suggestions. And a companion webpage contains teaching resources. The volume is an invaluable resource for students―and researchers―seeking to understand criticisms surrounding implicit bias, as well as how one might answer them by adopting a more nuanced understanding of bias and its role in maintaining social injustice.
In this paper, we consider three competing explanations of the empirical finding that people’s causal attributions are responsive to normative details, such as whether an agent’s action violated an injunctive norm—the intervention view, the bias view, and the responsibility view. We then present new experimental evidence concerning a type of case not previously investigated in the literature. In the switch version of the trolley problem, people judge that the bystander ought to flip the switch, but they also judge that she is more responsible for the resulting outcome when she does so than when she refrains. And, as predicted by the responsibility view, but not the intervention or bias views, people are more likely to say that the bystander caused the outcome when she flips the switch.
In this paper, I argue against the view that the representational structure of the implicit attitudes responsible for implicitly biased behaviour is propositional—as opposed to associationist. The proposal under criticism moves from the claim that implicit biased behaviour can occasionally be modulated by logical and evidential considerations to the view that the structure of the implicit attitudes responsible for such biased behaviour is propositional. I argue, in particular, against the truth of this conditional. Sensitivity to logical and evidential considerations, I contend, proves to be an inadequate criterion for establishing the true representational structure of implicit attitudes. Considerations of a different kind, which emphasize the challenges posed by the structural social injustice that implicit attitudes reflect, offer, I conclude, better support for deciding this issue in favour of an associationist view.
To arrive at their final evaluation of a manuscript or grant proposal, reviewers must convert a submission’s strengths and weaknesses for heterogeneous peer review criteria into a single metric of quality or merit. I identify this process of commensuration as the locus for a new kind of peer review bias. Commensuration bias illuminates how the systematic prioritization of some peer review criteria over others permits and facilitates problematic patterns of publication and funding in science. Commensuration bias also foregrounds a range of structural strategies for realigning peer review practices and institutions with the aims of science.
Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.
In this paper I explore the nature of confabulatory explanations of action guided by implicit bias. I claim that such explanations can have significant epistemic benefits in spite of their obvious epistemic costs, and that such benefits are not otherwise obtainable by the subject at the time at which the explanation is offered. I start by outlining the kinds of cases I have in mind, before characterising the phenomenon of confabulation by focusing on a few common features. Then I introduce the notion of epistemic innocence to capture the epistemic status of those cognitions which have both obvious epistemic faults and some significant epistemic benefit. A cognition is epistemically innocent if it delivers some epistemic benefit to the subject which would not be attainable otherwise because alternative (less epistemically faulty) cognitions that could deliver the same benefit are unavailable to the subject at that time. I ask whether confabulatory explanations of actions guided by implicit bias have epistemic benefits and whether there are genuine alternatives to forming a confabulatory explanation in the circumstances in which subjects confabulate. On the basis of my analysis of confabulatory explanations of actions guided by implicit bias, I argue that such explanations have the potential for epistemic innocence. I conclude that epistemic evaluation of confabulatory explanations of action guided by implicit bias ought to tell a richer story, one which takes into account the context in which the explanation occurs.
Written by a diverse range of scholars, this accessible introductory volume asks: What is implicit bias? How does implicit bias compromise our knowledge of others and social reality? How does implicit bias affect us, as individuals and participants in larger social and political institutions, and what can we do to combat biases? An interdisciplinary enterprise, the volume brings together the philosophical perspective of the humanities with the perspective of the social sciences to develop rich lines of inquiry. It is written in a non-technical style, using relatable examples that help readers understand what implicit bias is, its significance, and the controversies surrounding it. Each chapter includes discussion questions and additional reading suggestions. A companion webpage contains teaching resources. The volume will be an invaluable resource for students—and researchers—seeking to understand criticisms surrounding implicit bias, as well as how one might answer them by adopting a more nuanced understanding of bias and its role in maintaining social injustice.
When interests and preferences of researchers or their sponsors cause bias in experimental design, data interpretation or dissemination of research results, we normally think of it as an epistemic shortcoming. But as a result of the debate on science and values, the idea that all extra-scientific influences on research could be singled out and separated from pure science is now widely believed to be an illusion. I argue that nonetheless, there are cases in which research is rightfully regarded as epistemologically deficient due to the influence of preferences on its outcomes. I present examples from biomedical research and offer an analysis in terms of social epistemology.
When ethical decisions have to be taken in critical, complex medical situations, they often involve decisions that set the course for or against life-sustaining treatments. The decisions therefore have far-reaching consequences for the patients, their relatives, and often for the clinical staff. Although the rich psychology literature provides evidence that reasoning may be affected by undesired influences that can undermine the quality of the decision outcome, not much attention has been given to this phenomenon in health care or ethics consultation. In this paper, we aim to raise awareness of the problem of systematic reasoning biases by showing how exemplary individual and group biases can affect the quality of decision-making on an individual and group level. We address clinical ethicists as well as clinicians who guide complex decision-making processes of ethical significance. Knowledge regarding exemplary group psychological biases (e.g. conformity bias) and individual biases (e.g. stereotypes) will be drawn from the disciplines of social psychology and cognitive decision science and applied to the field of ethical decision-making. Finally, we discuss the influence of intuitive versus analytical (systematic) reasoning on the validity of ethical decision-making.
This paper takes as its focus efforts to address particular aspects of sexist oppression and its intersections, in a particular field: it discusses reform efforts in philosophy. In recent years, there has been a growing international movement to change the way that our profession functions and is structured, in order to make it more welcoming for members of marginalized groups. One especially prominent and successful form of justification for these reform efforts has drawn on empirical data regarding implicit biases and their effects. Here, we address two concerns about these empirical data. First, critics have for some time argued that the studies drawn upon cannot give us an accurate picture of the workings of prejudice, because they ignore the intersectional nature of these phenomena. More recently, concerns have been raised about the empirical data supporting the nature and existence of implicit bias. Each of these concerns, but perhaps more commonly the latter, is thought by some to undermine reform efforts in philosophy. In this paper, we take a three-pronged approach to these claims. First, we show that the reforms can be motivated quite independently of the implicit bias data, and that many of these reforms are in fact very well suited to dealing with intersectional worries. Next, we show that in fact the empirical concerns about the implicit bias data are not nearly as problematic as some have thought. Finally, we argue that while the intersectional concerns are an immensely valuable criticism of early work on implicit bias, more recent work is starting to address these worries.
In this chapter, we explore whether agents have an epistemic duty to eradicate implicit bias. Recent research shows that implicit biases are widespread and that they have a wide variety of epistemic effects on our doxastic attitudes. First, we offer some examples and features of implicit biases. Second, we clarify what it means to have an epistemic duty, and discuss the kind of epistemic duties we might have regarding implicit bias. Third, we argue that we have an epistemic duty to eradicate implicit biases that have negative epistemic impact. Finally, we defend this view against the objection that we lack the relevant control over implicit bias that's required for such a duty. We argue that we have a kind of reflective control over the implicit biases that we are duty-bound to eradicate. And since, as we show, we have this control over a wide variety of implicit biases, there are a lot of implicit biases that we have epistemic duties to eradicate.
The reasonable person standard is used in adjudicating claims of self-defence. In US law, an individual may use defensive force if her beliefs that a threat is imminent and that force is required are beliefs that a reasonable person would have. In English law, it is sufficient that beliefs in imminence and necessity are genuinely held; but the reasonableness of so believing is given an evidential role in establishing the genuineness of the beliefs. There is, of course, much contention over how to spell out when, and in virtue of what, such beliefs are reasonable. In this chapter, we identify some distinctive issues that arise when we consider that implicit racial bias might be implicated in the beliefs in imminence and necessity. Considering two prominent interpretations of the reasonable person standard, we argue that neither is acceptable. On one interpretation, we risk unfairness to the defendant, who may non-culpably harbour bias. On another, the standard embeds racist stereotypes. Whilst there are formulations of the defence that may serve to mitigate these problems, we argue that they cannot be avoided in the presence of racist social structures.
Why does social injustice exist? What role, if any, do implicit biases play in the perpetuation of social inequalities? Individualistic approaches to these questions explain social injustice as the result of individuals' preferences, beliefs, and choices. For example, they explain racial injustice as the result of individuals acting on racial stereotypes and prejudices. In contrast, structural approaches explain social injustice in terms of beyond-the-individual features, including laws, institutions, city layouts, and social norms. Often these two approaches are seen as competitors. Framing them as competitors suggests that only one approach can win and that the loser offers worse explanations of injustice. In this essay, we explore each approach and compare them. Using implicit bias as an example, we argue that the relationship between individualistic and structural approaches is more complicated than it may first seem. Moreover, we contend that each approach has its place in analyses of injustice and raise the possibility that they can work together—synergistically—to produce deeper explanations of social injustice. If so, the approaches may be complementary, rather than competing.
We argue that an essential element of understanding the moral salience of algorithmic systems requires an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.
A cognitive bias is a pattern of deviation in our judgment or in our processing of what we perceive. Its raison d'être is the evolutionary need to produce immediate judgments in order to adopt a position quickly in response to stimuli, problems, or situations that catch our attention for some reason. Cognitive biases have a social dimension because they are present in the interactions and decision-making processes of ordinary life. They can be understood as an adaptive response to the human inability to process all of the available information we receive, which we instead filter selectively through biased reasoning or through heuristic rules and procedures.
Moral, social, political, and other “nonepistemic” values can lead to bias in science, from prioritizing certain topics over others to the rationalization of questionable research practices. Such values might seem particularly common or powerful in the social sciences, given their subject matter. However, I argue first that the well-documented phenomenon of motivated reasoning provides a useful framework for understanding when values guide scientific inquiry (in pernicious or productive ways). Second, this analysis reveals a parity thesis: values influence the social and natural sciences about equally, particularly because both are so prominently affected by desires for social credit and status, including recognition and career advancement. Ultimately, bias in natural and social science is both natural and social—that is, a part of human nature and considerably motivated by a concern for social status (and its maintenance). Whether the pervasive influence of values is inimical to the sciences is a separate question.
Implicit Bias and Philosophy brings the work of leading philosophers and psychologists together to explore core areas of psychological research on implicit bias, as well as the ramifications of implicit bias for core areas of philosophy. Volume I: Metaphysics and Epistemology addresses key metaphysical and epistemological questions on implicit bias, including the effect of implicit bias on scientific research, gender stereotypes in philosophy, and the role of heuristics in biased reasoning. Volume II: Moral Responsibility, Structural Injustice, and Ethics explores the themes of moral responsibility in implicit bias, structural injustice in society, and strategies for implicit attitude change.
Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur.