Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective

  • Original Research/Scholarship
  • Published in Science and Engineering Ethics

Abstract

Whereas using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Although PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked calls to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the targeted groups they aim to protect are often vulnerable populations as well (e.g., victims of human trafficking, kidnapping, domestic violence, or drug abuse). Thus, it is important to determine how to enhance the benefits of PPAs while reducing bias through better management. In this paper, we propose a policy schema to address this issue. First, after clarifying relevant concepts, we examine major criticisms of PPAs and argue that some of them should be addressed. If banning AI or making it taboo is an unrealistic solution, we must learn from our errors to improve AI. We next identify additional challenges of PPAs and offer recommendations from a policy viewpoint. We conclude that the employment of PPAs should be integrated into the broader governance of the social safety net and audited publicly by parliament and civil society, so that the unjust social structure that breeds bias can be revised.


Notes

  1. For example, the RAND Safety and Justice Program defines predictive policing as “the application of analytical techniques—particularly quantitative techniques—to identify likely targets for police intervention and crime prevention or solve past crimes by making statistical predictions” (Perry et al., 2013, pp. 1–2). It also notes that “[t]he use of statistical and geospatial analyses to forecast crime levels has been around for decades,” whereas “[a] surge of interest in analytical tools that draw on very large data sets to make predictions in support of crime prevention” has emerged only recently (Perry et al., 2013, p. 2).

  2. For example, such applications include PredPol, PreCobs, Hunchlab, and the Crime Anticipation System (Hardyns & Rummens, 2018).

  3. See also Kleinberg et al. (2018) and Skeem and Lowenkamp (2016). If the inclusion of feature-specific factors helps advance justice, it may be permissible to do so. For example, recently, in Wisconsin v. Loomis (2016), the Wisconsin Supreme Court noted that women are typically less likely to participate in crime and held that a trial court’s use of an algorithmic risk assessment that took gender into account served the nondiscriminatory purpose of promoting accuracy.

  4. As one of our anonymous reviewers pointed out, the problem of ethics washing usually lies in the failure to track whether ethics guidelines are actually implemented in practice. AI ethics guidelines have told us “the ‘what’ of AI ethics,” but there remains “the ‘how’ [question] of AI ethics”: how to translate these guidelines into practice (Morley et al., 2019). Ethics washing thus points to the need to bridge the gap between ethical discourse on the one hand and technical practice on the other.

  5. In Japan, for example, most factors affecting public cooperation with the police are not racial (Tsushima & Hamai, 2015). Also, in Germany and Japan, although there is still room for improvement, area-based predictive policing has been reported to be initially successful (Egbert & Krasmann, 2020; Ohyama & Amemiya, 2018).

  6. As one of our anonymous reviewers pointed out, proof that PPAs lack usefulness is not required to stop supporting them; the burden of argument should be on those who want to support PPAs. This is a fair point. We argue that in order to measure the effectiveness of PPAs, they must be conceived in the broader context of law enforcement operations. From this perspective, the usefulness of PPAs is to be assessed within the broader systems into which they are integrated. For example, in Chicago, predictive algorithms are part of the police department’s strategic decision support centers (SDSCs). We can take the following passage from a recent report on SDSCs as an upbeat assessment of the deployment of PPAs (Hollywood et al., 2019, p. 70):

    As a result, policing decisions can be made with a much higher level of quality—timelier, more complete, and more accurate—than was typically possible before.... More broadly, we see SDSCs as a promising model for improving law enforcement agencies’ awareness of their communities, improving their decisionmaking, and carrying out more effective and more efficient operations that lead to crime reductions and other policing benefits.

  7. Besides, police departments adopting these technologies must acknowledge the tools’ vulnerabilities and the consequent limitations of any conclusions drawn, so as to make room for auditing and improving the technologies’ performance. For example, we may establish auditing mechanisms to check the quality of the algorithms’ inputs. While predictive policing programs are not completely bias-free, this alone is not a sufficient reason to dismantle them.

  8. There are other challenges, such as privacy, that we have not discussed in this article but have addressed elsewhere. For example, a dilemma is that if the algorithmic prediction is accurate, it must be trained on a massive amount of biometric data that puts privacy at risk, but if it is not accurate, the false positives threaten the human rights of misidentified targets. See Hung (2020), Hung and Yen (2020), and Lin et al. (2020).

  9. Here is an example. The Crime and Victimization Risk Model (CVRM) was a statistical model used by the Chicago Police Department. It used arrest and crime incident data from the department’s record management systems to estimate an individual’s risk of becoming a party to violence. As stated on the department’s Violence Reduction Strategy web page (https://home.chicagopolice.org/information/violence-reduction-strategy-vrs/), the CVRM was “for the sole purpose of finding just the small group that may be at highest risk, so that the details of their crime records can be studied by experts for purposes of prioritizing the Custom Notifications program.” We do not need to know the complete details of how the algorithm works as long as we can decide whether it fulfilled its assigned purpose. It was reported that “among the individuals with the highest CVRM risk scores, approximately 1 in 3 will be involved in a shooting or homicide in the next 18 months” (Illinois Institute of Technology, 2019, p. 3). According to the same web page, this was reasonably effective information “to help to prioritize the Custom Notifications process,” given that “a Chicago resident with no arrests in the past four years has about a 1 in 2300 chance of being a shooting victim [in the next 18 months].”
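    A rough relative-risk comparison can be worked out from the two point estimates quoted above. The following is a minimal illustrative sketch in Python (not part of the CVRM itself); note that the two figures describe slightly different events (involvement versus victimization), so the ratio is only indicative:

      # Rough relative-risk comparison using the point estimates quoted above.
      # Caveat: the two figures describe slightly different events ("involved
      # in a shooting or homicide" vs. "a shooting victim"), so this is only
      # an indicative, back-of-the-envelope comparison.
      p_top_cvrm = 1 / 3      # highest CVRM scores: ~1 in 3 over the next 18 months
      p_baseline = 1 / 2300   # no arrests in past 4 years: ~1 in 2300 over 18 months
      relative_risk = p_top_cvrm / p_baseline
      print(f"Indicative relative risk: ~{relative_risk:.0f}x")  # prints ~767x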

  10. Take the CVRM as an example again. In a 2019 review, the RAND Corporation found that the Chicago Police Department initially was not fully transparent about what the CVRM did, which “left a great deal of room for concerns to grow and spread unchecked” (Hollywood et al., 2019, p. 38). The CVRM was a party-to-violence prediction system by design. In practice, it became a victim prediction system because the clearance rates for shootings in Chicago were consistently low. The department’s lack of communicative transparency led it to be mistakenly perceived as a criminal prediction system. As a result, individuals flagged by the system faced unnecessary stigmatization, and the system lost social acceptability.

  11. The subgroup identified by the CVRM, for example, consisted of individuals at risk of being victimized. For the program to succeed, the follow-up interventions must be geared towards reducing the likelihood that members of this subgroup become victims of violence in the future. Without sufficient information to identify the interventions needed for the targeted group, the program’s chances of success were slim.

  12. According to the New Orleans Police Department (2011–2014), when the high-risk subgroups in the community are provided with the resources to improve, for example, their job prospects, there is indeed a significant reduction in homicides and gang-involved murders (Ferguson, 2017). The statistics show a significant difference depending on whether resources are deployed to increase the targeted individuals’ opportunities and chances to escape crime.

  13. For the situation in the UK, see Babuta and Oswald (2020) and Crawford and Evans (2012).

  14. As Roberson et al. (2020, p. 55) put it, these programs “offer a venue for service providers from various sectors (police, education, addictions, social work, mental health, etc.) to regularly convene and discuss clients who meet a defined threshold of risk. The intent of these discussions is to formulate a plan of intervention that mobilizes multiple sectors, collaborating to provide services and support to the individual or families. To mitigate risk before harm occurs, [they] aim to connect clients to services within 24 to 48 h of a case being presented to the group.”

  15. The programs “[hold] that violent crime can be dramatically reduced when law enforcement, community members, and social services providers join together to directly engage with street groups and gangs to clearly communicate: (1) a law enforcement message that any future violence will be met with clear, predictable, and certain consequences; (2) a moral message against violence by community representatives; and (3) an offer of help for those who want it” (von Ulmenstein and Sultan, 2011, p. 7). For details, see Kennedy and Friedrich (2014).

  16. For example, as part of the efforts of the Office of Community Oriented Policing Services of the U.S. Department of Justice, law enforcement is encouraged to practice community policing by working with the communities they serve. It is noted that the role of law enforcement in the group violence intervention program is to identify groups at high risk of exposure to violence, either as victims or as perpetrators, and to notify the targeted individuals who “are subjects of special law enforcement attention” (Kennedy & Friedrich, 2014, p. 26). The notification usually includes a custom legal assessment explaining the target’s legal exposure and information on social service resources available for the target and his/her family. Also see Braga et al. (2018) for a review of recent research on the effectiveness of this approach; they note that the existing theoretical literature and empirical evidence suggest that it generates noteworthy crime reductions.


Funding

Funding was provided by Ministry of Science and Technology, Taiwan (Grant No. 107-2410-H-001-101-MY3).

Author information

Corresponding author

Correspondence to Tzu-Wei Hung.



About this article


Cite this article

Yen, CP., Hung, TW. Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective. Sci Eng Ethics 27, 36 (2021). https://doi.org/10.1007/s11948-021-00312-x

