
Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions

Chapter in The 2020 Yearbook of the Digital Ethics Lab, part of the book series Digital Ethics Lab Yearbook (DELY).


Abstract

Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this chapter AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This chapter offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems and a possible solution space.

Previously published: King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2020). Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and Engineering Ethics, 26(1), 89-120.


Notes

  1. “Essential” (instead of “necessary”) is used to indicate that, while there is a logical possibility that the crime could occur without the support of AI, this possibility is negligible. That is, the crime would probably not have occurred but for the use of AI. The distinction can be clarified with an example: one might consider transport essential to travel between Paris and Rome, but one could always walk, so transport is not, strictly speaking, necessary. Furthermore, note that AI-crimes as defined in this chapter involve AI as a contributory factor, but not as an investigative, enforcing, or mitigating factor.

  2. The choice of English criminal law is only due to the need to ground the analysis in a concrete and practical framework that is sufficiently generalisable. The analysis and conclusions of the chapter are easily exportable to other legal systems.

  3. Because much of AI is fuelled by data, some of its challenges are rooted in data governance (Cath et al. 2017), particularly issues of consent, discrimination, fairness, ownership, privacy, surveillance, and trust (Floridi and Taddeo 2016).

  4. The following search phrase was used for all search engines aside from SSRN, which faced technical difficulties: (“Artificial Intelligence” OR “Machine Learning” OR Robot* OR AI) AND (Crime OR Criminality OR lawbreaking OR illegal OR *lawful). The phrases used for SSRN were: Artificial Intelligence Crime, and Artificial Intelligence Criminal. The numbers of papers returned were: Google=50* (first 50 reviewed), Philpapers=27, Scopus=43, SSRN=26, and Web of Science=10. An illustrative sketch of how such a wildcard Boolean query can be applied is given after these notes.

  5. However, AI’s role was not required to be sufficient for the crime, because other technical and non-technical elements are normally likely to be needed. For example, if robotics are instrumental (e.g., involving autonomous vehicles) or causal in a crime, then any underlying AI component must be essential for that crime to be included in the analysis.

  6. An absence of a concern in the literature and in the subsequent analysis does not imply that the concern should be absent from AIC studies.

  7. Claims to the contrary can be dismissed as mere hype, the result of specific, ad hoc constraints, or just tricks; see, for example, the chatterbot named “Eugene Goostman” (https://en.wikipedia.org/wiki/Eugene_Goostman).

  8. Common knowledge is a property found in epistemic logic concerning a proposition P and a set of agents: P is common knowledge if and only if each agent knows P, each agent knows that the other agents know P, and so on. Agents may acquire common knowledge through broadcasts, which provide agents with a rational basis to act in coordination (e.g., collectively turning up to a meeting following the broadcast of the meeting’s time and place). A formal statement of this iterated definition is sketched after these notes.

  9. The adjective “moral” is taken from the cited work, which considers unethical behaviour to constitute crossing system boundaries, whereas here the concern addresses criminal acts or omissions, which may have a negative, neutral, or positive ethical evaluation. “Moral” is used in order to avoid misrepresenting the cited work, and not to imply that the criminal law coincides with ethics.

  10. To this end, a cursory search for “Artificial Intelligence” on prominent darkweb markets returned a negative result. Specifically, the search checked: “Dream Market”, “Silk Road 3.1”, and “Wallstreet Market”. The negative result is not indicative of AIC-as-a-service’s absence on the darkweb, which may exist under a different guise or on more specialised markets. For example, some services offer to extract personal information from a user’s computer, and even if such services are genuine, the underlying technology (e.g., AI-fuelled pattern recognition) remains unknown.
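
A minimal illustration of the wildcard Boolean query described in note 4, applied here to a small hypothetical corpus of record titles. The corpus, the term lists as Python variables, and the matching rules are illustrative assumptions only; the actual searches were run directly on the listed bibliographic search engines.

```python
import re

# Hypothetical corpus of record titles; the actual study queried Google,
# PhilPapers, Scopus, SSRN, and Web of Science rather than a local list.
corpus = [
    "Machine learning and market manipulation: an illegal trading scenario",
    "Robotics in elder care",
    "AI and unlawful content generation on social media",
]

# Term groups mirroring the search phrase in note 4; '*' marks a wildcard.
ai_terms = ["Artificial Intelligence", "Machine Learning", "Robot*", "AI"]
crime_terms = ["Crime", "Criminality", "lawbreaking", "illegal", "*lawful"]

def to_regex(term):
    """Translate a wildcarded term into a case-insensitive, word-bounded regex."""
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.compile(r"\b" + pattern + r"\b", re.IGNORECASE)

def matches_any(text, terms):
    """True if at least one of the (possibly wildcarded) terms occurs in text."""
    return any(to_regex(t).search(text) for t in terms)

# A record is retained only if it matches both groups, mirroring the AND
# between the two bracketed OR-groups of the search phrase.
hits = [doc for doc in corpus
        if matches_any(doc, ai_terms) and matches_any(doc, crime_terms)]
print(len(hits), "candidate records:", hits)
```

Run on the hypothetical corpus, the sketch retains the first and third records, since each contains a term from both the AI group and the crime group.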
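
Note 8’s informal description corresponds to the standard iterated (equivalently, fixed-point) characterisation of common knowledge in epistemic logic. The following is a minimal formal sketch using standard notation (K_i, E_G, C_G), which is not drawn from the chapter itself.

```latex
% Standard epistemic-logic notation (not taken from the chapter itself):
%   K_i P : agent i knows P
%   E_G P : everyone in the group G knows P
%   C_G P : P is common knowledge in G
\[
  E_G P \;\equiv\; \bigwedge_{i \in G} K_i P
\]
\[
  C_G P \;\equiv\; E_G P \,\wedge\, E_G E_G P \,\wedge\, E_G E_G E_G P \,\wedge\, \cdots
\]
% Equivalently, C_G P can be characterised as the greatest fixed point of
%   C_G P \equiv E_G (P \wedge C_G P).
```

The broadcast example in note 8 fits this definition: a public announcement makes P known to every agent and, being public, also makes that very fact known to every agent, and so on through the iterated levels.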


Author information


Correspondence to Luciano Floridi.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

King, T.C., Aggarwal, N., Taddeo, M., Floridi, L. (2021). Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. In: Cowls, J., Morley, J. (eds) The 2020 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-030-80083-3_14
