
Melting contestation: insurance fairness and machine learning

  • Original Paper
  • Published: 2023

Ethics and Information Technology

Technology is neither good nor bad; nor is it neutral (Kranzberg, 1986).

Abstract

With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of the discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that contestation was organized around three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against; these are thus the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that, despite utopian claims, social stereotypes continue to plague data and thus threaten to unwittingly reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which could only be made visible through public debate and contestation. These are less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks, rather than encourages, collective mobilization.
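
To make concrete the kind of mathematical indicator of non-bias mentioned above, the following minimal Python sketch computes demographic parity, one of the standard algorithmic-fairness metrics. The data and variable names are hypothetical illustrations, not taken from the paper; note also that the computation presupposes an explicitly labelled protected group, which is precisely the assumption the paper interrogates.

    # Demographic parity: favorable-decision rates should not differ across groups.
    # All data here are synthetic; `group` encodes a hypothetical protected attribute.
    import numpy as np

    def demographic_parity_gap(decisions, group):
        """Absolute difference in favorable-decision rates between two groups."""
        rate_protected = decisions[group == 1].mean()
        rate_other = decisions[group == 0].mean()
        return abs(rate_protected - rate_other)

    # Toy example: 1,000 applicants whose decisions correlate with group membership.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)
    decisions = (rng.random(1000) < np.where(group == 1, 0.55, 0.70)).astype(int)
    print(f"Demographic parity gap: {demographic_parity_gap(decisions, group):.3f}")

A gap near zero indicates that favorable outcomes (for instance, being offered a standard rather than a surcharged premium) occur at similar rates in the two groups. Many competing indicators exist (equalized odds, calibration within groups), and they cannot in general all be satisfied simultaneously.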


Notes

  1. This distinction is not always easy to make: see Charpentier et al. (2022) for a discussion in the case of natural disasters. Note that we limit the discussion here to causal variables; Loi and Christen (2021) further discuss the moral issues associated with choice when the variable is used as a surrogate.

  2. By contrast, the 2018 European General Data Protection Regulation (GDPR) includes ethnicity among protected data, together with religious beliefs, sexual orientation, trade union involvement, medical status, criminal convictions and offences, biometric data and genetic information.

  3. Heen (2009) argues that it is possible that in some southern US states, old life insurance policies from the Jim Crow period (i.e., using race as an underwriting parameter) were still in force in 2009.

  4. In a more general context, Seele et al. (2021) define price personalization as charging customers differently according to their willingness to pay. While such issues also exist in insurance (Lukacs et al., 2016), this paper is narrowly focused on the personalization of the risk premium, that is to say, on the capacity to individually predict insurance costs (see the sketch following these notes).

  5. Titled “The Great AI Debate: Interpretability is necessary for machine learning”, pitting Rich Caruana and Patrice Simard (for) against Kilian Weinberger and Yann LeCun (against), https://youtu.be/93Xv8vJ2acI.

  6. As Giovanola and Tiribelli (2022, p. 2) aptly note, the literature on algorithmic fairness limits the concept to the absence of bias. In their view, “it is questionable whether focusing exclusively on biases can encompass the complexity of the concept of fairness” (Giovanola & Tiribelli, 2022, p. 4). Similarly, focusing only on the absence of bias in insurance could lead to unfair situations where there is no bias but vulnerable people cannot afford insurance.
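
As an illustration of what note 4 calls the personalization of the risk premium, the following minimal Python sketch fits a Poisson regression to predict an individual expected claim cost from rating variables. Everything here is synthetic and hypothetical; it sketches the general technique, not the authors' model or any insurer's actual pricing pipeline.

    # Risk-premium personalization: predict each policyholder's expected claim cost.
    # Data and rating variables are entirely synthetic/hypothetical.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(1)
    n = 5000
    age = rng.uniform(18, 80, n)      # hypothetical rating variable: driver age
    urban = rng.integers(0, 2, n)     # hypothetical rating variable: urban residence
    X = np.column_stack([age, urban])

    # Synthetic "true" expected cost: decreasing with age, higher in cities.
    true_mean = np.exp(6.0 - 0.02 * age + 0.4 * urban)
    claims = rng.poisson(true_mean)   # observed annual claim cost, toy units

    model = PoissonRegressor(alpha=1e-4, max_iter=300).fit(X, claims)

    # Individualized premium for one hypothetical applicant: 25 years old, urban.
    applicant = np.array([[25.0, 1]])
    print(f"Predicted risk premium: {model.predict(applicant)[0]:.1f}")

Real actuarial pipelines typically separate claim frequency from severity and include exposure offsets; the point of the sketch is only that the finer the feature set, the closer the prediction moves toward an individual rather than a group rate, which is the shift the paper analyzes.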


Funding

This study was supported by the Chaire PARI – project ‘Evaluation des risques et technologies du big data: Outils et conséquences’, Fondation Institut Europlace de Finance.

Author information

Corresponding author

Correspondence to Laurence Barry.

Ethics declarations

Competing interests

The authors have no competing interests to declare that are relevant to the content of this article.

Data availability

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Barry, L., Charpentier, A. Melting contestation: insurance fairness and machine learning. Ethics Inf Technol 25, 49 (2023). https://doi.org/10.1007/s10676-023-09720-y

