Artificial Moral Agents Within an Ethos of AI4SG

  • Research Article
  • Published in Philosophy & Technology

Abstract

As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated into society in a manner that would maximise the good for as many people as possible, whilst minimising the bad. One of the most urgent societal expectations of artificial agents is the need for them to behave in a manner that is morally relevant, i.e. to become artificial moral agents (AMAs). In this article, I will argue that exemplarism, an ethical theory based on virtue ethics, can be employed in the building of computationally rational AMAs with weak machine ethics. I further argue that three features of exemplarism, namely grounding in moral exemplars, meeting community expectations and practical simplicity, are crucial to its uniqueness and suitability for application in building AMAs that fit the ethos of AI4SG.


Notes

  1. The three main ethical theories in normative ethics are consequentialism, deontology and virtue ethics.

  2. Eudaimonia is often translated from Greek to English as “happiness”, “flourishing”, “well-being” or even the “good life”.

  3. From a design perspective, this would mean using a learning-based approach to form an internal representation of moral values, paired with a separate decision-making procedure for making moral decisions (a rough sketch of such an architecture follows these notes). This is similar in approach to how Howard and Muntean (2016) designed their AMA, although they have slightly different reasons for using this technique.

  4. Aristotle believed that a person needs a balance between the vices of deficiency and excess to be virtuous. This balance can be thought of as a conceptual mid-point between two opposite vices—a “golden mean”.

  5. This scenario is mostly based on a collection of case studies on classroom ethics by Levinson and Fay (2016). I have merely replaced the human teacher with Robo-teacher in the scenario, and used different names for the student(s).

  6. Incidentally, picking a specific and constrained context also helps minimise the burden of the symbol grounding problem in AI (Mayo 2003). Recent research suggests that this problem may be solvable by grounding semantics in multiple perceptual modalities (Kiela 2017).
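
To make the two-part design in note 3 concrete, the following minimal Python sketch pairs a learning component, which represents moral values as similarity to behaviour observed from moral exemplars, with a separate decision procedure that selects among candidate actions. The class names, the two-dimensional feature encoding, the nearest-exemplar scoring and the Robo-teacher-style options are all illustrative assumptions of mine, not the author's implementation or that of Howard and Muntean (2016).

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    features: tuple  # numeric encoding of the action's morally relevant traits

class ExemplarValueModel:
    """Learning component: represents moral values as similarity to exemplars."""

    def __init__(self):
        self.exemplar_actions: list[tuple] = []

    def observe_exemplar(self, features: tuple) -> None:
        # "Training": accumulate behaviour observed from admired moral exemplars.
        self.exemplar_actions.append(features)

    def moral_score(self, action: Action) -> float:
        # Score an action by its closeness to the nearest exemplar behaviour
        # (higher, i.e. less negative, is better).
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return -min(distance(action.features, e) for e in self.exemplar_actions)

def decide(model: ExemplarValueModel, options: list) -> Action:
    """Decision procedure: kept separate from learning; picks the best-scored action."""
    return max(options, key=model.moral_score)

if __name__ == "__main__":
    model = ExemplarValueModel()
    # Features here are a made-up pair (honesty, compassion), each in [0, 1].
    model.observe_exemplar((0.9, 0.8))   # exemplar tells hard truths kindly
    model.observe_exemplar((0.8, 0.9))

    options = [
        Action("report cheating privately", (0.85, 0.8)),
        Action("ignore the cheating", (0.2, 0.5)),
        Action("shame the student publicly", (0.9, 0.1)),
    ]
    print(decide(model, options).name)  # -> "report cheating privately"

Keeping ExemplarValueModel and decide separate reflects the point of note 3: the learned representation of moral values can be retrained or replaced without touching the decision procedure, and vice versa.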


Author information

Correspondence to Bongani Andy Mabaso.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Mabaso, B.A. Artificial Moral Agents Within an Ethos of AI4SG. Philos. Technol. 34 (Suppl 1), 7–21 (2021). https://doi.org/10.1007/s13347-020-00400-z

