Knowledge representation and acquisition for ethical AI: challenges and opportunities

Ethics and Information Technology 25 (1):1-12 (2023)

Abstract

Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the potential for learned algorithms to become biased against certain groups. More generally, insofar as the decisions of ML models impact society, both virtually (e.g., denying a loan) and physically (e.g., driving into a pedestrian), notions of accountability, blame and responsibility need to be carefully considered. In this article, we advocate a two-pronged approach to ethical decision-making, enabled by rich models of autonomous agency: on the one hand, we need to draw on philosophical notions such as beliefs, causes, effects and intentions, and look to formalise them, as attempted by the knowledge representation community; on the other, from a computational perspective, such theories also need to address the problems of tractable reasoning and (probabilistic) knowledge acquisition. As a concrete instance of this tradeoff, we report on a few preliminary results that apply (propositional) tractable probabilistic models to problems in fair ML and automated reasoning about moral principles. Such models are compilation targets for certain types of knowledge representation languages, can reason effectively in service of such computational tasks, and can also be learned from data. Concretely, current evidence suggests that they are attractive structures for jointly addressing three fundamental challenges: reasoning about possible worlds, tractable computation, and knowledge acquisition. Thus, they seem like a good starting point for modelling reasoning robots as part of a larger ecosystem in which accountability and responsibility are understood more broadly.
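To make the possible-worlds flavour of the proposal concrete, here is a minimal sketch (not from the paper itself) of the kind of query a tractable probabilistic model answers. All names are hypothetical: three propositional variables stand for a sensitive attribute, qualification, and loan denial; the weights and the deliberately biased rule in the knowledge base are invented for illustration; and brute-force enumeration of worlds stands in for a compiled circuit, which would answer the same query without enumeration.

    from itertools import product

    # Possible worlds over three propositional variables (hypothetical names):
    # S = sensitive attribute, Q = qualified applicant, D = loan denied.
    VARS = ["S", "Q", "D"]

    # Invented per-variable weights P(v = True); in practice these parameters
    # would be learned from data.
    WEIGHTS = {"S": 0.3, "Q": 0.6, "D": 0.4}

    def kb(world):
        # Hypothetical knowledge base: a denial is only consistent with the
        # applicant being unqualified or belonging to S -- a deliberately
        # biased rule, so the fairness query below is non-trivial.
        return (not world["D"]) or (not world["Q"]) or world["S"]

    def weight(world):
        # Weight of one possible world: product of per-variable weights.
        w = 1.0
        for v in VARS:
            w *= WEIGHTS[v] if world[v] else 1.0 - WEIGHTS[v]
        return w

    def prob(query, evidence=lambda w: True):
        # P(query | evidence, KB) by weighted model counting: sum the weights
        # of the KB-consistent worlds satisfying the evidence, and of the
        # subset that also satisfies the query.
        num = den = 0.0
        for values in product([False, True], repeat=len(VARS)):
            world = dict(zip(VARS, values))
            if kb(world) and evidence(world):
                den += weight(world)
                if query(world):
                    num += weight(world)
        return num / den if den else 0.0

    # A demographic-parity-style check among qualified applicants.
    print(prob(lambda w: w["D"], evidence=lambda w: w["S"] and w["Q"]))      # about 0.4
    print(prob(lambda w: w["D"], evidence=lambda w: not w["S"] and w["Q"]))  # 0.0

The gap between the two conditional probabilities exposes the bias encoded in the rule. Enumeration is exponential in the number of variables, which is exactly why the article points to tractable compilation targets (e.g., circuits such as probabilistic sentential decision diagrams) that support the same weighted-model-counting queries in time linear in circuit size, while remaining learnable from data.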

Links

PhilArchive





Similar books and articles

Instructions for Authors. [author unknown] - 2001 - Ethics and Information Technology 3 (2):151-154.
Instructions for Authors. [author unknown] - 2003 - Ethics and Information Technology 5 (4):239-242.
Instructions for Authors. [author unknown] - 1999 - Ethics and Information Technology 1 (1):87-90.
Instructions for Authors. [author unknown] - 2002 - Ethics and Information Technology 4 (1):93-96.
Instructions for Authors. [author unknown] - 2000 - Ethics and Information Technology 2 (4):257-260.
Instructions for Authors. [author unknown] - 2001 - Ethics and Information Technology 3 (4):303-306.
Editorial. [author unknown] - 2005 - Ethics and Information Technology 7 (2):49-49.
Governing (ir)responsibilities for future military AI systems. Liselotte Polderman - 2023 - Ethics and Information Technology 25 (1):1-4.
Just consequentialism and computing. James H. Moor - 1999 - Ethics and Information Technology 1 (1):61-65.

Analytics

Added to PP
2023-03-14

Downloads
24 (#654,246)

6 months
16 (#154,895)
