Braid Reports (2024)
Abstract
The use of advanced AI and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which in turn has underappreciated legal consequences. Automation bias is an attitude in which the operator of an autonomous system defers to its outputs to the point of overlooking or ignoring evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either fetter their discretion to in-house algorithms or improperly delegate their discretion to third-party software developers—something automation bias may facilitate. A synthesis of previous research suggests a straightforward way to mitigate the risks of automation bias and its potential legal ramifications: those responsible for procurement decisions should adhere to a simple checklist that ensures the pitfalls of automation are avoided as far as possible.