Abstract
Despite the proliferation of AI ethics frameworks (AIEFs) published over the last decade, it remains unclear which of them have actually been adopted in industry. Moreover, the sheer volume of AIEFs, absent any clear demonstration of their effectiveness, makes it difficult for businesses to select which framework to adopt. As a first step toward addressing this problem, we applied four existing frameworks to assess the AI ethics concerns of a real-world AI system. We compared the experience of applying the AIEFs from the perspective of (a) a third-party auditor conducting an AI ethics risk assessment for the company, and (b) the company receiving the audit outcomes. Our results suggest that the feel-good factor of conducting an assessment is common across the AIEFs, which took anywhere between 1.5 and 20 hours to complete. However, each framework offers distinct benefits (e.g., issue discovery vs. issue monitoring), and the frameworks are likely best used in combination at different stages of the AI development process. We therefore call on the AI ethics community to better specify the suitability and expected benefits of existing frameworks, so as to enable wider adoption of AI ethics practice in industry.