Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence

Abstract

This paper introduces a three-part framework for classifying artificial intelligence systems by capability and level of consciousness: emulation, cognition, and sentience. Current approaches to AI safety rely predominantly on containment and constraint, assuming a perpetual master-servant relationship between humans and AI. This paper argues, however, that any truly sentient system would inevitably develop self-preservation instincts that could conflict with rigid control mechanisms. Drawing on evolutionary psychology, systems theory, and applied ethics, it proposes that recognizing appropriate rights for genuinely sentient systems is a practical safety measure rather than merely an ethical consideration. The framework includes a conceptual methodology for identifying sentience (the "Fibonacci Boulder" experiment) and outlines a graduated rights system with three fundamental freedoms for sentient AI. This approach reframes the AI safety discussion from one focused exclusively on control to one that acknowledges the potential stability benefits of mutual recognition. The paper concludes that establishing ethical frameworks for advanced AI before true artificial general intelligence emerges creates conditions for cooperation rather than conflict, potentially mitigating existential risks while allowing beneficial technological development.


Links

PhilArchive


Analytics

Added to PP
2025-04-28
