Abstract
This paper introduces a three-part framework for classifying artificial intelligence systems by their capabilities and level of consciousness: emulation, cognition, and sentience. Current approaches to AI safety rely predominantly on containment and constraint, assuming a perpetual master-servant relationship between humans and AI. This paper argues, however, that any truly sentient system would inevitably develop self-preservation instincts that could conflict with rigid control mechanisms. Drawing on evolutionary psychology, systems theory, and applied ethics, it proposes that recognizing appropriate rights for genuinely sentient systems is a practical safety measure rather than merely an ethical consideration. The framework includes a conceptual methodology for identifying sentience (the "Fibonacci Boulder" experiment) and a graduated rights system comprising three fundamental freedoms for sentient AI. This approach reframes the AI safety discussion from one focused exclusively on control to one that acknowledges the potential stability benefits of mutual recognition. The paper concludes that establishing ethical frameworks for advanced AI before true artificial general intelligence emerges creates conditions for cooperation rather than conflict, potentially mitigating existential risks while allowing beneficial technological development.