“…I have been rejected multiple times for my credit card applications because their AI system is biased.” “…An AI-assisted tool helped identify a tumor early in a patient.”

These headlines are no longer unfamiliar. Artificial intelligence has become an entity immersed in our lives. As we delve into the complexities of AI and its relationship with society, most of us approach the subject with open minds and bold spirits, embracing the potential for change and growth that may redefine how we understand and interact with artificial intelligence (AI) and how it shapes our lives in the years to come.

A few months ago, I remember discussing with my colleagues the number of new AI systems being developed each day: how subtly they are integrated into our ecosystem, predicting our next keystroke, our next footstep, or the next item in our shopping cart, and imagining how this could lead to a world similar to those we have seen in the movies. Little did we know that the reality of AI would be far more complex than we could imagine, offering both benefits and challenges to society. The enigmatic and inscrutable nature of AI often mesmerizes us and obscures our understanding of its power.

To illuminate the path, we propose guidelines and frameworks that can improve the understandability and explainability of AI models and usher in fairer development, with the hope of mitigating errors and biases that may threaten society with cascading adverse consequences. Furthermore, we invite AI developers and practitioners to adhere to ethical guidelines, while presenting them with robust evaluation frameworks, in an effort to curb biases and uphold ethical principles (Floridi et al. 2018). As AI morphs into autonomous entities, it continues to struggle with enduring challenges such as representation, transparency, and fairness. Establishing effective regulations and control mechanisms remains a difficult goal, and a coherent vision for implementing systemic safeguards has yet to emerge (Mittelstadt 2019). However, especially after the release of highly accessible and potent AI tools like ChatGPT (a shortcut for completing a broad variety of human tasks), we have felt the urge to build immediate safeguards to protect ourselves, our families, our businesses, and society.

Through a broader lens, the absence of universally accepted regulations, control mechanisms, and mandated ethical rules so far grants developers and practitioners the autonomy to disregard sociocultural values, norms, and institutions, driven by the allure of maximizing returns on AI investments or of serving personal interests. Consequently, “bad actors” may unleash or enable AI tools that indulge in immoral behavior, yielding unintended detrimental consequences for society, from perpetuating stigma to employing unsolicited nudges that manipulate human behavior and amplify existing biases, as exemplified by infamous cases of the past such as Cambridge Analytica and its impact on public opinion, or Amazon Rekognition and racial bias. Given that AI is designed to perform tasks traditionally associated with human cognition and intelligence at an amplified scale, it is imperative that we guide these digital entities toward an ideal trajectory in ways that go beyond corporate regulations or simply asking developers and practitioners to act ethically.

Nudging is a concept from behavioral economics that involves influencing people's decision-making by subtly changing the context or presentation of choices without restricting their options (Thaler and Sunstein 2009). A nudge might involve providing feedback, reminders, or carefully designed defaults to encourage specific behaviors or decisions. In my journey exploring the intersection of behavioral economics and technology, I have been fascinated by the power of nudging over human choices and its implementation in digital ecosystems. AI has been used to design effective nudges for human decisions; could we reverse the order and design a series of nudges to influence AI behavior? By establishing principles that guide AI systems toward benefiting society, can we ease the enormous effort needed to control AI decisions externally, which often seems unattainable? In the computational realm, reinforcement learning, with its innate “behavioral” elements, offers a tantalizing opportunity for exploration. Drawing inspiration from the annals of psychology, machine learning methodologies have already successfully adopted principles of shaping, intrinsic motivation, and imitation of human behavior, reminiscent of B.F. Skinner's experiments. Further, in convergence with humans' intrinsic and extrinsic motivation, “nudge agents” might be harnessed to fortify the ethical underpinnings of AI behavior, ensuring that the pursuit of societal good remains a steadfast objective. By ingeniously merging the persuasive subtlety of nudging with the potent capabilities of AI “nudge agents” in practical application, can we craft an effective systemic approach that steers AI ecosystems toward a future where equity, justice, and societal benefit take precedence, ultimately fostering a harmonious and thriving coexistence between humanity and AI?
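To make the reward-shaping analogy concrete, here is a minimal, purely illustrative sketch in Python (the action labels, payoff values, and bonus size are hypothetical assumptions, not drawn from any real system): a two-armed bandit learner whose internal reward for a “fair” action receives a small shaping bonus, a computational analogue of a nudge, while the environment's true payoffs and the set of available actions remain unchanged.

```python
import random

def train(nudge_bonus=0.0, episodes=500, eps=0.1, seed=0):
    """Epsilon-greedy two-armed bandit with an optional shaping 'nudge'.

    Action 0: a 'shortcut' policy with a slightly higher raw payoff (1.0).
    Action 1: a 'fair' policy with a slightly lower raw payoff (0.9).
    nudge_bonus is added only to the agent's internal reward for action 1;
    the environment's payoffs are untouched and both actions remain
    available, so the nudge does not restrict the agent's options.
    """
    rng = random.Random(seed)
    payoff = [1.0, 0.9]   # true environment payoffs (hypothetical values)
    q = [0.0, 0.0]        # estimated action values
    n = [0, 0]            # visit counts for the incremental mean
    for _ in range(episodes):
        # Explore with probability eps, otherwise act greedily.
        a = rng.randrange(2) if rng.random() < eps else max(range(2), key=lambda i: q[i])
        r = payoff[a] + (nudge_bonus if a == 1 else 0.0)  # shaped reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental sample-average update
    return q

q_plain = train()                   # no nudge: the shortcut looks best
q_nudged = train(nudge_bonus=0.2)   # nudged: the fair action looks best
```

With the bonus removed, the agent settles on the higher-payoff shortcut; with the bonus in place, the fair action becomes the greedy choice, illustrating how a shaping nudge can alter learned behavior without removing any option from the choice set.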
With proper oversight, nudge theory, which advocates crafting choice architecture that predictably alters behavior without restricting options or substantially changing economic incentives, could be adapted in spirit to AI, with the noble intention of promoting rectitude and ensuring ethics and justice among these systems within their environments or ecosystems.

The notion of treating AI as an autonomous entity in the current behavioral economics landscape is indeed unconventional (and it extends to the question, “Should we judge AI by human laws and moral standards?”). Given this context, we might hesitate to appraise, judge, or nudge AI using the behavioral measures currently applied to humans. However, considering that AI is, in essence, a reflection of human intellect and behavior, the possibility of applying nudges to these digital entities cannot be dismissed outright. Nudging AI toward a better, more ethical future for all may remain wishful thinking, a metaphor, or, better yet, a theory. As an enthusiast of these advances in AI and technology, I dream of a future where we can navigate the complex world of AI with responsibility, understanding, and empathy, charting a course that safeguards the welfare of humanity while embracing AI's transformative potential.