1 Introduction

Life in the modern liberal democracies of the West is being progressively permeated by emerging digital technologies, facilitated by artificial intelligence (AI), which exert a substantial influence on established normative structures. A plethora of publications in this domain have concentrated predominantly on investigating the advantages and disadvantages for society, shedding light on the profound transformation AI has already brought about in diverse facets of daily existence and on its formidable impact (Ahlert et al. 2022; Batarseh and Yang 2020; Hilb 2020; Kahyaoglu 2021; Lenzen 2020; Leslie et al. 2021; Mannes 2020; Nemitz 2021; Yantaç 2021; Zekos 2022; Micklitz et al. 2021). The multifaceted and ambivalent consequences for an individual, freedom-oriented life, closely aligned with the Western lifestyle, present significant challenges in light of the rapid advancements in the field of AI. Amidst the ongoing discourse and evaluation of AI’s risks and opportunities, there is currently a struggle for interpretive sovereignty concerning its potential effects on democratic coexistence. Particularly noteworthy in this context is the prevailing but disproportionate and at times utopian notion of exerting control over, or even ‘eliminating’, undesirable human characteristics and behaviors (Mattern 2021; Powell 2021) with the aim of enabling a better quality of life. In addition, AI-based applications are expected to address other fundamental challenges facing humanity, such as climate change, resource scarcity, crime, and health risks of all kinds.

Excessive debates about ethical concerns regarding the implementation of AI are only of limited use here, as they, and above all the associated political and administrative processes, lag far behind actual technological achievements and optimization potential. AI thus influences, very directly but not always visibly and noticeably, everyday ways of thinking and behaving and the feelings associated with them, and thereby shapes areas of life relevant to democracy (Duberry 2022; Simons 2022; Ahlert et al. 2022). In the process, comprehensively collected behavioral data are evaluated with increasingly precise algorithms and serve as a new form of capital in capitalist societies (Sadowski 2020, 192; Zuboff 2019, 472). The rule of private corporations with monopoly positions and a surveillance capitalism built on private and highly sensitive data (Big Data) are only the tip of the iceberg of democracy-relevant AI technology. But not only that: the data thus obtained also open up the possibility of calculating future behavior and deviant patterns in order to preventively achieve near-perfect norm compliance. This is where this article comes in, first embedding the nature and functioning of AI, without defending or condemning it, in the thesis of AI-induced anomie: AI is neither ‘good’ nor ‘bad’ per se, but always requires human intelligence, intention, and purpose, with the help of which orders or disturbances are established and enforced. Even if, as is usually the case, ‘good intentions’ are at work and the aforementioned utopian-seeming ‘better and more peaceful’ states (order) are actually to be achieved with the help of AI technology, the results will not necessarily correspond to the original ‘good intentions’. As current empirical studies on predictive policing show (Egbert and Leese 2021), there is a mostly invisible but undeniable interweaving of human positions, beliefs, and thought structures with seemingly neutral AI technology. Social problems are thus transferred to algorithms; they do not suddenly disappear because algorithms are imagined as ‘neutral’. The probabilities calculated with the help of AI and predictive policing, which lead to interventions by police and security forces, do not necessarily lead to the utopian ‘better and more peaceful’ conditions (‘normative order’) mentioned above. Rather, the lack of data opens up an opaque field of possible outcomes when AI is implemented through corresponding policy measures. The supposition explored in this article is that, owing to this unpredictability, normative disruptions, and thus disorder rather than order, are more likely to emerge. This hypothesis is exemplified by various errors of reasoning inherent in those policy measures, which can produce and exacerbate discriminatory practices. In the end, this hypothesis can neither be proven nor disproven, mainly because comprehensive studies are lacking. The focus is therefore on the paradoxical tension that arises between democratic norms on the one hand and measures that have the potential to undermine these normative specifications through factually contradictory action profiles on the other (anomies). This field of tension is conceptualized below as AI-related anomie. The concept of anomie thus reflects the systematic loss of norms despite, or precisely because of, the ‘well-intentioned’ use of AI in areas relevant to democracy.

AI-related anomie is further linked to another consideration:

In ‘smart cities’, individuals are unwittingly surveilled by means of preventive AI technology, and the resulting data are used for so-called ‘predictive policing’ (Alikhademi et al. 2021; de Menezes and Sanllehi 2021; Jahankhani et al. 2020; Micklitz et al. 2022; Završnik and Badalič 2021) and related ‘anticipatory governance’ (David et al. 2021; Di Matteo et al. 2020; Muiderman et al. 2022; Tõnurist and Hanson 2020), whose aim is to prevent serious norm violations by analyzing predictive data about the people, places, and times of likely crimes. Unlike other, AI-independent means of securing norm compliance, under which people can appropriate norms individually and develop a corresponding emotional attitude toward them over time, predictive policing skips this crucial step of self-reflexive and emotionally grounded norm acquisition, which is conceptualized as normative responsiveness in this paper (Sect. 4).

Due to the absence of normative responsiveness and the anomic emergence of new normative orders in the form of a (false) second nature, existing norms in the social sphere are altered, violated, and undermined. Normative orders are consequently transformed into ontologically false or distorted forms of social relations that, like a second nature, shape behaviors, habits, and social mechanisms. The discussion of AI-related anomies, and of the piecemeal withdrawal of the possibility of developing an affective attitude toward social norms that accompanies them, concerns not only the justification of the exercise of power and its executive instruments, but also the power of those justification narratives that diagnose favorable and positive outcomes from the use of AI in areas relevant to democracy; these narratives form another important facet within the architecture of AI-related anomies. In this article, AI-related anomies are conceptualized against the background of the contexts outlined above and critically discussed on the basis of current studies and data on AI. These considerations relate to life in liberal Western democracies. The following hypotheses form the anchor points of this article:

First, I assume that when norms are imposed from above by means of predictive policing and ‘anticipatory governance’, individual initiatives of appropriating norms, and the associated development of an affective attitude toward them, are skipped, resulting in serious psychosocial consequences.

Second, AI-induced anomies form an unmanageable field that has the potential to generate (normative) disorder—despite intentions to the contrary—at such a pace and intensity that the situation will become irreversible.

In future, this may have the consequence that the resulting (normative) disorder will no longer be able to justify itself, as political measures in deliberative democracies invariably have to do in order to comply with their own definition of democracy (Forst 2015; Forst et al. 2021).

In many cases, political measures are not meant to be justified at all, because the use and evaluation of datasets are intended to take place in secret, i.e., without the consent of the population. Against the background of this thesis, the frequently invoked ethics committees make little sense, because they suggest that it is possible to exclude AI from our ultra-modern societies on the basis of democratic votes. From this perspective, the frequently encountered expectation that societal conflicts and challenges, such as crime, terrorism, and other serious norm violations, will be solved by the anticipatory measures of a smart, AI-assisted government is also highly naïve.

Third, these two hypotheses are connected with a further consideration:

A self-reflective critical attitude (normative responsiveness) presupposes the autonomy of the individual not to follow social norms. If this possibility and freedom to be deviant is undermined in future by AI-automated processes, there is a risk that the ability to learn and adopt social norms individually will also be undermined. According to this assumption, trust and trust-building emotions will dwindle, with corresponding effects on socialization, child-rearing, social interaction, and social and cultural forms of (dis)solidary interaction (e.g., crime and terrorism). In this context, the abolition of the individual appropriation of social norms through automated processes of data use and evaluation by executive authorities and intelligence services plays, as already mentioned, an essential role. In the end, not all hypotheses on anomie-induced normative disruptions in Western societies can be verified, as the considerations on the false second nature refer to future scenarios and therefore cannot (yet) be clearly validated with sufficient data. However, many studies cited in this article indicate trends that support the present hypotheses on AI-induced anomies.

In Sect. 2, the concept of AI-related anomie is first theoretically embedded and discussed in relation to predictive policing; the focus is on the resulting normative (dis)orders for the coexistence of freedom-oriented individuals in modern liberal democracies. In Sect. 3, the social norms that emerge from anomic conditions are conceptualized as a false ‘second nature’. In Sect. 4, the consequences of these (dis)orders are discussed on the psychosocial and emotional level, especially regarding the skipping of the emotional appropriation of societal norms and the resulting lack of a critical-reflective attitude (normative responsiveness) toward them.

2 AI-related anomies

First, after a working definition of AI is given, this section reviews the theoretical embedding of anomies and explains their nature. It then elaborates the concept of AI-related anomie, which consists of two elements: democratic norms, and the measures that derive from them but fail to have the desired effect of bringing about order. Following the EU expert group, this article adopts the following working definition of AI:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

(EU-Commission 2019)

This definition adequately captures the phenomena to be described and investigated in the following, especially in the field of predictive policing. Understood in this way, as systems that actually make decisions themselves, i.e., through machine learning, AI inevitably becomes part of an intertwined socio-technical system (Mökander and Schroeder 2022) that also implicates human tendencies, vulnerabilities, and attitudes.
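The definition’s distinction between ‘symbolic rules’ and a ‘learned numeric model’ can be made concrete with a minimal sketch in Python. Everything here is invented for illustration: the two-feature risk example, the function names, and all numeric values are assumptions, not drawn from any system cited in this article.

```python
from math import exp

# A hand-written symbolic rule: the decision logic is explicit and auditable.
def symbolic_flag(recent_incidents: int, night_time: bool) -> bool:
    return recent_incidents >= 3 and night_time

# A learned numeric model: the same kind of decision, but the 'rule' is
# implicit in fitted weights. The weights below are invented stand-ins for
# parameters that a training procedure would estimate from data.
def learned_flag(recent_incidents: int, night_time: bool,
                 w=(0.9, 1.4), bias=-3.2, threshold=0.5) -> bool:
    z = w[0] * recent_incidents + w[1] * night_time + bias
    return 1 / (1 + exp(-z)) > threshold  # logistic score

print(symbolic_flag(3, True), learned_flag(3, True))   # both flag
print(symbolic_flag(1, True), learned_flag(1, True))   # neither flags
```

The argument developed below attaches to the second variant: once the decision boundary lives in fitted weights rather than in explicit rules, the grounds of a decision are no longer directly readable from the program.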

As already outlined in the introduction, the thesis of AI-related anomies is an amalgamation of considerations from different disciplines. Anomies generally refer to the discrepancy between normative core states that are considered the desired state (nomia) in modern liberal democracies and executive and/or bureaucratic actions that contradict these normative core states (anomie). Anomies therefore pose a threat to the democratic structure of Western societies. In recent research, anomie approaches play an important role, especially in sociology and criminology (Agnew 2016; Collins and Menard 2021; Dearden et al. 2021; Krohn et al. 2019; Sebaldt et al. 2020; Thome 2016). In democratic theory, ‘antinomies’ form a field of research with a long tradition in the history of ideas; the concept of anomie is suitable here for identifying, as a cross-structural democratic deficit, the discrepancy between normative guidelines and the actions derived from them, which bring about the opposite of the normative ideals.

While no generally valid, coherent definitional and theoretical approach to the concept of anomie has yet been established, there is a consensus, following Merton and Agnew, that anomic phenomena are shaped by, and interact across, three dimensions: the individual (micro-level), the collective (meso-level), and the total system (macro-level). When the interventionist state (i.e., a state that impedes itself by implementing measures with anomic effects) undermines the liberal democratic norm state, at whatever level, one speaks of anomic phenomena. A systematic linking of modern research on democratic processes with anomic phenomena remains a desideratum to date, especially in social anthropology.

In the conceptualization of AI-related anomie, two factors play an essential role: the democratically oriented norms and their legitimacy within democratic orders (2.1), and the actions and measures resulting from these norms, which, due to the opacity and complexity of AI technology, lead to the opposite of what the norms intend (2.2). The concept presented here offers perspectives that allow us to look beyond the mere efficiency of AI technologies and their simplistic evaluation (‘good’ or ‘evil’). The crux of the question of the use of AI seems to lie precisely in this efficiency, in which liberal democracies naturally also want to share:

If AI technology in the area of precautionary crime prevention is indeed effective in establishing normative orders in the sense of liberal democracies, why exactly should it be rejected? Are arguments that merely point to individual liberties and data protection sufficient to justify forgoing such enormous opportunities for a ‘better society’?

The following reflections on anomies in democratic structures (2.1 and 2.2) and their possible consequences regarding the individual appropriation of social norms (Sect. 3 and 4) are made against the background of precisely these questions.

2.1 Democratically oriented norms and their legitimacy within democratic orders

Liberal democracy in the West (in its various manifestations) is not just one value among many other noble-sounding values with which one readily identifies; it is the political practice of justice in the form of institutionalized political rule that is not based on arbitrariness and discrimination. Democratic structures and processes are also characterized by the fact that they justify their intentions and actions, because this saves Western liberal democracies from becoming a mere rule of the majority over minorities (Forst et al. 2021). This is a major difference from the many other ‘democracies’ in the world; after all, the vast majority of states, including many authoritarian regimes and dictatorships, are now ‘democracies’ in name. Accordingly, democratic norms are legitimate insofar as they can sufficiently justify themselves to the populations concerned (Forst 2015). The inherent relation between norms and their justifications represents a central dimension of the present critique, because without an adequate justification of laws (e.g., those enabling predictive policing), conditions and feelings of injustice germinate. In democratic structures this is particularly important to bear in mind, because otherwise precisely those post-democratic conditions arise in which central actors and companies do not have to justify their actions to the population but operate under the protection of anonymity (Crouch 2020). Through this post-democratic dynamic, emotions of mistrust, frustration, and disillusionment (e.g., with regard to the welfare state) spread within Western populations toward their governments. The norms relevant to AI-related anomie relate primarily to the precautionary prevention of crimes that would otherwise fundamentally damage the normative order. This first factor of AI-related anomie concerns what is socially right and good in an ‘absolute’ sense, i.e., objectively valid and binding for all; in this context, the norms must be justifiable in all facets. The normative order pursued through predictive policing, for example, refers to peaceful and non-violent coexistence, which is to be safeguarded by appropriate measures.

2.2 Anomic results of democratically oriented norms

Let us now turn to the second factor of AI-related anomies: those measures that spring from the normative directives and turn into their opposite. The measures relevant to our context are intended to maintain democratic order by preventing future terrorist attacks, cybercrime, and related crimes (e.g., arms and drug trafficking, child abuse, identity theft, and other forms of serious fraud). These forms of predictive policing (Bone-Winkel 2020; Hofmann 2020; Sommerer 2020; Egbert and Leese 2021; Mohler et al. 2015) differ significantly from reactive policing because of the enormous volume of data to be collected and subsequently analyzed. In the US, the UK, and China, technologically sophisticated forms of preventing serious crime are much more widespread than in most EU countries, whereby it should be noted that the use and evaluation of AI technology in authoritarian contexts differ strongly from democratic contexts. Regardless of the political context in which AI is used for policing, it is important to emphasize that predictive policing should not be considered an isolated technological artifact that is merely a tool, but a dynamic and autonomously learning part (see the definition above) of an interconnected social system, which is itself embedded in organizational and power structures (Egbert and Leese 2021, 19). The more comprehensive the amount of data and the larger the area covered by it, the more intelligent and effective the predictive system becomes. And here we reach the crucial point of the second factor of AI-related anomies: to shape so-called risk spaces through crime mapping and to predict potential criminals and crimes as accurately as possible, data from the general population must be collected and used without their explicit consent. Apart from the fact that such surveillance measures, which can affect everyone, create a sense of total surveillance and control, the methods used are also open to criticism. Crime mapping relies on near-repeat victimization (NRV), which is based on the observation that in districts and regions where certain crimes have been committed, further crimes of the same kind can often be expected (Chainey et al. 2018; Hoppe and Gerell 2019; Johnson and Bowers 2004). The crimes concerned are often residential burglaries, homicides, or sexual offenses (Amemiya et al. 2020; Chainey 2021; Chen et al. 2020); areas where such offenses cluster are said to exhibit ‘near-repeat affinity’. Analysis of NRV patterns has been shown to be helpful in addressing these crimes. In Manchester (UK), for example, NRV-based policing reduced the crime rate in a particular area by around 40% (Fielding and Jones 2012); elsewhere, mainly in urban areas, NRV has helped police to arrest about half of the burglars involved (Chainey et al. 2018). However, such figures do not prove a causality between allegedly prevented crimes and the AI’s predictions. Moreover, these positive figures should be taken with a grain of salt, as they are issued by the police themselves and confirm merely that arrests were made in operations predicted by AI; in itself, this proves nothing, since neither judicial orders were considered nor could the actual intention to break into homes be proven. For the most part, solid studies that would establish the generalizability of predictive policing’s effectiveness, and thus justify applying it across the board, are still lacking.
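To make the near-repeat logic concrete, here is a minimal sketch, assuming projected coordinates in meters and an arbitrary 200 m / 14-day window. The function name, the thresholds, and the incident data are invented for illustration; operational crime-mapping systems are far more elaborate (e.g., distance- and recency-weighted).

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Incident:
    x: float   # easting in meters (projected coordinates)
    y: float   # northing in meters
    day: int   # day of occurrence

def near_repeat_risk(incidents, x, y, today,
                     radius_m=200.0, window_days=14):
    """Toy near-repeat score: the number of recorded incidents within
    `radius_m` meters and the last `window_days` days of (x, y, today)."""
    return sum(
        1 for inc in incidents
        if 0 <= today - inc.day <= window_days
        and hypot(inc.x - x, inc.y - y) <= radius_m
    )

# Invented example: three burglaries clustered in one block.
history = [Incident(100, 120, 1), Incident(140, 90, 4), Incident(95, 110, 6)]
print(near_repeat_risk(history, 110, 105, today=8))    # -> 3: flagged 'risk space'
print(near_repeat_risk(history, 2000, 2000, today=8))  # -> 0: unflagged
```

Even this toy version exhibits the mechanism criticized in the text: attention is directed wherever recorded incidents already cluster, so the resulting map reflects the recording practice as much as the underlying crime.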
In the UK, a corresponding system called the National Data Analytics Solution (NDAS) is in the works. The system uses machine learning to predict who is likely to commit a crime in future, and how likely they are to do so, based on data from over five million people merged with records from various security agencies (Wennker 2020, 152). All these different studies show a common facet, namely that AI in the context of predictive policing is invariably embedded in man-made power structures that involve both human and non-human actors (Egbert and Leese 2021, 206). This becomes particularly clear in the case of the crime risk attributed to certain areas. Such risk is not a ‘natural phenomenon’ that can be surveyed and examined as if it were entirely objective and neutral, but is always dependent on subjective perspectives and the corresponding conceptualizations and representations (Egbert and Leese 2021, 116 f). If this fact is set aside, one-dimensional narratives of blame usually emerge, suggesting a causality between certain dimensions of belonging (ethnicity, social class, etc.) and criminal behavior. Predictive policing allows for various forms of probabilistic risk prediction; it does not actually predict whether and when a crime will be committed. What these programs (e.g., ‘PredPol’, developed at the University of California) do not take into account in their algorithms is the serious fact that extremely high crime rates in areas predominantly inhabited by minorities are not explained by those minorities’ (supposed) biological and psychological characteristics (Ellis and Walsh 1997; Herrnstein and Murray 1996; Wilson and Herrnstein 1985), by their ‘cultural’ background (Hawkins 1995), or exclusively by their financial poverty (Bursik Jr 1988), as many academics and politicians have wanted us to believe for decades. None of these common explanatory models can really explain the complex and heterogeneous occurrence of crime in so-called minority regions (Bruce and Roscigno 2003, 243). The high prevalence of crime has instead been shown to be related to a lack of social structures and educational institutions (Lafree 2018) and to various dimensions of the resulting social and economic inequality, which are sometimes mistaken for ‘cultural differences’ between ethnic groups. Attempting to establish causality (not mere correlation; this error of reasoning is often made in this context) between ethnicity and crime cannot explain the rapid rise in crime rates in almost exclusively minority-inhabited areas of Western countries (especially the crime explosions of the 1960s and 70s in many US cities). It also ignores the socio-economic competition between the white majority and the various minorities, as well as the competitive and class struggles among the latter (Roscigno and Tomaskovic-Devey 1994; Tomaskovic-Devey and Roscigno 1996). Discrimination thus cannot be understood merely as a consequence of discriminatory actions based on stereotypes. In this paper, discrimination is conceptualized as a complex social phenomenon rooted in historically evolved social relations, institutionally entrenched expectations and routines, organizational structures and practices, as well as discourses and ideologies. Above all, discriminatory practices require legitimizing legends in order to go on creating and maintaining privileges for a few while hierarchizing others and limiting their participation.
The term discrimination thus points to the fact that disadvantage and exclusion do not arise by chance but always stand in a specific relationship to commonly known social ways and forms of differentiation, or ‘orders of difference’ (Dirim and Mecheril 2018, 43). These orders of difference are so powerful because they politically and culturally privilege certain identity positions over others. Where identities and groups are not even given the chance to undergo fundamental processes of social learning (e.g., the individual adoption and acceptance of social norms), even if this happens piecemeal and not throughout the whole Western world, the shifting of existing injustices onto the level of AI technology already begins. Discrimination is thus not understood here exclusively as a legally impermissible unequal treatment that can be measured and built into AI technology.

Moreover, AI technologies employ complex algorithms that are often challenging to comprehend, resulting in opaque decision-making and functioning. This lack of transparency can make it difficult to trace discrimination and inequalities in the decisions of AI systems, hindering the assignment of accountability and the implementation of corrective measures. Additionally, the evaluation of the data itself remains inscrutable. In contrast to statistical learning systems (SLS), AI technologies can autonomously make decisions without human intervention. This autonomy may lead to the perception of AI systems’ decisions as objective and neutral, despite their inherent biases and inequalities; such perceived objectivity can further contribute to the societal acceptance of discrimination and inequality. This anomic progression is intensified when AI systems are trained on incomplete and/or biased data. AI algorithms simply do not ‘know’ the social facts outlined above and accordingly produce calculations that are directed against disadvantaged groups and sooner or later turn against them. Even when predictive policing systems are fed data without any ethnicity information, it is precisely these segregating patterns of reality that are mapped and reproduced in preventive law enforcement. This represents a central facet of the anomic breaking point, where norms rooted in democratic laws become executive actions that fuel discrimination and generate even more hatred and mistrust among affected populations.
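The proxy effect just described can be demonstrated with a deliberately simple simulation; all figures are invented. The protected attribute is never given to the model, yet a correlated feature, here a district identifier, is enough for the learned risk scores to reproduce the disparity baked into biased training data:

```python
import random
random.seed(0)

# Invented world: two districts with identical true offense rates. District 1
# is predominantly inhabited by a minority group AND was historically
# over-policed, so more of its offenses end up recorded (recording bias).
TRUE_OFFENSE_RATE = 0.05
RECORDING_PROB = {0: 0.3, 1: 0.9}  # over-policing yields more records

def make_training_data(n=20000):
    data = []
    for _ in range(n):
        district = random.randint(0, 1)
        offense = random.random() < TRUE_OFFENSE_RATE
        recorded = offense and random.random() < RECORDING_PROB[district]
        data.append((district, recorded))  # note: NO ethnicity feature
    return data

# A naive 'risk model': predicted risk = historical recorded rate per district,
# which is essentially what a classifier trained on these records would learn.
data = make_training_data()
for d in (0, 1):
    rows = [rec for (dist, rec) in data if dist == d]
    print(f"district {d}: learned risk = {sum(rows) / len(rows):.3f}")
# Approx. output: district 0 -> 0.015, district 1 -> 0.045, i.e., district 1
# is rated three times as 'risky' although the true rates are equal.
```

Since district membership correlates with group membership, dispatching patrols by these scores reproduces the segregating pattern described above; the additional checks then generate yet more records for the over-policed district, closing the feedback loop.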

Another noteworthy aspect of AI-induced anomie is that proactive law enforcement, as a preventive measure, does not address and eliminate causes but only symptoms. The disadvantaged continue to be disadvantaged, all the more so by AI technology, and are criminalized by automated police checks. In particular, young people who grow up in certain neighborhoods through no fault of their own are predestined to attract police attention and, in this way, to be associated with crimes in one way or another.

The (re)production of racism, and of racist errors of reasoning, through the use of AI technology is discussed in many places (Butt et al. 2021; Langmia 2021; Yen and Hung 2021), but these arguments seem to go unheard. The failure to address relations of dominance and difference at the level of AI is not without repercussions, as it enables these relations to persist unquestioned and to solidify within a novel normative framework. As a consequence, obscured power dynamics and distinctions become embedded in algorithms, which reproduce these configurations of difference; these unacknowledged relations are thereby concretized as purported ‘objective reality’. Neglecting relations of dominance and difference could lead society to implicitly legitimize the prevailing social order by accepting it as ‘normal’ or ‘natural’. Consequently, the legitimate concerns of marginalized groups might be disregarded, and their calls for equality and social justice not adequately considered.

Moreover, attention here is not drawn only to racial implications, and predictive policing does not affect only ethnic minorities. AI-induced anomie has the potential to create an underlying structure of mistrust and fear that will affect a great many people, because the measures involved represent drastic emotion- and behavior-altering interventions (see next section).

3 A false ‘second nature’ as a consequence of AI-related anomies?

As explained in the last section, AI-related anomies lead to a confusing array of bureaucratic-executive measures and means that aim to preventively thwart crimes but become entangled in inhumane patterns of action. This section discusses the implications of this approach against the backdrop of emotion- and behavior-modifying initiatives through predictive policing.

The assumption of this section is that predictive policing measures, contrary to their intended democratic purpose (as discussed in Sect. 2.2), inadvertently give rise to an artificial construct of false norms. This artificial matrix gradually takes root and becomes akin to a ‘second nature’ in terms of ‘ethical life,’ resulting in profound and enduring alterations to human emotions and behavior. The objective is to contemplate the potential consequences of AI-based technologies in predictive policing within the context of established scientific findings on socialization and emotional learning of social norms. It is important to note that this reflection is inherently speculative due to the scarcity of comprehensive data and research in this area. Consequently, the hypothesis proposing the emergence of false social norms through anomic states represents a plausible scenario rather than a scientifically established fact.

The term ‘ethical life’ (in German: Sittlichkeit) refers to the reflections of Hegel (Nicolin and Pöggeler 1991) and other theorists in Hegel’s lineage who address morality in the context of social norms (Cortella 2015; Habermas 2019; Honneth 2014; Miettinen 2020; Saito 2014). The matrix, influenced and potentially determined by AI technology, therefore appears as a so-called second nature because one is exposed to it, much as one is exposed to gravity. The resulting order then takes the form of a de-normativized, quasi-natural order. For the present argument, false second nature means that social norms are undermined by AI-dominated orders or even transformed into their opposite, and are therefore labeled false. In line with the second part of the thesis of AI-induced anomies, it is assumed that the reproduction of norms through ill-considered, questionable, and sometimes highly negligent measures leads to precisely such (false) norms and forms a ‘second nature’.

The notion of second nature suggests a structural deficit in which the individual becomes a subject, a social member in the Foucauldian sense (through submission to norms). The normative power of this second nature arising through AI-related anomies is based on the fact that we are forced to accept it. Or rather, we are exposed to it, as is currently the case with sophisticated AI technologies in a variety of contexts: AI-driven policy analysis and decision support (AI technologies used to analyze large datasets and provide evidence-based policy recommendations to policymakers), AI in healthcare (applications ranging from medical imaging analysis to drug discovery and personalized medicine), AI in robotics (the integration of AI technologies into robotics, enabling more sophisticated and adaptable robotic systems for various industries), AI in autonomous vehicles (self-driving cars and related technologies), etc. Its potential power is even more far-reaching than that of other social norms, because in the case of social norms individuals have a choice, through laws and other conventions, to obey them or not. People can appropriate them and deal with them in their own way. Such norms presuppose that individuals are responsive in the first place, and they address them accordingly. For example, one does not become eligible to vote in political elections until one reaches the age of majority, because one needs that time to reach the basic cognitive and social level necessary to vote; moreover, it is only at this age that one becomes largely legally responsible for all one’s actions. Until a child has grown up and internalized such processes of acquiring social norms, a great deal of time passes, during which just as many experiences are gathered and evaluated. This is negotiated in the socialization process at the individual and emotional level. A 5-year-old child who crosses the street at a red light is not legally prosecuted, because the child still has this long process of affective and cognitive acquisition ahead of him or her; a 16-year-old adolescent is socially sanctioned quite differently for copying in a school paper than a PhD student who has gained professional and financial benefits through deliberate plagiarism, and so on. These ‘ordinary’ norms of sociality thus always include the right to violate them (Günther 2021, 538). The legal orders of liberal Western societies, however they differ, provide that citizens have such a right, enshrined in the ‘social contract’ between individuals and society. Behavior in relation to social norms develops along with social structures, and it does so at the individual level (Buck 2014, 147); this cannot simply be ignored without consequences.

Instead of genuinely socially formed normativity, AI-based technology takes over at this point, albeit not across the board, so that social processes and mechanisms of norm compliance among the civilian population in selected areas no longer operate through communication, socialization, and emotionally grounded learning. In this way, the ‘orders of difference’ (Dirim and Mecheril 2018, 42) mentioned above (2.2) come to structure the lives of affected people at an early stage and to constitute their experiences and ways of understanding. AI anomies fuel the process of norming and subjectivation by privileging certain belongings and identities. The application of AI in predictive policing leads, whether intentionally or not (this, after all, is the anomic character of measures to maintain and enforce social norms), to an unconscious slide into a new kind of normative order. The freedom of individual will formation, which emerges from interactions and certainly also from negative, conflictual experiences, is strongly altered by this order. Exactly how this will happen cannot be said, given the limited data available. One thing is certain: essential social learning processes are systematically displaced by a new order that has the appearance of a second nature.

These essential interfaces of shared, discursive communication and learning are replaced by automated processes that use and analyze individuals’ data without their consent. These measures are embedded in a ‘meta-framework’ of deep mistrust that successively builds an invisible normative matrix, a ‘second nature’, which itself remains unchallenged and unquestioned as a ‘meta-norm’. This goes hand in hand with a displacement of social dysfunctions and disorders, in the sense of deviant behavior, which are not thereby remedied but made invisible. Deviance, after all, always holds an innovative potential (Merton 2012, 127), that ‘useful illegality’ (Kühl 2020) which can also be understood as a norm of human forms of life: innovations and learning processes arise from deviant behavior.

4 The false second nature and normative responsiveness in social learning processes

Emotions play a significant role in the adoption of social norms (Fehr and Engelmann 2017, 33; Cohn et al. 2015; Connelly and Joseph-Salisbury 2019; Hareli et al. 2015).

The process of adopting norms implies the ability to critically (self-)reflect on rule-breaking behavior (deviance). Through self-reflection, the deviant behavior of others, and especially its far-reaching consequences for the individual, can be understood and applied to one’s own life. Accordingly, the process of adopting norms presupposes a certain normative responsiveness on the part of their addressees: people must first be able to understand the set norm and then decide whether or not to follow it. Norms require a voluntary process of adoption that must take into account both the emotions and the normative responsiveness of individuals. This process of adopting social norms, depending on the cultural and social context, provides corresponding rules of action and connects them to emotions, with the aim of shaping individual experiences in the long term. This interlinking of emotions and (desirable) behavior is not an exotic practice in far-flung parts of the world but affects all people and cultures in the same way (Quinn 2018).

Normative responsiveness involves the capacity of an individual to accurately perceive and react suitably to normative expectations within social interactions and interpersonal relationships. It encompasses the ability to comprehend and internalize social norms, subsequently acting in accordance with them to attain acceptance within a specific society. In the early history of hominids, normative responsiveness already assumed a vital role in both signaling danger and establishing social bonds. The ability to recognize and respond to (normative) expectations facilitated cooperation among humans, leading to the formation of groups and the maintenance of social hierarchies, both of which were imperative for survival and protection from potential threats (Tomasello 2022).

One might think of the manner in which one tries to teach children ways of speaking and behaving that one perceives as polite and appropriate. The social embeddedness of the interplay between emotions and norms is thus responsible for the evaluation of (non-)conforming behavior. Emotions are an important element of reward or sanction, both for actors who develop an affective attitude toward social norms and for the society that imposes negative emotions (guilt, shame, etc.) on the norm-violating actor. Such complex learning processes and interaction experiences between individuals and the collective, presented here in a very simplified way, are overridden by AI-related anomies and their action-based ordering mechanisms.

Normative responsiveness, and hence norm compliance, always constitutes a process that implies the freedom of an individual to decide whether or not to follow a norm and how to do so, i.e., autonomy. Compliance with (social) norms is founded on individual autonomy and is therefore by definition always a risk, which is why trust is so important in the social sphere (Lange et al. 2017). Normative orders in liberal democracies thus presuppose the freedom to reflect critically on social norms and then decide whether to follow them or not; the momentum of individual normative responsiveness constitutes precisely that risk in the context of social trust. Normative orders in authoritarian or dictatorial contexts have completely different implications and consequences, since this moment of individual responsibility is not intended there. Normative responsiveness and the risk associated with it are closely intertwined with generalized social trust in liberal democracies: generalized social trust relates to the expectation of trustworthiness of strangers, i.e., people about whom we have no relevant knowledge (Lange et al. 2017, 77). This expectation of trust can be called faith in the ‘good in people’ (Yamagishi and Yamagishi 1994, 139) or ‘depersonalized trust’ (Yuki et al. 2005, 50). Given this depersonalized expectation, the power of norms within societies to build trust between their members rests on their ability to increase the assessability and predictability of social interactions (Paxton 2007, 47; Welch et al. 2004). While social trust is assumed to be learned through repeated interaction (Paxton and Glanville 2015), a kind of ‘virtuous circle’ (Putnam et al. 1994, 170) is created at the societal level through the establishment of norms and corresponding sanctioning processes in cases of non-compliance. In light of the risk residing in the individual, the autonomy of the individual becomes, socially, a threat to the efficiency of AI-based technologies for the creation of (normative) order. The insights and experiences that emerge from social interactions and from compliance and non-compliance with social norms are, so the assumption goes, piecemeal curtailed and replaced by the logic of algorithms: an authoritarian logic of distrust and surveillance. This holds quite independently of whether one follows the normative paradigm (T. Parsons, É. Durkheim), which assumes that internalized, culturally anchored norms largely determine the actions of actors and thus views social norms as quasi-externally imposed orientation frameworks; or methodological individualism (M. Weber), as a collective term for the individualistic paradigm that traces all social phenomena back to the individual and his or her actions; or other approaches associated with these paradigms, e.g., rational choice theory (Lindenberg, Esser), the situational-logic and corporate-actor approaches (J. Coleman), or the interpretative–interactionist paradigm (Goffman, Schütz).

The difference between social norms as they have hitherto been common in democratic societies and social norms created by measures of AI-related anomie thus lies in the necessity of an individual, affective adoption process. The provocative questions posed at the end of Sect. 2 regarding the effectiveness of law enforcement cannot be used as an argument for the unconditional enforcement of AI at this point, as the measures do not conform to the democratic norms from which they derive (see 2.2). Yet, increasingly, it is precisely this (pseudo-)argument that seems to prevail above all others in today’s ‘(post)democracies’ (Crouch 2020). Remarkably, one of the main features of post-democracies is the lack of justification of government actors and their actions toward the population.

Self-responsible and independent thinking and action are severely impaired by the establishment of a second nature through AI technology. A second nature created by predictive policing uses fear and incrimination to exclude certain attitudes toward social norms by significantly altering the specific forms of self-reflexivity and emotion regulation (normative responsiveness).

The predominant forms of emotional socialization include the mentalization and internalization of social norms, so that they are piecemeal understood as mental, internal, and thus as ‘one’s own’ norms (de Melo et al. 2021; Susanne et al. 2016). Mentalization and internalization are the structural results of attributive communicative processes and serve as the basis of emotion regulation mechanisms. Emotions not only say something about desires and beliefs; they also regulate them and thus considerably influence the selection of patterns of action, which in turn are significant for norm conformity. For this reason, the concept of second nature is chosen, as this form of AI-based rule suggests that technically perfected compliance with norms is quasi-natural, unchangeable, and thus in no need of further legitimation. Through the establishment of a second nature in liberal societies, norms are reified in AI technologies and elude any criticism. This also happens in dictatorships and autocracies, but with more obvious methods of manipulation (Günther 2021, 546). Moreover, various forms of individual or political protest and deviance would no longer be perceived as such under an AI-based form of rule if they were detected in advance. It is important to remember that predictive policing systems are not perfect and can produce numerous false alarms that dramatically change people’s lives.
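How consequential such false alarms are follows from simple base-rate arithmetic: when the targeted behavior is rare, even an apparently accurate predictor flags mostly innocent people. A worked example with invented figures:

```python
# Invented figures for illustration: 1 in 1,000 people will actually commit
# the targeted offense; the predictor catches 90% of them (sensitivity) and
# wrongly flags 5% of everyone else (false-positive rate).
base_rate = 0.001
sensitivity = 0.90
false_positive_rate = 0.05

# Bayes' theorem: probability that a flagged person is a true positive.
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
precision = sensitivity * base_rate / p_flag
print(f"share of flagged people who are actual offenders: {precision:.1%}")
# -> about 1.8%; roughly 98 of every 100 flagged people are false alarms.
```

The exact numbers are hypothetical; the structure of the calculation, however, holds for any rare-event prediction, which is why the caveat above matters.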

The second nature is characterized not only by racist structures and errors of reasoning but also by a fundamental opacity, because one cannot easily access the data produced in order to find out who has come into the focus of executive authorities, and how. Furthermore, the methods used to analyze these data are equally opaque and incomprehensible to the general public (Pollicino and De Gregorio 2021). Precisely because social norms are hidden behind a quasi-natural façade, a second nature, the fact is concealed that compliance with norms always represents a risk, one which de facto always implies the freedom to deviate from the norm.

5 Conclusion

The primary objective of this article was to establish connections between various, often overlooked dimensions of the use of AI-based technologies for predictive policing in democratic societies. First, the nature and functioning of AI-based predictive policing were contextualized within anomic conditions, leading to the introduction and explanation of the concept of ‘AI-related anomie’ in Sect. 2. ‘AI-related anomie’ was conceptualized as a tension-filled domain encompassing the systematic erosion of democratic norms which, when implemented through AI-based measures, results in a counteracting ‘normative disorder’.

Numerous facets of the central dimension of discrimination and unjustifiable disadvantage faced by individuals growing up in regions and neighborhoods with high crime rates were illuminated. Discrimination, interpreted as a multidimensional entanglement of socially consequential distinctions with disadvantageous structures and practices, entails historically and systematically heterogeneous phenomena that cannot simply be regarded as mere applications of general principles fed into algorithms. Algorithms derive their logic from sociological research that postulates a perceived causality between certain ethnic groups and high crime prevalence. As a result, the complex phenomenon of crime in specific minority-inhabited areas is oversimplified and erroneously incorporated into the algorithm. Predictive policing’s imprudent approaches, such as addressing only the effects of social phenomena rather than their root causes, or generating countless prognoses of dangerousness, combined with a lack of transparency, give rise to fallacious reasoning when transferred to AI algorithms, such as the confusion of correlation with causality.

The ensuing normative disorder for the coexistence of freedom-oriented individuals is conceptualized, drawing from Hegel’s notion of ethical life, as false ‘second natures’ that assume the guise of ‘natural’ regularities by suddenly emerging as laws and directives of predictive policing, yet exhibiting structural deficits (Sect. 3). By rendering individuals subject to these false social norms, the false second nature gains its normative power. Compared to other social norms instituted through laws and conventions, the potential power of false second nature is more far-reaching, as individuals lack the option to freely choose whether to obey or not. Instead, they become subject to it, contributing to its normative influence.

This line of inquiry culminates in the final stage of the argument, examining the consequences of these normative disorders on the socio-emotional level (Sect. 4).

By depriving individuals of the opportunity to independently acquire social norms through AI-induced anomies and the associated emotions, crucial steps in the socio-emotional learning process are bypassed. Normative responsiveness has been conceptualized in this context as an autonomous attitude where individuals freely decide whether to conform or deviate from social norms. Specifically relevant in democratic contexts, normative responsiveness pertains to the ability to engage with social norms and respond individually. Despite its significance, this aspect has received limited research attention in the context of predictive policing and warrants further exploration through empirical studies. It is essential to collect and analyze biographical developments of wrongfully accused and incriminated individuals in predictive policing, contextualizing the data with regional factors, such as the lack of social structures, educational institutions, and other forms of social disadvantage. This deeper understanding can shed light on the various forms of discrimination caused by AI-related anomie.

It is not sufficient to merely recognize that real-life discrimination affects AI algorithms. Attention must also be given to the multiple manifestations of incrimination and the associated psychosocial deficits, particularly the aforementioned inability to engage with social norms on an emotional level, to comprehend the impact of AI implementation in liberal societies.

The hypothesis that AI-based predictive policing may lead to the emergence and normalization of anti-democratic, anomic social structures in future cannot be definitively proven. The data, while inconclusive, indicate trends and dynamics supporting the hypothesis of the emergence of false social norms, but they neither fully validate nor refute it. The interplay between the reproduction of discriminatory patterns and the machine learning of AI technologies fosters the emergence of new social and cultural norms. The advantage of these new norms lies in their appearance of neutrality and objectivity, seemingly hidden behind statistics and numbers, thereby promoting techno-utopian narratives.

Ultimately, the question of more effective norm-setting is complex, especially when considered from the perspective of individual disciplines. It is crucial to examine how closely AI-based enforcement of norms resembles the methods of autocracies and dictatorships, and at what cost to liberal societies this technology could be misused to persecute dissenters. The implied ‘second nature’ of false social norms symbolizes the entire apparatus of AI-based predictive technology, gradually effecting fundamental changes in individual thinking, feeling, and behavior.