An ideal democratic society is one where self-government is secured for its citizens. However, the complexities of public life make “shortcuts” to self-government tempting. In critiquing shortcuts to democracy, Lafont (2019, p. 19) defines self-government as the ideal of not “being coerced into obeying laws that one cannot endorse as at least reasonable upon reflection”. While shortcuts can be necessary for democracies, Lafont draws a distinction between shortcuts that expect “deference” and those that expect “blind deference”. The former observes self-government by allowing citizens to accept decisions after reflection, whereas the latter coerces citizens into accepting decisions made by others without the opportunity to reflect on and contest them. Blind deference, according to Lafont, takes the form of deference to the majority vote, to knowledgeable experts and the political elite, or to the randomly selected lot of citizens taking part in minipublics (Lafont, 2019, 2020). To these types of blind deference and shortcuts, I add algorithms.

The impacts of algorithms on democracy have been scrutinised in recent years. A socio-technical definition of algorithms refers to them as a “series of instructions written and maintained by programmers that adjust on the basis of human behavior” (Benjamin, 2019). Algorithms are integral to the conduct of public life online, where the availability of big data has made them central to analysing as well as organising public life (Eubanks, 2018; Kellner, 1999; Park & Humphry, 2019). In the digital public sphere, algorithms shape political and democratic communication by automating “editorial decisions” about what counts as “relevant” in our newsfeeds or by automating communication via “bots” (robots created to share, amplify or distort messages online) that inhibit listening and promote censorship (Benvenisti, 2018; Frost, 2020; Peixoto & Steinberg, 2019; Wu, 2017). Scrutiny of these impacts has mostly focused on Silicon Valley platforms, raising anxiety about the future of democracy (Bowman, 2020). These anxieties span the extent to which algorithms shape our behaviours and the extent to which their logics sit in a “black box”, sheltered from scrutiny, justification and explanation. Moreover, the focus on Silicon Valley platforms turns attention to the threats to democracy posed by the concentration of power in these tech companies.

However, concerns about the democratic impact of algorithms extend beyond social media to the functions of government. Government agencies use algorithms to automate decisions in welfare provision, criminal justice, health and many other contentious aspects of social life (see Eubanks, 2018). In deliberations and decision making, the ideal of self-government should be observed in particular functions and institutions of government such as administration and courts (Pettit in Lafont, 2019). Nonetheless, the application of algorithms subverts this ideal. For example, in U.S. criminal justice institutions, some courts use the automated decision-making software Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) to assess the risk of recidivism, and its algorithm has been scrutinised for producing racially biased assessments that inform prison sentences (see Angwin et al., 2016). Such examples raise two issues from a democratic perspective. First, algorithms displace human discretion in the realm of “unelected administrative agencies”. Second, by displacing human judgement, the reasons behind algorithmic decisions are neither articulated nor justified, and the decisions are consequently difficult to hold to account (Strandburg, 2019).

The deployment by governments of algorithms, artificial intelligence (AI) and big data to govern society raises questions about the impact of algorithmic governance on democratic quality. Algorithms can arguably be efficient in administering society, but this efficiency comes at the expense of the democratic quality of decisions affecting citizens. By democratic quality, I refer to Curato’s (2015) argument to use the concept of deliberative capacity as “an indicator of democratic quality”. In this light, democratic quality is the capacity of a polity to host deliberation that is inclusive of citizens and of a range of interests and considerations, authentic in that communication induces reflection rather than coercing others into accepting reasons, and consequential in that it creates change (Dryzek, 2009). Considering this definition of democratic quality, the logics of algorithms governing society automate systemic injustice, inequality and racism, undermining the capacity for inclusion and authenticity and creating negative change by stabilising discrimination (Footnote 1) (see Eubanks, 2018; Noble, 2018).

In this article, I argue that the use of algorithms in decision making across institutions undermines the ideal of self-government in two ways. First, institutions that use algorithms attempt to evade justifying and explaining the logics they automate. Such justification is a necessary condition for reasonable acceptance and for the authenticity that bears on the democratic quality of these decisions. Second, institutions resort to other shortcuts in an attempt to add a veneer of democratic participation, yet these shortcuts still fail to be inclusive. I outline the democratic harms of the algocratic shortcut in three parts. The first section outlines the contours of the algocratic shortcut as a type of shortcut to democratic self-government described by Lafont (2019), and its harms to equality and inclusion in decision, reason and voice. In the second section, I put forward an illustrative case of institutional deliberations about governing algorithms in Europe, focusing on two points: (1) that the algocratic shortcut cannot be resolved by using other shortcuts, namely epistocratic and lottocratic ones; and (2) the implications of inequality across influence, voice and reasons. The final section reflects on the limitations of using shortcuts to remedy the algocratic shortcut by engaging with what Lafont (2019) refers to as “aspirational” political deliberation.

Shortcuts to democracy and algorithms

In Democracy without Shortcuts, Lafont (2019) problematises practices in democracies that harm the realisation of self-government as a political ideal. To realise this ideal, Lafont argues, citizens ought to be able to accept decisions and endorse them as their own upon reflection. Shortcuts become undemocratic when citizens are expected not to reflect on these decisions; such shortcuts involve “blind deference” to others and undermine self-government. Here is the crux of Lafont’s argument: taken-for-granted practices in democracy involve blind deference, particularly to the majority (procedural shortcut), to the knowledgeable (epistocratic shortcut) or to randomly selected citizens (lottocratic shortcut). This article theorises the algocratic shortcut against the epistocratic and lottocratic shortcuts (Footnote 2).

The algocratic shortcut builds on epistocratic shortcuts that justify bypassing citizen deliberation by appealing to the epistemic quality of decisions made by algorithms, as with experts. Since Lafont (2019) rejects epistocratic shortcuts as democratically unjustified, I argue that neither epistocratic nor algocratic shortcuts can be accepted as democratic shortcuts.

Thinking about the centrality of non-coercive and open reflection in the ideal of self-government, I follow Curato’s (2015) approach in using “deliberative capacity” as an “indicator of democratic quality”. Dryzek (2009) conceptualises “deliberative capacity” as a polity’s capacity for hosting deliberation that is inclusive, authentic and consequential. Authenticity is the capacity to communicate reasons in a non-coercive, non-manipulative and reflexive manner. The reflexivity of these exchanges is the extent to which they connect particular interests to understandings of the common good. As for consequentiality, it is the capacity for these inclusive and authentic exchanges to create change. The epistocratic and algocratic shortcuts exclude citizens on the basis of epistemic superiority. Hence, the two shortcuts pivot on the exclusion of citizens, and so are undemocratic. Epistemic blind deference thus carries implications for inclusion and equality.

Prior to Lafont (2019), Bohman (2000, p. 48) highlighted the dangers of blind epistemic deference, explaining that:

social asymmetries inherent in the communicative and cognitive division of labour threaten to short-circuit the deliberative process, making it impossible for citizens to have equal opportunities to influence many decisions, to express opinions freely and effectively, and to have their reasons fully and fairly considered.

Realising self-government as reasonable acceptance of laws upon reflection is underpinned by the equality and inclusion of citizens (Bohman, 2000; Lafont, 2019). As such, shortcuts sustain inequality across influence over decisions, effectiveness of voice and opinion, and full and fair consideration of reasons. As with the epistocratic and lottocratic shortcuts, the algocratic shortcut demands blind deference to algorithms in making decisions that govern society. Like the epistocratic shortcut, the algocratic shortcut should not be accepted as a democratic shortcut. It precludes the need to observe the ideal of inclusion because algorithms are deemed to possess powerful capacities for automation and data processing that are efficient and that provide answers concerning the “value” of citizens in a data-based society. Table 1 presents a summary of the democratic harms associated with the two shortcuts to democratic self-government considered by Lafont (2019), contrasted with the algocratic shortcut.

Table 1. Democratic self-government and shortcuts

The algocratic shortcut

The algocratic shortcut is similar to blind deference to experts. The slight difference between deference to epistocrats and deference to algorithms is that the latter is sheltered by representations of algorithms as value-free technology, immune to human bias and epistemically superior. However, algorithms are far from value free, and their use in governing society lacks any meaningful democratic accountability or scrutiny, especially over who informs and authors the algorithmic governance of society.

By governing different aspects of public life without observing democratic ideals such as inclusion of those affected and subjected to decisions (Erman, 2012), algorithms are “restricting the ways in which key democratic institutions and organizations work and operate” (Gran et al., 2020). Eubanks (2018) argues that by sidestepping inclusion, a government has

the ethical distance it needs to make inhumane choices: who gets food and who starves, who has housing and who remains homeless, and which families are broken up by the states.

To demonstrate the consequences of the algorithmic capacity to make inhumane choices, I use the example of “robodebt” in Australia. In December 2016, some Australians who received social security payments began to be notified of alleged debts, based on the relevant government agency’s debt-collection algorithm. The system was rolled out to optimise efficiency in debt collection. Yet the system’s algorithm was flawed, and the case became known in the media as “robodebt” (Knaus, 2019). Affected citizens protested these algorithmic debt decisions using the online hashtag #NotMyDebt, launched a website to collect the stories of affected citizens, and ultimately organised a class action. The class action succeeded: the court condemned the flawed algorithm for unjustly assigning debt and approved a settlement worth AU$1.2 billion (Henriques-Gomes, 2020).
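To make the mechanism of such a flawed algorithm concrete, the sketch below offers a minimal, hypothetical reconstruction of the widely reported “income averaging” logic at the heart of robodebt: annual income recorded by the tax office is smeared evenly across 26 fortnights and compared with the income a person reported while on payments. The payment rates, thresholds and function names are illustrative assumptions, not the agency’s system.

```python
# Minimal sketch only: hypothetical rates and thresholds, not the agency's code.
FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300.0   # hypothetical fortnightly earnings threshold
TAPER_RATE = 0.5           # hypothetical payment reduction per dollar above it
BASE_PAYMENT = 550.0       # hypothetical full fortnightly payment


def entitlement(fortnight_income: float) -> float:
    """Payment owed for one fortnight, given income earned in that fortnight."""
    reduction = max(0.0, fortnight_income - INCOME_FREE_AREA) * TAPER_RATE
    return max(0.0, BASE_PAYMENT - reduction)


def averaged_debt(annual_income: float, reported_incomes: list) -> float:
    """Debt raised by income averaging over the fortnights spent on payments."""
    averaged = annual_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported in reported_incomes:          # fortnights the person was on payments
        paid = entitlement(reported)           # what was actually (correctly) paid
        recalculated = entitlement(averaged)   # payment implied by the averaged income
        debt += max(0.0, paid - recalculated)
    return debt


# A casual worker: 13 fortnights unemployed (reporting $0), then 13 fortnights
# earning $1,600 each. The reports were accurate, yet averaging assigns income
# to the unemployed period and raises a spurious "overpayment".
annual_income = 13 * 1600.0
print(averaged_debt(annual_income, reported_incomes=[0.0] * 13))  # -> 3250.0
```

Even with accurate fortnightly reporting, averaging attributes income to fortnights in which none was earned, which is how spurious debts could be raised at scale.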

Parliamentary and Commonwealth Ombudsman investigations into robodebt considered some of the democratic limits of using algorithms and automated decision making. The Ombudsman inquiry found that the use of algorithms and automation in debt collection failed to adhere to the principles of transparency and procedural fairness, under which citizens can understand how a decision was made and have the capacity to challenge it (Glenn, 2017). In summarising the inquiry’s findings, the report reads:

Good public administration requires a transparent and open decision making process that clearly sets out the issues the person needs to address to challenge a decision and the findings of fact on which the decision is based. This principle continues to apply when decision making is automated [emphasis added] (Glenn, 2017).

In addition to the Ombudsman inquiry’s focus on citizens’ ability to challenge decisions, another Parliamentary inquiry focused on the exclusion of affected citizens. The Parliamentary inquiry corroborated that the severe consequences of robodebt resulted from the exclusion of citizens from participating in the system’s design (Community Affairs References Committee, 2017).

The robodebt case exemplifies the adverse effects of algorithmic governance of society on democratic self-government. Citizens were not included in discerning the algorithm (the logic of the system), the reasons for the automated system were not open to scrutiny or justified to society, and the capacity to influence and challenge decisions was not accommodated. Accordingly, I argue that reliance on algorithms in making decisions can be located in the discussion arguing against shortcuts to democracy.

Justifications of the algocratic shortcut are similarly embedded in epistocratic arguments. Drawing on “epistocracy”, Danaher (2016, p. 247) defines algocracy as

a particular kind of governance system, one which is organised and structured on the basis of computer-programmed algorithms. To be more precise, [it is] a system in which algorithms are used to collect, collate and organise the data upon which decisions are typically made and to assist in how that data is processed and communicated through the relevant governance system.

Underpinning algocracy, the governance system centred on algorithms and algorithmic capacities, is the perception that algorithms produce epistemically superior decisions compared with experts because they are not subject to human bias (Danaher, 2016). Accordingly, the epistemic quality of algorithmic decisions is couched in the non-humanness of algorithms. For example, in comparisons of algorithms and human judges in criminal justice, algorithms are seen as immune to bias and external influences (Wykstra, 2018). The perception of this epistemic quality is also present in public opinion. A survey of Dutch citizens’ attitudes to automated decisions reveals an inclination to perceive non-human decisions as “fairer” than human decisions. This perception, however, is laden with misconceptions about algorithms and AI (Helberger et al., 2020).

These misconceptions encounter the same problems attributed to epistemic dependence on experts. Citizen autonomy and agency are hindered by “epistemic dependency” when epistocrats argue that citizens cannot understand or evaluate the knowledge produced by experts (Bohman, 2000). Similarly, algorithmic decisions are inexplicable and inaccessible to citizens, undermining citizen autonomy and agency. The contributions of epistemic dependence on experts and algorithms to deliberative capacity fail to be democratic because they observe neither inclusivity nor authenticity. Citizens are excluded on the basis of their epistemic limitations. This exclusion is in turn used to avoid providing citizens with accessible justification of the decisions that experts and algorithms make affecting public life.

A core problem with shortcuts to democratic self-government in institutional deliberation is how blind deference oversteps mutual justification. Lafont (2019) accepts that some measure of deference is necessary in representative democracies. For example, deference by voting or to the randomly selected lot of citizens is not in principle undemocratic and indeed facilitates the organisation of public life. The problem, however, is when elected representatives or selected citizens dislocate the role of this deference in self-government. Instead of regarding deference as a step in an ongoing effort to realise self-government, it becomes a reason to “shortcut” it and settle for the wisdom of elected representatives, selected citizens, epistocrats and algorithms.

Here, blind deference presents an obstacle to self-government whereby citizens cannot reasonably accept the law (or decision) upon reflection because they are excluded from the process of justification (Lafont, 2019). The reflection of citizens—grounded in citizen autonomy to reflect on their own views and the views of others and to synthesise them so as to distinguish self-serving arguments from ones concerned with the public interest—is predicated on the presence of political justification (Ferree et al., 2002). Where justification is present, communicative distortions and manipulation become more manageable by focusing on political agents’ capacity for critical reflection (Whipple, 2005). In the following discussion, I consider the manifestation of these harms in the algocratic shortcut compared with their manifestations in other shortcuts.

Democratic harms in the algocratic shortcut

Based on Lafont (2019) and Bohman (2000), the criterion for assessing the democratic harms of the algocratic shortcut is the extent to which shortcuts hinder the conditions for reasonable acceptance upon reflection. These harms are underpinned by inequality across influence, voice and reasons. From this perspective, the democratic harms associated with the algocratic shortcut lie, among others, in bypassing reasons and justifications. Shortcuts subvert the conditions needed to constitute reasonable acceptance: equality of influence over decisions, effective voice and consideration of reasons.

Algorithms amplify this inequality across influence, voice and reason in governance. Under algorithmic governance of society, algorithms present data-driven, dangerously simplistic answers to the complexities of public life. The efficiency of algorithms in making sense of big data available about citizens obscures scrutiny of the democratic quality of algorithmic decisions.

In hierarchical societies, algorithms resolve “a crisis of valuation” centred around the question of “[h]ow to value a person, people, peoples?” (Beller, 2018). This “algorithmic idealism”, as Davis et al. (2021) conceptualise it, promises clear answers that overcome the limitations of human rationality and bias and “seek to neutralise demographic disparities”. Nonetheless, the problem with this idealisation of algorithmic decisions is that these answers carry structural injustices into the process of the “algorithmic production of social difference” (Beller, 2018; Davis et al., 2021). Even historically, politically and socially complex issues such as racial bias and discrimination in criminal justice institutions are perceived to be “fixable” with an automated decision system instead of addressing the politics of institutional racism (Kahn, 2018). Therefore, structural injustice and bias become shielded by the assumption of algorithmic efficiency, which is prioritised over democratic accountability. For example, Aboriginal Australians constitute half of the suspects identified in the crime prevention algorithmic system used by police in the Australian state of New South Wales, yet the state police denied inquiries into the algorithm (see Goldenfein, 2019; Sentas & Pandolfini, 2017).

Responding to Lafont, Goodin (2020) highlights that “blind deference” should not be rejected in its entirety but should be judged by the degree of blind deference expected. Algorithmic governance of society involves varying degrees of blind deference, from being imposed without public deliberation, entrenching inequality in influencing decisions (Büscher et al., 2016), to limiting the opportunities for citizen participation. By providing data-based answers about society without citizen inclusion, algorithms diminish opportunities for democratic and civic participation. For example, in a survey on citizen attitudes to algorithmic and AI decisions, citizens across countries including Estonia, Denmark and China confirmed that governments do not involve them in governing AI (Carrasco et al., 2019). The degree of blind deference to experts, elites and algorithms in automating menial decisions, such as reporting infrastructure repairs, might seem harmless but can further reduce citizens’ participation in governance and the accountability of public office (Peixoto & Steinberg, 2019). The feed of data from automated systems that is then automatically processed and analysed negates the need for citizens’ voices and input. The chain of automated processes in such menial tasks then limits opportunities to interact with public offices unless the systems malfunction or become dysfunctional.

Social asymmetries and inequality are also augmented as applications of automated decision making are concentrated in areas such as social welfare and criminal justice. Eubanks (2018), Noble (2018) and O’Neil (2016) discuss at length how algorithms automate structural and racial injustices and limit the opportunity structure for reversing these decisions. On this, the United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, warns: “As humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia” (World stumbling zombie-like into a digital welfare dystopia, warns UN human rights expert, 2019). In this statement, Alston is describing the ramifications of the short-term conveniences of algorithms and automated decision-making systems for welfare institutions, conveniences that come at the expense of citizens’ welfare.

Determining the value of groups in society based on data and algorithms also affects equality across free and effective voice in democratic politics. Miller (2017, p. 126) highlights that “[algorithmic politics] is numerical rather than spatial, operational rather than expressive or communicative”. This means that algorithmic decisions disregard contextual factors that affect public life differently and eliminate the nuance brought through meaning-making exercised through communication. Especially in light of big data, Pasquale (2015, p. 216) explains the political economy of algorithmic decisions:

Capitalist democracies increasingly use automated processes to assess risk and allocate opportunity. The companies that control these processes are some of the most dynamic, profitable and important parts of the information economy. All of these services make use of algorithms, usually secret, to bring some order to vast amounts of information. The allure of the technology is clear— the ancient aspiration to predict the future, tempered with a modern twist of statistical sobriety.

In this quote, Pasquale expands on Alston’s observation about the conveniences of algorithmic decision-making systems as a feature of the political economy of capitalist democracies. These conveniences stem from the supply of decision-making systems by companies, the perceived value of making the most of available data, and certainty about the soundness of these statistical decisions. With data, information and algorithms to analyse the present and predict the future, “individual storytelling” is undermined (Couldry, 2010). Individual stories are irrelevant because algorithms are “calculating our potential as students, workers, lovers, [and] criminals” (O’Neil, 2016). Therefore, the combination of these conveniences and the availability of data limits citizens’ agency in using their own voices and stories to determine their futures. Instead, the futures of citizens and their position in different institutions are determined by algorithms.

But the implications of algorithms for self-government and voice do not emerge from the political economy of big data alone. The “crisis of valuation” mentioned earlier is an important aspect. The quantification of society makes our understanding of ourselves robotic, since this is how the robots making decisions understand us (Schüll, 2019). As citizens are reduced to units to be monitored, measured and analysed, algorithms present “statistically deterministic” accounts of individuals, undermining their autonomy in storytelling (Hildebrandt, 2011). Hence, the availability of data turns citizens into profiles that can be monitored for certain socio-economic and demographic markers and then analysed by an algorithm to assess their worthiness of access to institutions and benefits, or even their liability to punishment. To understand the value and entitlement of citizens based on the complexity, variance and uniqueness of their experiences is inconvenient to a society comfortable with reducing citizens to statistics.

The algocratic shortcut bypasses the reason-giving and justification that allow for self-government under the condition of reasonable acceptance upon reflection. Against the ideal of reason-giving, algorithmic decisions are undemocratic and illegitimate as they are “inaccessible” and “incomprehensible”. Consequently, they undermine citizens’ democratic authorship (Benvenisti, 2018; Danaher, 2016). Authorship is important for democracy because it ought to reflect the values of a society and ensure that citizens can scrutinise the decisions and policies they are asked to obey. Should these decisions be disconnected from the condition of endorsement upon reasonable reflection, the democratic quality of this society is in jeopardy. Because algorithmic decisions are positioned above democratic politics, these decisions are “not transparent, non-accountable, and non-appealable” (Ash, 2018, p. 82). For example, it is often argued that algorithms cannot explain their decisions, that reverse engineering is time-intensive and costly, or that the algorithms are protected by proprietary rights. These arguments about algorithmic decisions hence violate the condition of authorship and acceptance upon reflection.

Claims of efficient algorithmic decisions evade questions concerning the values algorithms enforce in governing society. Resorting to algorithms to answer questions about the value of individuals and groups in society “paves the way for AI experts and entrepreneurs to present themselves as architects of society” (Katz in Goldenfein, 2019, p. 60). Relying on algorithms to anticipate or detect behaviour and make decisions based on these probabilities encourages a path of dependency, as discussed earlier, in which our understanding of ourselves and society occurs through the reductive lens of algorithmic logics (Edwards & Veale, 2017). Instead of technologically deterministic and non-democratic dependency, the unchecked power of algorithms, their designers and their sponsors in government must be subject to public deliberation.

Remedying these democratic harms involves subjecting algorithms to “democratic participation and constitutional restraint” beyond democratic voting (Hildebrandt, 2011; Morozov, 2017). This deeper engagement involves democratic deliberation. To correct shortcuts, Lafont (2019) argues, laws need to be subject to political deliberation and justification. Lafont (2019) explains that the position of political justification in democratic decision making can be “aspirational” or “institutional”. An aspirational position of political justification involves finding public reason before enacting decisions and laws. This happens when institutions listen to deliberations in the public sphere and carry these considerations, reasons and interests forward. Meanwhile, institutional deliberation means that institutions build a mechanism to trigger a process of justification and explanation of decisions, policies and laws when called on by citizens. Institutionalising the capacity for public scrutiny is both more realistic and more practical, as canvassing the public sphere can be challenging (Lafont, 2019). Nonetheless, this has its own limits.

Below I elaborate on this argument using the example of governance of algorithms under the European Union (EU) General Data Protection Regulation (GDPR). The case demonstrates how institutionalising “justification” is important but insufficient. Without “aspirational” political deliberation and justification there is a risk of merely supplanting the algocratic shortcut with epistocratic and lottocratic shortcuts.

Democratising algorithms in Europe: political justification through epistocratic and lottocratic shortcuts

The GDPR case involves an attempt in the EU and the UK to democratically govern algorithms. The GDPR regulates the collection and processing of the mass of data gathered about individuals within EU jurisdiction. Terms set out by the GDPR have been used to question social media platforms about their responses to risks affecting democracy in Europe, such as the spread of disinformation (Boucher et al., 2019). The GDPR institutionalises a process to justify decisions made by algorithms (algorithmic decisions). However, it also demonstrates the limitations of remedying shortcuts by deploying other shortcuts.

I put forward a critique of the GDPR in three steps, each of which in turn demonstrates the democratic harms of attempting to correct the algocratic shortcut using other types of shortcuts. The first concerns the harm of inequality in influencing decisions. The second concerns shortcutting the consideration of reasons by relying on epistocratic and lottocratic shortcuts. The third discusses the implications for inequality of free and effective voice in public deliberation about governing algorithms.

Inequality in influencing decisions

The GDPR, however well intentioned, is the product of an epistocratic shortcut in which the political elite and experts developed the limits on data collection and processing and on automated decisions affecting citizens. This is based on the genealogy of the GDPR, which originates from doctrines and policies developed by political elites such as policymakers in the European Commission and by human rights and data regulation experts (see Souza et al., 2020). The inclusion of citizens under the GDPR is seen only after its adoption, and it is limited to a lottocratic shortcut for legitimising the use of algorithmic and AI decisions across institutions. This involves the random selection of some citizens to sample public considerations for legitimating algorithmic decision-making systems.

Despite the exclusion of citizens from shaping the terms of the GDPR, the regulation purports to secure the “right to explanation” for citizens affected by algorithmic decisions (De Gregorio, 2020; Deeks, 2019). Since algorithms do not explain their decisions, this takes the form of Article 22 of the GDPR, under which individuals “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning [them] or similarly significantly affects [them]” (Regulation (EU) 2016/679, 2016).

Along with the right to explanation, individuals also have a “right to obtain human intervention” so that they can “express [their] point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision” (Regulation (EU) 2016/679, 2016). While Article 22 of the GDPR reflects the institutionalisation of justification and reason-giving, it is insufficient for self-government.

Therefore, in terms of equality in opportunities to influence decisions, the rights to explanation and to contest algorithmic decisions mitigate the consequences of algorithms. However, this, I argue, is not enough, because these rights do not allow for authorship. Citizens are not substantially included, if at all, in authoring which decisions are to be automated in governing public life.

The GDPR notionally institutionalises a process to contest and receive a justification or an explanation of a specific automated decision, but it does not give citizens a right to inquire about the norms inscribed in the algorithm. In other words, the right to explanation is not a right to an explanation of the norms and constraints designed into the algorithm (Loi et al., 2020). As such, the “right to explanation” and “right to human intervention” can apply in cases such as that in Denmark, where robots made “incorrect decisions” in taxation and “[n]o human case officers are involved” (Motzfeldt & Næsborg-Andersen, 2017).
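To make this distinction concrete, the sketch below uses a deliberately simple, hypothetical scoring rule (not drawn from the GDPR or from any system discussed here) to separate the decision-level explanation that an individual right to explanation could plausibly yield from the design norms, such as the choice of features, weights and threshold, that remain outside its reach. All names and values are illustrative assumptions.

```python
# Illustrative sketch only: a hypothetical automated eligibility rule.
WEIGHTS = {"arrears_months": -40, "income_band": 25, "tenure_years": 10}
THRESHOLD = 50  # a design choice (a norm inscribed in the algorithm), not a decision


def decide(applicant: dict) -> bool:
    """Automated decision: approve if the weighted score clears the threshold."""
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score >= THRESHOLD


def explain_decision(applicant: dict) -> dict:
    """Decision-level explanation: each feature's contribution to this one outcome."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}


applicant = {"arrears_months": 2, "income_band": 3, "tenure_years": 1}
print(decide(applicant))            # False
print(explain_decision(applicant))  # {'arrears_months': -80, 'income_band': 75, 'tenure_years': 10}

# What a purely individual right leaves unexplained: why these features and
# weights were chosen, why THRESHOLD sits at 50, and why this area of
# administration was automated at all -- the norms inscribed in the algorithm.
```

The point of the sketch is that even a complete per-decision explanation leaves untouched the prior, societal question of whether and how this domain should be automated at all.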

Because the GDPR sidelines authorship and self-government, the rights to explanation and contestation are only individual rights, not inclusive of democratic society. For instance, in April 2019 the Danish Parliament approved a law to automate decisions on job seekers even though a pilot phase between 2016 and 2017 showed that algorithms had no effect on the capacity of public servants’ decisions on job seekers (Byrne & Sommer, 2019). The GDPR specifies no mechanism, obligation or consideration for explaining and justifying why these areas of governing and administering society are automated. The decision to automate is unjustified, and the GDPR does not require it to be justified.

The institutionalised justification here applies to individuals rather than to the citizenry writ large. Moreover, what constitutes an “explanation” of algorithmic decisions is itself arrived at through epistocratic and lottocratic shortcuts.

Inequality in full and fair consideration of reasons

Full and fair consideration of the reasons informing what constitutes a democratic and justified role for algorithms under the GDPR is advanced via epistocratic and lottocratic shortcuts. This produces inequality in whose reasons are considered, privileging those in power (the political elite and experts) or those selected to have their reasons given considerable weight (the randomly selected lot of citizens).

While the epistocratic and lottocratic shortcuts in this case made important contributions to public deliberation, their democratic limitations are revealed when citizens are affected by algorithmic decisions. The political elite and experts have an important role in argument-production as well as in commissioning the spaces for including randomly selected citizens. In argument-production, expert committees at the Council of Europe (COE) articulate the effects of algorithmic decisions on society and democracy (epistocratic shortcut). In creating spaces for citizen reflection, experts in the UK were involved in commissioning hybrid deliberation consisting of two parallel deliberations: one among citizens and the other among the elite and experts. Analysis of the two parallel deliberations is then synthesised to report on citizen and expert considerations. But the focus here, demonstrative of the lottocratic shortcut, is on three citizens’ juries on AI and algorithmic decision making.

Arguments and reasons provided by the political elite and experts on scrutinising algorithms encompass the impact of algorithms on democracy. The COE Committee on Political Affairs and Democracy made a motion for a resolution on the need for democratic governance of AI (Committee on Political Affairs & Democracy, 2019). Another COE committee, the Expert Committee on human rights dimensions of automated data processing and artificial intelligence in the COE, critiqued the “privatization of decisions about public values” emanating from the automation of decision making without “democratic participation or deliberation” about these values (Bernstein et al., 2019). Particularly troubling is the absence of “democratically legitimated oversight over the design, development, deployment and use of algorithmic tools”. The recommended democratic process is “to maintain an open and inclusive dialogue with all relevant stakeholders globally with a view to avoiding path dependencies” (Ministers’ Deputies, 2019).

Moreover, the COE Ministers’ Deputies (2019) issued a declaration on algorithmic manipulation and its violation of democratic ideals. The first mention of algorithmic manipulation is in the context of machine learning tools which have a

growing capacity not only to predict choices but also to influence emotions and thoughts and alter an anticipated course of action, sometimes subliminally. The dangers for democratic societies that emanate from the possibility to employ such capacity to manipulate and control not only economic choices but also social and political behaviours, have only recently become apparent. In this context, particular attention should be paid to the significant power that technological advancement confers to those – be they public entities or private actors – who may use such algorithmic tools without adequate democratic oversight or control.

In this declaration, the Ministers’ Deputies highlight that public and private institutions’ use of algorithms ought to be subject to democratic control particularly for their capacity to manipulate and undermine individual agency. Another related violation is using algorithms “to identify individual vulnerabilities and exploit accurate predictive knowledge, and to reconfigure social environments in order to meet specific goals and vested interests” (Ministers’ Deputies, 2019). Therefore, the Ministers’ Deputies are wary of the extent to which algorithms can be used in shaping political and social choices at the individual and society levels.

In spaces created for citizen inclusion, the political elite and experts have defined the scope for citizen deliberation: legitimating algorithmic decisions, as the lack of such legitimacy is perceived to be the barrier to a wider rollout of algorithms. This scope is based on an independent review of the AI industry by Wendy Hall, professor of computer science, and Jérôme Pesenti, CEO of BenevolentTech, submitted to the UK government. The report discusses the need to develop the transparency, explainability and accountability of AI decisions as some of the areas that would improve “public trust” in the fairness of algorithmic decisions (Hall & Pesenti, 2017). This legitimacy is perceived to be achievable when algorithms can explain their decisions. Problems pertaining to “explainability” include that the “right to explanation” in the GDPR is non-binding (Wachter et al., 2017) and that the GDPR underspecifies what constitutes an “explanation” (Hunt & McKelvey, 2019). Nonetheless, developers of automated decision-making systems prioritise explainability as it would “legitimate” algorithmic decisions (Hunt & McKelvey, 2019).

Citizens’ juries commissioned by the UK Information Commissioner’s Office (ICO) framed deliberation around the explainability of AI decisions, treating dependence on algorithms and automation as inevitable given investments in the industry (Leslie, 2019; Project explAIn: Interim report, 2019). Herein, as the scope and purpose are pre-defined by the experts, the “lot” of randomly selected citizens is used to shortcut public deliberation about what constitutes reasonable and acceptable algorithmic decisions.

The epistemic quality of the three citizens’ juries was constrained by the interests of their commissioners. The first was commissioned by DeepMind and organised by the Royal Society for the encouragement of Arts, Manufactures, and Commerce (RSA) (Footnote 3). This was a jury of 25–29 citizens from England and Wales held over four days between May and October 2018 (RSA, 2019). The following two citizens’ juries were held in Coventry and Manchester, respectively, in February 2019, co-commissioned by the ICO and the National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre. These two juries, with 36 jurors in total, were organised and facilitated by Citizens’ Juries c.i.c. and the Jefferson Center (Project explAIn: Interim report, 2019). The juries were centred on scenarios of algorithmic and automated decision making in three sectors of governance: healthcare, the labour market and criminal justice (Artificial Intelligence (AI) & explainability Citizens’ Juries Report, 2019; RSA, 2019).

Each commissioning organisation also constrained the juries by pre-set issue framing. The RSA citizens’ jury was grounded in democratic politics, highlighting the use of algorithmic decisions by “public bodies” as a “public interest issue” subject to “public deliberation” (RSA, 2019). By contrast, the juries in Coventry and Manchester were framed as deliberations about obstacles to trusting algorithmic decisions—the “trade-off between AI transparency and AI performance” (Artificial Intelligence (AI) & explainability Citizens’ Juries Report, 2019).

Despite differences in framing, a preference emerged across the three juries for an “explanation” from algorithmic and AI systems. Jurors placed the explainability of algorithmic decisions on par with the explainability of human decisions: where human decision makers are expected to justify their decisions, so too are algorithms (Artificial Intelligence (AI) & explainability Citizens’ Juries Report, 2019). Ultimately, explanation is articulated as a mechanism for ensuring accountability, supported “by establishing a legal requirement for explanation and a right to appeal” (RSA, 2019).

Inequality in free and effective voice

In spaces for expression to exercise free and effective voice, the political elite and experts also lead the deliberations. Examples include blogposts and op-eds (e.g. Dreyer & Schulz, 2019; Mchangama & Liu, 2018) and statements by the Parliamentary Assembly of the Council of Europe (PACE) rapporteur, Deborah Bergamini, at the Organisation for Economic Co-operation and Development (OECD) Global Parliamentary Network meeting in October 2019 (see “Darker side” of AI beginning to emerge, warns rapporteur, 2019).

At the EU level, collective inquiry is part of the European Data Protection Board’s public consultation process. The process, via an online website, collects the “views and concerns of all interested stakeholders and citizens”. The only consultation concerning automated decision making, with respect to Article 22, was open from February to May 2020 on a guideline adopted in January on the processing of personal data by autonomous vehicles (see Public Consultations, 2018). Experts herein define the agenda of collective inquiry.

As for civil society, its role is dual: informing citizens as well as including them in organising collective inquiry. For instance, the civil society organisation AlgorithmWatch communicates “stories” of algorithmic decision making in public life, from content moderation on social media and Google search results to policing and border control. Although spaces for expression have not been inclusive of citizens, citizens are included in collective inquiry into algorithms and in expressing dissent, as algorithms have a wider and more direct impact on citizens’ lives.

In organising collective inquiry, AlgorithmWatch, in its project “Monitoring Instagram”, asks individuals to participate by giving AlgorithmWatch access to their Instagram data to investigate whether Instagram’s algorithms discriminate between individuals (Help us unveil the secrets of Instagram’s algorithm, 2020).

Social media provides spaces for voice, but the extent to which these are effective is questionable. On the social media website Reddit, there is a space dedicated to GDPR discussions (i.e. the subreddit “r/GDPR”). Specific to Article 22 of the GDPR and automated decisions, a user (redditor) expresses dissatisfaction with a government agency’s response to their inquiry and seeks clarity from other redditors (Footnote 4). Follow-ups or responses from the respective agency to such opinions are unevidenced.

Experts’ concerns regarding the impact of algorithms on democracy and justice become more concrete when algorithmic decisions have wider and more direct effects on citizens. While making the changes necessary to prevent the spread of coronavirus, the UK mandated school closures and cancelled the sitting of exams. To mitigate the effects of this on school-leaving qualifications, the UK Office of Qualifications and Examinations Regulation (Ofqual) in August 2020 used an algorithmic system to assign students grades for their secondary certificate exams. Students protested the algorithmic decisions, highlighting the elitist and classist logic of the algorithm. Spaces for voice spanned social media, the occupation of physical spaces and interviews in news media (see “Huge mess” as exams appeal guidance withdrawn, 2020; Adams et al., 2020).
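To illustrate the kind of logic students objected to, the sketch below gives a heavily simplified, hypothetical rendering of the reported core of the standardisation approach: fitting each school’s 2020 cohort to that school’s historical grade distribution using the teacher-supplied rank order of students. It is not Ofqual’s actual model, and the cohort, grades and distribution are invented for illustration.

```python
# Minimal sketch, not Ofqual's model: impose a school's historical grade
# distribution onto the teacher-supplied rank order of its 2020 cohort.

def standardise(ranked_students, historical_distribution):
    """Assign grades to a rank-ordered cohort from a school's past distribution.

    ranked_students: student names, best first (teacher-supplied ranking).
    historical_distribution: fraction of past cohorts at each grade, best grade
    first, e.g. {"A": 0.1, "B": 0.2, "C": 0.4, "D": 0.3}.
    """
    n = len(ranked_students)
    grades, assigned = {}, 0
    for grade, share in historical_distribution.items():
        quota = round(share * n)                     # seats available at this grade
        for student in ranked_students[assigned:assigned + quota]:
            grades[student] = grade
        assigned += quota
    for student in ranked_students[assigned:]:       # rounding leftovers get the lowest grade
        grades[student] = list(historical_distribution)[-1]
    return grades


# A historically low-attaining school: even its top-ranked student cannot be
# awarded an A, because the school's past distribution contains none.
cohort = ["student_1", "student_2", "student_3", "student_4", "student_5"]
past = {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.4}
print(standardise(cohort, past))
# {'student_1': 'B', 'student_2': 'C', 'student_3': 'C', 'student_4': 'D', 'student_5': 'D'}
```

On this logic, a strong student at a school with no history of top grades cannot receive one, which is the classed pattern the protests targeted.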

This case of attempting to democratise algorithms reveals the limits of institutionalised political justification, particularly one institutionalised through epistocratic and lottocratic shortcuts. Inequality across influence, voice and reasons demonstrates the ramifications of exclusion for self-government. When the experts or the citizens winning the “lottery” of democratic participation decide what is reasonably acceptable, decisions that affect the wider citizenry expose the misalignment between the reasons arrived at via shortcuts and what society considers a reasonable justification of the coercive power of algorithms. The dissent over the Ofqual algorithm demonstrates that aspirational political deliberation and justification should not be downplayed, especially when decisions have a widespread and direct impact on citizens.

Shortcomings of shortcutting self-government

As seen in this illustrative example, epistocratic and lottocratic shortcuts fall short because they exclude affected citizens. Although these shortcuts institutionalised an explanation for algorithmic decisions, the process excludes citizens from authoring the decision to automate and from scrutinising the choices informing algorithms. The minipublics about democratic control of algorithmic decisions, for instance, cannot be regarded as the final and ultimate step in deliberations about algorithms. The issues and concerns were selected by experts and do not cover the full range of algorithmic decisions. Moreover, the randomly selected lot of citizens is not necessarily inclusive of those affected by these decisions, such as the students impacted by the Ofqual algorithm. For these reasons, minipublics can count as one small step but not as the ultimate remedy to the democratic shortcomings of algocracy.

Furthermore, experts attempting a remedy to algocratic harms have preferred to institutionalise a process of “explanation” to implement some degree of democratic control over algorithmic decisions. This approach is, however, limited in terms of allowing citizens to author whether they want algorithms to make decisions that affect them in the first place. Reflecting on the case of the citizens’ juries in the UK, the juries were a shortcut past the more abstract question of the role of algorithms and past a closer look at the choices programmed into applications pertaining to healthcare and criminal justice. For instance, one of the scenarios, on the algorithmic matching of organ donors and recipients, involves concerns for justice in organ matching, but these were not approached in the juries.

As explained earlier, the limited scope of using epistocratic and lottocratic shortcuts to democratise algorithms manifests in the ramifications of algorithmic decisions, as in the Ofqual example. Shortcuts are exclusive and often limited in terms of the authenticity of their deliberations.

The objection to shortcuts here concerns the use of single “one-off” processes of deliberation and scrutiny among the elite or randomly selected citizens, which are insufficient to address the democratic harms in algorithmic governance of society. In the words of Lafont (2019): “There are no shortcuts to make a political community any better than its members, nor can a community move faster by leaving their citizens behind”. As I briefly discuss below, rather than through one-off shortcuts, the algocratic shortcut is best remedied by “aspirational” public deliberation.

Remedy through aspirational deliberation

Algocracy might be widely represented as a unique and exceptional challenge. Earlier works in democratic theory show otherwise, and the limitations identified then remain relevant today. Theorists like Frank Fischer (1999) have raised concerns over the democratic shortcomings that arise where uses of algorithms and automation “obscure the need to debate basic social choices embedded in technological development”. This challenge persists. It is also why an argument for aspirational deliberation ought to be made. If self-government is a condition where citizens can “endorse [institutions, laws, and policies] as their own” (Lafont, 2019, p. 4), then representing algorithms as epistemically superior, fault-free and neutral negates the grounds for reasonable endorsement. Hypothetically, if algorithms exhibit non-human judgement that is neutral and superior, why should their decisions be judged or scrutinised? Algocracy is bolstered by arguments about the epistemic quality of algorithmic decisions and an illusory, idealised algorithmic “neutrality” (despite evidence to the contrary). It leaves out citizens, their values, judgements and considerations for realising self-government.

Shortcuts attempt to simplify the problem of the division of labour in democracies. Necessary as they can be, I share Lafont’s objection to shortcuts construed on the basis of “blind deference”, where the power, judgements and decisions involved (and emerging) are uncontested and unjustified. Recent thinking in deliberative democracy through the lens of “deliberative systems” attempts to define a normatively desirable division of labour in three ways. The first depends on the complementarity between spaces to achieve ideal deliberation. Individual communicative spaces are not expected to be inclusive, authentic and consequential all at once; rather, each achieves one or another of these qualities. For example, “public spaces” ought to be inclusive whereas “empowered spaces” ought to be consequential (Dryzek, 2010). Thompson (2008) thinks of “justification” as a function of time, where “every practice should at some point in time be deliberatively justified”. Finally, the third approach synthesises these arguments about deliberative labour across spaces and time. Ercan et al. (2019) clarify that deliberative systems thinking needs to theorise the ideal “sequence”: deliberation should originate in spaces for listening, be weighed in spaces for reflection, and then become consequential in spaces for decision. Instead of leaving justification to transpire over time or to emerge from the deficits of deliberations in multiple spaces, this last approach to the division of labour demands that justification (via reciprocal reason-giving) be central to listening, reflecting and deciding.

But these approaches assume that certain conditions pre-exist before ideal self-government can be sought. For Thompson (2008), equal capacity to influence decisions is an important enabling condition for deliberative justification. With a focus on emancipation, Böker (2017) argues that deliberative democracy should be conceived of as a political culture driven by the “right to justification” writ large. This means that instances of minipublics do not suffice. Instead, citizens, civil society and other political actors should be able to participate in demanding justification and be reciprocated with reasons from the respective institutions. Therefore, realising a remedy that expands influence beyond instances of lottocracy requires systemic scrutiny of the contributions of experts to public discourse and deliberation (Bohman, 2000). As with expert knowledge, a level of “algorithmic awareness” is needed in light of expanding algocracy. This awareness would counter the risk of “amplifying deficit that existed in the first place, weakening the condition for an informed public and democratic participation” (Gran et al., 2020). In sum, while a division of labour discerns responsibilities and expectations, it requires enablers: a culture. And this is the position of aspirational deliberation: to be the orienting culture for norms, values, considerations and judgements about (un)acceptable coercive power (Footnote 5).

The intent in this article is not to present a blueprint for aspirational deliberation. It is to showcase its merits for remedying the algocratic shortcut with respect to questions unaddressed by “shortcutting a shortcut”. The democratic merits of aspirational deliberation here are two-fold. First, the inclusion of society in informing the choices that algorithms automate. This involves deliberation about questions such as: How can citizens and stakeholders contribute to making algorithms that reflect values they deem desirable? (Wykstra, 2018). It also involves establishing steps to be in “dialogue with technology” and effectively channelling citizens’ critiques (Bunz, 2017) in ways that are citizen-led rather than advanced through shortcuts. Second, ensuring at the level of institutional checks and balances that public voice and democratic ideals are observed in deciding whether or not to adopt algorithmic governance (Benvenisti, 2018). However, for these aspects of aspirational deliberation to materialise, scrutiny and the consideration of reasons are crucial.

To further illustrate, scrutiny of the role of the developers and designers of algorithms who encode hierarchies in automated decision systems is scrutiny of “the power of those who set the priorities” (Fischer, 1999). Critiques of automated decision making focus on the lack of human discretion and democratic governance of algorithms rather than expressing a need to halt automation altogether. Etzioni and Etzioni (2017), for example, suggest as a golden rule that AI-powered decision-making systems with moral and social implications require “ethical guidance” from the respective societies in which they operate. In terms of deliberative capacity, this means that the source of this ethical guidance ought to be inclusive and authentic deliberation, and its consequentiality lies in the creation of reflexive and society-guided algorithmic systems.

At this point, defining what connotes good democratic scrutiny beyond the virtues of reason-giving and justification would undermine ideal self-government. Taking the example of using algorithms in the court systems in the U.S., Deeks (2019) highlights how the specific details of an explanation should be determined by the agencies using these systems themselves to “identify errors and biases within the algorithm and aligning the form of [explainable] AI in a given case with the needs of the relevant audiences”. Under a deliberative democratic framework, “citizens develop[e] their own criteria for accepting or rejecting” technology based on their understanding of technology’s impact on their lives (Fischer, 1999).

These examples of “aspirational deliberations” about the role of algorithms in society can be slow, since public reason must be fully considered “before imposing coercive policies on others” (Lafont, 2019). A shortcut that establishes a mechanism to trigger institutional political justification for algorithmic decisions is arguably easier. Yet it is insufficient for achieving democratic self-government, as this “explainability” applies to algorithmic decisions at the individual level and not to the decision to use algorithms and automation. A society-wide justification hence needs to be more present and mobilised beyond lottocratic and epistocratic deliberations. The challenge herein for democracies is to be reflexive and responsive to the voices and reasons of citizens, and to give them influence over decisions about democratising algorithms comparable to that given to experts and randomly selected citizens.

Conclusion

Grounded in epistemic justifications, the algocratic shortcut not only matches the democratic problems of the epistocratic shortcut but accentuates the democratic deficits of the other shortcuts. The algorithmic governance of society is justified in epistemic terms, while algorithms are perceived to stand above democratic politics. The algocratic shortcut highlights the problems of institutionalising political justification through other shortcuts. Remedying the algocratic shortcut, therefore, depends on advancing aspirational political deliberation and justification by sustaining scrutiny and deliberation rather than relying on instances of epistocratic and lottocratic shortcuts.

Through aspirational deliberation, citizens are included in authoring what constitutes reasonable deference between humans and algorithms that brings good to society. The case for aspirational deliberation here is based on two democratic limitations to self-government in the algocratic shortcut. The first is institutionalising justification through shortcuts. The second is inequality across influence over decisions, reasons and voice, whereby the governing of algorithms is dominated by experts rather than shared with citizens.

Overall, this article puts forward a two-part argument. First, I argued for examining the roles of algorithms in institutions governing public life through the lens of Lafont’s (2019) examination of shortcuts to democratic self-government. Second, I argued against relying on other shortcuts in an attempt to democratise algocracy, while acknowledging some of the achievements in this respect made through shortcuts. In reflecting on this two-part argument, I highlighted the potential for examining the question of “sequence” between listening, deliberation and action. There is a potential for aspirational deliberation to serve the ends of exercising democratic control over algorithms, particularly to author whether, and under what conditions, society accepts algorithms making decisions that govern public life. This argument for aspirational deliberation is an opportunity to further theorise other considerations for a democratic division of labour, between humans and between humans and machines.