1 Introduction

Having undergone three industrial revolutions, each of which transformed the way people live all over the globe, humanity seems well on its way towards a fourth. The ability to simulate intelligent behaviour through what we call “AI” has proven invaluable to our species. For, AI enables us to pursue our goals in numerous core domains (including industry, business, healthcare, transportation, warfare, surveillance, and security) more efficiently and effectively than ever before. Amongst its diverse applications, the AI of today is able to pilot airplanes, detect underwater mines, diagnose diseases, explore space, play games, and help to store and manage inconceivable amounts of data. It is moreover beyond doubt that the power and scope of AI applications will continue to increase at an unprecedented pace.

The recent volume ‘Smart Technologies and Fundamental Rights’ (2020, Brill), edited by John-Stewart Gordon and published as part of his series Philosophy and Human Rights, constitutes a timely scholarly contribution addressing the many legal, ethical, socio-political, and technical challenges that accompany the immeasurable opportunities afforded by AI as a rapidly evolving field. The importance of this contribution is underscored by the fact that our legal and ethical reasoning all too often lags behind technological developments, which can render these technologies hazardous to the well-being and values of society. Relatedly, ‘Smart Technologies and Fundamental Rights’ exemplifies both proactive and reactive reasoning about AI ethics and law, positioning it to contribute both to solving problems in the here and now and to anticipating those of the future.

2 The Interview

What follows is an interview that Brill conducted with John-Stewart Gordon to mark the publication of the aforementioned volume. In it, Gordon touches upon a number of important issues, including robot rights, AI accountability, machine bias, AI laws and policies, and social media shaming. The interview captures some of the main challenges that may prevent humanity from reaping the full benefits of different AI systems, as well as the ways in which these challenges may demand changes of perspective within traditional ethical and legal paradigms (consider, for instance, the question of robot rights in particular). The interview thus illustrates how ‘Smart Technologies and Fundamental Rights’ may help to advance many important debates at the intersection of AI, law, and ethics, and why such scholarly contributions are of utmost importance to the future of humanity in a world of AI.

1. How does ‘Smart Technologies…’ contribute to the moral and political discussion about AI and robotics? What do you think is the role of philosophy in this debate?

This book contains 14 comprehensive and challenging chapters at the cutting edge of ethics, socio-political philosophy, law, and information sciences, written mostly by senior scholars alongside some promising young academics. The ground covered encompasses moral status and robot rights, AI governance, AI and law, healthcare, and social media, as well as issues relating to AI standardization and machine bias. This volume’s coverage is broad, affording substantial insights into many of the current debates in the context of AI and robotics, which are of central importance to the ongoing discourse in moral and political philosophy. The main role of philosophy in these debates is to review and systematize arguments and objections, and to analyze and clarify key concepts (e.g., robot rights, moral status). Furthermore, a philosophical perspective, for instance the relational approach in the context of social machines, is indispensable to solving complex moral and socio-political problems (e.g., machine bias, issues relating to privacy and surveillance, autonomous transportation, AI governance), making reasonable suggestions while highlighting pitfalls with regard to the widespread application of AI and robotics. Philosophy, then, is poised as a methodology that can further the flourishing of human society in an era of increasing automation.

2. The book makes a distinction between fundamental rights and human rights. Can you briefly explain the difference between the two? Why would it be of interest to grant robots (fundamental) rights?

The distinction between fundamental rights and human rights is of utmost importance. All human rights are fundamental rights, but not all fundamental rights are human rights. For example, it is generally agreed that (higher) animals, like great apes and elephants, enjoy some fundamental rights, including the right not to be harmed or killed. Since animals are by definition not human beings, they cannot enjoy, stricto sensu, human rights when these are defined according to species membership. Rather, the concept of personhood substantiates their claim to adequate moral and legal protection. Likewise, some environmental rights are of utmost importance and are therefore fundamental, owing either to their intrinsic value or to their instrumental value for human beings. Again, these rights are not peculiarly human, since the environment does not belong to the human species, though it nevertheless demands protection. Against this background, one could entertain arguments in support of, for example, robot rights, at least once intelligent and autonomous robots exist and potentially match (or even exceed) human capabilities. Such robots would not be entitled to human rights (owing, of course, to their lack of humanity); still, they would justifiably enjoy fundamental rights based on, and in relation to, their technological sophistication.

3. An important aspect of (having) rights is the principle of accountability. When someone violates another’s rights, or shows grave negligence toward them, they can be held accountable through various mechanisms, such as the rule of law. How do you see this with AI and robotics? What would constitute an effective mechanism for holding robots accountable once they are granted rights? Could such robots likewise hold others accountable?

I think it is important to distinguish between two different scenarios with respect to holding robots legally accountable for their actions. The first concerns current and near-future situations whereby, for instance, machines in the context of autonomous transportation are held, or should be held, legally accountable for their mistakes, at least if neither the driver, the car producer, the engineer, nor the informaticist is to be thus implicated. In that event, one could, for example, introduce a compensation scheme based on a dedicated insurance policy for autonomous vehicles, which must be in place before the car is used. The second scenario concerns mid- and long-term situations whereby robots’ intelligence equals human capabilities, such that robots would be able to cause damage or harm others in a morally comparable sense. In this context, it is interesting to consider strategies similar to those in place for human beings. It has moreover been suggested in the literature that one could reprogram the robot in such cases (which amounts to brainwashing) or delete the program altogether (which would be tantamount to the death penalty). Furthermore, if some intelligent robots have fundamental rights, then they should be able to hold other beings accountable. What do we owe to intelligent robots? This question is certainly of great importance in cases such as patent law. For instance, who owns the profit generated by inventions devised by intelligent robots who have a moral and legal status? These and related questions must be discussed in more detail to arrive at fair conclusions.

4. Do you think AI and robots, as intelligent and autonomous systems, can be involved in policymaking and law? Ought they to be?

The application of AI is deep and widespread. It extends to almost all domains of human life and affairs, including the fields of governance and law. The biggest problem is that we currently do not know how to solve issues relating to machine bias and the so-called “black box problem”. It is certainly not recommended, for example, to use AI algorithms trained on historical data as a support system for judges (see, for instance, the COMPAS scandal in the US, where it was revealed that the system exhibits a significant bias against African-Americans). Nor is it recommended to use such systems in sensitive fields such as governance, in which it is essential that decision-making is transparent and can be clearly explained, thereby heeding citizens’ right to explanation. Current AI systems are considered black boxes, viz. non-transparent, which causes problems concerning the aforementioned principle of explainability. At this time, it seems fair to suggest that one should not use such systems unless they meet some reasonable thresholds. Indeed, it would be irresponsible to apply deficient AI systems in sensitive areas where they may jeopardize the welfare of human beings while undermining key moral values, like justice and equality.

5. Will we see a change of power distribution with the advance of AI, or merely a reinforcement of current power structures?

Whether the advance of AI will lead to a change in power distribution or a reinforcement of the status quo ultimately depends upon AI’s availability. The idea of a so-called “Open AI”, broadly available through the internet (thereby ensuring that general AI benefits all humanity), could become a viable option in the move towards a redistribution of social and societal power. Those in a position to make use of AI would then be able to do so free of charge and to any end they choose. However, the possibility of an open-access option for general AI systems (once they exist) could likewise be called into question in virtue of the possible misuse of AI for transgressive personal ends. This is a serious problem that must be examined in greater detail. On the other hand, the reinforcement of current power structures through the development of AI by states and big companies (like Facebook, Google, and Amazon) is already in evidence. I do not have a quick and easy reply to this complex question, and its solution may necessitate a joint effort on the part of different stakeholders in human society.

6. What is the place of social media shaming in the debate on fundamental rights and smart technologies?

There are at least two centrally important functions of social media shaming (henceforth: SMS) in debates about fundamental rights and the use of smart technologies. First, SMS is an important tool for raising awareness in the general population regarding serious social and moral issues, for instance a violation of a fundamental right through the application of a given AI system. Second, SMS can be leveraged to compel companies or the state to change or cease using certain AI systems, with the aim of averting serious harms to individuals, groups, or the general public. Well-known examples include Google Translate, which did not honour gender-neutral language, Amazon’s recruitment tool, which exhibited gender bias, and COMPAS, which perpetuates racial bias in law. Excepting the continued use of COMPAS (which constitutes a grave human rights violation), the other AI systems mentioned have been improved (Google Translate) or retired (Amazon’s recruitment tool) thanks to SMS. The protection of fundamental rights is of utmost importance and requires public awareness and sensitivity with respect to many socio-political and moral issues. The democratic system should not be taken for granted: it must be defended on a regular basis. One way of defending the democratic system, which is undergirded by fundamental rights, is SMS.

7. ‘Smart Technologies…’ is the first volume in a new subseries on philosophy and human rights. As the editor of the series, can you tell us something about other forthcoming volumes? How can potential authors propose their work, and what sort of material will be considered for publication?

The series Philosophy and Human Rights provides a venue for outstanding scholarship on contemporary and emerging issues in human rights theory and practice in philosophy. The series favors monographs on human rights at the vanguard of ethics/moral philosophy, social and political philosophy, and law. Potential authors whose manuscripts meet these criteria may submit proposals or full manuscripts to the general editor for inclusion in the subseries. We look forward to further submissions, notwithstanding the fierce competition among the numerous book series in academic publishing. The next volume, “A Legal Justification of Academic Freedom as a Fundamental Right”, is being written by Ausrine Pasvenskiene (Vytautas Magnus University).

3 Conclusion

In conclusion, the advent of AI has brought about a number of significant legal, ethical, socio-political, and technological challenges that need to be met as soon as possible. Brill’s interview with Gordon succeeds in underlining the role played by the humanities and philosophy in devising solutions to these challenges. Both the development of AI systems and AI laws ultimately need to be sensitive to fundamental ethical and social values, which are the proper objects of philosophical investigation. Thus, one could argue that the role of philosophy is not only to systematize and clarify arguments and key concepts, but also to take part in finding solutions to the most fundamental axiological problems.

The interview is also especially illuminating in outlining the diversity of challenges relating to AI and smart technologies. One could point to technical issues in machine learning, such as machine bias and the “black-box problem”. One could also mention the associated legal and socio-political problems, including so-called “responsibility gaps” with regard to the development and decision-making of autonomous systems, fairness and equity in access to AI and its use, and considerations regarding how AI affects existing power structures. In this regard, it is worth stressing that many of these problems are complex and multifaceted. For instance, “Open AI” has clear benefits (ease of access) as well as shortcomings (the possibility of personal misuse), both of which need to be taken into account. Similarly, the question of power structures concerns not only Tech Giants (like Facebook, Google, or Amazon) but also smaller businesses and the impact of AI on the job market more globally.

Finally, as noted in the interview, smart technologies have given rise to new social phenomena, such as social media shaming. Although the interview depicts some of the important positive aspects of this phenomenon, social media are also being used as platforms for hate speech, slander, defamation, disinformation, and spam; indeed, AI bots are being deployed deliberately to these unhappy ends. The ultimate upshot is that the challenges relating to AI and smart technologies are among the most complex and important issues requiring resolution in the coming decades. Owing to this, scholarly contributions on such issues will go a long way towards making the inevitable Fourth Industrial Revolution as safe as possible.