Published by De Gruyter, October 9, 2020

Introduction: Philosophical reflection and technological change

  • James Tartaglia and Stephen Leach
From the journal Human Affairs

Technological change has now reached a speed at which people can observe it changing their lives. Somebody born in 1910 might remember first seeing cars and aeroplanes, and then, in their twilight years, have noted the rise of computers, perhaps even used them. The life they lived would probably have been seriously affected by technological change, such as when they first acquired a washing machine. Somebody born in 1970 may now spend most of their waking hours on a computer, and will have witnessed their own personal transition to this new way of living as a human being. They will currently see the town centres that probably provided a focal point for their youth being emptied, as retail and leisure move online; if they have children, they will see in them a preoccupation with a virtual world that can be accessed just as well from home as from a different physical location.

You might think that how somebody born in 2030 will see their lives affected by technological change is something we can only guess at. To some extent you would be right, of course: for every prescient anticipation—Leonardo da Vinci’s aircraft or Margaret Cavendish’s motorboats—there are a host of misfires; Ridley Scott’s Blade Runner (1982) can imagine the super-intelligent robots and flying cars which did not populate Los Angeles in 2019, but not flat screen TVs. However, to think of the technological change of the future as essentially a guessing game is a dubious and disconcerting notion. Surely we should be thinking hard about how we want our future to develop, and making sure the conclusions drawn from our discussions are put into practice. We will never be able to exactly determine the future, just as we cannot exactly determine our own individual lives, but we can at least hope to determine the general direction by conscious and collective reason. Marx famously said that, “The philosophers have only interpreted the world, in various ways. The point, however, is to change it.” Arguably, philosophers are not currently changing the world, except for dead ones whose ideas have been absorbed so thoroughly that they are barely reflected upon anymore. Certainly, technologists are changing the world. The main question we put to our contributors was: should philosophical reflection have a role to play in this?

This special issue opens with Ibo van de Poel distinguishing three different perspectives on technological change in current debates about Artificial Intelligence (AI)—AI being perhaps the most dramatic new technology currently envisaged, as it has been for a long time now, not least in Blade Runner. The first of these perspectives is technological determinism, the idea that technological change is an autonomous force determining the direction of society. The second is that technology is created by people in accordance with their interests and values. The third is a co-evolutionary perspective according to which neither exclusively determines the other, since the two evolve together and influence each other. As Van de Poel emphasises, many advocates of the second perspective, human deliberation and choice, would be willing to grant the existence of co-evolution. But what makes this third perspective distinctive, he argues, is its emphasis on the unintended novelties thrown up by technological change, and therefore also the unforeseen consequences which cannot be governed by prior rational deliberations. Van de Poel argues that current debates about AI would benefit from greater focus on this third perspective. A fourth option is provided by Barry Allen, whose paper is based on the overlooked work of Gilbert Simondon in his 1958 book, On the Mode of Existence of Technical Objects. On this conception, ‘technological change’ is something of a misnomer, because the radical changes to our lives which we observe are not a product of technological innovation as such, but rather of changes in the infrastructure of technology, that is, in our manner of using technology and integrating it with our routines.

The essay by Stephen Leach is the first of three to step back into the history of the philosophy of technology. Leach’s paper, which takes as its focus Bertrand Russell’s social and political writings, is no less than an assault on common sense. Russell, as Leach explains, saw these works as distinct from his philosophical work—a matter of journalism rather than philosophy, and an expression of common sense in a world in which Russell worried that our development of nuclear weapons threatened the continued existence of human beings. As Leach points out, Russell’s social and political writings, as well as his activism, were quite effective. And yet common sense itself is the problem, as Leach sees it, rather than something we might turn to for salvation. The task of philosophy is to challenge common sense in order to change it, and the common sense of our own day is particularly in need of changing, since it tends to take continued technological change along lines we are already familiar with, and which many dread, as an inevitability. In short, contemporary common sense has internalised the first of Van de Poel’s perspectives on technological change, namely technological determinism, and hence is in need of reform.

Kieran Brayford also targets technological determinism, which he finds embodied in the philosophy of Martin Heidegger, who had a foundational influence on 20th century philosophy of technology. Brayford takes technological determinism to be one of two prevalent myths (two aspects of common sense, Leach might say), the other being the ‘myth of progress’; the latter is a major theme of the work of John Gray, whom Brayford discusses. The combination of these two myths makes technological change dangerous, argues Brayford, but he finds a more adequate approach to the philosophy of technology in the work of Heidegger’s one-time student, Herbert Marcuse, which, when combined with insights from philosophers such as Hans Jonas (another of Heidegger’s students), as well as contemporary philosophers like Gray and Luciano Floridi, might allow philosophical reflection to enter more significantly and beneficially into practical deliberations about technological change. Then in the next essay, Ferreira de Souza looks back into the history of Marcuse’s approach, the one which Brayford considers the most promising for addressing our current concerns. Tracing it to the romanticism of Friedrich Schiller, and ultimately Kant’s Critique of Judgement (1790), Ferreira de Souza emphasises the importance of aesthetic considerations in reflection on technological change, so that we might develop a new sensibility more in tune with nature.

Joseph C. Pitt’s paper punctuates proceedings with a strong note of scepticism. Philosophy is not influencing technological change because there is no such thing as the philosophy of technology, just a bunch of philosophers and political scientists pointlessly talking to each other about philosophical issues. If philosophers want to influence technological change, they need to learn about specific technologies and get involved with their development. Gaining some practical experience in industry will make them better philosophers. Matthew Dennis focuses on a particular kind of technology in his paper, namely self-care apps based upon Stoic philosophy; the ‘Stoic’ app for Apple devices, for example, provides the user with a regular feed of exercises on their phone based on the teachings of the ancient Stoic philosophers and is designed to help us ‘cope with stress, increase productivity, build resilience and confidence’, as it says on its webpages. Dennis argues that these apps could be improved by heeding some of the lessons to be found in Foucault’s reflections on the value of Stoic philosophy for contemporary life.

Katherine Dormandy coins the term ‘digital whiplash’ to describe our inability to cope with the pace of technological change. She argues that naturally evolved human cognition is badly equipped to deal with the rapid transition to digital life we are all undergoing, with her particular focus being digital surveillance. What is needed is time to reflect on the bigger picture, so as to ensure we are building a future that reflects our values. And yet the cognitive skills needed for this kind of reflection are exactly those that increasing reliance on digital technology degrades, while the areas of the humanities which cultivate them, such as philosophy, are now struggling for public recognition and funding. Many might reply that we are choosing to transfer our lives online in this manner, that we give our consent every time we open a webpage or buy into a new technology, but Alkim Erol argues that ‘freedom’ and ‘consent’ have themselves been appropriated and transformed by digital technologies. Inspired by the work of Gilles Deleuze and Félix Guattari, Erol argues that our desires are now shaped by digital capitalism. To take a key example from the paper, and one which also features in Barry Allen’s paper, since the system profits from learning our personal details, we have come to actively want to display these details in the public domain – we seek ‘likes’. Justin Cruickshank looks for the resources needed to encourage a radical critique of contemporary global techno-capitalism within the pragmatist philosophy of Richard Rorty, Paulo Freire’s views on education, and Gianni Vattimo and Santiago Zabala’s ‘hermeneutic communism’.

The question of autonomy and freedom is again taken up in the paper by Elena Popa, this time in the context of the endeavour to create artificial life. The aim is to create machines which exhibit genuine autonomy, but unlike with biological life, Popa argues, the behaviour we observe in these machines is designed in accordance with human goals – and these are goals that need to be spelled out and evaluated. Luis de Miranda also sees AI as inextricably entangled with human goals. Our tendency to overlook this, combined with an increasing blurring of boundaries between humans and machines, has led us to see the world against what he calls an ‘anthrobotic’ horizon, one which locks us into a disenfranchising belief in technological determinism. De Miranda’s proposal for a solution, which he labels ‘Crealectics’, combines a number of suggestions we have seen in previous papers, such as a casting off of the contemporary common sense of technological determinism, and the development of a more aesthetic sensibility attuned to reflected desires, one which attends to the big picture of what human life amounts to and where we want to take it.

Following these discussions of the appropriateness, or otherwise, of directing an objectifying gaze towards the machines we design, Ashley Shew ends this special issue with a personal and arresting complaint about the manner in which disabled people are sometimes treated as objects, often ‘as less than human’, within discussions in the philosophy of technology. The use of ‘underthought’ experiments, where particular disabilities are used as examples to illustrate some philosophical point or another, serves to further marginalise disabled people, she argues, while narratives of technological change produced by disabled people diverge dramatically from those of transhumanists who envisage a future without disability. The eugenic elimination of disability is – it is worth remembering – a technological change people have tried to implement and the idea has not gone away.

Published Online: 2020-10-09
Published in Print: 2020-10-27

© 2020 Institute for Research in Social Communication, Slovak Academy of Sciences
