“Where is the wisdom

That we have lost in knowledge

Where is the knowledge

We have lost in information”

T.S. Eliot (The Rock, 1934)

In the age of Taylorism, automation and industrial rationalisation in the 1970s, the Human Centred movement was concerned with the ‘skill’ gaps created by deskilling through the use of computer-controlled machines, which captured the skill of the best workers and deskilled those who followed ‘preordained’ Tayloristic routines. With the rise of the expert system in the 1980s, these concerns shifted to ‘knowledge’: the knowledge of the expert was being explicated, objectified and turned into logical rules, and cognitive models of mind raised the spectre of the demise of common sense and the diminution of the tacit dimension. From the 1990s onwards, with the rise of the Internet, neural networks and genetic algorithms, we became aware of the potential of intelligent applications in robotics, medicine, business and finance, and consequently of the ethical implications of these technological innovations. Now, in the era of ubiquitous technology, with big data and the Internet of Things, we ponder how we have allowed ourselves to be led, and our world to be transformed, by a mountain of big data, connected to the Internet, monitored and directed by computational algorithms, and built upon quantifying and measuring what we know, leaving very little room for uncertainty, ambiguity, anticipation or reflection on known unknowns and unknown knowns. Then what? One wonders what will happen to us in the digitised world if we end up excluding the human from the human–machine–human cycle. More importantly, what is it all for? What is the purpose of the Internet of Things, and whose purpose does it serve?

We are being led to believe that the ‘Internet of Things’ is going to change the quality of our lives. It is just a matter of time, we are told, before the gadgets and services in our homes, at work and in our surroundings, including our physical beings, are connected to the Internet, continuously monitoring, directing, modifying and adapting our relationships with the physical world, including our movements, our well-being, our interactions and our relationships. It is suggested that wireless sensor systems will monitor food from production to harvest and consumption; that the Internet of Things will help us live longer, create businesses that are more productive and profitable, and develop methods to save our planet. Since the Internet is already changing our lives on multiple levels, including healthcare, safety, business, energy efficiency and overall quality of life, the Internet of Things is being propagated as a natural step in the continuing march of technology.

Whilst there may exist some exciting possibilities for Internet-connected objects, there is concern as to how we can cope with the risks and crises that accompany and result from them. Although there is talk of designing smart devices for preserving our core rights to privacy and security, there is also an awareness that ‘it is rare for technology to entirely solve the challenges which technology creates, so we need new privacy laws that are savvy and wise’. In ‘Sales Pitches From Your Refrigerator’, Ryan Calo says that, ‘ultimately, we need to think comprehensively about the impact of new technology on a range of values, and head off efforts to turn our appliances into salespeople’. Aleecia M. McDonald, in ‘Better Engineering, and Better Laws’, says that ‘most people want what a data-driven future can provide, but we have learned the hard way that we cannot trust companies or governments to exercise basic decency and restraint in collecting our data. Where a new technology involves the collection of information, data tend to be the focus of scrutiny. And for good reason: the Internet of Things could provide an unparalleled window into the consumer’s home life. But we should not lose sight of the fact that smart and networked things will not just record our world, but also will act upon it’. Although the Internet of Things may present not just ‘a window to peer through, but a door upon which to knock’, we should be alert to the possibility that there may not be anybody there to open the door; it may just be closed for ever.

The BCS recognises the concerns of privacy, transparency and accountability arising out of the collection and analysis of the enormous quantity of information, and alerts us to the challenges of understanding and managing the ‘complexity of interacting systems that will underpin critical social infrastructure’. In addition to the concerns about the inadequacy of data protection regimes to ensure privacy and protection of individuals, there are also concerns for security and the protection of critical infrastructure itself. Moreover, could the algorithmic vision of the Internet of Things be alert to the ‘complexity and the unforeseen—and arguably unforeseeable—consequences of the interactions between complex, large, distributed systems acting in real time, and with consequences that go very directly to the wellbeing of individuals and communities?’

Tim O’Reilly’s concept of ‘algorithmic regulation’ is seen by some public officials as a resource ‘to make the smartest, most efficient decisions, as fast as possible’. Moreover, it is suggested that this trend of algorithmic decision making ‘can be based on what we know, instead of what we think. This can only lead to better policy and governance’. Such a vision of the algorithmic society seems to miss the point that ‘what we know’ is not the same as what we should know, nor does it anticipate what we ought to know. ‘What we know’ reflects merely reality as we observe it, encompassing our biases and prejudices, and not the actuality of the world as it is, encompassing the multiplicity of social, cultural, economic, political and ethical dimensions; these dimensions are not amenable to algorithms. The irony is that by excluding ‘what we think’, this algorithmic vision diminishes what we think, and consequently what we would have known: the very dimension that makes us human.

Evgeny Morozov takes issue with O’Reilly’s vision of algorithmic regulation, attributing to it the ‘belief that big data, harnessed through collective intelligence, would allow us to get at the right answer to every problem, making both representation and deliberation unnecessary. After all, why let contesting factions battle it out in the public sphere if we can just study what happens in the real world—with our sensors, databases, and algorithms?’ The danger of this data-driven, techno-centric vision of society is that both the designers of technology and the regulators and monitors of technological systems tend to act as risk-averse agents situated in the universe of quantitative measurement, rather than as crisis-management collaborators who anticipate disaster and build in proactive mechanisms and processes for when it occurs. This is illustrated by cases such as the explosion at Japan’s Fukushima nuclear plant and the recent disaster in the financial systems, both arguably products of, and governed by, the algorithmic culture.

In the midst of this talk about the Internet of Things, we can revisit the issue of the data-driven society raised in a previous AI&Society editorial (AI&Society, Vols. 28.2 and 28.3), and reflect on whether we are slowly heading towards a data-driven future, embracing a trend of seeing our complex human systems (social, institutional and organisational) as mere cause–effect and input–output feedback data systems. Or do we still have the vision and nerve to mould emerging technologies, such as neuroscience, nanotechnology, biotechnology, cultural robotics, genomics and disruptive innovations, for the benefit of humanity at large? These technological waves also raise questions about the impact of emerging technologies on society. We should not only be mindful of, but also bridge, the gaps between social responsibility on the one hand and algorithmic specification on the other; between those researchers who are ‘pushing technological boundaries’ and those who are interested in the wider social, cultural, ethical and economic implications, including practitioners and policy makers, all of whom share the responsibility of affecting and influencing broader technological policies and use environments.

The Internet of Things, envisioning the ‘inclusion of internet-enabled computer chips’ in everyday gadgets, from washing machines to cookers, fridges to garden plants, cars, wearables and bodily extensions, goes far beyond the ‘dystopian horror’ depicted in George Orwell’s Nineteen Eighty-Four. The envisioned technology would monitor not only our physical being but also our thoughts and thought processes: a scenario of over-arching control, surveillance and monitoring of our existence. John Naughton warns us that even if we choose not to join the Internet of Things bandwagon, we simply cannot avoid being monitored and directed by big data, with its inbuilt chip technology, in what we buy, wherever and whenever we travel, what media we watch, what phone we use and even when we go for a walk. He points out that ‘predictive analysis’ techniques can be used to construct or deconstruct data about us without our knowledge or direct involvement in Internet of Things networks. Naughton further points out that it is not clear what protection we can expect from national and European data protection laws and privacy legislation, should our personal data, personal information or personal identity be misused, threatened or maligned without our knowledge.

As the digital world increasingly takes hold of our lives, and ‘more and more of value in our lives migrates online’, we may be shutting many of the windows of freedom which enable us to seek reconciliation with the world around us, especially freedom from ‘exploitation by others’ and external forces. Even when digital technology opens new windows of ‘freedom’, such as ‘social networking’, it can use the same window as an instrument of control and unfreedom, as illustrated by the surveillance software Riot, which can track ‘people’s movements and predict future behaviour by mining data from social networking websites’. Given the vulnerability and the clouded nature of the digital migration, it may not be long before any remaining ‘residual digital euphoria’ is replaced by ‘growing ethical and privacy unease’, especially when we are not certain ‘what it means to be secure in an online realm’. Could it be that this transition from social to digital engagement is being propagated as a strategic proposition by governments and public organisations, transforming more and more public services, education, health, welfare and employment into the digital realm, as if these socially situated services were just another form of data management service? The likely consequence of this digitally oriented policy is the weakening of the human presence in the transformative cycle of interaction, mediation and interlocution, a cycle which facilitates the interpretation, dissemination and communication of contextually relevant and personally and socially responsive services, and which is central to the ethos of these human services. This weakening of the human presence, what we may call the ‘human window’, is a step towards closing the doors of freedom: the freedom to engage with, influence and shape public services.
It is important to recognise that the design of digital systems, however technically competent a system may be, invariably comes with vulnerabilities, defects and brittleness, and that their malfunctioning cannot always be anticipated. A system, even a technically competent digital system, is only as effective as its weakest component, and any complex technology-mediated system should leave at least some windows open for dealing with uncertain, unforeseen and unanticipated situations. We need not only to look for gaps in what is, but also to anticipate gaps in what could be.

In this fourth and last issue of the anniversary volume, our authors continue to examine and reflect on some of the underlying issues of ubiquitous technology, including the Internet of Things. Contributions in this volume range over: an increasing uneasiness about the unstoppable dominance of IT; ethical concerns of a technologically powered society; a critical eye on the prevailing unjust and dysfunctional economic system; the role of practical knowledge in envisioning and shaping the future from ethical perspectives; the synergetic roots of human action and human appraisal; the place of the argumentation system in resolving conflict among experts; the New Mind; the autonomy of fully automated robotic systems; music as a real-time communicative interaction; and machine ethics.

There is an increasing unease about the unstoppable dominance of IT as it becomes embedded in many facets of society and influences our everyday lives. This calls for reassessing the design of IT systems and their evaluation, moving beyond the dominant techno-centric reality to encompass the actuality of the societies in which we live. This means that we need to develop a new ‘appreciation’ methodology, which enables us to rethink IT systems as serving the needs of society rather than society fitting into the constraints of technology and the bounded rationality of observed reality. The age of the technological society demands that ethical concerns are not forgotten. Indian thought suggests that if a mental state of equanimity without contention prevails as a process, evils and demerits disappear and ethical dissonance is reduced, because there is no common evil. Further, it no longer becomes necessary to translate the potential consequences of choices into terms of risks. Liberty, peace and love in this technological time come through a state in which the approach is to be hands-off.

In a previous issue of the journal (Vol. 28.2), we introduced the evolving concept of the New Mind. Seen as a process not confined to the brain but spread through the body and the world, the New Mind is covered by a family of views labelled ‘externalism’. The authors, Riccardo Manzotti and Robert Pepperell, suggest that there is now sufficient momentum in favour of externalism of various kinds to mark a historical shift in the way the mind is understood, dubbing this emerging externalist tendency the ‘New Mind’. This view encapsulates an emerging ‘post-neural’ view of consciousness that could supplant the classical brain-centred model that has dominated for so long. Some key features of the New Mind are as follows: (1) the mind, and mental properties in general, are not confined to activity within neural tissue in the head; (2) reality and the mind share the same ontological status rather than being, as in the classical model, ontologically distinct realms; (3) our access to the world is understood, to a greater or lesser extent, to be direct rather than mediated, representational or illusory. The authors suggest that the New Mind presents us with a profoundly different conception of this most fundamental attribute of our human condition from that which has held sway for hundreds of years. They further argue that if we take the New Mind seriously as a model of how we and our conscious experience fit into the world, then it will have major consequences for how we understand our own being. A response to this debate on the New Mind argues that we can talk about the contents of the mind and/or about the vehicles of those contents, but that we should not conflate the two: the conflation of contents and vehicles comes at a price.
In this volume of the journal, we meet this debate again, with the clarification that the concept of the New Mind is not a complete and monolithic description of consciousness, and the argument that this holistic, developing hypothesis, if validated, would have far-reaching implications for the way we understand the nature of the mind. It would require a departure from centuries of engrained assumptions and beliefs. The New Mind resists the notion of a mind isolated in a head and, instead, promotes a view of the mind that extends or spreads into the body and the world beyond. This description is challenged by Andreas Elpidorou in this volume, who argues that it fails to specify what aspect of the mind it extends. This gives rise to the question: what does the term ‘mind’ in ‘New Mind’ denote?

In addressing issues related to building an autonomous social robot, a question arises about the ways in which different approaches to social cognition inform the design of social robots. It is arguable that, regardless of which theoretical approach to social cognition one favours, instantiating that approach in a workable robot will involve designing that robot on enactive principles. Any discussion of the ethical ramifications of machine ethics (ME) raises a number of key issues concerning the relation between technology and ethics, and the nature of what it is to have moral status. Notwithstanding the obvious tensions between the infocentric perspective on one side and the biocentric and ecocentric perspectives on the other, it is proposed that there are also striking parallels in the way each of these three approaches challenges the anthropocentric ethical hegemony, providing possible scope for some degree of convergence. As computers have become ubiquitous during our lifetimes, they can be used to help us reflect on our own intellectual foundations, and on our words, actions and research, as seen from inside and outside.

In the aftermath of the recent financial meltdown, we can ask ourselves whether it is possible to work our way out of the greed of the possessive market society and usher in a radical change away from an unjust and dysfunctional economic system. If so, would mainstream society be willing to try new, moral ideas, reject the principles of unlimited accumulation and almost unlimited convertibility, and thereby transcend the Faustian-bargain myth and adopt the necessity of hope? Such a hopeful scenario should highlight the importance of involving people in envisioning the future, starting from the perspective of their everyday lives. This dimension of human action should be given a more prominent role in social developments. Human action, however, is grounded in appraisals, or sense-saturated coordination. It is suggested that, as for other species, human appraisal is based in synergies. Humans extend embodiment by linking real-time activity to actions through which they collectively impose a variable degree of control over how an individual realises values.

We are introduced to the Carneades Argumentation System through its use in resolving a conflict among art experts over the attribution of a painting to Leonardo da Vinci. The focus is on an analysis of the structure of the interlocking argumentation in the case, using argument-mapping tools to track the accumulation of evidence, pro and con. In an argument on the autonomy of fully automated robotic systems, we are made aware of the difficulties and paradoxes of their decision-making processes, alongside questions of their reliability. Moreover, it is shown how difficult it is to respond to these difficulties and paradoxes by calling into play a strong formulation of the precautionary principle. In probing change in organisational systems, it is argued that insights from chaos–complexity research in the natural sciences, which underpins the dynamics of flux and change in order to unravel the hidden, the unexplained and the disordered, should be built upon to explore the phenomena of change from a social-psychological perspective.

As part of the 25th anniversary celebration of AI&Society, our North American editor, Victoria Vesna, organised a symposium celebrating Alan Turing’s centenary. A hundred years have passed since Alan Turing was born, and we celebrate this historically important individual together with many organisations around the world. The symposium showed his eccentric creativity, in addition to reminding us all of the huge contribution he made to computation and artificial intelligence. The event consisted of short talks by computer, neuro- and nano-scientists and humanists, accompanied by artists inspired by Turing’s legacy and persona. Additionally, students from UCLA participated with their ideas of how Turing informs and inspires their work and lives in this time when social networking, robotics and automatic brains are part of daily life. A selection of the symposium talks is presented in this anniversary volume of the journal. These contributions include a description of Turing’s work on the concept of embryonic morphogenesis, propounding a computational framework for pattern formation within the developing embryo, and noting Turing’s foresight and vision in creating the field of computational biology and mathematical modelling of biological systems. A few linguistic and bio-futuristic musings are presented in honour of Alan Turing and his legacy, informing the reader of the connections Turing used or made between humanness, language, intelligence, deception, gender, sexual orientation and computational modelling in his exploration of the world. The discussion includes how Turing’s work and life have influenced artistic practices. Commenting on Alan Turing’s essay ‘Computing machinery and intelligence’, it is claimed that a successful interaction between human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics.

On this 25th anniversary occasion, we pay special tribute to our authors, reviewers, readers and well-wishers the world over, who continue to support AI&Society through their writings, review comments, critical observations and constructive suggestions. Special thanks go to the authors of this volume: Ari-Veikko Anttiroiko, Parthasarathi Banerjee, Zach Blas, Mike Cooley, Roberto Cordeschi, Alan Cottey, Stephen Cowley, Ian Cross, Richard Ennals, Andreas Elpidorou, Shaun Gallagher, Eunice McCarthy, Yuval Marton, Tore Nordenstam, Robert Pepperell, Siddharth Ramakrishnan, Satoshi Suzuki, Steve Torrance, Kenichi Uchiyama, Georgina Voss and Douglas Walton.

The two poems, ‘WAYS OF KNOWING’ and ‘INSULTING MACHINES’, in the Preface by Mike Cooley in this volume, go to the very heart of the AI&Society tradition, expressing many of the concerns and dilemmas of the road to the Internet of Things. A reflection upon the last quarter-century of AI&Society debates draws to our attention that no concept, innovation or practice is totally independent of context, and that all debates, innovations, actions and reflections have important contextual societal dimensions, including the discussion of what it is to be human in the era of ubiquitous technology. Just as we once wondered what it would be like to be human if all knowledge were explicated and turned into big data, we now wonder what it would be like to be human if big data and the Internet of Things were to control what we do and what we know, and to guide us about what we ought to do, know and think. Just as the Human Centred movement of the 1970s saw through the bounded-rationality vision of skill, we need to be vigilant of the algorithmic bounded rationality of the Internet of Things. Reflecting on the ‘right answer’ formulation of big data reveals why we should be deeply concerned about the ‘Faustian Exchange’ of the algorithmic vision of technology. One wonders what T.S. Eliot would have said of the ‘Internet of Things; then what?’

Where is the knowledge

That we are losing in big data

Where is the wisdom

We are losing in the virtual

Where is the human

We are losing in the Internet of Things

Sources

AI&Society: Journal of Knowledge, Culture and Communication, Springer, Vol. 28.2

Ryan Calo (2013), Sales Pitches From Your Refrigerator, New York Times, 8 Sept. 2013. http://www.nytimes.com/roomfordebate/2013/09/08/privacy-and-the-internet-of-things/the-internet-of-things-will-change-marketing-and-the-law

Brett Goldstein (2013), When Government Joins the Internet of Things, New York Times, 8 Sept. 2013. http://www.nytimes.com/roomfordebate/2013/09/08/privacy-and-the-internet-of-things/when-government-joins-the-internet-of-things

Aleecia M. McDonald (2013), Better Engineering, and Better Laws, New York Times, 8 Sept. 2013. http://www.nytimes.com/roomfordebate/2013/09/08/privacy-and-the-internet-of-things/laws-can-ensure-privacy-in-the-internet-of-things

Evgeny Morozov (2013), The Meme Hustler, The Baffler, No. 22. https://www.thebaffler.com/past/the_meme_hustler

John Naughton (2013), Why big data has made your privacy a thing of the past, The Observer, 6 October 2013

Algorithmic Regulation Spreading Across Government? http://eaves.ca/2012/01/26/algorithmic-regulation-spreading-across-government/

Scholars Debate the Internet of Things, posted 20 September 2013. http://www.techpolicy.com/Blog/September-2013/Scholars-Debate-the-Internet-of-Things.aspx

The Internet of Things is Changing Your World, So Pay Attention: http://www.digi.com/blog/community/2013-forecast-the-internet-of-things-is-changing-your-world-so-pay-attention/

The BCS (2013). Time for debate about the societal impact of the Internet of Things. http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf

Oxford Internet Institute Policy Blog, Time for debate about the societal impact of the Internet of Things. http://blogs.oii.ox.ac.uk/policy/time-for-debate-about-the-societal-impact-of-the-internet-of-things/