
Of Cyborgs and Brutes: Technology-Inherited Violence and Ignorance

by Tommaso Bertolotti 1,*,†, Selene Arfini 2,† and Lorenzo Magnani 1,†
1 Philosophy Section, Department of Humanities, University of Pavia, 27100 Pavia, Italy
2 Education and Economical-quantitative Sciences, Department of Philosophy, University of Chieti and Pescara, 66100 Chieti, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Philosophies 2017, 2(1), 1-14; https://doi.org/10.3390/philosophies2010001
Submission received: 2 November 2016 / Revised: 9 December 2016 / Accepted: 9 December 2016 / Published: 26 December 2016
(This article belongs to the Special Issue Cyberphenomenology: Technominds Revolution)

Abstract: The broad aim of this paper is to question the ambiguous relationship between technology and intelligence. More specifically, it addresses the reasons why the ever-increasing reliance on smart technologies and wide repositories of data does not necessarily increase the display of “smart” or even “intelligent” behaviors, but rather generates new instances of “brutality”, understood as a mix of ignorance and violence. We claim that the answer can be found in cyborg theory, and more specifically in the possibility of blending (not always for the best) different kinds of intentionality.

1. Introduction

In the twenty-sixth canto of Dante’s Inferno [1], the poet meets Ulysses in the shape of a double flame. The Homeric hero is facing eternal punishment, together with Diomedes, among the fraudulent rhetoricians because of their ruses (including the wooden horse which ultimately led to the fall of Troy). Relying on narratives not originally present in the Odyssey, Dante makes Ulysses tell of his last journey, after his wanderlust prevented him from finding comfort in his long-sought home island of Ithaca: to spur his associates into taking to the sea with him once more, Ulysses proffers a formidable appeal, which would become an everlasting hymn to curiosity and knowledge.
Consider your sowing: you were not made to live
like brutes, but to follow virtue and knowledge1
Since then, the pursuit of knowledge and virtue has been seen as the opposite of living like brutes, where brutality elicits a mixed sense of violence and ignorance. Today, we live in the narrative of a “knowledge society”, epitomized by the internet and other “information technologies”. The internet is commonly perceived as an infinite repository of knowledge, whose immediate and pervasive availability is meant to draw us further and further away from brute-like states. Another keyword, repeated as a mantra, is the unprecedented possibility for communication afforded by contemporary technologies, supposed on the one hand to facilitate the exchange of knowledge and on the other to foster virtuous open-mindedness.
Both news headlines and individual experience keep showing us that this idyllic picture often does not correspond to reality, because the impact of technology on society is not always for the best. How is it possible that we are still afflicted by so much violence and ignorance, by such brutality, when, as people of the “information society”, it would be so structurally easy for us to pursue “virtue” and “knowledge”? To answer, we will not refer to some metaphysical evil allegedly present in the human being, but rather focus on one of the most interesting traits of human cognition: its capacity to distribute into the environment and couple with external structures in order to achieve higher cognitive goals. In other words, human beings’ natural propensity to become cyborgs. We will claim that, notwithstanding higher motivations and inspirations, the capacity to seamlessly hybridize with external and technological cognitive artifacts carries the burden of a cognitively uncritical acceptance of the brutality that may be embedded in the artifacts themselves.

2. Of Cyborgs and Men

The concept of cyborg was not coined in science fiction, but by two scientists, Manfred E. Clynes and Nathan S. Kline, at the Rockland State Hospital, Orangeburg, NY:
For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term “Cyborg”. The Cyborg deliberately incorporates exogenous components extending the self-regulatory control function of the organism in order to adapt it to new environments.
[2] (p. 27)
Cyborgs (obtained by endowing humans with transparent implants) were advocated as a means of adapting to new environments—think of outer space—that either could not themselves be adapted, or would require a major genetic (hence hereditary) adaptation, spontaneous or induced. It is important to note that, since the beginning, the notion of cyborg was connoted by what, today, could be seen as an ecological-cognitive necessity [3]. The cyborg’s eco-cognitive nature derives from the stress on adaptation and on the cognitive functions: the artifactual additions have always been considered as something that ought to be transparent to one’s cognition and often capable of expanding one’s cognitive capabilities [4].
We will now briefly review two insightful positions in cyborg-related studies, which will be crucial for the rest of our argument: Donna Haraway’s feminist theory [5] and Andy Clark’s cognitive-oriented approach [6].

2.1. Haraway’s Cyborg from Feminist Theory

The untarnished fertility of Haraway’s Cyborg Manifesto was recently demonstrated by a paper exploring the cyborg-like features of Facebook [7]. While we will return to this recent use later, it is worth sparing a few words on Haraway’s contentions. As presented by Haraway herself, the theory is deeply embedded in feminist arguments. Nevertheless, some of her claims may be discussed and accepted regardless of whether one shares the ideology they are meant to support. Haraway inserted in her definition of cyborg a trait that was germinal (or confined to an elite) at the time of her writing, but that is fully developed now: the strict dependence of our cyborgean nature on social cognition.
A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction. Social reality is lived social relations, our most important political construction, a world-changing fiction.
[5] (p. 149)
Haraway’s further insight about the relationship between cyborgs and boundaries is perhaps the one most useful to the present discourse. Her pivotal claim is that high-tech culture, epitomized by the actualization of the cyborg, openly challenges the dualisms that have been determining the practical and intellectual lives of human beings for millennia: “some of those troubling dualisms are self/other, mind/body, culture/nature, male/female, civilized/primitive, reality/appearance, whole/part, agent/resource, maker/made, active/passive, right/wrong, truth/illusion, total/partial, God/man” [5] (p. 177). In this respect, it is extremely interesting to consider how actually “being cyborgs” impacts our perception of the world, the way we make sense of our perceptual judgements and how we direct attention: this is the issue explored by Verbeek as augmented intentionality [8], essential to phenomenology and cognitive science. As noticed by Waite and Bourke, this denotation of the cyborg is most fitting for investigating current phenomena such as the use of social networking websites (SNS), considering the extent to which they habitually collapse dualisms and dichotomies such as the real/virtual one, which was left relatively unharmed by previous modalities of virtual pro-sociality (e.g., forums, chatrooms), which fostered a juxtaposition of different social worlds rather than a homogeneous blend.

2.2. Clark’s “Natural Born Cyborg”

One of the largest debates in cyborg-related studies has been about where to set the line dividing what is a cyborg from what is not (yet) a cyborg. Inclusive and exclusive positions argue about the use of extensions to one’s physical and cognitive capabilities (esthetic or prosthetic) and the possibility (or lack thereof) of detaching the extension. In our view, one of the most interesting attempts to go beyond this debate was achieved by philosopher and cognitive scientist Andy Clark. His fundamental goal was to perform a gestalt shift and discard the iconic view (highly influenced by decades of sci-fi interweaving with philosophy and cognitive science) of the cyborg as an esthetically intriguing mixture of man and machine characterized by a hard notion of incorporation, and rather to focus on the fact that:
What is special about human brains, and what best explains the distinctive features of human intelligence, is precisely their ability to enter into deep and complex relationships with nonbiological constructs, props, and aids. This ability, however, does not depend on physical wire-and-implant mergers, so much as on our openness to information-processing mergers. Such mergers may be consummated without the intrusion of silicon and wire into flesh and blood, as anyone who has felt himself thinking via the act of writing already knows.
([6], original emphasis, p. 5)
Clark’s contention, captured by the oxymoronic expression depicting human beings as “natural-born cyborgs”, is coherent with his studies concerning the extended mind and the distribution of cognition [9], and cannot be overlooked, first of all because of the naturalization of the cyborg it performs. It must be said that Clark’s theory is quite polarizing in cyborg studies, eliciting either strong adherence or stern refusal. His going beyond a hard incorporation criterion, present in Clynes and Kline’s original, seminal notion of the cyborg [2], is taken by his opponents at best as a misunderstanding and at worst as a betrayal. Susan Greenfield, for instance, holds that his theories make sense, but that he should refer to processes of hybridization and not necessarily to cyborgs [10]. Opposed to Clark’s view, Kevin Warwick maintains that a cyborg is something that is part-animal, part-machine, and whose capabilities are extended beyond normal limits [11]—thus stressing a hard conception of incorporation. Conversely, on Clark’s view it can be sensibly contended that, if we look for a cyborg, a student massively relying on her iPhone is a satisfactory instance. From the perspective of distributed cognition, we can admit that there is indeed no qualitative shift from using pen and paper to spell out a complicated choice to relying on one’s smartphone to evaluate the best course of action.2
As we said, the acceptance of Clark’s views is not unanimous within the community of cyborg scholars. From our perspective, we could say that although Clark’s theory is coherent (and telling) from the cognitive perspective, his convincing expansion of the concept of cyborg eventually backfires, exploding the concept itself. What is defused, in our opinion, is the explanatory power of the cyborg concept as far as it concerns hybrids made of humans plus a peculiar kind of highly-technological endowment. In other words, the issue we have with Clark is not his forsaking a hard incorporation criterion, but his not stressing enough the difference between hybridization with a non-cognizing device (such as a pair of glasses, or paper and a pen) and hybridization with an artificially-cognizing device (such as a smartphone, to take the most mundane example).

3. The Diffused-AC Cyborg (DACC)

The Internet, and those devices that rely on it to the point of being meaningless offline, empower us to become this peculiar kind of cyborg. The notion of “empowerment” [13] has always been crucial in the technoethical discourse. New technologies empower users in a number of ways: they empower their epistemic capacities (the Internet, thanks to its pervasiveness, lets us know more information, faster, nearly zeroing the costs of provision), and from this epistemic empowerment follows a civic empowerment (as the Internet equips us with the means to check the accountability of our governments and politicians, compare with other realities, be engaged at lesser costs, etc.) [14]. The most recent IT media have also empowered us emotionally and morally, making distant communication easier and more reliable, and stimulating higher proximity—including moral proximity.
Thanks to the Internet, our “selves” today largely consist of an externally stored quantity of data, information, images, and texts that concern us as individuals (sometimes produced by ourselves, sometimes not), and the result is a “cyborg” made of flesh and of the electronic data that relate to us. The “implant places” of this kind of cyborg consist of a series of devices we use actively and passively (from smartphones and computers to GPS receivers and highway cameras), which contribute to creating a mutual interaction between our offline and online presence, making the two less and less separable. To get a better picture of these mechanisms, we could say that most alterations (physical, geographical, emotional, or of any other kind) of our offline variables provoke modifications in our online presence, which in turn cause, first, a series of modifications of the original physical variables, and secondarily (but not of least importance) interfere with other online presences, causing modifications in the physical variables of other people.
The “traditional” idea of cyborg focused on the connection between a human being and high-tech artifacts. Andy Clark suggested that the high in high-tech is not qualitatively crucial in defining a cyborg. Our take is that two elements, relating to currently available technologies, do indeed matter in the individuation of at least a peculiar kind of cyborg concept whose explanatory power could greatly benefit the current debate. These are:
  • Technologies that diffuse and de-localize the activity of the subject, yet without transporting her in a cognitively separate world;
  • Technologies that let the user rely on more or less complex forms of artificial cognition.
Before moving on, it is necessary to stress what we mean by artificial cognition: if, by artificial intelligence, we intend a complete or even partial simulation of the human mind, we are quite far from it. Nevertheless, if instead of “intelligence” we consider a less demanding notion, such as “cognition”, then things change. The Stanford Encyclopedia of Philosophy defines “cognition” as “being constituted by the processes used to generate adaptive or flexible behavior”. The algorithms regulating social networking websites, news feeds, or navigation apps such as Waze (Waze Mobile Ltd., Tel Aviv, Israel) are not intelligent as a human being is, but they do behave in an adaptive and flexible way with respect to their input (of course always within the scope of their programming). To understand the difference between artificial cognition and artificial intelligence, it can help to refer to the Aristotelian tripartition of the soul into vegetative, sensitive and rational. The vegetative and sensitive parts of the soul were thought to be shared by humans and animals alike, and they can be stretched to represent “cognition” in the sense that animals display cognitive abilities without having full-blown human-like intelligence. This division is akin to the difference we introduce between artificial intelligence and artificial cognition.
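To make the notion concrete, the following toy sketch (entirely ours: the class, the data and the scoring logic are invented for illustration and do not describe any actual platform) shows what “cognition” in this modest, adaptive sense can amount to in a few lines of Python: the system changes its suggestions according to its input, yet never steps outside the scope of its programming.

```python
# A minimal, hypothetical sketch of "artificial cognition" in the modest sense used
# above: a recommender that adapts its output to its input without any human-like
# understanding of the items it ranks. Not modeled on any real platform's algorithm.
from collections import Counter

class TopicRecommender:
    """Suggests items whose topics the user has interacted with most often."""

    def __init__(self, catalog):
        # catalog: mapping item -> topic, fixed once and for all by the "programming"
        self.catalog = catalog
        self.clicks = Counter()  # the only "memory" the system adapts on

    def record_click(self, item):
        """Update the internal state whenever the user interacts with an item."""
        self.clicks[self.catalog[item]] += 1

    def recommend(self, n=3):
        """Rank items by how often their topic was clicked: adaptive, not intelligent."""
        return sorted(self.catalog,
                      key=lambda item: self.clicks[self.catalog[item]],
                      reverse=True)[:n]

if __name__ == "__main__":
    catalog = {"article A": "politics", "article B": "sport",
               "article C": "politics", "article D": "science"}
    rec = TopicRecommender(catalog)
    for item in ["article A", "article C", "article A"]:
        rec.record_click(item)
    # Politics items now dominate the suggestions, within the fixed scope of the catalog.
    print(rec.recommend())
```

Nothing in the sketch “understands” politics or sport; it merely counts clicks, and that bare adaptivity is already enough to shape what the hybrid user-plus-system ends up seeing.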
It is the hybridization with this kind of cognition, together with the diffused nature of such agencies, that can be summed up by the notion of a Diffused and Artificially-Cognizing Cyborg, (hence DACC). Let us analyze it in greater detail, starting with its diffused nature.
Since the early 2000s, the virtualization of Internet usage, first understood as the creation of a decoupled universe, has witnessed an inverted trend. Popular websites and services such as MSN, Myspace and later Facebook and Twitter dramatically impacted our cyborgean nature in a way that affects not our virtualization but rather the organism-side of the cyborg. Indeed, the text-and-image-based interfaces of websites such as Facebook, Twitter, Amazon, Wikipedia (or even the infamous Ask.fm) are scarcely impressive, and require far less ability than most video games, no matter how intuitive the latter can be.3 However, the former subtly play an incredible role in turning users into diffused cyborgs—not only at the cognitive level, but also at the perceptual and emotional one. As Waite and Bourke contend, referring to social networking sites:
In one instance, Facebook is a website clearly identifiable as a virtual online space, but simultaneously, it is thoroughly embedded in, and informed by the material lives of the individuals without whom it would not exist in its current, recognizable form […] More than a ‘space’ or a ‘destination’, Facebook can be conceived of as a virtual network of interacting ‘digital bodies’.
[7] (p. 4)
Waite and Bourke [7], referring to research they conducted in an Australian rural community, stressed the cyborgean nature of social networking itself: phones and computers are the gateways to a world that is not self-standing (as Second Life or World of Warcraft would be), but augments the material reality that people live in every day. Anticipating some concluding remarks of this paper, it must be acknowledged that the subtlety displayed by social networking websites, and other kinds of pro-social technologies, in determining our existence as cyborgs calls for a cognitive and philosophical analysis. If being cyborgs meant, for instance, carrying around powerful implanted weaponry, then the effects would be apparent to anyone; conversely, we are mostly unaware of the effects of our diffused social cyborgization, apart from brief moments of wonder after some peculiar coincidences, or—as we will see in the next section—when the outcomes become violent. As already suggested by Bertolotti and Magnani [16], evolutionarily oriented cognitive studies cannot be spared from having their say about the matter inasmuch as pro-social technology is effective by reproducing and simulating our social-cognitive affordances (for instance, our inclination towards gossip), making the cyborgization of our selves utterly transparent.
Another reason justifying the cognitive interest in the diffused cyborg is again spelled out by Waite and Bourke [7], in their contention that Facebook itself, like most SNS, can be conceptualized as a massive, loosely bordered cyborg—blending human and cybernetic parts. In this sense, it may be valuable to add that the dualism-collapsing might of the cyborg does not only concern the human/machine dichotomy, but the self/other one as well. The diffused cyborg that pro-social technology is turning us into does not only release us from physical boundaries—while still replicating and augmenting a kind of sociality based upon the functioning of the “real world” one—but blends us into a real super-cyborg (similar to the sociobiological notion of the super-organism), not only because we epistemologically partake of an objectified shared knowledge base more real than any collective unconscious (real inasmuch as it is hard-stored on data servers), but also because, through functions such as tagging and commenting, one’s possession of texts and images becomes, in some cases, indistinguishable from the possession of other users.4
Such a widening of the field of our analysis, and of the agent it concerns, requires the introduction of the second element defining the DACC, namely the reliance on forms of artificial cognition. The artificial cognition capabilities displayed by the technologies we connect through (and with) can be extremely modest, yet they still represent the possibility of reacting smartly to modifications in one’s environment. Such cognitive capabilities to “make sense” of a given environment in order to pursue a certain goal may be perceived by users as randomness: this is especially true if we consider all of the cases in which an artificial system suggests something to the user—for instance, Amazon’s recommendations or Facebook’s “People You May Know”.
The historical division between organic and inorganic—to rely once again on dualisms—would correspond to the division between constructor and constructed (in the lexicon of cognitive niche theory): decision-making processes (connected with cognition) could only be ascribed to biological organisms. As shown macroscopically by the current debate about the development of autonomous killer drones [17], the computational revolution introduced constructed constructors: artificially cognizing, non-biological systems able to make autonomous assessments and forecasts, and to initiate courses of action (from simple safety recognition devices to elaborate financial applications capable of determining the fate of entire nations). The presence of such artifactual cognizers becomes further complicated once, as cyborgs, we start becoming hybridized with them, originating holistic beings that embed a biological brain and several artificial forms of auxiliary cognition on which our primary cognition relies, often unawares.
Consider the SNS user: we showed how she can be convincingly framed as a diffused cyborg, but that is not the end of it. In order to function, the super-cyborg relies on a series of standalone algorithms handling searches (and their priorities), reports, suggestions, and the ordering of news. Facebook, for instance, actively impacts the “real lives” of its cyborg users (who, in turn, are users because of the impact that Facebook has on their real lives), but they are not necessarily aware of how, in their being cyborgs, their lives are partially determined by the artificial cognitive processes implanted in the network they holistically partake of, for instance deciding which updates are more “important” and showing them first. One picture seen on Facebook can trigger an unthinkable chain reaction in a user’s life, but it was the system (hybridized with the user) that picked which picture would be more relevant for the user to see first.
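As a hypothetical illustration of this kind of prioritization (the scoring rule, field names and weights below are invented for the sake of the example; no claim is made about how Facebook’s actual news feed works), consider:

```python
# A toy version of the prioritization described above: the update a user sees first
# is chosen by a scoring rule she never inspects. All fields and weights are
# invented for illustration; this is not any real news-feed algorithm.
from datetime import datetime, timezone

def score(update, now):
    """Weight prior interactions with the author against recency; both weights are arbitrary."""
    hours_old = (now - update["posted"]).total_seconds() / 3600
    return 2.0 * update["interactions_with_author"] - 0.5 * hours_old

def rank_feed(updates):
    now = datetime.now(timezone.utc)
    return sorted(updates, key=lambda u: score(u, now), reverse=True)

if __name__ == "__main__":
    feed = [
        {"id": "older photo from a close friend", "interactions_with_author": 40,
         "posted": datetime(2016, 12, 20, tzinfo=timezone.utc)},
        {"id": "fresh post from a stranger", "interactions_with_author": 0,
         "posted": datetime(2016, 12, 26, tzinfo=timezone.utc)},
    ]
    # The close friend's older photo is shown first: the hybrid system, not the user,
    # decides what is "important" enough to trigger the chain reaction.
    print([u["id"] for u in rank_feed(feed)])
```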
Such reflections can easily be extended to most contemporary technologies mediating pro-sociality through the Internet. Whereas the diffusion of remote communication systems (mail, telephone, text messages, emails…)5 turned human beings into diffused cyborgs (it is not impossible, in agreement with Clark’s idea, to imagine a Renaissance man as a cyborg made up of body, paper, and ink as he engaged in correspondence with his fellows across Europe), contemporary technologies add the artificial cognition element, making users into Diffused and Artificially-Cognizing Cyborgs. This is true not only as far as social cognition is concerned, but also for the reliance on geo-localization and guiding devices (assessing road conditions and making forecasts about traffic and weather), or eCommerce: basically, just as SNS make human beings into social DACCs, advanced eCommerce platforms such as Amazon turn consumers into DACC consumers because of the integration between the buyer’s desires, her possibilities, and the processing of both operated by the website. The same can be said of most contemporary and foreseen “augmented” experiences, as they will rely on a diffusion of the self through the Internet in order to maximize the user’s experience in her real life, but this can happen only if massively supported by transparent artificial decision makers assisting the user, ranging from contemporary artificial cognition to full-blown artificial intelligence.

4. Cyborg Intentionality: Beware of What You Hybridize with

Verbeek develops a most interesting analysis of the cyborg by relying on the phenomenological concept of intentionality, describing the relationship between human beings and their world. Verbeek’s elaboration, in turn, relies on Ihde’s notion of mediated intentionality [18].
Firstly, technologies can be embodied by their users, establishing a relationship between humans and their world. When looking through a pair of glasses, the glasses are not noticed explicitly but are “incorporated”; they become extensions of the human body. Secondly, technologies can be the terminus of our experience. In this “alterity relation”, human beings interact with a device, as is the case when taking money from an ATM. A third human–technology relation is the “hermeneutic relation”. In this relation, technologies provide representations of reality, which need interpretation in order to constitute a “perception”—like a thermometer, which does not produce an actual experience of heat or cold, but delivers a value which needs to be “read” in order to tell something about temperature. The fourth human–technology relation Ihde distinguishes is the background relation, where technologies are not experienced directly, but rather create a context for our perceptions, like the humming of the air conditioning, or the automatic switching on and off of the refrigerator, et cetera.
[8] (p. 389)
Verbeek goes further, introducing the hybrid, or cyborg intentionality “of human–technology hybrids, in which the human and the technological are merged into a new entity, rather than interrelated, as in Ihde’s human–technology relations” [8] (p. 390).
The difference, according to Verbeek, is that in cyborg intentionality it is neither possible, nor does it make sense, to separate the technological and the human “share” in the intentional relationship. Conversely, the embodied kind of intentionality is exemplified by the relationship between two persons speaking via a cellphone, where the intentional contributions can be told apart quite easily.
In order to fully understand the scope of the “cyborg intentionality,” another concept has to be introduced: the composite intentionality. In this case, “the intentionalities of technological artifacts themselves play a central role, in cooperation with the intentionalities of the human beings using these artifacts” [8] (p. 392).
Merging, becoming hybrids with things that are capable of processing more information than we do, that are connected to almost limitless repositories of knowledge that keep growing as we speak, seems prima facie a guarantee of growing intelligence—also understood as a better ability to cope with the uncertainty coming from our surroundings. Indeed, it would be foolish to totally deny this: technologies—and our ability to seamlessly hybridize with them—are indeed responsible for many contemporary advancements that do go in the direction of Ulysses’ plea for us “to follow virtue and knowledge”. However, sometimes we have clear evidence that this is not the case. Sometimes, our close relationship with technologies ranges from strongly impairing our welfare, to making us dumb, all the way to killing us outright.
Let us give a few examples of cases in which, in our opinion, the cyborg relationship has nefarious consequences because of the alteration of intentionality it brings about. The fact that (an abuse of) social media makes us dumber seems self-evident: in Section 3, we accepted the definition of Facebook as a cyborg, and therefore the dumbness induced by social media can be linked to the intentional issues that we are debating. Nevertheless, some examples are even more clear-cut.
If we search the Internet for “GPS related accidents,” we are given an impressive list of more or less gruesome episodes in which drivers, by “following” their GPS devices, went the wrong way, drove into a canal, drove off a cliff, and so on. On second thought, is it right to say that we “follow” our GPS devices? The system composed of a human, a car and a GPS (the latter two may be fused into one) is clearly a cyborg. The instructions given by the GPS correspond to the device’s way of “intentionalizing” the world, according to its “sensorium” composed of pre-loaded maps, live updates and satellite positioning. As GPS users can tell from their experience, the process of learning to use a GPS equals learning to rely on it. The “mistake” new users make is questioning the GPS and adopting a hybrid driving style, sometimes following the instructions and sometimes pretending to “know best”: this usually ends in getting lost and taking much longer to reach the destination. Learning to rely on the GPS amounts to learning to “know through” the GPS device. In our current lexicon, the process amounts to accepting a cyborg, composite intentionality made up of our intentionality and the intentionality of the device. Relying on the GPS affects our intentionality to the point that, even if we do not know how to reach a destination, we do not panic or feel lost, because the composite intentionality can cope with it. The fact that we yield total control to the device, which influences our feeling of knowing, is responsible for consequences that can be either ludicrous or dramatic.
A similar, but far more dramatic, scenario of composite intentionality and its nefarious outcomes can be applied to air travel. Flying an airplane can be seen as the process of becoming a cyborg with the airplane. Airplanes, as of today, cannot completely fly themselves without human help, but, at the same time, a human being could not fly a plane in certain conditions (for instance, at night or in particularly bad weather) without accepting that her intentionality be hybridized with the artifactual sensorium of the airplane and its array of sensors and detectors. Expert pilots intentionalize the world through the intentionality of the airplane, and, by doing so, they assume the intentionality of the airplane is accurate, that is to say, that it provides a view of the world that is different from the pilots’ but coherent with theirs.
On 1 June 2009, one of the deadliest airplane accidents in recent history took place, when Air France Flight 447, en route from Rio de Janeiro to Paris, crashed into the Atlantic Ocean killing all 228 people on board. Investigations concluded that the aircraft crashed after temporary inconsistencies between the airspeed measurements—likely due to the aircraft’s pitot tubes being obstructed by ice crystals—caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall from which it did not recover. In other words, the crash was caused by a failure in the “composite intentionality”. The intentionality feeding into the hybrid system from the airplane’s sensorium failed, and this triggered a response from the human part that was not adapted to “reality” (actual speed and altitude) but was consistent with the picture provided by the hybrid intentional system.
A much more lighthearted example of problematic cyborg intentionality can explain some of the troubles caused by the recently released game Pokémon Go. Pokémon Go is based on augmented reality. The game “augments” the reality of the player by inserting Pokémon to be caught, arenas in which to fight other players, and so on, into the players’ own environment. This further layer, the augmentation of reality, is intentionalized by the app running on a smartphone. By looking at the screen, we can play the game if our intentionality becomes “cyborg” through the composite intentionality that the device brings in.
Of course, by becoming one with the connected machine in the phenomenon of intentionality, our worlds merge. Our aims, our “seeing that” but also our “seeing for”, are affected. In the cyborg intentionality of the Pokémon Go player, the aim is to catch as many Pokémon as possible, and this is the intentionality feed brought in by the coupled device. This may come to override the relationships we commonly have with our surroundings and make us do “stupid” things, such as falling from cliffs, trying to cross highways, accessing restricted areas, smashing into police cars while playing at the wheel, etc. Nevertheless, this is not a consequence of some carelessness, or some evil, that would be natural to human beings. It is not even the result of an Augustinian mishap in one’s ordo amoris, nor an issue tracing back to one’s weakness of will: conversely, it is a consequence of our very ability to seamlessly connect with technologies, be they simple or complex. While it is clear that the hybridization process proceeds from us to the external technology, and we infuse our goals into it, if we reflect on the actuality of cyborg and composite intentionality, then we see that the process can also proceed the other way: the machine extends its intentionality inside our own, thus “uploading” its vision of the world, but also its aims and goals, into us.
The aforementioned hybridization with social networking websites affects both humans’ social skills and dispositions and the ways they exchange data and rely on information they come across. As we said, the narrative of the “knowledge society” is nurtured by the perception of “information technologies” as repositories of knowledge, which is reinforced by the fact that social networking websites are now complex platforms of communication that let millions of pieces of content be shared daily on a global scale, through the interaction between the individuals registered in online communities. In other words, the human ability to gather and exchange information and knowledge from the environment, and to alter the environment so that it better serves cognitive goals, is enhanced and boosted in the hybridization with social networking websites. Nevertheless, it can be argued that the social aim of the SNS structure and the humans’ will to control the value of the information they receive and share on those platforms are not balanced in the composition of the social networking cyborg: indeed, social networking websites provide more ways to connect users to each other than to control the quality of the information they share and receive. This causes a two-sided effect: on one hand, users can get more information about local and global news, scientific discoveries and research, political facts and cultural issues [19]; on the other hand, they can also develop a biased epistemological judgment of the information they receive and share, due to the limited space reserved for non-personal information in the SNS [20]. In fact, while socially shared information (for instance impressive, curious or fun news) does serve an interactive and social purpose, it can also delude users into believing they can acquire actual specific or complete knowledge with little effort. In a nutshell, the hybridization with social media has an impact both on the ways human beings enhance and expand their knowledge and on the ways they preserve and consolidate their ignorance. In the next section, we will offer some reasons regarding how and why social media can be perceived not only as knowledge-enhancing cyborgs, but also as ignorance distributors.

5. Ignorance Technologies in the Information Society: Cyber-Ignorance and Its Social Distribution

“The structure of our media affects the character of our society”, writes Eli Pariser in his controversial book The Filter Bubble: What the Internet Is Hiding from You. If he is right, then we have to seriously reconsider the power and the limits of social networks as the current major mass media on a global scale. Indeed, through the hybridization with social media, humans now have the power to shape for themselves and others the frame of society, distributing personal data, opinions and information in online communities. According to the Pew Research Center,6 more than two-thirds of the American population use online communities, most of them in order to get news about politics, science and technology [22,23]. At the same time, it is widely reported that these networks also distribute misinformation and just-so stories. For example, a study conducted by Bessi et al. testifies that a large part of the Facebook population, upon receiving an injection of evidently false information, cannot distinguish it from grounded data. Again, the UNICEF (United Nations International Children’s Emergency Fund) Social and Civic Media Section conducted a study of the diffusion of pseudoscientific rumors and ideological beliefs in online communities, in order to prevent the diffusion of anti-vaccine sentiments in Europe [24]. Misinformation is not a light word, but it has been used to describe the effect of employing social networks and online communities as sources of news and high-quality information [25]. Some studies carried out in the past ten years also reveal that constant use of social networks can have negative effects on students’ performance and may seriously alter cognitive functions associated with studying. For example, Frein and colleagues reported that high and low Facebook users perform differently in free memory recall [26]: heavy Facebook users were able to recall significantly fewer words than light Facebook users. The authors are open to different explanations for this phenomenon: it is possible that people with poor memories use Facebook more frequently in order to compensate for their memory weaknesses; high Facebook users “may simply be out of practice in using their memory”; or Facebook may “actually be affecting how people process information in some way that results in lower recall ability” [26]. In any case, the results have been used to confirm the problematic relation between the use of social networking websites and high performance in formal educational settings. Therefore, the philosophical problem remains: how did the hybridization of humans with social media cognitive artifacts foster these problematic results? How, for instance, is the socially-distributed cyborg producing and diffusing information and knowledge but also misinformation, fake data and just-so stories?
Someone could argue that the hybridization with social media should not be considered relevant as far as the distribution of precise data and accurate information on the Internet is concerned, since social media and online communities are just socially based Internet websites for catching up with old flames or sharing what you ate for breakfast. Unfortunately, while this may once have been true, it is now just a big oversimplification. Originally, Facebook and other sites were indeed designed as personal spaces to share information about oneself, but now the amount of news, scientific information and political statements that circulate on their platforms should force even the most skeptical person to consider them common venues for sharing external content with one’s actual and cognitively extended network. Wilcox, a science writer, went further, asking scientists to be aware of these new tools for science communication, calling this effort “an integral part of conducting and disseminating science in today’s world” [27]. Nevertheless, the problems regarding the hybridization of humans with social media concern not only the will of scientists to communicate their research, and the intent of lay people to be informed about science and technology, but also the enactment of these wishes through platforms that are socially driven and do not offer ways to make a clear distinction between actual and fake or exaggerated science-based news.
Indeed, in order to feed both the social cognition of users and their thirst for science and technology, journalists, especially science journalists (and scientists who do try to communicate their research), give the public what the latter is looking for: information that is simple, useful, and interesting. Even so, science is not made of just simple, useful, and interesting facts. In order to make science appealing to a wider public, journalists diffuse on social networks just the results (some results) of scientific research, with little or no effort to explain how the process was conducted. Indeed, we can say that they release in the media what Jackson calls “black box arguments” [28].
A black box argument is, in the words of the author, “a metaphor for modular components of argumentative discussion that are, within a particular discussion, not open to expansion”. They are the parts of argumentation, often conclusive, that stand for a complete explanation of the process that leads to a given conclusion, but that cannot be further elaborated by the listener. They resemble the ad auctoritatem fallacy, the “appeal to authority”, in that they are justified by the speaker as the abbreviation of a complicated result found by competent people. However, as suggested by the author:
In another way of looking at black box arguments, they are a constantly evolving technology for coming to conclusions and making these conclusions broadly acceptable. Black boxes are to argumentation what material inventions are to engineering and related sciences. They are anchored in and constrained by fundamental natural processes, but they are also new things that require theoretical explication and practical assessment.
[28] (p. 437)
By using black box arguments, journalists, scientists and science writers do not offer much more than what is distributed by other information sources on social media. If the rhetoric of the authors is the only discriminant between high-quality information (scientific reports, political news, and so on) and poor, unvalidated data, then it is not surprising that those who spread the latter are better prepared for communication on social networks. Moreover, using intuitively simple but inaccurate or inconsistent black boxes (conspiracy theories, religious beliefs, concepts from alternative medicine, etc.) allows them to offer solutions better suited to the non-academic environment of public media. Indeed, the human part of the deal in the social media cyborg is also composed of people who may encounter difficulties in understanding the process of science but could be profoundly religious, politically extremist, superstitious, etc.
Furthermore, the use of black box arguments does not only affect the appeal of science, politics and social issues for a heterogeneous, averagely-educated population, but can also bring about phenomena of shallow understanding even on the part of a public that is interested in expanding its comprehension of science, politics, and other difficult topics. Users, for example, can assume that the informational content found (or shared) online can foster the acquisition of some actual knowledge of the topic. However, as we already argued, the black boxes are not open to expansion. If you read an article on gravitational waves on some popular website, you may acquire some information you did not possess before about general relativity, but it does not transform you into an expert in the field, nor does it give you the same knowledge that you would obtain by reading an academic paper or a full essay on the same topic. However, on online networks, you could have the same sense of authority and control over the information you share, as if it were yours [29,30,31]. In fact, while socially shared information (for instance impressive, curious or fun scientific tidbits) does serve an interactive and social purpose, it can also make users believe that they are able to acquire actual specific or complete knowledge with little effort.

6. Cyber-Bullies as Cyborg-Bullies

Having analyzed the ignorance component of “brutality”, let us turn our analysis to the sense of violence conveyed by the term. The actuality of cyborg intentionality and composite intentionality can prove very valuable as a frame for reading contemporary issues such as the violence that sometimes characterizes social networking websites, especially among younger users. Common explanations suggest that something is being “taken away” by the technological mediation, for instance moral proximity. In our perspective, and adopting the cyber-phenomenological claim, we suggest that it is not a matter of something being taken away, but of something being taken in, namely the augmented intentional relationship that, in this case, affects the way we deal with beings similar to us.
The rupture of moral proximity brought about by computer screens and anonymous avatars is often advocated as one of the causes of the lack of empathy, which results in particular in the verbal violence, threats and so on that would not be carried out so openly in real life. Our claim is that this presumed moral gap is a biased artifact of the analysis, informed by the honest (but untrue) answers of the subjects: what is perceived as a distance between the virtual world and real life is actually caused by the loss of the references of the biological organisms (with their evolutionarily inherited endowments concerning sub-moralities and the enforcement of coalitions [32,33]), and the acquisition of the new references (social ones, too) pertaining to the DACC. Empirical research carried out to study “the transferability of basic interpersonal affect, or affinity/disaffinity, from nonverbal to verbal communication accompanying the alternative communication channels of Face-to-Face versus Computer-Mediated Communication” [34] (p. 56) seems to confirm that moral proximity is indeed not impaired by the lack of physical presence:7
Although concerns about the lack of cues in Computer-Mediated Communication may persist with regard to determining participants’ identity, or the reduction of message equivocality, as functions of bandwidth and interface design, affinity issues may be different and readily translatable from one cue system to another.
[34] (p. 58)
Considering what we have presented so far, we can indeed see how this configuration amounts to setting up a fighting arena between human beings who became DACCs, as they obtained ubiquitous access to their social cognition (and its enhancement by bits of artificial cognition) and, in return, paid the price of forsaking (at least as long as they act as DACCs) the real-life subdivisions that separate them but also protect them, and the groups they belong to, from each other [16,36].
Many instances of cyber-bullying and cyber-violence seem affected by an element of randomness and self-righteousness, where, on the one hand, it becomes hard to tell the aggressor from the victim, and, on the other hand, the perpetrated violence embeds instances of extreme moral reactions following a perception of relevance that is different from that of “real life”. The perceived randomness depends on the artificial cognitive processes embedded in the DACC, presenting certain information instead of others to users. Furthermore, as the boundaries between DACC and groups of DACCs do not reflect those between real-life humans (because of their diffused nature, as seen in the previous section), a user might find that a remote event justifies her moral and violent intervention against another DACC, who is not “remote” in the cyberspace they connect within. Such a view does not provide justification, but explains why teenagers engage in violent mobbing aggressions against “peers” they never met on websites such as Ask.fm, or why so many Twitter users thought they had to pursue the public shaming and threatening of Alicia Ann Lynch, a 22-year-old from Michigan, who tweeted and Instagrammed a photo of herself at work dressed as a Boston Marathon bombing victim for Halloween 2013.
The different perception of relevance is also due to the fact that, whereas real-life human beings cope with a diversity of truth regimes, where truth is generally perceived as less reliable as it gets further from its source, DACCs can rely on a copy-and-paste truth regime, which does not let distance (both chronological and physical) defuse the truth-value, and hence the pragmatic relevance, of the information they stumble upon [37].
Considering all this, it might indeed be interesting to read instances of Internet-related violence such as cyber-bullying as clashes between cyborgs, that is to say, between cyborg and composite intentionalities. This is particularly interesting insofar as the incidents—although they have serious real-life consequences—sometimes involve no real-life relationship whatsoever between the victim and the aggressor.

7. Conclusions

The “age of technology” and the “information society” are often taken as synonymous with the “knowledge-based society”. Normally, we all agree on how knowledge is improving our lifestyles and on the extent to which the improvements we experience firsthand derive immediately from an increased hybridization with technology and/or an improved access to information and knowledge. However, we can all think of instances where, by coupling with technology, or through the immediate access to information allowed by technology, we become more “like brutes”. We may think of this as the result of something being taken away, be it our humanity, our moral proximity, or the “effort”—as if effort per se could vouch for the quality of an output. If that is not the cause, how can technology, and what it permits, make us more violent, more ignorant, more brutish?
To answer this question, we took into consideration a fundamental trait of human nature, that is, its capability to become cyborg, to seamlessly connect with external technologies so that the human and the external components become as one.
The discourse around the cyborg is further complicated when the technological component that we hybridize with is not something that simply enhances some human prerogative, but is also endowed with some level of artificial cognition—in other words, when it is also endowed with a form of intentionality.
The intentionality of the human cyborg can be understood in terms of a sum of vectors, where human intentionality is a vector, that of the machine is another, and the composite intentionality results from the composition of the two original ones. This means that by becoming cyborg we get to share, to make ours, part of how the machine understands the world, the goals it is programmed to pursue, and its priorities. The same can be said for “informational” cyborgs, when our knowledge is seamlessly connected to enormous repositories of information that is not raw, but sorted, produced and prioritized in ways that are not necessarily consistent with our own, but are invisible to us.
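Taken literally, the vector metaphor can be written down as follows; this is a purely notational sketch of the image used above, not a formal model advanced in the paper:

```latex
% Notational sketch only: the composite intentionality as the resultant of the
% human and machine components, as in the "sum of vectors" metaphor above.
\[
  \vec{I}_{\mathrm{composite}} \;=\; \vec{I}_{\mathrm{human}} \;+\; \vec{I}_{\mathrm{machine}}
\]
```

On this reading, the larger the machine's contribution, the more the direction of the resultant departs from the purely human orientation; and, as argued above, that departure is typically invisible to the user.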
If the process of becoming cyborg cannot be avoided (nor should it be), it should at least become a matter of public awareness: we must make apparent what is invisible, so that we can still pursue “knowledge and virtue” and not become the weakest link, brutes and animal-like, acted upon by the machines rather than actors of our own being cyborgs.

Acknowledgments

Research for this article was supported by the 2012 National Research Grant (PRIN) “Models and Inferences in Science: Logical, Epistemological, and Cognitive Aspects” from the Italian Ministry of Education, University and Research (MIUR).

Author Contributions

The authors contributed equally to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Durling, R.M. (Ed.) The Divine Comedy of Dante Alighieri, Volume 1: Inferno; Oxford University Press: New York, NY, USA, 1996.
  2. Clynes, M.E.; Kline, N.S. Cyborgs and space. Astronautics 1960, September, 26–27, 74–76. [Google Scholar]
  3. Magnani, L. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  4. Pino, B. Re-assessing ecology of tool transparency in epistemic practices. Mind Soc. 2010, 9, 85–110. [Google Scholar] [CrossRef]
  5. Haraway, D. A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, Cyborgs and Women: The Reinvention of Nature; Haraway, D., Ed.; Routledge: New York, NY, USA, 1991; Chapter 8; pp. 149–182. [Google Scholar]
  6. Clark, A. Natural-Born Cyborgs. Minds, Technologies, and the Future of Human Intelligence; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  7. Waite, C.; Bourke, L. Using the cyborg to re-think young people’s uses of Facebook. J. Sociol. 2013. [Google Scholar] [CrossRef]
  8. Verbeek, P.-P. Cyborg intentionality: Rethinking the phenomenology of human–technology relations. Phenomenol. Cognit. Sci. 2008, 7, 387–395. [Google Scholar] [CrossRef]
  9. Clark, A. Supersizing the Mind. Embodiment, Action, and Cognitive Extension; Oxford University Press: Oxford, UK; New York, NY, USA, 2008. [Google Scholar]
  10. Greenfield, S. Tomorrow’s People: How 21st-Century Technology Is Changing the Way We Think and Feel; Penguin Books: London, UK, 2004. [Google Scholar]
  11. Warwick, K. I, Cyborg; Century: London, UK, 2002. [Google Scholar]
  12. Magnani, L. Morality in a Technological World. Knowledge as Duty; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  13. Amichai-Hamburger, Y.; McKenna, K.Y.A.; Tal, S.A. E-empowerment: Empowerment by the internet. Comput. Hum. Behav. 2008, 24, 1776–1789. [Google Scholar] [CrossRef]
  14. Bertolotti, T.W.; Bardone, E.; Magnani, L. Perverting activism. Cyberactivism and its potential failures in enhancing democratic institutions. Int. J. Technoethics 2011, 2, 14–29. [Google Scholar] [CrossRef]
  15. Faiola, A.; Matei, S.A. Cultural cognitive style and web design: Beyond a behavioral inquiry into computer-mediated communication. J. Comput.-Mediat. Commun. 2005, 11, 375–394. [Google Scholar] [CrossRef]
  16. Bertolotti, T.; Magnani, L. A philosophical and evolutionary approach to cyber-bullying: Social networks and the disruption of sub-moralities. Ethics Inf. Technol. 2013, 15, 285–299. [Google Scholar] [CrossRef]
  17. Krishnan, A. Killer Robots: Legality and Ethicality of Autonomous Weapons; Ashgate: Burlington, VT, USA, 2009. [Google Scholar]
  18. Ihde, D. Technology and the Lifeworld; Indiana University Press: Bloomington, IN, USA; Minneapolis, MN, USA, 1990. [Google Scholar]
  19. Greenhow, C.; Gibbins, T.; Menzer, M. Re-thinking scientific literacy out-of-school: Arguing science issues in a niche Facebook application. Comput. Hum. Behav. 2015, 51, 593–604. [Google Scholar] [CrossRef]
  20. Acquisti, A.; Gross, R. Imagined communities: Awareness, information sharing and privacy on Facebook. In Proceedings of the PET’06 Proceedings of the 6th International Conference on Privacy Enhancing Technologies, Cambridge, UK, 28–30 June 2006; pp. 36–58.
  21. Perrin, A. Social Networking Usage: 2005–2015. Pew Research Center. 8 October 2015. Available online: http://www.pewinternet.org/2015/10/08/2015/Social-Networking-Usage-2005-2015/ (accessed on 22 December 2016).
  22. McEwan, B. Sharing, caring, and surveilling: An actor–partner interdependence model examination of Facebook relational maintenance strategies. Cyberpsychol. Behav. Soc. Netw. 2013, 16, 229–247. [Google Scholar] [CrossRef] [PubMed]
  23. Oeldorf-Hirsch, A.; Sundar, S.S. Posting, commenting, and tagging: Effects of sharing news stories on Facebook. Comput. Hum. Behav. 2015, 44, 240–249. [Google Scholar] [CrossRef]
  24. UNICEF Social and Civic Media Section. Tracking anti-vaccination sentiment in Eastern European Social Media Network; UNICEF: New York, NY, USA, 2012. [Google Scholar]
  25. Bessi, A.; Scala, A.; Rossi, L.; Zhang, Q.; Quattrociocchi, W. The economy of attention in the age of (mis)information. J. Trust Manag. 2014, 1, 12. [Google Scholar] [CrossRef]
  26. Frein, S.T.; Jones, S.L.; Gerow, J.E. When it comes to Facebook there may be more to bad memory than just multitasking. Comput. Hum. Behav. 2013, 29, 2179–2182. [Google Scholar] [CrossRef]
  27. Wilcox, C. It’s time to e-volve: Taking responsibility for science communication in a digital age. Biol. Bull. 2012, 222, 85–87. [Google Scholar] [CrossRef] [PubMed]
  28. Jackson, S. Black box arguments. Argumentation 2008, 22, 437–446. [Google Scholar] [CrossRef]
  29. Keen, A. The Cult of Amateur. How Today’s Internet Is Killing Our Culture and Assaulting Our Economy; Nicholas Brealey Publishing: London, UK, 2007. [Google Scholar]
  30. Sundar, S.S. Self as source: Agency and customization in interactive media. In Mediated Interpersonal Communication; Konijn, E.A., Utz, S., Tanis, M., Barnes, S.B., Eds.; Routledge: New York, NY, USA, 2008; pp. 58–74. [Google Scholar]
  31. Bruns, A.; Highfield, T. Blogs, Twitter, and breaking news: The produsage of citizen journalism. In Produsing Theory in a Digital World: The Intersection of Audiences and Production in Contemporary Theory; Lind, R.A., Ed.; Peter Lang Publishing Inc.: New York, NY, USA, 2012; pp. 15–32. [Google Scholar]
  32. Bingham, P.M. Human uniqueness: A general theory. Q. Rev. Biol. 1999, 74, 133–169. [Google Scholar] [CrossRef]
  33. Magnani, L. Understanding Violence. Morality, Religion, and Violence Intertwined: A Philosophical Stance; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  34. Walther, J.B.; Loh, T.; Granka, L. Let me count the ways: The interchange of verbal and nonverbal cues in computer-mediated and face-to-face affinity. J. Lang. Soc. Psychol. 2005, 24, 36–65. [Google Scholar] [CrossRef]
  35. Gillett, G.; Franz, E. Evolutionary neurology, responsive equilibrium, and the moral brain. Conscious. Cognit. 2016, 45, 245–250. [Google Scholar] [CrossRef] [PubMed]
  36. Debatin, B.; Lovejoy, J.P.; Horn, A.K.; Hughes, B.N. Facebook and online privacy: Attitudes, behaviors, and unintended consequences. J. Comput.-Mediat. Commun. 2009, 15, 83–108. [Google Scholar] [CrossRef]
  37. Bertolotti, T. Facebook has it: The irresistible violence of social cognition in the age of social networking. Int. J. Technoethics 2011, 2, 71–83. [Google Scholar] [CrossRef]
  • 1.We report Robert M. Durling’s translation [1] (p. 405).
  • 2.Magnani [12] carried out a thorough reflection about the increasing hybridization of human beings, considering its philosophical and cognitive implications. One of the cores of the analysis, resonating in this paper, concerned the lack of knowledge possessed by users faced with ever more intelligent devices: a lack of awareness coupled with a similar lack of the necessary technical skills that would allow humans to responsibly navigate the technological present.
  • 3.Cognitive correlations between culture and ease of use of determinate web design were proposed by [15]: these studies further corroborate contentions about the distribution of cognitive tasks between users and the Internet.
  • 4.This uncertainty is legally reverberated by the Terms and Conditions users must subscribe to, usually yielding the ownership of texts and data to the SNS: albeit legally meaningful, this just reinforces the notion of a super-cyborg (specifically a network-cyborg) transcending the individual (yet cyborgized) selves into a greater entity.
  • 5.It can be argued that the reception of emails and text messages also depends on services establishing the priority of their retrieval, but these do not rank relevance. Spam filters, conversely, can be considered a kind of pro-social mediator inasmuch as they perform kinds of guessing and establish courses of action affecting users’ real lives.
  • 6.See a recent report from the Pew Research center on the usage of Social Media [21].
  • 7.The notion of responsive equilibrium, developed in evolutionary neuro-ethics, suggests that the “moral function can be regarded as maximally integrating emotion, social cognition, and other-regarding sensibilities using propositionally organized cognitive structures that map a shared world of human activity and relationships so that they take account of what in social and personal life counts as something” [35] (p. 245): this can be particularly revealing for understanding how and whether moral proximity is affected by remote communication.
