
Digital whiplash: The case of digital surveillance

Katherine Dormandy
From the journal Human Affairs

Abstract

Digital technology is rapidly transforming human life. But our cognition is honed for an analog world. I call this the problem of digital whiplash: that the digital transformation of society, like a vehicle whose sudden acceleration injures its occupants, is too fast to be safe. I focus on the unprecedented phenomenon of digital surveillance, which I argue poses a long-term threat to human autonomy that our cognition is ill-suited to recognize or respond to. Human cognition is embodied and context-sensitive, and thus faces four problems of digital whiplash vis-à-vis digital surveillance. First is the problem of signal sparsity, that there are few if any perceptible indications of digital surveillance. Second is the problem of signal elusiveness, that the few indications that do exist are prohibitively difficult to discover. Third is the distraction problem, that using digital technologies corrodes the cognitive abilities we need to recognize and respond to digital surveillance. Fourth is the hooking problem, that digital technologies are engineered to cultivate dependency. I address, among other objections, the idea that we choose to exchange our privacy for the use of digital technologies, so their use is in fact an expression of autonomy. The response to digital whiplash, I argue, is not to slow down the digitalization of society in its current form so that we can adapt our cognitive capacities. It is to allow ourselves the time to reflect on and debate whether digitalization, in its current form, is delivering a socially and ethically desirable future.

Introduction

Digital technology is rapidly transforming human life. But our cognition is honed for an analog world. Cognition is flexible, enabling easy adaptation, but I will argue that this flexibility does not serve us in a digitalized world—and that in some ways it is even a disadvantage. I call this the problem of digital whiplash: that the digital transformation of society, like a vehicle whose sudden acceleration injures its occupants, is too fast to be safe. This is not because we need more time to adapt our cognitive capacities to a digitalized world. It is because we are forfeiting the time to reflect on whether digitalization, in its current form, is delivering a socially and ethically desirable future. As we will see, digitalization in its current form may even impede our thinking clearly about this question at all.

The problem of digital whiplash has many facets. The focus here is the unprecedented phenomenon of digital surveillance. I argue that digital surveillance poses a threat to the autonomy of digital-technology users, but that human cognition is ill-suited to recognize or respond to this threat—and that continued use of digital technologies makes things worse.

I’ll spell out the threat posed by digital surveillance (in the section below), and then discuss two features of human cognition (in the section after that). I outline four problems of digital whiplash that hinder us cognitively from self-protecting against digital surveillance (in the sections entitled ‘Signal sparsity and signal elusiveness’ and ‘Distraction and hooking’) and then conclude.

Digital surveillance

Digital technology dominates many aspects of life, including political discourse, personal relationships, and commercial and consumer endeavors. So, as a result, does digital surveillance. To surveil someone is to closely observe him—not with the disinterested fascination of a lover gazing at a beloved, but rather with a specific aim that typically involves control or gain; think of a bouncer eying a bar crowd, or paparazzi scanning a celebrity neighborhood. Digital surveillance is the use of the Internet or Internet-enabled technologies and services (for convenience: digital technology), not literally to watch people, but to gather and crunch data about them—because of what one might control or gain by doing so.

It works like this. When we use digital technologies, we generate mountains of personal data: about our contacts, geographical movements, links clicked and amount of time spent on them, purchases made, purchases aborted, postings, and so forth. These data become the de facto property of the digital-service providers (or those who employ them), who crunch them with sophisticated algorithms in search of correlations. Service-providers also conduct controlled experiments in search of causal (and thus more informative) relations, for example by showing different users different platforms and comparing their activity (Varian, 2014, pp. 29–30; Zuboff, 2019, pp. 299–309; Stalder, 2018, pp. 127–151).

The more personal data accumulate, and the greater their variety, the more informative the correlations or causal relations that can be located, and the more accurately users can be profiled. User profiles help generate predictions about how we will think or behave in certain circumstances (Varian, 2014, pp. 27–29): how we tend to feel at certain times of day, what and when we might incline toward purchasing if the easy opportunity presents itself, our gender or sexual orientation, our physical and mental health, voting proclivities, and much more (Stalder, 2018, pp. 127–151; Snowden, 2019, chapter 25; Zuboff, 2019, pp. 199–328). The more accurate the user profiles, the greater the probability that, when users are put in the circumstances in question, they will behave as predicted.
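
To make the logic of profiling and prediction concrete, consider a deliberately minimal sketch in Python. Everything in it is an assumption made purely for illustration (the event fields, the toy data, and the crude frequency-based estimate); actual providers operate at vastly greater scale and with far more sophisticated methods.

# Illustrative sketch only: a toy, hypothetical example of how accumulated
# behavioral data can be turned into predictions about a user. The field
# names and data are invented for illustration.
from collections import defaultdict

# Hypothetical event log: (hour_of_day, category_viewed, purchased)
events = [
    (22, "comfort_food", True),
    (22, "comfort_food", True),
    (23, "comfort_food", False),
    (9, "office_supplies", False),
    (10, "office_supplies", False),
    (22, "comfort_food", True),
]

# Build a simple "profile": purchase rate per (hour, category) context.
counts = defaultdict(lambda: [0, 0])  # context -> [purchases, views]
for hour, category, purchased in events:
    counts[(hour, category)][1] += 1
    if purchased:
        counts[(hour, category)][0] += 1

def purchase_probability(hour, category):
    """Estimated probability that this user buys in the given context."""
    purchases, views = counts[(hour, category)]
    return purchases / views if views else 0.0

# The more data accumulate, the sharper such estimates become, and the
# better timed a targeted ad can be (e.g., reach this user around 22:00).
print(purchase_probability(22, "comfort_food"))    # 1.0 on this toy data
print(purchase_probability(9, "office_supplies"))  # 0.0

Even this toy profile yields a prediction of the kind just described: it suggests that this hypothetical user is most receptive to a well-timed offer late in the evening.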

Digital surveillance thus threatens users’ autonomy. A person is autonomous to the extent that her decisions and actions originate in herself and are free from coercion or manipulation. Coercion forces a person’s will by removing her acceptable alternatives, and manipulation covertly influences the deliberations that shape her will (Wood, 2014, chapter 12). You coerce a person to give you his rucksack by putting a gun to his head; you manipulate him to do so by getting him to entrust it to you while he goes to the bathroom. Digital surveillance facilitates both coercion and manipulation (Carr, 2011, pp. 116–176; Keen, 2015, pp. 162–183; Stalder, 2018, pp. 127–151; Snowden, 2019, chapter 25; Zuboff, 2019, pp. 233–474). Start with coercion. The more a company or government knows about you, the more effectively they can get you to do what they want, even against your will. For example, if an insurance company with digitalized remote control of your car ignition knows that you are driving while behind on your payments, they can prevent the car from starting (Varian, 2010, 2014, p. 30). Or if a government knows who has communicated with dissenters, or behaved in other undesirable ways, it can withhold access to essential goods and services. As for manipulation, if a company knows when a person is apt to feel down, they can feed her targeted ads at just that time (Varian, 2014, pp. 28–29); or a curated media outlet can influence users’ sense of what issues matter by arranging upper-level categories, choice architecture, or recommendations accordingly (Alfano, Carter, & Cheong, 2018, pp. 301–311).

So digital surveillance threatens users’ autonomy by facilitating coercion and manipulation on the part of digital-service providers and governments; we may call this the surveillance threat. The scale of this threat is unprecedented. One reason is that the quantity and detail of information at the disposal of tech firms and governments are unprecedented (Varian, 2010; Zuboff, 2019, pp. 12–14). Another reason, as I will now argue, is that we are cognitively ill-equipped to either recognize or respond to digital surveillance in self-protective ways.

Cognition is embodied and context-sensitive

When faced with a threat, human beings typically recognize it, and on the basis of this recognition, respond. This holds for threats to our autonomy (Brehm & Brehm, 1981). But when it comes to the threat to autonomy posed by digital surveillance, we have trouble either recognizing or responding. One reason, argue Williams (2018, pp. 43–49) and Zuboff (2019, pp. 12–14), is that conventional concepts are insufficient to grasp this unprecedented threat. Another reason, I argue, arises not from the content, but the workings of human cognition. To see this, let’s start by examining two of its features.

The first is that cognition is embodied (Noë, 2004; Shapiro, 2019). We gather sensory data through our five physical senses plus our proprioceptive awareness. Our brains, which process this data and organize it into a map of the world, are made of cells, including neurons that pass electrical and chemical signals to and from the rest of our bodies, which are also full of neurons. Our cognitive processes are intimately intertwined with our bodies.

Second, our cognitive abilities are context-sensitive. As a species, we are evolutionarily attuned to engage optimally with our environments. As individuals, our cognitive architecture and habits are influenced by how we use them. Our brains exhibit neuroplasticity (Pascual-Leone, Amedi, Fregni, & Merabet, 2005): our neural pathways, like muscles, strengthen with use and weaken with disuse; so the way in which we use our cognitive abilities affects how developed they are (Carr, 2011, pp. 17–35).

These two features of cognition—that it is embodied and context-sensitive—position us well to recognize and respond to analog surveillance. We presumably made it this far as a species because our ancestors recognized predators and enemies and responded appropriately. But when surveillance is digital, I’ll argue, the problem of digital whiplash arises: we are ill-equipped to recognize or respond in self-protecting ways. I’ll discuss four problems of digital whiplash pertaining to surveillance: signal sparsity, signal elusiveness, distraction, and hooking.

Signal sparsity and signal elusiveness

For beings cognitively outfitted as we are, there are few if any indications of digital surveillance; I call this the problem of signal sparsity. To see it, compare digital with more conventional surveillance (including digital but non-Internet-connected cameras). A conventional surveillor who wants to evade detection typically leaves traces that are discoverable by our usual sensory mechanisms—operating for example via wiretaps, footprints, cameras, or disguises. And we have what it takes, cognitively, to pick up on such traces. Sometimes this requires effort (e.g., noticing disguises or hidden cameras), but it is effort that we can exert if we wish. Other traces of conventional surveillance we are already hyperalert to—such as creaking doors, voices, or glances. Mere pictures of eyes, for instance, can trigger our sense of being watched (Kahneman, 2011, pp. 54–58).

Digital surveillance is different. It offers few if any traces detectable by the usual embodied and context-sensitive channels. We do not sense when Facebook tracks our Web browsing, or when our phone sends location data to the nearest mobile mast. We do not feel overheard by phones or refrigerators. Nor do exertions of special effort avail us much: even for users who are IT experts, there are few if any smoking guns. Digital surveillors who want to avoid detection can do so easily.

Beyond the sparsity of signs of surveillance keyed to our cognition, the few signs that are in principle discernible are hard to locate and change unpredictably; this is the problem of signal elusiveness. Different apps, gadgets, and services have different information-gathering settings, which providers are apt to change without notice (Zuboff, 2019, pp. 218–225). Many gather data about our locations, for example by connecting to mobile masts, even when apps are disabled. Digital contexts are personalized, differing from one person to another. We have no way of knowing whether we are the subject of a controlled experiment by a service-provider. When regulatory bodies compel providers to inform users about surveillance (which in many countries they do not), most providers comply as elusively as possible. Privacy notifications are notoriously difficult to read (e.g., small-print legalese), and there is rarely an option to use a given service without surveillance, for example by paying extra. Even for users who are IT experts, it can be fiendishly difficult to disable the surveilling features of an app (if the app is even built to function without them). So even when there are signals of digital surveillance that users can in principle pick up on, their elusiveness presents additional friction en route to doing so.

Signal sparsity and signal elusiveness are obstacles to recognizing digital surveillance. But they also hinder our responding to it self-protectively—for example by researching the matter, finding alternatives to certain services, dropping digital technologies entirely, lobbying for legislation, and so forth. The reason is that what knowledge we have is apt to be theoretical rather than sensory or associative (in the terms of Kahneman (2011), “system 2” rather than “system 1”). We may know in theory that surveillance occurs. Yet theoretical knowledge is less directly and viscerally motivating than sensory knowledge. People are much less inclined, for example, to act on the basis of statistics than on the basis of their experiences, even when both deliver the same information about risks (Kahneman, 2011, pp. 166–170). The vagueness of the surveillance threat, and the theoretical way in which we know about it, are thus additional sources of friction that discourage a proactive response.

One might object that signal sparsity and signal elusiveness are just teething problems during a period of technological and social transition. Just wait a few generations—our embodied and context-sensitive cognition will adapt. Just as over time we formed knee-jerk associations between eye-shaped stimuli and the presence of agents, it is plausible that we will form knee-jerk associations between digital technologies and surveillance. This will motivate us to take protective measures individually and collectively.

But this optimism is misplaced. Digital surveillance is not likely to prompt the same sort of cognitive adaptation that predators and enemies did in the past. For predators and enemies pose immediate threats to our bodies; failing to notice one can be instantly fatal. In contrast, the threat to autonomy posed by digital surveillance—of being gradually influenced to think and behave in ways that profit others—is not immediate, clear, or physical, but rather long-term, diffuse, and psychological. Surveillance is too removed from its harmful effects, and those effects are too gradual and amorphous, for a spontaneous cognitive connection to develop. Of course, matters might be different if digital surveillance were immediately life-impacting, as in a digital fascist state or a heavy-handed social-credit system; but short of some such development, my response holds.

The objector will continue. We need not be cognitively adapted to a threat in order to respond to it, but can adapt socially and institutionally. This is already happening, with developments such as tracking-free browsers and social media, and privacy regulations in many countries. [1] In response, these developments are laudable. I am not claiming that we can only recognize and respond to threats that we are cognitively adapted to. But given the lack of cognitive adaptation to digital surveillance, social and institutional adaptation is crucial—and it is doubtful whether these developments are robust enough to protect us. Reasons for this include obstacles at the institutional level, such as politics, international differences in regulatory approach, and the financial power behind personal-data gathering (see e.g. Keen, 2015; Stalder, 2018, pp. 128–151; Zuboff, 2019, pp. 98–127). Additional reasons, once more, are cognitive. They constitute the two remaining problems of digital whiplash. We will see that the embodiedness and context-sensitivity of cognition, far from promoting cognitive or social adaptation to protect ourselves from digital surveillance, exacerbate the threat.

Distraction and hooking

Recognizing the surveillance threat, as we have just seen, is not a visceral matter but one of theoretical understanding. And addressing it once you do recognize it is not a matter of fighting or fleeing, but of researching options and making and carrying out a game plan. Recognizing and responding to digital surveillance, in other words, require sustained theoretical engagement, critical thinking, commitment, focus, prioritizing, big-picture goal-setting, and planning.

Yet these skills are prone to being compromised—by our use of the technologies that surveil us. Thus arise the third and fourth problems of digital whiplash.

To see the third problem, note that human beings can only attend to a few things at once (Miller, 1956). And digital technologies notoriously consume attentional bandwidth. In addition to frequent and varied notifications (of which some can, with effort, be disabled, but others cannot), we face near-constant decisions: whether to click on a hyperlink, succumb to the urge to check email or social media, navigate away from whatever we are doing to more engaging content, abandon one search result to work through the remaining 9,999, and so forth. We might not register these distractions consciously. But our cognitive activity when engaging with digital technology, even when simply reading an e-book, registers neurologically as short-term problem-solving, rather than the sort of deep focus needed for long-term learning and understanding (Carr, 2011, pp. 177–197). What results is the distraction problem: we bear a heavy cognitive load even when we are unaware of it. This takes the form of the extraneous problem-solving involved in interacting with the digital medium, fending off the urge to switch tasks, or deciding whether a new task is worth switching to; and the switching costs of re-focusing after navigating from one task to another (Carr, 2011, pp. 115–143; Sweller, Ayres, & Kalyuga, 2011; Williams, 2018, pp. 17–25).

This is a problem because, if we use digital technologies regularly, the context-sensitivity of our cognition will ensure that this attention-scattering becomes habitual and that more focused cognition, with disuse, becomes more difficult (Carr, 2011, pp. 115–143; Williams, 2018, pp. 17–25). Over time, our capacity for sustained thought, and the long-term remembering of what we take in moment to moment, are apt to diminish. Since focus and long-term memory are necessary for critical thinking and understanding (Carr, 2011, pp. 115–143), the latter are prone to diminish too. Further, our ability to distinguish high- from low-priority information erodes: the newest input (such as a message, or a site we have just navigated to) tends to feel as important as longer-term, more abstract-seeming, concerns. And we bring these cognitive habits with us offline (Ophir, Nass, & Wagner, 2009), as frequent use of digital technologies changes the structure of our brains (Carr, 2011, pp. 115–143). Our cognition thus adapts to function optimally in our habitual attention-scattering contexts. And this inhibits the alternative skills of sustained theoretical engagement, critical thinking, commitment, focus, perseverance, and the big-picture setting of goals and priorities—that is, the skills we need to recognize and respond to the surveillance threat.

These attention-scattering effects of digital-technology use are no accident, but are part of the business model. Business booms when users attend to their devices, spend time on social media, or click on new links, including but not limited to advertisements. Every action generates personal data at the very least, and in some cases ad revenue as well (Varian, 2014, pp. 28–29; Carr, 2011, pp. 149–175; Zuboff, 2019, pp. 8–12).

So in addition to the surveillance threat described in the second section, the distraction problem highlights another digital threat to autonomy, which I’ll call the threat of cognitive erosion. This arises from the use of the technologies that surveil us, which erodes the cognitive skills that we need to respond self-protectively.

An objection looms. We are not victims. We are responsible for exercising moderation and persisting in activities—such as shutting off devices, reading a physical book, or meditating—that reinforce cognitive habits of focus, theoretical understanding, and priority-setting. To the extent that we decline to do these things, we are in no position to complain about the threat of cognitive erosion. In response, we may agree with the objector that we are responsible for our actions. But responsibility is getting ever harder to exercise as digital technology gets more sophisticated and ubiquitous. Producers of digital technology must exercise responsibility too, but their products are designed to do the opposite: to erode rather than support users’ self-restraint. This will become clearer in discussing the fourth problem of digital whiplash.

It arises from the fact that many digital technologies are “habit-forming” (Eyal, 2014), even “addictive” (Alter, 2017, pp. 1–12). A habit is a repeated default behavior, like brushing your teeth or biting your nails; an addiction is a behavior that produces withdrawal symptoms when you do not perform it, typically despite long-term consequences you know to be bad. Habits and addictions predate digital technologies. But digital technologies, so says the hooking problem, are engineered to cultivate them—with great success. This delivers a third threat to our autonomy: to the extent that we would ideally prefer to cut down on digital-technology use but find this hard because we are habituated or addicted, there is an element of coercion or manipulation in our use of it. We may call this the dependence threat.

There are two ways in which digital technologies are designed to hook. First, tech designers employ armies of psychologists to develop ways of encouraging unprompted repeat use (Alter, 2017, pp. 93–236; Williams, 2018, pp. 26–37). They design interfaces to stimulate our neural pleasure center (the swipe mechanism is a case in point), and determine which patterns of variable rewards yield the most effective positive reinforcement, and which unpleasant emotional states (such as fear of missing out) the best negative reinforcement. Second, the user profiling enabled by digital surveillance also contributes to hooking. The more detailed and accurate our profiles are, the easier it is to individually tailor variable content, for example on social-media feeds or search engines, to keep us returning. The increasing prevalence of Internet addiction (plus the fact that this is now a diagnostic category) testifies to the efficacy of these methods (Alter, 2017, pp. 26–27, 254–255). Psychological pressure to engage digitally “has become industrialized” (Williams, 2018, p. 27).

So tech designers harness sophisticated knowledge about the embodied workings of our cognition in various contexts, for the sake of habituating or addicting us to the technologies that both surveil us and that, with prolonged use, erode the cognitive abilities we need to recognize and respond.

The objector will continue. Hooking or no, we make our choices. We evidently prefer habituation or addiction to abandoning our devices; and we choose to exchange privacy for cheap and convenient services. Mark Zuckerberg for example claims that “[p]eople have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that’s evolved over time” (quoted in Paul, 2010); Google’s Eric Schmidt and Jared Cohen say that “users will voluntarily relinquish things they value in the physical world—privacy, security, personal data—in order to gain the benefits that come with being connected to the virtual world” (2013, p. 258). On this view, far from threatening our autonomy, living digitally is an autonomous expression of what we value.

But matters are not so simple. The objection may hold for some. For many others, however, opting out of digital technology is not a real alternative; it is an option only for the highly privileged. In less than a generation digital technologies have become, in very many spheres, a necessity for social inclusion and professional advancement. When eschewing WhatsApp leaves you wondering what your friends are planning, when lacking a Facebook account leaves you disadvantaged on the job market, or when being unavailable on your mobile makes you fodder for redundancy, the objector’s talk of responsibility, choice, and autonomy ignores reality. Schmidt and Cohen exhort us to use privacy-respecting products when they are available, and to support political parties that take privacy seriously (2013, pp. 32–36). But citizens and consumers face a high hurdle given that digital-technology companies (the largest of which have the economic firepower of a medium-sized country) engage in multi-pronged lobbying to weaken regulation and privacy protection, and use their market power to limit consumer choice (Zuboff, 2019, pp. 98–127). Technology companies actively manipulate users into dependence on their products and services, and those who would opt out are increasingly coerced by social and professional structures to participate. The hooking problem, it turns out, applies not just to individual users but to society, as digital technologies—and with them surveillance—burrow into the foundations of our social structures.

These considerations help us see why the advice of Nir Eyal, the author of Hooked: How to Build Habit-Forming Products (2014), falls flat. His subsequent book, Indistractable: How to Control Your Attention and Choose Your Life (2019), is an unselfconsciously ironic manual for avoiding digital addiction. He advises users who find themselves pulled to their devices to overcome these impulses, in the short term, by learning to ride the wave when they occur. But these impulses—if digital developers have scrupulously followed Eyal’s own instructions in Hooked—will be hard to resist. This adds one more stone to the already formidable cognitive load that gives rise to the distraction problem. Constantly resisting impulses is a cognitive burden that depletes energy for other tasks (Baumeister & Tierney, 2011, chapter 4). As for Eyal’s long-term advice, it is that users fix whatever unsatisfactory aspects of their own lives motivate excessive technology use to begin with. But the fact that digital technologies are so embedded in our social and professional structures (a fact that Eyal himself has helped precipitate) makes a mockery of the idea that individual choices will solve the problem.

Conclusion

One might object that I am ignoring the benefits conferred by digital technologies. I hardly need list them, but highlights include digital access to bank accounts for people with poor national infrastructure; global connection and political mobilization; new powers of social, professional, and personal organization; and so forth (E. Schmidt & Cohen, 2013; Varian, 2014). These benefits are considerable. But why suppose that they can only be had at the cost of individual and collective autonomy? Why suppose, that is, that to enjoy the benefits of digital technologies we must consent to the influences on our thought and behavior enabled by digital surveillance; to decreased ability to focus, plan, and prioritize; and to digital dependency?

Digital service-providers portray these costs as inevitable features of the technology or the economic model (Varian, 2010), or as things that have lost their value anyway (Mark Zuckerberg in Paul, 2010). But these are assumptions, and digital service-providers have a strong interest in pushing technological change so fast that we do not second-guess them. This is why I have characterized the concerns raised here as whiplash: we are rushing into a digitalized future defined by those who stand to profit from it, without sufficient reflection on our own psychological, social, or ethical safety.

We avoid whiplash by slowing down. We must give ourselves the time and resources to envision the kind of people we want to be and the kind of world we want to inhabit. This requires the very sort of thinking that engagement with digital technologies erodes—theoretical, reflective, focused, big-picture, and above all autonomous. In particular, we need this sort of thinking from a partnership between the social sciences and the humanities, provided their results are open-ended and independent of their funding sources. We need the social sciences (such as psychology, sociology, political science, and economics) to describe the effects of digitalization on society, culture, the economy, and geopolitics. But description is not enough. A complementary—and equally crucial—task is performed at the level of values. And values are the domain of the humanities: of such disciplines as philosophy, literature, theology, history, philology, and the arts (cf. Foley, 2018, pp. 27–35). These disciplines explore what it is to be human, what being human should and could be, and what matters for human flourishing.

We need the humanities to determine whether and in what ways the effects of digitalization are good, indifferent, or bad. And we need them to evaluate ourselves as we think about the digital world. Have we for example been enculturated as digital natives, or perhaps forgotten what a pre-digital world is like, and so implicitly accept Mark Zuckerberg’s dismissal of the value of privacy? If we have, should this bother us? What kind of world is most worth inhabiting, and what kind of person is it most worth being? If we lack a conscientious stance on evaluative matters like these, then other people—the loudest, the richest, and those with the most deeply entrenched interests—will impose one. With so much of what we value at stake, not least our values themselves, we need a robust discourse spearheaded by the humanities.

Yet humanities disciplines have increasingly been struggling for recognition, funding, and even, with closures threatened by financial scarcity, their own survival (Dix, 2018; Gruner, 2007; Nowotny, 2013; Arvan, 2020; Hutner & Mohamed, 2013; B. Schmidt, 2018). These developments harm the prospect of slowing down and reflecting on anything, not least on digitalization. But given the large effect that digitalization in particular has on our ability to think deeply and critically, including about the services and devices that have so quickly insinuated themselves into everyday life, the need to strengthen the humanities is all the more urgent. Let us slow down and evaluate digitalization in its current form, and decide whether the world it gives rise to reflects our values—and if it does not, what kind of world to strive for instead. [2]

References

Alfano, M., Carter, J. A., & Cheong, M. (2018). Technological seduction and self-radicalization. Journal of the American Philosophical Association, 4(3), 298–322. https://doi.org/10.1017/apa.2018.27

Alter, A. (2017). Irresistible: The rise of addictive technology and the business of keeping us hooked. New York: Penguin.

Arvan, M. (2020). Saving philosophy from elimination: What can be done? The Philosophers’ Cocoon. https://philosopherscocoon.typepad.com/blog/2020/06/saving-philosophy-from-elimination-what-can-be-done.html

Baumeister, R. F., & Tierney, J. (2011). Willpower: Rediscovering the greatest human strength. New York: Penguin.

Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A theory of freedom and control. New York: Academic Press.

Carr, N. (2011). The shallows: What the Internet is doing to our brains. New York: W. W. Norton.

Dix, W. (2018). It’s time to worry when colleges erase humanities departments. Forbes. Retrieved August 9, 2020, from https://www.forbes.com/sites/willarddix/2018/03/13/its-time-to-worry-when-colleges-erase-humanities-departments/#4c4cbd14461a

Eyal, N. (2014). Hooked: How to build habit-forming products. New York: Penguin.

Eyal, N. (2019). Indistractable: How to control your attention and choose your life. London: Bloomsbury.

Foley, R. (2018). The geography of insight: The sciences, the humanities, how they differ, why they matter. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190865122.001.0001

Gruner, W. D. (2007). Krise der Geisteswissenschaften? Ihre Stellung und Rolle, insbesondere die der Geschichtswissenschaft in Deutschland im Vergleich mit Frankreich, Großbritannien und den USA [Crisis of the humanities? Their position and role, in particular that of historiography, in Germany compared with France, Great Britain and the USA]. In J.-D. Gauger & G. Rüther (Eds.), Warum die Geisteswissenschaften Zukunft haben! [Why the humanities have a future!]. Freiburg im Breisgau: Herder.

Hutner, G., & Mohamed, F. G. (2013). The real humanities crisis is happening at public universities. The New Republic. https://newrepublic.com/article/114616/public-universities-hurt-humanities-crisis

Jackson, M. (2018). Distracted: Reclaiming our focus in a world of lost attention. Amherst, NY: Prometheus Books.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Keen, A. (2015). The Internet is not the answer. London: Atlantic Books.

Miller, G. A. (1956). The magical number seven, plus or minus two. Psychological Review, 63, 81–97. https://doi.org/10.1037/h0043158

Noë, A. (2004). Action in perception. Cambridge, MA: MIT Press.

Nowotny, M. (2013). Gesucht: Ein Weg aus der Krise [Wanted: A way out of the crisis]. ORF. https://sciencev2.orf.at/stories/1733898/index.html

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106(37), 15583–15587. https://doi.org/10.1073/pnas.0903620106

Pascual-Leone, A., Amedi, A., Fregni, F., & Merabet, L. B. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28, 377–401. https://doi.org/10.1146/annurev.neuro.27.070203.144216

Paul, I. (2010). Facebook CEO challenges the social norm of privacy. PC World, January 11. https://www.pcworld.com/article/186584/facebook_ceo_challenges_the_social_norm_of_privacy.html

Schmidt, B. (2018). The humanities are in crisis. The Atlantic. Retrieved August 9, 2020, from https://www.theatlantic.com/ideas/archive/2018/08/the-humanities-face-a-crisis-of-confidence/567565/

Schmidt, E., & Cohen, J. (2013). The new digital age: Transforming nations, businesses, and our lives. New York: Vintage Books.

Shapiro, L. (2019). Embodied cognition (2nd ed.). New York: Routledge. https://doi.org/10.4324/9781315180380

Snowden, E. (2019). Permanent record. New York: Metropolitan Books.

Stalder, F. (2018). The digital condition. Cambridge: Polity Press.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. New York: Springer. https://doi.org/10.1007/978-1-4419-8126-4

Varian, H. R. (2010). Computer mediated transactions. American Economic Review, 100(2), 1–10. https://doi.org/10.1257/aer.100.2.1

Varian, H. R. (2014). Beyond big data. Business Economics, 49(1), 27–31. https://doi.org/10.1057/be.2014.1

Williams, J. (2018). Stand out of our light: Freedom and resistance in the attention economy. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108453004

Wood, A. W. (2014). The free development of each: Studies on freedom, right, and ethics in classical German philosophy. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199685530.001.0001

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.

Published Online: 2020-10-09
Published in Print: 2020-10-27

© 2020 Institute for Research in Social Communication, Slovak Academy of Sciences
