1 Introduction

In 1987, the geographer and physiologist Jared Diamond wrote an article titled “The Worst Mistake in the History of the Human Race”, outlining the consequences of our shift from a hunter-gatherer existence to subsistence agriculture, a shift he described as “a catastrophe from which we have never recovered” (1987). The move away from a nomadic, hunter-gatherer lifestyle roughly 12,000 years ago to a settled, farming one brought a stable source of calories and an increase in population, and laid the foundation for trade and innovation, all of which made the world in which we live today.

And yet, it also brought untold changes that proved irreversible: social inequality arising from divisions of labor (Shin et al. 2012); deforestation and soil erosion (Wright and White 1996); an array of diseases and malnutrition that bound societies to a far more limited diet than their hunter-gatherer ancestors had enjoyed (Martin and Goodman 2002); as well as the concentration of political and ritual authority (Laneri et al. 2015).

In a later work, Diamond noted, “If you could choose between being a middle-class American, a Bushman hunter, and peasant farmer in Ethiopia, the first choice would undoubtedly be the healthiest, but the third choice might be the least healthy” (2002). What he was pointing to is the conundrum of innovation: there is no such thing as a free lunch in evolution, be it biological, social, or technological. With the benefits accrued from an invention rises a litany of drawbacks. This is the so-called ‘progress trap’ (Wright 2004; O’Leary 2007), in which material innovation gives rise to problems and uncertainty that societies are incapable of solving, resulting in stagnation and possible collapse.

This essay examines the developments stemming from another paradigm shift in technological innovation—the digital revolution—or what is popularly called the third industrial revolution, a revolution perhaps as profound as its ancient agricultural counterpart, with equally far-reaching implications. Diamond identified three areas where agriculture has made a significant negative impact: class division and inequality; adverse effects on health; and the concentration of power in the hands of the few.

The digital revolution is producing a similar impact along associated lines, but at a far more rapid rate given the advances in artificial intelligence (AI). Specifically, this includes the digital divide along sociodemographic lines as well as between the developed and developing world; the social and physical effects of a near-constant online existence; and the centralization and control of data (personal and otherwise) by a handful of technology corporations, along with the perhaps inevitable transfer of that control to the state.

This is not the first essay to raise such issues. Similar concerns about the geometric growth of technology and our ability to control it have been voiced over the years by people such as the mathematician Irving John Good (1966) and the philosopher Martin Heidegger (1977). There are also issues that simply cannot be covered within a single essay. Two, specifically, are the looming threat of mass unemployment due to automation and the potential (some argue likelihood) for the rise of sentient machines, or what is often called the technological singularity. What this essay aims to do, however, is examine the immediate and potentially long-term effects of the digital revolution against those of agriculture which, with thousands of years of hindsight, we know forever changed humanity.

Despite the concerns over technology, there is also no denying that the digital revolution has brought about remarkable changes over the past half-century, from the democratization of higher education and personalized medicine to advances in defense, logistics, and transportation (Scott 2006; Hui-Chen et al. 2021; Hamet and Tremblay 2017; Lau and Haugh 2018). But along with these developments has come an array of drawbacks, ranging from online ‘doxing’ and Internet addiction to the threats to individual liberties posed by the concentration of power in the hands of a few tech firms.

The first nomadic tribes experimenting with planting seeds could not foresee the physical and social detriments that would arise from doing so, and it has only been through archaeological excavations and forensic analyses over the past several decades that we can see the true impact of agriculture. The changes unleashed by the digital revolution, by contrast, are occurring in real time. Societal destiny is not always in the hands of individuals, though, and the dramatic environmental shifts that contributed to the demise of agriculture-based civilizations of the past, such as the Anasazi in the southwestern US (Benson and Berry 2009), Angkor in Cambodia (Buckley et al. 2010), and possibly the Maya in Central America (Kennett et al. 2012), may await us still. But outside of such calamities and our own reckoning with a changing climate, it is highly unlikely that the digital revolution will ever reverse itself, with societies choosing to return to a pre-digital lifestyle, just as most societies never willingly gave up agriculture. If Diamond is correct in his description of the peasant farmer as worse off compared to hunter-gatherers and those in economically advanced societies, are the bulk of those whose lives revolve around digital technology his equivalent?

2 Class divisions and a perpetual developing world

The adoption of agriculture varied widely around the world and involved diverse crops: the cultivation of cereals in the Middle East more than 12,000 years before present (B.P.); rice in China around 9000 B.P.; domesticated maize in Mexico at 8700 B.P.; bananas and taro roughly 7000 B.P. in New Guinea; and wild seed crops in the Eastern Woodlands of North America up to 4500 B.P. (Fuller et al. 2014). The picture that emerges is one of incremental change as societies moved from a mobile, hunter-gatherer existence to a more settled one that focused on food production. And while there are historical examples of societies abandoning farming due to the effects of war, disease, famine, or natural disasters, for most, once they became locked into a dependence on agriculture, there was no going back.

As populations grew, so, too, did inequality expand. Accumulated resources, seen in an increase in grave goods and the size of dwellings in the Near East, or in control over land tenure by individuals and powerful families in Polynesia, came with attendant dominance over economic and political matters (Price and Bar-Yosef 2010; Hayden and Villeneuve 2010). In the larger city-states of the Basin of Mexico, elites could hold sway over vast territories, acting as the bond between centers of power and outlying areas through intermarriage, trade, or other alliances (Smith 1986, 2021).

Status differentiation in and of itself is not unknown in non-agricultural societies (Borgerhoff Mulder et al. 2009) and has been witnessed in non-human primates for decades (Di Bitetti 1997; Sparks 2006 [1967]). However, it was the monopolization of predictable resources and intergenerational wealth that played a role in persistent inequality (Mattison et al. 2016). A commanding individual could rise to prominence through displays of power and prosperity, acquiring followers along the way through patron–client relationships, and eventually establishing hereditary rule by passing that ledger of reciprocal relationships on to one’s heirs. Over time, social inequalities became ‘baked in’, part of a social fabric out of which individuals could rarely move.

Unlike agriculture, the digital revolution was not a set of experiments separated by thousands of years in varied locations around the world. Rather, it was spearheaded by those countries already at the forefront of technology within the span of a few decades following the end of WWII. Whereas the monumental shifts initiated by agriculture were slow to accumulate, the inequality brought about by the digital revolution has arrived quickly given the global transition to digitization. The post-war interconnectedness of nations in trade, defense, transportation, and diplomacy, among other areas, ushered in this transition. But so, too, did Cold War-era fears of being left behind in the steady march toward modernization.

This new form of technological inequality has centered on the so-called ‘digital divide’, an issue that has been discussed for decades and was originally described as the advantage conferred on those with access to computers over those without, alongside socioeconomic variables such as residence (urban versus rural), education, income, English-language capabilities, and access to pre-existing technology (Rao et al. 1999; Kleinman 2001). Critics, however, have argued that issues of differential use, along with deterministic assumptions that the mere presence of technology facilitates learning, are more salient features of the divide, while the more basic need of literacy may be the real determinant of whether access to the Internet has any real meaning (Warschauer 2003; Chun-Yao and Hau-Ning 2010).

As technological innovations such as smartphones became more ubiquitous, particularly among developing nations that saw a greater increase across technologies (Dewan et al. 2010), the access divide decreased. Yet, globally, gaps persist along lines of sex, age, education, and economic stability, particularly between lower-paying jobs that do not require the use of the Internet and those that do (Calderón-Gómez et al. 2020), a distinction the COVID-19 pandemic made all the more obvious. Similar issues of inequality and class division persist with regard to Internet access and to social networks carrying significant social capital (Winseck 2017; Zhao and Elesh 2007).

The divide also falls along generational lines. The degradation of historical knowledge may be inevitable the more distant in time one gets from historical events and those who lived through them, but the rapid development of the Internet and its outgrowths, such as social media, has accelerated this process to an unprecedented extent. The result has been a shift in the transfer of cultural knowledge away from generational sources (parents, grandparents, teachers, and so on) and toward those found online. Those born from the early 1980s onward (Millennials and Generation Z) are less likely to be raised in a religious household compared to their parents, but also less likely to be familiar with the more seminal events of the twentieth century, and more likely to form their identity and opinions via influence from social media (Sultan 2017).

Looking at the early twentieth century by way of comparison, the generation of Americans that fought in WWI was 50 years removed from the US Civil War, yet the cause of the latter continued to be vociferously debated (Ramsdell 1937). Today, roughly 50 years after the collapse of South Vietnam, younger generations in the US and Vietnam express little interest in or knowledge of the Vietnam War (or the American War, as it is called in Vietnam), despite the momentous ramifications it had for both countries. As with their American counterparts, younger Vietnamese instead turn to social media to form their opinions around contemporary issues (Rosen 2015).

It is between nations, however, that the divide may be of greatest concern, particularly for countries on the lower end of the socioeconomic ladder. According to the International Telecommunications Union (ITU), urban populations and younger generations have greater access to the Internet, while developed countries have nearly double the number of users of developing countries, and four and a half times that of the least developed countries (LDCs) (ITU 2020). Social media, likewise, has become a ubiquitous and even mundane part of our online experience, with 3 billion of the 3.8 billion Internet users engaged with social media at any point in time (Kemp 2017).

The digital age has, thus, led to inequality on two levels: individual and national. Individuals with greater skills and know-how regarding access to and use of information (generally middle-class and college educated) will have a distinct advantage in terms of employment, social networks, and economic advancement. Nationally, the situation is potentially even more dire. While access within developing countries is still an issue, the real challenge in the future, and one that will impact developmental trajectories, is the divide between countries with technological prowess and those without.

A country such as Niger, with 6.62 children per woman, will have far different needs in terms of supportive care and employment than countries such as Singapore and Taiwan, which hover around one child per woman. However, as poorer nations develop and generally become more prosperous compared to their former state, they also see better health outcomes, such as greater infant survivability and longer lives. That is, greater numbers of people striving for a better life.

Thus, it is countries such as Nigeria, whose population has increased tenfold since 1950, that will likely be the wellsprings from which future migrants hail: countries that have made it out of the bottom rungs of development but are not so prosperous that their populations would have no reason to leave. Throw into the mix ongoing conflicts and environmental change, and those rising populations will likely be less inclined to stay at home and both more willing and more fiscally capable of making the journey to more prosperous regions of the world (King 2017). But given the backlash over globalization throughout the Western world and severe restrictions on immigration in many countries in Asia, that journey has become all the more difficult.

A far more desperate future could just as well be a technical and economic chasm the likes of which has not been seen since the Age of Discovery. Within richer nations, the social divide may lie between those whose jobs can be conducted online and a mass of humanity made redundant by automation: the so-called ‘non-essential’ workers of the COVID-19 pandemic. Poorer countries, meanwhile, could become forever chained to the bottom rung of development, dependent on a developed world acting as paternalistic overseer, doling out technology according to its estimation of which country is capable of handling what and when, or in exchange for whatever resources it can extract from the ground. Indeed, an even more ominous scenario could see poorer nations cease to functionally develop, becoming ever more dependent on powerful patrons who guard their technological secrets as ancient China once guarded the manufacture of silk.

3 Social and cognitive change

In his 1987 piece, Diamond wrote, “From the progressivist perspective on which I was brought up, to ask ‘Why did almost all our hunter-gatherer ancestors adopt agriculture?’ is silly. Of course they adopted it because agriculture is an efficient way to get more food for less work…While the case for the progressivist view seems overwhelming, it's hard to prove.” Indeed, the opposite is the case. The amount of time hunter-gatherers spend acquiring food is less than that required for agricultural production (Sahlins 1972; Dyble et al. 2019). The fruits of their labor, so to speak, are also more diversified, leading to a far healthier diet and lower food-related pathologies compared to their agriculturally dependent counterparts.

Research from around the world has demonstrated a general decline in dental health as well as an overall increase in morbidity with the shift to agriculture and the sedentism that followed (Ubelaker 1992; Gualandi 1992; Lukacs 1996). The narrowing of diets and reliance on domesticated plants compared to animal protein led to a retardation in bone length, thickness, and robusticity (Larsen 1995), not to mention the dangers of relying on a limited number of crops should a bad harvest or disease wipe them out. A carbohydrate-based diet satisfied the need to feed a growing number of hungry mouths, but the lack of nutritional value weakened the bodies attached to them.

And so, too, for our dopamine-driven clicks on the Internet. Chamath Palihapitiya, a former vice president for user growth at Facebook, has publicly stated that the short-term, dopamine-driven effects of social media are having long-term consequences on civil society through the manipulation and overproduction of shared information. At a talk at the Stanford Graduate School of Business in 2017, Palihapitiya said that “Bad actors can now manipulate large swaths of people to do anything you want. And we compound the problem. We curate our lives around this perceived sense of perfection, because we get rewarded in these short-term signals—hearts, likes, thumbs up—and we conflate that with value and we conflate it with truth” (Wang 2017).

While there is evidence that moderate use of social media may not pose a risk (Przybylski and Weinstein 2017), there is also evidence that heavy use of social media (5 + hours a day) among adolescents leads to higher rates of unhappiness and suicidal risk factors (Twenge and Campbell 2019). The risk for teenage girls in particular is much higher than for boys, with rates of depression, suicide-related outcomes, and suicide spiking after 2012, when use of social media among teens became common (Twenge et al. 2018; Haidt and Allen 2020). The explosive growth in the number of teenage girls coming out as transgender (70 times the expected prevalence rate) has likewise been linked to increased use of social media, with parents reporting peer groups in which one or even all friends became gender dysphoric within the same timeframe (Littman 2018).

The primacy of place that social media holds for many has led to the rise of digital narcissistic behavior (Faucher 2018) through the amplification of one’s profile to millions online, where the latest Tweet, Instagram photo, or YouTube video elicits an almost immediate response and cycles of engagement that occupy, for many, most of their waking hours. As the use of digital media has more than doubled, from close to 3 h per day in 2009 to more than 6 h per day in 2018, it is perhaps no surprise that living arrangements have begun to reflect this trend, with upscale “physical social network” (Hansen-Bundy 2018) residential apartment blocks offering all the services one needs. As the necessity of venturing out into other parts of a city to complete daily tasks is obviated, interacting with others outside of one’s immediate sphere of influence likewise becomes obsolete.

But while these physical spaces are designed to bring individuals together and provide an antidote to the impersonality of social media, a remedy is only needed when there is a problem: in this case, the dwindling importance many people, especially young people, place on face-to-face interaction. In its place is the increasing importance of immersing oneself in a sea of images, videos, and text that can be digested and curated quickly to make room for the next item in an endless stream of social consciousness from all points of the globe.

An increasingly isolated populace also leads to broader repercussions for the future demographic health of a society. W. Bradford Wilcox (2018) of the National Marriage Project at the University of Virginia cites online porn and video games as contributing to the decline in birthrates in the US, as well as to a decline in interest in sex among younger people. Although the US is nowhere near Japan, where one-third of adults between the ages of 18 and 34 have never had sex, movements such as #MeToo have the capacity to drive down sexual activity even further as men fear the aftermath of sexual encounters in an age where definitions of what constitutes consent remain ambiguous (ibid) and online ‘doxing’ campaigns can quickly upend lives.

The necessity of the Internet and handheld devices has also given rise to new ailments and an expanding catalog of acronyms to account for them. Internet Addiction Disorder (IAD) and problematic smartphone use (PSU) involve primarily visual stimuli, yet they share characteristics with substance-use disorders, such as rapid, impulsive processing with little reflection by the user (Roh et al. 2018). PSU has been shown to be positively related to anxiety, time spent on one’s phone, and, interestingly, the number of selfies taken, but negatively related to a connection with nature. The latter, by contrast, has been positively related to photos of nature (Richardson et al. 2018). In other words, the more time spent on smartphones, the less one is likely to take a photo of, say, the Grand Canyon for the sake of its own natural splendor, and the more likely a photo of oneself at the Grand Canyon, as if it were simply the background on which one’s persona is layered and validated through likes, retweets, and shares.

Our reliance on AI for tasks that were once part of the normal repertoire of human activities has saturated modern societies. These include the well-known ‘Google effect’, or relying on the Internet in place of our own memory, and a reduction in the desire to engage in demanding mental tasks and encode new information in our brains (Sparrow et al. 2011; Bohannon 2011; Storm et al. 2017). Where once our friends and family members were part of a network of transactive memory partners with whom we shared information, the Internet increasingly fills that role and has become for many the sole arbiter of knowledge (Wegner and Ward 2013). Surveys of university students have also found a similar reliance on search engines over library services, even when students are physically present in university libraries (O’Connor and Lundstrom 2011).

These trends stand alongside more serious issues, such as the increase in addictive disorders and associated complications. Massively multiplayer online role-playing games (MMORPGs) can lead to Internet gaming disorder (IGD), which has been linked to negative perceptions of past experiences (So-Kum Tang et al. 2017; Lukavská 2018), as well as to lower densities of gray matter in the cortex and corresponding behaviors such as impulsivity, distorted decision-making, depression, and anxiety disorders (Lee et al. 2018). And what occurs at the individual level possibly translates to the population level, meaning that greater amounts of information consumed at greater rates may indeed be resulting in lower attention spans for the public at large (Firth et al. 2020).

And it is these latter developments that are perhaps the most alarming of all. As Diamond noted of average human heights reconstructed from skeletons found in Greece and Turkey pre- and post-agriculture, whereas hunter-gatherers reached 5′ 9″ and 5′ 5″ for men and women, respectively, their agricultural successors saw those heights plummet to 5′ 3″ and 5′ (1987). And this drop in height came in addition to associated increases in malnutrition, infectious diseases, and degenerative conditions of the spine (ibid). From 1990 to today, use of the Internet went from close to 0 to nearly 50% of the world’s population, with IAD and IGD seeing similar exponential growth. As with agriculture’s impact on height, the digital revolution is having an equally profound effect on those with IAD and IGD, who exhibit a decrease in frontal lobe function and gray-matter volume (Jun et al. 2013; Pan et al. 2018). And when considering that more than 50% of US children aged 8–18 have smartphones and spend roughly 5–7 h a day on them (5 h a day for those 8–12; 7 h a day for those 13–18) (Rideout and Robb 2019), the long-term consequences for the cognitive abilities of younger generations are an open question.

However, acronyms and clinical analyses have a way of blinding us to the generality of such conditions, and the true number of those living in modern societies who may accurately be described as having IAD or IGD is unknown. A simple observational experiment can bear this out. Look around the coffee shop, train, airport terminal, or other public space you happen to be in now—what do you see? More than likely, it is individuals immersed in the soft glow of a screen next to others who are similarly wrapped in a constellation of pixels: a community of people who have walled their minds off from a shared reality which they dip in and out of between clicks and swipes. Our switch to agriculture and what we consumed with our mouths did not make our bodies bigger and stronger but smaller and weaker. The transition to the digital age, and our sensory immersion in what for many is a perpetual online existence, is well on its way to having an equally profound effect on our neurological health.

4 Toward a universal social credit system?

The conglomeration of power through the invention of agriculture can in many ways be said to be self-reinforcing. A reliable and increased source of calories means a greater number of individuals who can be fed and survive. An increase in the number of individuals reaching adulthood means a greater number of offspring likely to be born in the following years. As agriculture requires competent organization if a community is to have any reasonable chance of reaping its benefits year after year, those with the skills and charisma necessary to lead will usually rise to the top of the social hierarchy. But what historically solidified their position was an overarching religious system and a cadre of specialists capable of reading the signs of nature and the heavens, thus legitimizing the power of the sovereign, and ultimately the political and economic system in which commoner and elite alike were entwined. For the average peasant, the workings of those who could divine the whims of the gods were clouded in secrecy, hidden by sacred incantations and magical objects. But above all else, they were not to be questioned.

The digital sorcery of today has its own veil that shrouds its intrigues. Social media companies, at least within the United States, fall under a protected class of companies that are not held responsible for the content that can be accessed on them. Section 230, a provision of the Communications Decency Act passed by the US Congress in 1996, states that, as platforms and not publishers of information, they are protected from lawsuits over users’ posts while still being allowed to moderate posts that express violent or misleading content. With the growth of the Internet and social media since then, this has also meant that companies such as Google, Facebook, and Twitter have become the de facto media through which everyone from ordinary citizens to world leaders expresses their views. The result has been the unprecedented acquisition of power and influence by private companies over what individuals read, see, and share online (Lee 2018).

Although bemoaned in the West for its totalitarian underpinnings, the Chinese Communist Party’s (CCP) rollout of its Social Credit System may be a harbinger of things to come. The system quantifies online actions, ranging from opinions posted online and shopping habits to the amount of time spent watching videos or playing video games. Planning documents for the new system state that it would “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step” (Chin 2016). As Denyer writes, “your score becomes the ultimate truth of who you are—determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant—or even just get a date” (2016).

A high score means greater ease when making online transactions; a low score means expensive purchases via Alipay (a Chinese online payment system also connected to the government) could be blocked. Those with low scores may also be prohibited from boarding planes and high-speed trains (Wade 2017). Perhaps the most pernicious aspect of the system is that an individual’s social credit score can be lowered through the actions of his friends. You may be a firm believer in the CCP, but should your friend say something that could be construed as less than positive, your score could be placed in jeopardy simply for your association with him.
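To make that mechanism concrete, the sketch below is a deliberately simplified, hypothetical model of such a scheme, not the actual Social Credit System, whose scoring rules are not public: a single numeric score per person, a penalty for flagged behavior, a fraction of that penalty spilled over to friends, and a threshold gating access to a service such as high-speed rail. All names, weights, and thresholds are invented for illustration only.

```python
# Toy illustration of score penalties that propagate to friends.
# All values are hypothetical and chosen only to show the mechanism.

from dataclasses import dataclass, field


@dataclass
class Citizen:
    name: str
    score: float = 700.0                       # arbitrary starting score
    friends: list["Citizen"] = field(default_factory=list)

    def flag_behavior(self, penalty: float, spillover: float = 0.2) -> None:
        """Deduct a penalty from this citizen and a fraction from each friend."""
        self.score -= penalty
        for friend in self.friends:
            friend.score -= penalty * spillover   # guilt by association

    def can_board_high_speed_train(self, threshold: float = 650.0) -> bool:
        """Access to a service is gated on the current score."""
        return self.score >= threshold


# Usage: one friend's infraction lowers both scores.
a, b = Citizen("A"), Citizen("B")
a.friends.append(b)
b.friends.append(a)

a.flag_behavior(penalty=300.0)            # A posts something deemed 'less than positive'
print(b.score)                            # 640.0
print(b.can_board_high_speed_train())     # False: B falls below the travel threshold
```

Even in this toy version, a single infraction by one person is enough to push an otherwise compliant friend below an access threshold, which is precisely the associative pressure described above.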

But how different is this from the ‘court of Twitter opinion’ or other methods used by groups to silence and shame those with whom they disagree, or to pressure social media companies to adopt their definitions of what constitutes hate (Dibble 2020; Perman 2021; Capatides 2020)? The banning of individuals from Twitter, Facebook, and YouTube has become routine, while ‘sandboxing’ (limiting access to videos that are deemed controversial or hateful) is used against ordinary people and world leaders alike (YouTube 2017; Alba et al. 2021). Such bans have been criticized by politicians across the political spectrum around the world (Brennan 2021), raising the specter of a “digital oligarchy” that in many ways has greater power than the state to silence individuals with whom it disagrees (Jennen and Nussbaum 2021).

There are those who argue that, far from being a limit on information, online platforms have democratized information more than at any other time in history. To an extent, they are correct. While it is true that virtually anyone is capable of putting anything online, should search engines or social media sites choose not to index it (or, in the case of Amazon, refuse to let companies use its servers), that material will likely never be seen, or at least never be seen by a consequential number of people. It is akin to the US Library of Congress filing a book within its vast depository but never assigning it a reference number. Sure, the information is there—but who will ever find it?

The information that does get the approval of corporate gatekeepers is recycled at a speed and to a degree never before seen, meaning that a post, comment, or picture can be rediscovered and repackaged, gaining a life of its own and demonstrating information’s pathogenic quality. Information comes to light and goes ‘viral’, spreading throughout a population only to die down before being rediscovered once again, sometimes years after the fact, and resurfacing with perhaps even greater virulence than before. Appropriate behavior and acceptable conduct are policed by a vigilant and unforgiving online community that judges a person’s past deeds against the shifting sands of what is and is not considered appropriate. Authoritarianism, thus, becomes the purview of everyone online, or what Jaron Lanier, one of the original creators of virtual reality, calls “digital Maoism” (in Appleyard 2011).

These threats to individual liberty are eased along by the migration of government and businesses in developed economies to an almost wholly online existence, meaning that some form of online activity is a necessity for most people. The individual is thus forced to collaborate in his slow transformation into a ‘datavidual’: an aggregation of online information accessed by codes whose combination is unlikely to be expressed in anyone else (van der Meulen and Bruinsma 2018). One could envision these trends only continuing, with measures such as the ‘vaccine passports’ and smartphone-based tracking of infected individuals called for during the COVID-19 pandemic inevitably becoming part of our everyday lives. An ID card or social security number could be replaced by an individualized Internet protocol, or IP, address, without which one would have no way of conducting basic transactions, from buying milk at the corner store to registering to vote. A continuous online presence—which many people essentially already have via their smartphones—may soon not be an option but a necessary part of everyday life.

Primates are extremely adaptable species, with humans the most adaptable among them. Countless individuals walking among us would not be here if not for advances in everything from eyeglasses to antibiotics. However, our reliance on technology to fill the gaps of our own physical limitations also allows for a continued redefinition of what it means to be human. If we accept the premise that a continuous online presence will be more rather than less likely in the future, then the probability rises that invasive procedures to augment those limitations will similarly become routine. As Maguire and McGee write, “As enhancements become more widespread, enhancement becomes the norm, and there is increasing social pressure to avail oneself of the ‘benefit.’ Thus, even those who initially shrink from the surgery may find it a necessity” (1999).

Technology has already moved from smartphones to smart glasses, with smart contact lenses just over the horizon. How farfetched is the idea of implanted chips becoming a necessity of daily life, or worse, even required via state decree, much as it is required of pet owners to implant a chip in the family dog?

5 Conclusion

And, so, we are left with the question posed in the title of this essay: Is the digital revolution our second worst mistake (assuming that Diamond was correct about the first)? The answer, I believe, is that both agriculture and the digital revolution are not exceptions to a rule but necessary outcomes of the rule itself: human beings have an inexhaustible capacity to tinker, and tinker they will. Our extinct cousin, Homo erectus, lived from around two million to 100 thousand years ago, more than six times as long as our species has been in existence. Yet during its time on Earth, its toolkit never advanced beyond rudimentary stone tools. By contrast, within the past 40 thousand years, our species, Homo sapiens, went from cave paintings to putting a man on the moon, with the most advanced innovations occurring after the adoption of agriculture.

The transformations brought about by the digital revolution are having and will continue to have far-reaching ramifications for our social and political structures, but also for education, the way we consume information, and the digitization of economic exchange. However, as with agriculture, it is also leading to changes in our physiology and reproduction, and to a new range of ailments stemming from our novel online existence. And these are in addition to, and likely compounded by, longer working hours as the divide between home and work becomes increasingly blurred. It is the young, though, who will bear the brunt of any long-term negative effects of innovation. The impact on brains alone is reason enough to reconsider the steady entrenchment of younger generations in a persistent online presence, and what this may mean for societies writ large in the years to come.

The digital revolution has so thoroughly reorganized modern life that it resembles agriculture in another respect: it is extremely difficult to conceive of a time in the future when an online presence would be obviated, much less optional. However, as the COVID-19 pandemic’s impact on childhood literacy has shown (World Bank 2022), the pitfalls that come with an interconnected world can chip away at the very fundamentals on which that world rests, regardless of how technologically sophisticated a country may be. And the poorer the nation, the more impoverished the learning, meaning that succeeding generations and the societies they live in may be left even further behind on the wrong side of the digital divide.

And while democratic nations in the West lament the restrictions on freedoms implemented by China with its Social Credit System, those societies have also come to rely on technology platforms to surveil and collect data on their own citizens as never before. Where once a slip of the tongue or a casual observation would be quickly forgotten, today a post on social media can be dissected and reinterpreted—weaponized—even years after the fact. Freedom of speech and association are the most at risk, ironically through the use of the very online platforms created for sharing ideas and associating with likeminded individuals.

Thousands of years ago, human beings simply could not foresee all that would unfold in the distant future with the planting of the first seeds. Millennia after its invention, we can look back on agriculture and clearly see the costs and benefits of that turn in our collective development. Agriculture led to a settled way of life, greater populations, and the centralization of power in political and religious institutions, the latter two of which, until the modern age, were quite often of the oppressive variety. Our digital revolution is no less radical in the changes that it is bringing to our species and the societies we live in, only at a speed and scale that dwarf agriculture.

But even if agriculture was the worst mistake in history, that does not mean that everything flowing downstream from it—including the digital revolution—is necessarily a mistake as well, any more than the offspring of a doomed marriage is. What we can say with certainty is that inequality, human health, and individual liberties have been affected by the digital revolution both positively and negatively. How these three spheres will develop in 50 or 100 years is anyone’s guess, but there are three possible outcomes: they will improve, go unchanged, or worsen. If the drawbacks stemming from the digital revolution continue on their current trajectory as digital technology becomes even more embedded within our lives, then perhaps we can be equally certain of the last.