Humanity stands at a precipice. Our species could survive for millions of generations — enough time to end disease, poverty, and injustice; to reach new heights of flourishing. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, gaining the power to destroy ourselves without the wisdom to ensure we won’t. Since then, these dangers have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not act fast to reach a place of safety, it may soon be too late. The Precipice explores the science behind the risks we face. It puts them in the context of the greater story of humanity, showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies we can take today to safeguard humanity’s future.
Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.
A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. Neither can probabilistic risk analysis. This paper will argue that the approach referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It will also propose a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering-safety strategy of safety barriers would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.
The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be joined as premises and the argument for the existential risk of AI turns out to be invalid. If the interpretation is incorrect and both premises use the same notion of intelligence, then at least one of the premises is false and the orthogonality thesis remains itself orthogonal to the argument for existential risk from AI. In either case, the standard argument for existential risk from AI is not sound. Having said that, there remains a risk that instrumental AI could cause very significant damage if designed or used badly, though this is not due to superintelligence or a singularity.
We outline an argument favoring voluntary moral bioenhancement as a response to existential risks humanity exposes itself to. We consider this type of enhancement a solution to the antithesis between the extinction of humanity and the imperative of humanity to survive at any cost. By opting for voluntary moral bioenhancement, we refrain from advocating illiberal or even totalitarian strategies that would allegedly help humanity preserve itself. We argue that such strategies, by encroaching upon the freedom of individuals, already inflict a degree of existential harm on human beings. We also give some pointers as to the desirable direction for morally enhanced post-personhood.
ABSTRACT: This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend up...
Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the first claim, my primary goal in this article is to challenge the second: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not to colonize Mars. I close with some thoughts on what it would take to show that we do have an obligation to colonize Mars, and on related issues concerning the relationship between the way we discount our preferences over time and projects with long time horizons, like space colonization.
Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for further engagement with the New Zealand public to determine societal values towards future lives and their protection.
Large-scale, self-sufficient space colonization is a plausible means of efficiently reducing existential risks and ensuring our long-term survival. But humanity is by and large myopic, and as an intergenerational global public good, existential risk reduction is systematically undervalued, hampered by intergenerational discounting. This paper explores how these issues apply to space colonization, arguing that the motivational and psychological barriers to space colonization are a special—and especially strong—case of a more general problem. The upshot is not that large-scale, self-sufficient space colonization will never occur, but that, absent institutional change, the conditions under which it is most likely to occur are precisely those conditions where the threat of suffering risks might be highest.
Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential threat to humanity: the control problem, the possibility of global disruption from an AI race dynamic, and the weaponization of AI.
The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to the complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as a source of catastrophic or even existential risk. The paper first reviews a hypothesis by Bostrom about inevitable technological risks, named the vulnerable world hypothesis. This paper next hypothesizes that fragility may not only be a possible risk, but could be inevitable, and would therefore be a subclass or example of Bostrom’s vulnerable worlds. After introducing the titular fragile world hypothesis, the paper details the conditions under which it would be correct, and presents arguments for why the conditions may in fact apply. Finally, the assumptions and potential mitigations of the new hypothesis are contrasted with those Bostrom suggests.
ABSTRACT: So-called ‘existential risks’ present virtually unlimited reasons for probing them, and responses to them, further. The ensuing normative pull to respond to such risks thus seems to present us with reasons to abandon all other projects and commit all time, efforts and resources to the management of each existential risk scenario. Advocates of the urgency of attending to existential risk use arguments that seem to lead to this paradoxical result, while they often hold out a wish to avoid it. This creates the ‘black hole challenge’: how may an ethical theory that recognizes the urgency of existential risks justify a limit to how much time and resources are committed to addressing them? This article presents two pathways to this effect by appealing to reasons for limiting the ‘price of precaution’ paid in order to manage risks. The suggestions differ in that one presents ideal theoretical reasons based on an ethical theory of risk, while the other employs pragmatic reasons to modify the applicat...
A new book by Phil Torres, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, is reviewed. Morality, Foresight and Human Flourishing is a primer intended to introduce students and interested scholars to the concepts and literature on existential risk. The book’s core methodology is to outline the various existential risks currently discussed in different disciplines and to provide novel strategies for risk mitigation. The book is stylistically engaging, lucid and academically current, providing both novice readers and seasoned scholars with an easy-to-read introduction to risk studies. It is by far the most engaging and comprehensive volume on risk studies aimed at drawing new scholars to the field.
This paper provides a critique of Bostrom’s concern with existential risks, a critique which relies on Adorno and Horkheimer’s interpretation of the Enlightenment. Their interpretation is used to elicit the inner contradictions of transhumanist thought and to show the invalid premises on which it is based. By first outlining Bostrom’s position, this paper argues that transhumanism reverts to myth in its attempt to surpass the human condition. Bostrom’s argument rests on three pillars: Maxipok, Parfitian population ethics and a universal notion of general human values. By attempting to transcend the human condition, to achieve post-humanity, transhumanism reverts to myth. Thus, the aim of this paper is to provide a critical examination of transhumanism which elicits its tacit contradictions. It will also be argued that transhumanism’s focus on a universal, all-encompassing notion of humanity neglects any concern with actual lived lives. This absence is problematic because it clearly shows that there is a discrepancy between transhumanism’s claimed concern for all of humanity and the practical implications of proposing a universal notion of humanity. This paper concludes that transhumanism’s lack of concern with actual lives is due to its universal and totalising gestures, gestures which allow for universal claims such as general values or Earth-originating intelligent life.
This paper examines and evaluates a range of methodologies that have been proposed for making useful claims about the probability of phenomena that would contribute to existential risk. Section One provides a brief discussion of the nature of such claims, the contexts in which they tend to be made and the kinds of probability that they can contain. Section Two provides an overview of the methodologies that have been developed to arrive at these probabilities and assesses their advantages and disadvantages. Section Three contains four suggestions to improve best practice in existential risk assessment. These suggestions centre on the types of probabilities used in risk assessment, the role of methodology rankings including the ranking of probabilistic information, the extended use of expert elicitation, and the use of confidence measures to better communicate uncertainty in probability assessments. Finally, Section Four provides an annotated literature review of catastrophic and existential risk probability claims as well as the methodologies that were used to produce each of them.
Increasing rates of psychiatric problems like depression and anxiety among Swedish youth, predominantly among females, are considered a serious public mental health concern. Multiple studies confirm that psychological as well as existential vulnerability manifest in different ways for youths in Sweden. This multi-method study aimed at assessing existential worldview function by three factors: 1) existential worldview, 2) ontological security, and 3) self-concept, attempting to identify possible protective and risk factors for mental ill-health among female youths at risk for depression and anxiety. The sample comprised ten females on the waiting list at an outpatient psychotherapy clinic for teens and young adults. Results indicated that both functional and dysfunctional factors related to mental health were present, where the quality and availability of significant interpersonal relations seemed to have an important influence. Examples of both an impaired worldview function and a lack of an operating existential worldview were found. Psychotherapeutic implications are discussed.
Background and objectives: Physicians are exposed to matters of existential character at work, but little is known about the personal impact of such issues. Methods: To explore how physicians experience and cope with existential aspects of their clinical work and how such experiences affect their professional identities, a qualitative study using individual semistructured interviews has analysed accounts of their experiences related to coping with such challenges. Analysis was by systematic text condensation. The purposeful sample comprised 10 physicians (including three women), aged 33–66 years, residents or specialists in cardiology or cardiothoracic surgery, working in a university hospital with 24-hour emergency service and one general practitioner. Results: Participants described a process by which they were able to develop a capacity for coping with the existential challenges at work. After episodes perceived as shocking or horrible earlier in their career, they at present said that they could deal with death and mostly keep it at a distance. Vulnerability was closely linked to professional responsibility and identity, perceived as a burden to be handled. These demands were balanced by an experience of meaning related to their job, connected to making a difference in their patients’ lives. Belonging to a community of their fellows was a presupposition for coping with the loneliness and powerlessness related to their vulnerable professional position. Conclusions: Physicians’ vulnerability facing life and death has been underestimated. Belonging to caring communities may assist growth and coping on exposure to existential aspects of clinical work and developing a professional identity.
The coronavirus pandemic, like its predecessors (AIDS, Ebola, etc.), is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind as the main factor in the evolutionary success of our biological species and the civilization created by it. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the roots of the current crisis are less ontological than epistemological; its cause lies in the main evolutionary trends in the development of science as a social institution. It was only later that epistemological factors were transformed into existential-ontological factors associated with the asymmetry of the existence of civilization and our biosocial nature. The perception or ignorance of a risk factor as a real fact is determined by the presence or absence of knowledge about it. In other words, risk is the result of the integration of the corresponding ontological concept into the general categorical structure. The plurality of such structures is a hallmark of multidisciplinary ontologies, each of which is associated with its own factual continuum. The aim of this study was to conceptually model support for the relativistic parameter of the evolutionary efficiency of the stable evolutionary human strategy (SESH) in its techno-rationalistic module. The meaning of this term is equivalent to the category of scientific and technological development.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents: 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting out the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans, and critically evaluating such proposals.
The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, namely the agent. This paper aims to correct this failure by offering a comprehensive overview of what we could call “agential riskology.” Only by studying the unique properties of different agential risk types can one acquire an accurate picture of the existential danger before us.
While the notion of risk remains under-theorised in moral philosophy, risk aversion and moralist self-protection appear as dominant cultural tendencies saturating educational orientation and practice. Philosophy of education has responded to the educational emphasis on risk management by exposing the unavoidable and positive presence of risk in any endeavour to learn and teach. Taking such responses into account, I discuss how the theoretical connection of risk and education could be radicalised through an ethical approach combined with epistemological and existential concerns. My aim is to propose an ethics that is sensitive to the difference between risks taken and risks imposed and to the cultural variations of what counts as danger. Finally, I explain how the educational relevance of such an ethics requires a prior questioning of the western understanding of self and world that has functioned as a subtext of the dominant view of risk.
Increasing attention to existentialist thought by criminologists and other social scientists in recent decades has created an opportunity to envision new possibilities in critical theoretic inquiry that extend well beyond the classical formulations of this tradition. In this essay, I draw on existentialist ideas to outline a critical perspective rooted in recent developments associated with Ulrich Beck's notion of "risk society" and the related theory of reflexive modernization. I argue that, though the detraditionalization consequences of reflexive modernization give greater scope to agency in the risk society, transcendence in the existentialist sense is found in the hermeneutic reflexivity one experiences in high risk practices I call "edgework". Finally, I explore several options for using existential transcendence in hermeneutic reflexivity as a reference for critical analysis and, in doing so, suggest an alternative to Beck's own critical approach—cosmopolitanism—as a foundation for a critical theory of the second modern social order.
This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding autonomy and machine learning is recommended as one means to reduce the existential risk of unintended nuclear conflict.
The development of a new e-culture is one of the most important phenomena of the digital age. The concept of ‘e-culture’ is still developing, though it is evident that, as a phenomenon, it cannot be compared with anything that has ever existed. This makes its deep study necessary, both in general and in its axiological and ethical aspects, which reflect the nature of its influence on the human world view and behaviour. The author offers a concept of e-culture as a new type of creative activity, the ‘third nature’, progressively replacing ‘living culture’ and the natural environment for human beings. Digital culture gives human beings new ways to solve existential problems, forming in this regard new dependences and risks for the biosocioelectronical subjects developing within its conditions. Internet dependence is foremost among such risks. It enhances the ‘existential vacuum’ and axiological disorientation in the real sphere; the deformation of the essence of interpersonal communication, with the virtualization of its sensual and emotional aspects; and the appearance of new forms of freedom in personal ethical choice, generated by virtual interaction. An existential approach to the understanding of e-culture allows us to determine the relation between deep ontological problems of personality and the new technological achievements of the digital age.
Sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are an imbalance of: (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-culture co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; and (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of the evolutionary risk are reversible, but in the aggregate they are potentially irreversibly destructive to the biosocial and cultural self-identity of Homo sapiens. When actual evolution becomes the subject of rationalist control and/or manipulation, the magnitude of the 4th and 5th components of the evolutionary risk reaches a level of existential significance.
The Risk of Being attempts to forge a new language and a new way of reasoning about goodness and badness by focusing on existential phenomena that reveal what it means to be good and bad.
The stable adaptive strategy of Homo sapiens (SESH) is a superposition of three different adaptive data arrays: biological, socio-cultural and technological modules, based on three independent processes of generation and replication of adaptive information: genetic, socio-cultural and symbolic transmission (inheritance). The third component of SESH is focused equally on the adaptive transformation of the environment and of the carrier of SESH. With the advent of High Hume technology, risk has reached the level of existential significance. The existential level of technical risk is, by definition, an evolutionary risk, as it can possibly lead to the disappearance of humanity as a species. The emergence of bioethics has to be considered as a form of modern (transdisciplinary) scientific concept and of sociocultural adaptation that regulates human identity in the global-evolutionary transformation and performs the function of self-preservation.
If the rhetorical and economic investment of educators, policy makers and the popular press in the United States is any indication, then unbridled enthusiasm for the introduction of computer mediated communication (CMC) into the educational process is widespread. In large part this enthusiasm is rooted in the hope that through the use of Internet-based CMC we may create an expanded community of learners and educators not principally bounded by physical geography. The purpose of this paper is to reflect critically upon whether students and teachers are truly linked together as a "community" through the use of Internet-based CMC. The paper uses the writings of Kierkegaard, and Hubert Dreyfus's exploration of Kierkegaardian ideas, to look more closely at the prospects and problems embedded in the use of Internet-based CMC to create "distributed communities" of teachers and learners. It is argued that, from Kierkegaard's perspective, technologically mediated communications run a serious risk of attenuating interpersonal connectivity. Insofar as interpersonal connectivity is an integral component of education, such attenuation bodes ill for some, and perhaps many, instances of Internet-based CMC.
The goal of this paper is to describe the mechanism of the public perception of the risk of artificial intelligence. To that end we apply the social amplification of risk framework to the public perception of artificial intelligence, using data collected from Twitter from 2007 to 2018. We analyzed when and how a significant association between risk and artificial intelligence appeared in the public awareness of artificial intelligence. A significant finding is that the image of the risk of AI is mostly associated with existential risks, which became popular after the fourth quarter of 2014. The source of that association was the public positioning of experts, who have been the real movers of the risk perception of AI so far, rather than actual disasters. We analyze here how this kind of risk was amplified, its secondary effects, the varieties of risk unrelated to existential risk, and the dynamics of the experts in addressing their concerns to an audience of lay people.
From Chernobyl to Fukushima, it became clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters represent the actualization of risk related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a systemic characteristic arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovations within each module proceeds in accordance with two algorithms, characterized by the dominance of vertical (transgenerational) and horizontal (infection, contagion) adaptive streams of information, respectively. Evolutionary risk is the result of an imbalance of the autonomous adaptive systems that constitute an essential attribute of the adaptive strategy of Homo. Technological civilization has an inherent predisposition to overcome its dependence on biological and physical components. This feature serves as an enhancer of the evolutionary risk generated in conjunction with scientific and technological development. We can assume the existence of an intention in the Western mentality toward a high priority (positive or negative) of technological modification of the micro-social environment, and in the post-Soviet (East Slavic) mentality toward modification of the macro-social system.
This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war on, or even wage war against, each other, in part due to the 1945 UN Charter, which states that Member States should "refrain in their international relations from the threat or use of force", while allowing for UN Security Council-endorsed military measures and self-defense. As UN Member States no longer declare war on each other, only 'international armed conflicts' occur. However, costly interstate conflicts, both hot and cold and tantamount to wars, still take place. Further, a New Cold War between AI superpowers looms. An ASI-directed/enabled future conflict could trigger total war, including nuclear conflict, and is therefore high risk. Via conforming instrumentalism, an international relations theory, we advocate risk reduction by optimising peace through a Universal Global Peace Treaty, contributing towards the ending of existing wars and the prevention of future wars, as well as a Cyberweapons and Artificial Intelligence Convention. This strategy could influence state actors, including those developing ASIs, or an agential ASI, particularly if it values conforming instrumentalism and peace.
Here we reconsider teachers’ changing subjectivities as autonomous agents whose practices acknowledge risk as an essential element in intellectual inquiry. We seek alternative descriptions to the limiting language of teachers’ current practices within the primacy of the market. We are convinced by Levinas’s claim that ethics is first philosophy, with its concomitant responsibility for the Other. This provides a valuable point of departure, and our understanding of its relevance is expanded by Biesta and Todd. This perspective allows interruption of the global reform ensemble, with its reductionist understandings of teachers’ subjectivities within concerns for a ‘visible pedagogy’ and performativity. We illustrate how this global policy imperative is reworked in policies in the Republic of Ireland and share reflexive insights from our tutoring of teachers studying for a Master’s degree in Education. We show that teachers’ autonomy, which we understand as the capacity of teachers to facilitate risk and make ethically informed local judgements, is severely restricted by imposed standards, codes and laws to which there is tightly policed adherence. Instead, we describe teachers’ practices occurring within an Invisible Pedagogy, which is not concerned with totalising and limited performativity but instead explores risks associated with existential possibilities beyond commodification.
An attempt at a transdisciplinary analysis of the evolutionary value of bioethics is undertaken. Currently, there are High Tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). Biological, socio-cultural and technological factors are woven into the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization were formed mainly in the 17th and 18th centuries and are experiencing ever-increasing and destabilizing risk-taking pressure from scientific theories and technological realities. The diagnostic signs of a new era once again split into a technological and natural-scientific series on the one hand, and a humanitarian and anthropological series on the other. The natural-scientific series corresponds to a system of technological risks to be resolved using algorithms of established safety procedures. The socio-humanitarian series presents anthropological risk. The phenomenon of global bioethics is regarded as a systemic socio-cultural adaptation to technology-driven human evolution. A conceptual model of the meta-structure of the stable evolutionary strategy of Homo sapiens (SESH) is proposed. According to the model, SESH is composed of genetic, socio-cultural and techno-rationalist modules, with global bioethics as a tool to minimize existential evolutionary risk. The existence of objectively descriptive and value-teleological parameters of the evolutionary trajectory of humanity in the modern technological and civilizational context (1), and the genesis of global bioethics as a systemic social adaptation to ensure self-identity (2), are postulated.
Support in different modes, expressions and actions is at the core of the public welfare culture. In this paper, support is examined as an everyday interpersonal phenomenon with a variety of expressions in language and ways of relating, and its essential meaning is explored. The fulcrum for reflection is the lived experience shared by a young woman with mental health problems of her respective encounters with two professionals in mental health facilities. A phenomenological analysis of the contrasting accounts suggests that, when the professional relationship includes openness and risk, a certain degree of freedom of action is possible for both parties involved in the inevitably asymmetrical relationship. Support as “given” eludes controllable and measurable objectives, but imposes itself on the lived experiences of both the giver and the receiver as subject to readiness for acceptance. By not making assumptions about what support is, we open ourselves to the possibility of reciprocally experiencing moments revealing the essential meaning of support as lived.
The stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. At the top level of the organization of SESH lies the superposition of genetic, socio-cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of their cycles of generation, replication, transmission, and fixation/elimination of adaptively relevant information. This mechanism is implemented either in accordance with the Darwin-Weismann modus or with the Lamarck modus; the difference between them is clear from the titles. An integral attribute of a system of systems such as SESH is the production of evolutionary risks. The sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are imbalances of (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-culture co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; and (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of evolutionary risk are reversible, but in the aggregate they are in potentia irreversibly destructive for the bio-social and cultural self-identity of Homo sapiens. When actual evolution becomes the subject of rationalist control and/or manipulation, the magnitude of the fourth and fifth components of evolutionary risk reaches the level of existential significance.